Bimonthly    Since 1986
ISSN 1004-9037
Indexed in:
SCIE, Ei, INSPEC, JST, AJ, MR, CA, DBLP, etc.
Publication Details
Edited by: Editorial Board of Journal of Data Acquisition and Processing
P.O. Box 2704, Beijing 100190, P.R. China
Sponsored by: Institute of Computing Technology, CAS & China Computer Federation
Undertaken by: Institute of Computing Technology, CAS
Published by: SCIENCE PRESS, BEIJING, CHINA
Distributed by:
China: All Local Post Offices
 
  • Table of Contents
      05 September 2012, Volume 27 Issue 5   
    Special Issue on Evolutionary Computation
    Preface
    Pietro S. Oliveto, Xin Yao
    Journal of Data Acquisition and Processing, 2012, 27 (5): 903-906. 
    Abstract   PDF(192KB) ( 1500 )  
    Evolutionary computation refers to the study of computational systems that borrow ideas and get inspirations from natural evolution. It covers areas of evolutionary optimisation, evolutionary learning, evolutionary design, and fundamental theories.
    Evolutionary Optimization: Pitfalls and Booby Traps
    Thomas Weise, Member, IEEE, Raymond Chiong, Member, IEEE, and Ke Tang, Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 907-936. 
    Abstract   PDF(4128KB) ( 2814 )  
    Evolutionary computation (EC), a collective name for a range of metaheuristic black-box optimization algorithms, is one of the fastest-growing areas in computer science. Many manuals and "how-to"s on the use of different EC methods as well as a variety of free or commercial software libraries are widely available nowadays. However, when one of these methods is applied to a real-world task, there can be many pitfalls and booby traps lurking: certain aspects of the optimization problem may lead to unsatisfactory results even if the algorithm appears to be correctly implemented and executed. These include convergence issues; ruggedness, deceptiveness, and neutrality in the fitness landscape; epistasis; non-separability; noise leading to the need for robustness; as well as dimensionality and scalability issues, among others. In this article, we systematically discuss these hindrances and present some possible remedies. The goal is to equip practitioners and researchers alike with a clear picture and understanding of what kind of problems can render EC applications unsuccessful and how to avoid them from the start.
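Deceptiveness, one of the hindrances the abstract lists, is easy to make concrete. The toy trap function below is a standard textbook example (not taken from the article): fitness improves as ones are removed from a bit string everywhere except at the isolated all-ones global optimum, so gradient-following search is led straight away from it.

```python
def trap(bits):
    """k-bit trap function: fitness rises as ones are removed, except that
    the all-ones string is the isolated global optimum."""
    k = len(bits)
    u = sum(bits)                       # number of ones in the string
    return k if u == k else k - 1 - u   # deceptive slope toward all-zeros

# Hill-climbing on the ones count is misled: all-zeros scores k-1 and sits
# atop the deceptive slope, while the true optimum (all ones) lies behind
# a fitness valley one bit-flip wide.
print(trap([1, 1, 1, 1]))  # global optimum: 4
print(trap([0, 0, 0, 0]))  # deceptive attractor: 3
```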
    A Puzzle-Based Genetic Algorithm with Block Mining and Recombination Heuristic for the Traveling Salesman Problem
    Pei-Chann Chang, Wei-Hsiu Huang, and Zhen-Zhen Zhang
    Journal of Data Acquisition and Processing, 2012, 27 (5): 937-949. 
    Abstract   PDF(1128KB) ( 2011 )  
    In this research, we introduce a new heuristic approach that uses the concept of ant colony optimization (ACO) to extract patterns from the chromosomes generated by previous generations for solving the generalized traveling salesman problem. The proposed heuristic is composed of two phases. In the first phase, the ACO technique is adopted to establish an archive consisting of a set of non-overlapping blocks and a set of remaining cities (nodes) to be visited. The second phase is a block recombination phase, where the set of blocks and the remaining cities are combined to form an artificial chromosome. The generated artificial chromosomes (ACs) are then injected into a standard genetic algorithm (SGA) to speed up convergence. The proposed method is called the "Puzzle-Based Genetic Algorithm", or "p-ACGA". We demonstrate that p-ACGA performs very well on all TSPLIB problems, which have been solved to optimality by other researchers. The proposed approach can prevent the early convergence of the genetic algorithm (GA) and lead the algorithm to explore and exploit the search space by taking advantage of the artificial chromosomes.
    Scheduling Multi-Mode Projects under Uncertainty to Optimize Cash Flows: A Monte Carlo Ant Colony System Approach
    Wei-Neng Chen, Member IEEE, and Jun Zhang, Senior Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 950-965. 
    Abstract   PDF(610KB) ( 3155 )  
    Project scheduling under uncertainty is a challenging field of research that has attracted increasing attention. While most existing studies only consider the single-mode project scheduling problem under uncertainty, this paper aims to deal with a more realistic model called the stochastic multi-mode resource constrained project scheduling problem with discounted cash flows (S-MRCPSPDCF). In the model, activity durations and costs are given by random variables. The objective is to find an optimal baseline schedule so that the expected net present value (NPV) of cash flows is maximized. To solve the problem, an ant colony system (ACS) based approach is designed. The algorithm dispatches a group of ants to build baseline schedules iteratively using pheromones and an expected discounted cost (EDC) heuristic. Since it is impossible to evaluate the expected NPV directly due to the presence of random variables, the algorithm adopts the Monte Carlo (MC) simulation technique. As the ACS algorithm only uses the best-so-far solution to update pheromone values, it is found that a rough simulation with a small number of random scenarios is enough for evaluation. Thus the computational cost is reduced. Experimental results on 33 instances demonstrate the effectiveness of the proposed model and the ACS approach.
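The Monte Carlo evaluation step the abstract describes can be sketched in a few lines. The figures and function name below are hypothetical stand-ins, not the paper's model: each scenario perturbs the nominal cash flows, mimicking random activity durations and costs, and the expected NPV is the scenario average.

```python
import random

def simulate_npv(cash_flows, rate, n_scenarios=200, noise=0.1, seed=1):
    """Monte Carlo estimate of the expected NPV of a schedule's cash flows.
    Each scenario applies a random perturbation to every nominal flow,
    standing in for random activity durations and costs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_scenarios):
        npv = sum(cf * (1 + rng.uniform(-noise, noise)) / (1 + rate) ** t
                  for t, cf in enumerate(cash_flows))
        total += npv
    return total / n_scenarios

# Nominal flows: an up-front cost followed by payments on completion.
est = simulate_npv([-100.0, 30.0, 40.0, 60.0], rate=0.05)
```

As the abstract notes, a rough estimate from a small number of scenarios can suffice when only the best-so-far solution matters, which keeps the evaluation cheap.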
    An Improved Evolvable Oscillator and Basis Function Set for Control of an Insect-Scale Flapping-Wing Micro Air Vehicle
    John C. Gallagher, Senior Member, IEEE, and Michael W. Oppenheimer, Senior Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 966-978. 
    Abstract   PDF(1739KB) ( 2031 )  
    This paper introduces an improved evolvable and adaptive hardware oscillator design capable of supporting adaptation intended to restore control precision in damaged or imperfectly manufactured insect-scale flapping-wing micro air vehicles. It will also present preliminary experimental results demonstrating that previously used basis function sets may have been too large and that significantly improved learning times may be achieved by judiciously culling the oscillator search space. The paper will conclude with a discussion of the application of this adaptive, evolvable oscillator to full vehicle control as well as the consideration of longer term goals and requirements.
    Coverage Optimization for Defect-Tolerance Logic Mapping on Nanoelectronic Crossbar Architectures
    Bo Yuan, Student Member, IEEE, and Bin Li, Member, ACM, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 979-988. 
    Abstract   PDF(789KB) ( 1544 )  
    Emerging nano-devices and the corresponding nano-architectures are expected to supplement or even replace conventional lithography-based CMOS integrated circuits, yet they also face the serious challenge of high defect rates. In this paper, a new weighted coverage is defined as one of the most important evaluation criteria for defect-tolerance logic mapping algorithms on nanoelectronic crossbar architectures in functional design. Experiments show that the new criterion calculates the number of crossbar modules required by a given logic function more accurately than the previous criterion presented by Yellambalase et al. Based on the new criterion, an effective mapping algorithm based on the genetic algorithm (GA) is proposed. Compared with the state-of-the-art greedy mapping algorithm, the proposed algorithm shows good effectiveness and robustness in experiments on test problems of various scales and defect rates, and superior performance is observed on problems of large scale and high defect rates.
    Particle Swarm Optimization Based Support Vector Regression for Blind Image Restoration
    Ratnakar Dash, Pankaj Kumar Sa, Member, IEEE, and Banshidhar Majhi, Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 989-995. 
    Abstract   PDF(4888KB) ( 1907 )  
    This paper presents a swarm-intelligence-based parameter optimization of the support vector machine (SVM) for blind image restoration. In this work, SVM is used to solve a regression problem. Support vector regression (SVR) has been utilized to obtain a true mapping of images from the observed noisy blurred images. The parameters of SVR are optimized through the particle swarm optimization (PSO) technique. The restoration error function has been utilized as the fitness function for PSO. The suggested scheme tries to adapt the SVM parameters depending on the type of blur and the noise strength, and the experimental results validate its effectiveness. The results show that parameter optimization of the SVR model gives better performance than the conventional SVR model as well as other competent schemes for blind image restoration.
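The optimization loop can be illustrated with a minimal PSO sketch minimizing a toy quadratic that stands in for the restoration error. The swarm size, inertia, and attraction coefficients are illustrative assumptions, not the paper's settings.

```python
import random

def pso(f, dim, n=20, iters=100, seed=0):
    """Minimal particle swarm: each particle remembers its personal best,
    the swarm tracks a global best, and velocities blend both attractions."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and attraction weights (illustrative)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Stand-in for the restoration error: a quadratic with optimum at (1, 2),
# playing the role of the SVR parameter pair being tuned.
best, err = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
```

In the paper's setting, `f` would instead train an SVR with the candidate parameters and return the restoration error on the degraded image.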
    Effect of Look-Ahead Depth in Evolutionary Checkers
    Belal Al-Khateeb and Graham Kendall, Senior Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 996-1006. 
    Abstract   PDF(436KB) ( 1900 )  
    It is intuitive that allowing a deeper search into a game tree will result in a superior player to one that is restricted in the depth of the search it is allowed to make. Of course, searching deeper into the tree comes at increased computational cost, and this is one of the trade-offs that has to be considered in developing a tree-based search algorithm. There has been some discussion as to whether the evaluation function, or the depth of the search, is the main contributory factor in the performance of an evolved checkers player. Some previous research has investigated this question (on Chess and Othello), with differing conclusions, suggesting that different games place different emphases on these two factors. This paper provides evidence for evolutionary checkers, and shows that the look-ahead depth (as in Chess, perhaps unsurprisingly) is important. This is the first time such an intensive study has been carried out for evolutionary checkers, and given the existing evidence for Chess and Othello, it is an important study that supplies evidence for another game. We arrived at our conclusion by evolving various checkers players at different ply depths and playing them against one another, again at different ply depths. This was combined with the two-move ballot (enabling more games against the evolved players to take place), which provides strong evidence that the depth of the look-ahead is important for evolved checkers players.
    Effect of Noisy Fitness in Real-Time Strategy Games Player Behaviour Optimisation Using Evolutionary Algorithms
    Antonio M. Mora, Antonio Fernández-Ares, Juan J. Merelo, Pablo García-Sánchez, and Carlos M. Fernandes
    Journal of Data Acquisition and Processing, 2012, 27 (5): 1007-1023. 
    Abstract   PDF(6806KB) ( 2013 )  
    This paper investigates the performance and the results of an evolutionary algorithm (EA) specifically designed for evolving the decision engine of a program (which, in this context, is called bot) that plays Planet Wars. This game, which was chosen for the Google Artificial Intelligence Challenge in 2010, requires the bot to deal with multiple target planets, while achieving a certain degree of adaptability in order to defeat different opponents in different scenarios. The decision engine of the bot is initially based on a set of rules that have been defined after an empirical study, and a genetic algorithm (GA) is used for tuning the set of constants, weights and probabilities that those rules include, and therefore, the general behaviour of the bot. Then, the bot is supplied with the evolved decision engine and the results obtained when competing with other bots (a bot offered by Google as a sparring partner, and a scripted bot with a pre-established behaviour) are thoroughly analysed. The evaluation of the candidate solutions is based on the result of non-deterministic battles (and environmental interactions) against other bots, whose outcome depends on random draws as well as on the opponents' actions. Therefore, the proposed GA is dealing with a noisy fitness function. After analysing the effects of the noisy fitness, we conclude that tackling randomness via repeated combats and reevaluations reduces this effect and makes the GA a highly valuable approach for solving this problem.
    Classification- and Regression-Assisted Differential Evolution for Computationally Expensive Problems
    Xiao-Fen Lu and Ke Tang, Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 1024-1034. 
    Abstract   PDF(467KB) ( 2985 )  
    Differential evolution (DE) has been well accepted as an effective evolutionary optimization technique. However, it usually involves a large number of fitness evaluations to obtain a satisfactory solution. This disadvantage severely restricts its application to computationally expensive problems, for which a single fitness evaluation can be highly time-consuming. In the past decade, many investigations have been conducted to incorporate a surrogate model into an evolutionary algorithm (EA) to alleviate its computational burden in this scenario. However, only limited work has been devoted to DE. More importantly, although various types of surrogate models, such as regression, ranking, and classification models, have been investigated separately, none of them consistently outperforms the others. In this paper, we propose to construct a surrogate model by combining both regression and classification techniques. It is shown that, due to the specific selection strategy of DE, a synergy can be established between these two types of models, leading to a surrogate model that is more appropriate for DE. A novel surrogate-model-assisted DE, named Classification- and Regression-Assisted DE (CRADE), is proposed on this basis. Experimental studies are carried out on a set of 16 benchmark functions, and CRADE shows significant superiority over DE assisted with only regression or classification models. Further comparison to three state-of-the-art DE variants, i.e., DE with global and local neighborhoods (DEGL), JADE, and composite DE (CoDE), also demonstrates the superiority of CRADE.
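A toy stand-in for the idea, not the authors' CRADE model: a 1-nearest-neighbour archive of already-evaluated points serves both as a regression surrogate (the neighbour's fitness predicts the trial's fitness) and as a classifier for DE's one-to-one selection (keep the trial for expensive evaluation only if the prediction beats the target). All names below are hypothetical.

```python
def nearest(archive, x):
    """Return the archived (point, fitness) pair closest to x (1-NN)."""
    return min(archive,
               key=lambda pf: sum((a - b) ** 2 for a, b in zip(pf[0], x)))

def surrogate_keeps(archive, trial, target_fitness):
    """Hybrid surrogate screening: a regression view (neighbour fitness as a
    prediction) feeding a classification decision (trial beats target?)."""
    _, predicted = nearest(archive, trial)    # regression: predicted fitness
    return predicted <= target_fitness        # classification: keep or reject

# Archive of evaluated points for f(x) = x1^2 + x2^2 (minimization).
archive = [((0.1, 0.1), 0.02), ((2.0, 2.0), 8.0), ((1.0, 0.0), 1.0)]
keep = surrogate_keeps(archive, (0.2, 0.0), target_fitness=1.0)
```

In DE's one-to-one selection, such a screen skips the expensive true evaluation for trials that the surrogate confidently rejects, which is where the evaluation savings come from.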
    Differential Evolution with Adaptive Mutation and Parameter Control Using Lévy Probability Distribution
    Ren-Jie He and Zhen-Yu Yang, Member, CCF, ACM, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 1035-1055. 
    Abstract   PDF(557KB) ( 2045 )  
    Differential evolution (DE) has become a very popular and effective global optimization algorithm in the area of evolutionary computation. In spite of many advantages such as conceptual simplicity, high efficiency and ease of use, DE has two main components, i.e., the mutation scheme and parameter control, which significantly influence its performance. In this paper we intend to improve the performance of DE by using carefully considered strategies for both of these components. We first design an adaptive mutation scheme, which adaptively makes use of the bias of superior individuals when generating new solutions. Although introducing such a bias is not a new idea, existing methods often use heuristic rules to control the bias. They can hardly maintain the appropriate balance between exploration and exploitation during the search process, because the preferred bias is often problem- and evolution-stage-dependent. Instead of using any fixed rule, a novel strategy is adopted in the new adaptive mutation scheme to adjust the bias dynamically based on the local fitness landscape captured by the current population. As for the other component, i.e., parameter control, we propose a mechanism that uses the Lévy probability distribution to adaptively control the scale factor F of DE. For every mutation in each generation, an Fi is produced from one of four different Lévy distributions according to their historical performance. With the adaptive mutation scheme and parameter control using the Lévy distribution as the main components, we present a new DE variant called Lévy DE (LDE). Experimental studies were carried out on a broad range of benchmark functions in global numerical optimization. The results show that LDE is very competitive, and both of the two main components have contributed to its overall performance. The scalability of LDE is also discussed by conducting experiments on some selected benchmark functions with dimensions from 30 to 200.
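Sampling a heavy-tailed scale factor can be sketched with Mantegna's algorithm for Lévy-stable steps. This is an illustrative implementation with assumed parameters; the paper draws F from one of four Lévy distributions chosen by historical performance, which is not reproduced here.

```python
import math
import random

def levy_f(alpha=1.5, scale=0.5, rng=random):
    """Draw a scale factor F from a heavy-tailed Lévy-like law via
    Mantegna's algorithm: most draws are moderate, occasional draws are
    large, which injects rare long-range mutations."""
    sigma_u = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha
                  * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    step = u / abs(v) ** (1 / alpha)
    return min(max(abs(scale * step), 0.1), 1.0)  # clamp F to a usable range

rng = random.Random(42)
samples = [levy_f(rng=rng) for _ in range(1000)]
```

Varying `alpha` tunes the tail heaviness, which is the knob an adaptive scheme like the paper's can select among based on past success.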
    Compact Differential Evolution Light: High Performance Despite Limited Memory Requirement and Modest Computational Overhead
    Giovanni Iacca, Student Member, IEEE, Fabio Caraffini, and Ferrante Neri, Member, IEEE
    Journal of Data Acquisition and Processing, 2012, 27 (5): 1056-1076. 
    Abstract   PDF(2143KB) ( 1829 )  
    Compact algorithms are estimation of distribution algorithms that mimic the behavior of population-based algorithms by means of a probabilistic representation of the population of candidate solutions. They behave similarly to population-based algorithms but require much less memory. This feature is crucially important in some engineering applications, especially in robotics. A high-performance compact algorithm is the compact Differential Evolution (cDE) algorithm. This paper proposes a novel implementation of cDE, namely compact Differential Evolution light (cDElight), to address not only memory-saving necessities but also real-time requirements. cDElight employs two novel algorithmic modifications to achieve a smaller computational overhead, without a performance loss, with respect to cDE. Numerical results, carried out on a broad set of test problems, show that cDElight, despite its minimal hardware requirements, does not deteriorate the performance of cDE and is thus competitive with other memory-saving and population-based algorithms. An application in the field of mobile robotics highlights the usability and advantages of the proposed approach.
    Exploiting Bivariate Dependencies to Speedup Structure Learning in Bayesian Optimization Algorithm
    Amin Nikanjam and Adel Rahmani
    Journal of Data Acquisition and Processing, 2012, 27 (5): 1077-1090. 
    Abstract   PDF(312KB) ( 1600 )  
    Bayesian optimization algorithm (BOA) is one of the successful and widely used estimation of distribution algorithms (EDAs) which have been employed to solve different optimization problems. In EDAs, a model is learned from the selected population that encodes interactions among problem variables. New individuals are generated by sampling the model and incorporated into the population. Different probabilistic models have been used in EDAs to learn interactions. Bayesian network (BN) is a well-known graphical model which is used in BOA. Learning a proper model in EDAs and particularly in BOA is distinguished as a computationally expensive task. Different methods have been proposed in the literature to improve the complexity of model building in EDAs. This paper employs bivariate dependencies to learn accurate BNs in BOA efficiently. The proposed approach extracts the bivariate dependencies using an appropriate pairwise interaction-detection metric. Due to the static structure of the underlying problems, these dependencies are used in each generation of BOA to learn an accurate network. By using this approach, the computational cost of model building is reduced dramatically. Various optimization problems are selected to be solved by the algorithm. The experimental results show that the proposed approach successfully finds the optimum in problems with different types of interactions efficiently. Significant speedups are observed in the model building procedure as well.
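A common pairwise interaction-detection metric of the kind the abstract mentions is empirical mutual information. The sketch below is illustrative rather than necessarily the metric used in the paper: it flags a perfectly dependent variable pair and clears an independent one.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete variables; values
    near zero indicate independence, larger values indicate dependency
    worth encoding as an edge in the learned network."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab / (p_a * p_b) == c * n / (px[a] * py[b])
        mi += p_ab * math.log(c * n / (px[a] * py[b]))
    return mi

# x and y below are perfectly dependent; x and z are independent.
x = [0, 0, 1, 1] * 25
y = x[:]             # exact copy of x: maximal dependency
z = [0, 1] * 50      # alternates independently of x's period-4 pattern
```

Computing this metric once for each variable pair gives the static dependency structure that, per the abstract, can be reused in every generation instead of relearning the network from scratch.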
Journal of Data Acquisition and Processing
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China

E-mail: info@sjcjycl.cn
 
  Copyright ©2015 JCST, All Rights Reserved