An Empirical Study of Control Parameters for Generalized Differential Evolution


Saku Kukkonen
Kanpur Genetic Algorithms Laboratory (KanGAL)
Indian Institute of Technology Kanpur
Kanpur, PIN 208016, India
saku@iitk.ac.in, saku.kukkonen@lut.fi

Jouni Lampinen
Department of Information Technology
Lappeenranta University of Technology
P.O. Box 20, FIN-53851 Lappeenranta, Finland
jouni.lampinen@lut.fi

KanGAL Report Number 2005014

Abstract

In this paper the influence of control parameters on the search process of the previously introduced Generalized Differential Evolution is studied empirically. Besides the number of generations and the size of the population, Generalized Differential Evolution has two control parameters, CR and F, which have to be set before applying the method to a problem. The effect of these two control parameters is studied with a set of common bi-objective test problems and performance metrics. The objectives of this study are to investigate the sensitivity of the control parameters according to different performance metrics, to better understand the effect and tuning of the control parameters on the search process, and to find rules for the initial settings of the control parameters in the case of multi-objective optimization. It appears that in the case of multi-objective optimization there exists the same kind of relationship between the control parameters and the progress of the population variance as in the case of single-objective optimization with Differential Evolution. Based on the results, recommendations for choosing control parameter values are given.

1 Introduction

Many practical problems have multiple objectives, and several aspects impose constraints on them. Generalized Differential Evolution (GDE) [1-4] is an extension of Differential Evolution (DE) [5, 6] for constrained multi-objective (Pareto-)optimization. DE is a relatively new real-coded Evolutionary Algorithm (EA), which has been demonstrated to be efficient for solving many real-world problems. In earlier studies [4, 7, 8] GDE was compared with other multi-objective EAs (MOEAs). In these studies it was noted that GDE performs better with control parameter values different from those usually used for the basic DE algorithm in single-objective optimization. It was also noted that for some problems the control parameter values should be drawn from a rather narrow range. However, more complete tests with different control parameter values have not been performed and reported before.

2 Differential Evolution

The DE algorithm [5, 6] was introduced by Storn and Price in 1995. The design principles of DE are simplicity, efficiency, and the use of floating-point encoding instead of binary numbers. As a typical EA, DE has a random initial population that is then improved using selection, mutation, and crossover operations. Several ways exist to determine a stopping criterion for EAs, but usually a predefined upper limit G_max for the number of generations to be computed provides an appropriate stopping condition. The other control parameters of DE are the crossover control parameter CR, the mutation factor F, and the population size NP.
In each generation G, DE goes through each D-dimensional decision vector x_i,G of the population and creates a corresponding trial vector u_i,G as follows [9]:

    r1, r2, r3 ∈ {1, 2, ..., NP}, randomly selected,
        except mutually different and different from i
    j_rand = floor(rand_i[0, 1) * D) + 1
    for (j = 1; j <= D; j = j + 1)
    {
        if (rand_j[0, 1) < CR  or  j = j_rand)
            u_j,i,G = x_j,r3,G + F * (x_j,r1,G - x_j,r2,G)
        else
            u_j,i,G = x_j,i,G
    }                                                             (1)

Both CR and F remain fixed during the whole execution of the algorithm. Parameter CR, controlling the crossover operation, represents the probability that an element of the trial vector is chosen from a linear combination of three randomly chosen vectors instead of from the old vector x_i,G. The condition j = j_rand is to make sure that at least one element

is different compared to the elements of the old vector. Parameter F is a scaling factor for mutation, and its value is typically in the range (0, 1+]. In practice, CR controls the rotational invariance of the search: a small value (e.g. 0.1) is practicable with separable problems, while larger values (e.g. 0.9) are for non-separable problems. Control parameter F controls the speed and robustness of the search, i.e., a lower value for F increases the convergence rate but also the risk of getting stuck in a local optimum. Parameters CR and NP have the same kind of effect on the convergence rate as F has.

After the mutation and crossover operations, trial vector u_i,G is compared to the old vector x_i,G. If the trial vector has an equal or better objective value, it replaces the old vector in the next generation. DE is an elitist method, since the best population member is always preserved and the average objective value of the population will never worsen.

3 Generalized Differential Evolution

Several extensions of DE for multi-objective optimization and for constrained optimization have been proposed [10-18]. Generalized Differential Evolution (GDE) [1-4, 7, 8] extends the basic DE algorithm for constrained multi-objective optimization by modifying the selection rule of the basic DE algorithm. Compared to the other DE extensions for multi-objective optimization, GDE makes DE suitable for constrained multi-objective optimization with noticeably fewer changes to the original DE algorithm.
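Before turning to GDE's modified selection, the basic DE machinery described above can be summarized in a minimal Python sketch of the DE/rand/1/bin variant with greedy one-to-one selection. This is an illustration under our own naming, not the authors' implementation:

```python
import random

def de_step(pop, f_obj, CR, F):
    """One generation of basic DE (DE/rand/1/bin) for single-objective
    minimization; an illustrative sketch, not the authors' code."""
    NP, D = len(pop), len(pop[0])
    new_pop = []
    for i in range(NP):
        # r1, r2, r3: mutually different and different from i.
        r1, r2, r3 = random.sample([r for r in range(NP) if r != i], 3)
        j_rand = random.randrange(D)  # guarantees at least one mutated element
        trial = list(pop[i])
        for j in range(D):
            if random.random() < CR or j == j_rand:
                trial[j] = pop[r3][j] + F * (pop[r1][j] - pop[r2][j])
        # Greedy selection: the trial replaces the old vector if not worse.
        new_pop.append(trial if f_obj(trial) <= f_obj(pop[i]) else pop[i])
    return new_pop

# Usage: minimize the sphere function in 5 dimensions.
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(30)]  # NP = 30
for _ in range(200):  # G_max = 200
    pop = de_step(pop, sphere, CR=0.9, F=0.5)
best = min(pop, key=sphere)
```

Because of the greedy one-to-one selection, the best objective value of the population is non-increasing from generation to generation, which is the elitism property noted above.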
The modified selection operation solves a constrained multi-objective optimization problem

    minimize    {f_1(x), f_2(x), ..., f_M(x)}
    subject to  (g_1(x), g_2(x), ..., g_K(x))^T <= 0              (2)

with M objectives f_m and K constraint functions g_k. It is presented formally as follows [1]:

    x_i,G+1 =
        u_i,G   if  { ∃k ∈ {1, ..., K}: g'_k(u_i,G) > 0
                      ∧ ∀k ∈ {1, ..., K}: g'_k(u_i,G) <= g'_k(x_i,G) }
                ∨  { ∀k ∈ {1, ..., K}: g'_k(u_i,G) <= 0
                      ∧ ∃k ∈ {1, ..., K}: g'_k(x_i,G) > 0 }
                ∨  { ∀k ∈ {1, ..., K}: g'_k(u_i,G) <= 0 ∧ g'_k(x_i,G) <= 0
                      ∧ ∀m ∈ {1, ..., M}: f_m(u_i,G) <= f_m(x_i,G) }
        x_i,G   otherwise,                                        (3)

where g'_k(x_i,G) = max(g_k(x_i,G), 0) and g'_k(u_i,G) = max(g_k(u_i,G), 0) are representing the constraint violations.
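The selection rule just formalized can be sketched as a small Python function (a hedged illustration with names of our own choosing, not the authors' code; constraints are taken to be feasible when g_k <= 0):

```python
def gde_select(u, x, objs, constrs):
    """Return the vector (u or x) kept for the next generation under the
    GDE selection rule; an illustrative sketch, not the authors' code.

    objs:    list of objective functions f_m (to be minimized)
    constrs: list of constraint functions g_k (feasible when g_k <= 0)
    """
    # Constraint violations g'_k = max(g_k, 0).
    vu = [max(g(u), 0.0) for g in constrs]
    vx = [max(g(x), 0.0) for g in constrs]

    if any(v > 0 for v in vu):
        # Trial infeasible: accept only if it violates no constraint
        # more than the old vector does.
        return u if all(a <= b for a, b in zip(vu, vx)) else x
    if any(v > 0 for v in vx):
        # Trial feasible, old vector infeasible: accept the trial.
        return u
    # Both feasible: accept if the trial is at least as good in every objective.
    return u if all(f(u) <= f(x) for f in objs) else x
```

For example, with objectives f1(v) = v[0]^2 and f2(v) = (v[0] - 2)^2 and the constraint 1 - v[0] <= 0, a feasible trial beats an infeasible old vector, while two feasible vectors that do not weakly dominate each other leave the old vector in place.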

The selection rule given in Eq. (3) selects trial vector u_i,G to replace old vector x_i,G in the next generation in the following cases:

1. Both trial vector u_i,G and old vector x_i,G violate at least one constraint, but the trial vector does not violate any constraint more than the old vector does.

2. Old vector x_i,G violates at least one constraint whereas trial vector u_i,G is feasible.

3. Both vectors are feasible and trial vector u_i,G has at least as good a value for each objective as old vector x_i,G has.

Otherwise old vector x_i,G is preserved. The basic idea of the selection rule in Eq. (3) is that trial vector u_i,G is required to dominate the compared old population member x_i,G in constraint violation space or in objective function space, or at least to provide an equally good solution. After the selected number of generations, the final population represents the solution of the optimization problem. Unique non-dominated vectors can be separated from the final population if desired. There is no sorting of non-dominated vectors during the optimization process, nor any mechanism for maintaining the diversity of the solution.

In the earlier studies [4, 7, 8] GDE was compared with other MOEAs. In these studies it was observed that relatively small values for the crossover control parameter CR and the mutation factor F should be used with the adopted multi-objective test problems. It was also noticed that for some problems suitable values for these control parameters had to be drawn from a rather narrow range to obtain a converged solution along the Pareto front. However, performance was not thoroughly tested with commonly adopted metrics for MOEAs by applying different control parameter combinations.

4 Experiments

In this investigation GDE was tested with a set of bi-objective benchmark problems commonly used in the MOEA literature. These problems are known as SCH1, SCH2, FON, KUR, POL, ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 [19], and they all have two objectives to be minimized.
With all the test problems the size of the population and the number of generations were kept fixed. From the final population, unique non-dominated solutions were extracted to form the final solution, an approximation of the Pareto front. The characteristics of the problems are collected in Table 1. A relatively small number of generations was used for some problems in order to observe the effect of the control parameters. Different control parameter combinations, CR in the range [0, 1] and F in the range [0, 2] with a resolution of 0.05, were tested for all the test problems. Tests were repeated multiple times with each control parameter combination, and the results were evaluated using

Problem | D  | Separability  | Pareto front, decision space | Pareto front, objective space
SCH1    | 1  | N/A           | line                         | curve, convex
SCH2    | 1  | N/A           | lines                        | curves
FON     | 3  | separable     | line                         | curve, non-convex
KUR     | 3  | separable     | points & 4 curves            | point & curves
POL     | 2  | non-separable | curves                       | curves
ZDT1    | 30 | separable     | line                         | curve, convex
ZDT2    | 30 | separable     | line                         | curve, concave
ZDT3    | 30 | separable     | 5 lines                      | 5 curves
ZDT4    | 10 | separable     | line                         | curve, convex
ZDT6    | 10 | separable     | line                         | curve, concave

Table 1: Characteristics of the bi-objective optimization test problems [19], [20] (the population size NP and the number of generations G_max were also fixed per problem).

convergence and diversity metrics. Closeness to the Pareto front was measured with generational distance (GD) [19], which measures the average distance of the solution vectors from the Pareto front. The diversity of the obtained solution was measured using the spacing (S) metric [19], which measures the standard deviation of the distances from each vector to the nearest vector in the obtained non-dominated set. Smaller values for both metrics are preferable, and the optimal value of each is zero. The number of unique non-dominated vectors in the final solution was also registered.

Results for the different problems are shown in Figs. 1-10 as surfaces in the control parameter space. The results for all the problems show that with a small CR value the final solution has the greatest number of unique non-dominated vectors, and also the values of the performance metrics GD and S are best. For the multidimensional (i.e., D > 1) test problems, the values of the performance metrics are best with low F values, whereas for the one-dimensional test problems F does not seem to have a notable impact on the solution. The value range for F in the tests is larger than usually used for single-objective optimization.
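The two metrics described above can be sketched as follows for finite solution and reference sets (a minimal illustration, not necessarily the exact formulation used in the paper):

```python
import math

def generational_distance(solutions, pareto_front):
    """Average Euclidean distance from each solution vector to its nearest
    point of (a discretization of) the Pareto front; 0 is optimal."""
    dists = [min(math.dist(s, p) for p in pareto_front) for s in solutions]
    return sum(dists) / len(dists)

def spacing(solutions):
    """Standard deviation of the distances from each vector to the nearest
    other vector in the non-dominated set; 0 means evenly spread."""
    d = [min(math.dist(s, t) for t in solutions if t is not s)
         for s in solutions]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

# Usage: solutions lying exactly and evenly on a sampled linear front
# give GD = 0 and spacing = 0.
front = [(x / 10, 1 - x / 10) for x in range(11)]
sols = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
```

Both functions assume the objective vectors are given as equal-length numeric sequences; `math.dist` requires Python 3.8 or newer.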
Based on the results, a larger F value does not give any extra benefit in the case of multi-objective optimization, and therefore its value can be chosen from the same value range as in the case of single-objective optimization. From the results of the multidimensional test problems, one can observe a non-linear relationship between the CR and F values: a larger F value can be used with a small CR value than with a large CR value. An explanation for this can be found in the theory of the single-objective DE algorithm. A formula for the relationship between the control parameters of DE and the evolution of the population variance has been derived in [21]. The change of the population variance between

Figure 1: Mean values of the number of unique non-dominated vectors in the final solution, generational distance (note the logarithmic scale), and spacing for SCH1, shown as surfaces in the control parameter space. The strange shape of the spacing surface is due to only a few non-dominated vectors when CR is large.

Figure 2: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for SCH2, shown as surfaces in the control parameter space.

successive generations due to the crossover and mutation operations is denoted by c, and its value is calculated as

    c = 2 F^2 CR - 2 CR / NP + CR^2 / NP + 1.

When c < 1, the crossover and mutation operations decrease the population variance. When c = 1, the variance does not change, and when c > 1, the variance increases. Since the selection operation of an EA usually decreases the population variance, c > 1 is recommended to prevent a too high convergence rate with poor search coverage, which typically results in premature convergence. On the other hand, if c is too large, the search process proceeds reliably but too slowly. When the size of the population is relatively large (e.g. NP > 50), the value of c depends mainly on the values of CR and F. The curves c = 1.0 and c = 1.5 for a fixed NP in the control parameter space are shown in Fig. 11. These curves are also shown with

Figure 3: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for FON, shown as surfaces in the control parameter space.

Figure 4: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for KUR, shown as surfaces in the control parameter space.

the GD and S surfaces in Figs. 1-10. As can be observed, the surface for c in Fig. 11 corresponds well with the surfaces for GD and S in most of the test problems, suggesting that the theory for the population variance between successive generations is applicable also in the case of multi-objective optimization with GDE. This is natural, since single-objective optimization can be seen as a special case of multi-objective optimization; a large c means a slow but reliable search, while a small c means the opposite. In general, the results are good with small CR and F values, which also mean a low c value close to one. One reason for the good performance with small control parameter values is that the solution of a multi-objective problem is usually a set of non-dominated vectors instead of a single vector, and the conflicting objectives of the problem reduce the overall selection pressure, maintain the diversity of the population, and prevent premature convergence. Another reason is that all the multidimensional test problems here are separable or almost separable. With strongly non-separable objective functions, a larger

Figure 5: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for POL, shown as surfaces in the control parameter space.

Figure 6: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for ZDT1, shown as surfaces in the control parameter space.

might work better. Also, most of the problems have relatively easy objective functions, leading to fast convergence with a small F, while a bigger F might be needed for harder functions with several local optima. The results for most of the ZDT problems show that the value of control parameter CR must be kept low to obtain good results. This is due to the fact that in the ZDT problems the first objective depends on one decision variable while the second objective depends on the rest of the decision variables. This easily leads to a situation where the first objective gets optimized faster than the second one, and the solution converges to a single point on the Pareto front, since GDE has no mechanism for maintaining the diversity of the solution. Therefore, if the optimization difficulty of the different objectives varies, then the value of CR must be close to zero to prevent the solution from converging to a single point of the Pareto-optimal front. Based on the results, our recommendations for the control parameter values are

Figure 7: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for ZDT2, shown as surfaces in the control parameter space.

Figure 8: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for ZDT3, shown as surfaces in the control parameter space.

CR ∈ [0.05, 0.8] and F ∈ [0.1, 0.6] for initial settings, and it is safer to use smaller values for CR. However, these recommendations are based on the results for the test functions used. Overall, it is wise to select values for the control parameters CR and F satisfying the condition 1.0 < c < 1.5.

5 Conclusions

An extension of Differential Evolution (DE), Generalized Differential Evolution (GDE), has been described. GDE has the same control parameters, the crossover control parameter CR and the mutation factor F, as the basic DE has. Suitable values for these control parameters have been reported earlier to be different from those usually used with the basic single-objective DE, but no extensive study had been performed.

Figure 9: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for ZDT4, shown as surfaces in the control parameter space.

Figure 10: Mean values of the number of unique non-dominated vectors in the final solution, generational distance, and spacing for ZDT6, shown as surfaces in the control parameter space.

Different control parameter values for GDE were tested using common bi-objective test problems and metrics measuring the convergence and diversity of the solutions. Based on the empirical results, suitable initial control parameter values are CR ∈ [0.05, 0.8] and F ∈ [0.1, 0.6]. The results also suggest that an F value larger than usually used in the case of single-objective optimization does not give any extra benefit in the case of multi-objective optimization. It also seems that in some cases it is better to use a smaller CR value to prevent the solution from converging to a single point of the Pareto front. However, these results are based on mostly separable problems with conflicting objectives. Also in the case of multi-objective optimization, a non-linear relationship between CR and F was observed, in accordance with the theory of the basic single-objective DE about the relationship between the control parameters and the evolution of the population variance. It is advisable to select values for CR and F satisfying the condition 1.0 < c < 1.5.
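The condition on c can be checked numerically. The sketch below assumes Zaharie's variance factor for DE's mutation and crossover as quoted in the text; the function names are illustrative:

```python
def variance_factor(CR, F, NP):
    """Zaharie's factor c: the expected ratio of population variances
    after vs. before DE's mutation and crossover operations."""
    return 2 * F**2 * CR - 2 * CR / NP + CR**2 / NP + 1

def in_recommended_region(CR, F, NP):
    """True if (CR, F) satisfies the recommended condition 1.0 < c < 1.5."""
    return 1.0 < variance_factor(CR, F, NP) < 1.5

# For example, with NP = 100, a small CR and a moderate F keep c just above 1,
# while a large CR combined with a large F pushes c well past 1.5.
```

Note that for CR = 0 the factor is exactly c = 1 regardless of F, and for large NP the 1/NP terms vanish, so c ≈ 2 F^2 CR + 1, which makes the trade-off between CR and F along a c-contour visibly non-linear.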

Figure 11: Curves c = 1.0 and c = 1.5 for a fixed NP in the control parameter space.

In the future, the correlation between c and the performance metrics should also be studied numerically. Furthermore, further investigation of the generalization of the theory of the evolution of the population variance to multi-objective problems is necessary. Extending the studies to strongly non-separable multi-objective problems and to problems with more than two objectives also remains to be done. Special cases, such as multi-objective problems without conflicting objectives and single-objective problems transformed into multi-objective form, might be interesting to investigate.

Acknowledgments

S. Kukkonen gratefully acknowledges support from the East Finland Graduate School in Computer Science and Engineering (ECSE), the Academy of Finland, the Technological Foundation of Finland, and the Finnish Cultural Foundation. He also wishes to thank Dr. K. Deb and all the KanGAL people for their help during his visit in spring 2005.

References

[1] Jouni Lampinen. DE's selection rule for multiobjective optimization. Technical report, Lappeenranta University of Technology, Department of Information Technology, 2001. [Online] Available: http://www.it.lut.fi/kurssit/-4/778/mode.pdf

[2] Jouni Lampinen. Multi-constrained nonlinear optimization by the Differential Evolution algorithm. Technical report, Lappeenranta University of Technology, Department of Information Technology, 2001. [Online] Available: http://www.it.lut.fi/kurssit/-4/778/decontr.pdf

[3] Jouni Lampinen. A constraint handling approach for the Differential Evolution algorithm. In Proceedings of the Congress on Evolutionary Computation (CEC 2002), pages 1468-1473, Honolulu, Hawaii, May 2002.

[4] Saku Kukkonen and Jouni Lampinen. Comparison of Generalized Differential Evolution algorithm to other multi-objective evolutionary algorithms. In Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2004), page 445, Jyväskylä, Finland, July 2004.

[5] Rainer Storn and Kenneth V. Price. Differential Evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4):341-359, Dec 1997.

[6] Kenneth V. Price, Rainer Storn, and Jouni Lampinen. Differential Evolution: A Practical Approach to Global Optimization. Springer-Verlag, Berlin, 2005.

[7] Saku Kukkonen and Jouni Lampinen. A Differential Evolution algorithm for constrained multi-objective optimization: Initial assessment. In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications, page 96, Innsbruck, Austria, Feb 2004.

[8] Saku Kukkonen and Jouni Lampinen. Mechanical component design for multiple objectives using Generalized Differential Evolution. In Proceedings of the 6th International Conference on Adaptive Computing in Design and Manufacture (ACDM 2004), Bristol, United Kingdom, April 2004.

[9] Kenneth V. Price. New Ideas in Optimization, chapter An Introduction to Differential Evolution, pages 79-108. McGraw-Hill, London, 1999.

[10] Rainer Storn. System design by constraint adaptation and Differential Evolution. IEEE Transactions on Evolutionary Computation, 3(1):22-34, April 1999.

[11] Feng-Sheng Wang and Jyh-Woei Sheu. Multiobjective parameter estimation problems of fermentation processes using a high ethanol tolerance yeast. Chemical Engineering Science, 55(18):3685-3695, Sept 2000.

[12] Hussein A. Abbass and Ruhul Sarker. The Pareto Differential Evolution algorithm. International Journal on Artificial Intelligence Tools, 11(4):531-552, 2002.

[13] Hussein A. Abbass. The self-adaptive Pareto Differential Evolution algorithm. In Proceedings of the Congress on Evolutionary Computation (CEC 2002), pages 831-836, Honolulu, Hawaii, May 2002.

[14] Nateri K. Madavan. Multiobjective optimization using a Pareto Differential Evolution approach. In Proceedings of the Congress on Evolutionary Computation (CEC 2002), pages 1145-1150, Honolulu, Hawaii, May 2002.

[15] B. V. Babu and M. Mathew Leenus Jehan. Differential Evolution for multi-objective optimization. In Proceedings of the Congress on Evolutionary Computation (CEC 2003), pages 2696-2703, Canberra, Australia, Dec 2003.

[16] Feng Xue, Arthur C. Sanderson, and Robert J. Graves. Multi-objective Differential Evolution and its application to enterprise planning. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3535-3541, Taiwan, May 2003.

[17] Efrén Mezura-Montes, Carlos A. Coello Coello, and Edy I. Tun-Morales. Simple feasibility rules and Differential Evolution for constrained optimization. In Proceedings of the 3rd Mexican International Conference on Artificial Intelligence (MICAI 2004), pages 707-716, Mexico City, Mexico, April 2004.

[18] Tea Robič and Bogdan Filipič. DEMO: Differential Evolution for multiobjective optimization. In Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization (EMO 2005), pages 520-533, Guanajuato, Mexico, March 2005.

[19] Kalyanmoy Deb. Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, Chichester, England, 2001.

[20] Carlos A. Coello Coello, David A. Van Veldhuizen, and Gary B. Lamont. Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic, New York, United States of America, 2002.

[21] Daniela Zaharie. Critical values for the control parameters of Differential Evolution algorithms. In Proceedings of MENDEL 2002, 8th International Conference on Soft Computing, pages 62-67, Brno, Czech Republic, June 2002.