Constrained Optimization by the ε Constrained Differential Evolution with Gradient-Based Mutation and Feasible Elites

2006 IEEE Congress on Evolutionary Computation, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, July 16-21, 2006

Constrained Optimization by the ε Constrained Differential Evolution with Gradient-Based Mutation and Feasible Elites

Tetsuyuki Takahama, Member, IEEE, and Setsuko Sakai, Member, IEEE

(T. Takahama is with the Department of Intelligent Systems, Hiroshima City University, Asaminami-ku, Hiroshima, 731-3194 Japan; e-mail: takahama@its.hiroshima-cu.ac.jp. S. Sakai is with the Faculty of Commercial Sciences, Hiroshima Shudo University, Asaminami-ku, Hiroshima, 731-3195 Japan; e-mail: setuko@shudo-u.ac.jp.)

Abstract - While research on constrained optimization using evolutionary algorithms has been actively pursued, it has had to face the problems that the ability to solve multi-modal problems, which have many local solutions within the feasible region, is insufficient, that the ability to solve problems with equality constraints is inadequate, and that the stability and efficiency of searches is low. We propose the εDE, defined by applying the ε constrained method to differential evolution (DE). DE is a simple, fast and stable population-based search algorithm that is robust to multi-modal problems. The εDE is improved so as to solve problems with many equality constraints by introducing a gradient-based mutation, which finds a feasible point from an infeasible point using the gradient of the constraints. The εDE is also improved so as to find feasible solutions faster by introducing elitism, in which highly feasible points are preserved as feasible elites. The improved εDE realizes stable and efficient searches that can solve multi-modal problems and problems with equality constraints. The advantage of the εDE is shown by applying it to twenty-four constrained problems of various types.

I. INTRODUCTION

There are many studies on solving constrained optimization problems using evolutionary algorithms (EAs) [1], [2]. However, these studies have had to address some problems:
1) The ability to solve multi-modal problems is insufficient. When multi-modal problems, which have many local solutions in the feasible region, are solved, even if the EAs can locate the feasible region, they are sometimes trapped by a local solution and cannot find an optimal solution.
2) The solutions obtained for problems with equality constraints are inadequate. Many EAs for constrained optimization handle equality constraints by converting them into relaxed inequality constraints. As a result, the feasibility of the obtained solutions is inadequate. Also, it is difficult for them to solve problems with many equality constraints.
3) The stability and efficiency of searches is low. Sometimes EAs cannot overcome the effect of randomness in the search process of some problems, so the stability of the search becomes low. Also, many EAs need rank-based selection or replacement, stochastic selection, and mutations based on Gaussian or Cauchy distributions, which incur high computational costs.

To overcome these problems, we propose the εDE, defined by applying the ε constrained method [3] to differential evolution (DE) [4]. By incorporating DE, problems 1) and 3) can be solved: DE is a simple, fast and stable search algorithm that is robust to multi-modal problems. The εDE is stable because it uses a simple and stable selection and replacement mechanism excluding stochastic operations.
The εDE is also efficient because it uses simple arithmetic operations and does not use any rank-based operations or mutations based on Gaussian and Cauchy distributions. Problem 2) is solved by using a simple way of controlling the relaxation of the equality constraints, which lets the ε constrained method solve problems with equality constraints directly. But it is still difficult for the εDE to solve problems with many equality constraints. The εDE is therefore improved by introducing a gradient-based mutation, which finds a feasible point from an infeasible point using the gradient of the constraints at the infeasible point. Also, it is difficult for the εDE to find feasible points in earlier generations, because it finds infeasible points with better objective values until the relaxation of the equality constraints is reduced completely, i.e., becomes zero. The εDE is improved to find feasible solutions faster by introducing elitism, in which highly feasible points are preserved as feasible elites. The improved εDE realizes stable and efficient searches that can solve multi-modal problems and those with many equality constraints. The advantage of the εDE is shown by applying it to twenty-four constrained problems of various types.

II. PREVIOUS WORKS

EAs for constrained optimization can be classified into several categories by the way the constraints are treated:
1) Constraints are only used to decide whether a search point is feasible or not [5], [6].
2) The constraint violation, which is the sum of the violations of all constraint functions, is combined with the objective function. Approaches in this category are called penalty function methods.
3) The constraint violation and the objective function are used separately and are optimized separately [7]-[10]. For example, Takahama and Sakai proposed the α constrained method [11], [12] and the ε constrained method [3]. These methods adopt a lexicographic order, in which the constraint violation precedes the objective function, with relaxation of the constraints; through this relaxation the methods can effectively optimize problems with equality constraints. Deb [7] proposed a method using an extended objective function that realizes the lexicographic ordering. Runarsson and Yao [8] proposed a stochastic ranking method based on ES that uses a stochastic lexicographic order, ignoring constraint violations with some probability. These methods have been successfully applied to various problems.
4) The constraints and the objective function are optimized by multiobjective optimization methods [13]-[16].

In category 1), generating initial feasible points is difficult and computationally demanding when the feasible region is very small. Especially if the problem has equality constraints, it is almost impossible to find initial feasible points. In category 2), it is difficult to select an appropriate value for the penalty coefficient that adjusts the strength of the penalty. Several methods that dynamically control the penalty coefficient have been proposed [17]-[19]. However, the ideal control of the coefficient is problem dependent [20], and it is difficult to determine a general control scheme. In category 4), the difficulty is that solving a multiobjective optimization problem is a more difficult and expensive task than solving a single-objective optimization problem.

In this paper, we investigate the ε constrained method, which belongs to the promising category 3). The α and ε constrained methods are new types of transformation methods that convert an algorithm for unconstrained optimization into an algorithm for constrained optimization by replacing the ordinary comparisons in direct search methods with the α and ε level comparisons; we call them algorithm transformation methods. The α constrained method has been applied to Powell's method, the nonlinear simplex method of Nelder and Mead, a genetic algorithm and particle swarm optimization (PSO), yielding the α constrained Powell's method [11], [12], the α constrained simplex method [21], [22], the αGA [23] and the αPSO [24]. The ε constrained method has been applied to PSO, yielding the εPSO [3] and the ε constrained hybrid algorithm [25]. Of these methods, the α constrained simplex method with mutations [22] is the best, as it searches for solutions stably and efficiently. However, it is often trapped by local solutions in multi-modal problems, and then an optimal solution cannot be found. In contrast, the εDE can stably solve multi-modal problems.

III. THE ε CONSTRAINED METHOD

In this section, the ε constrained method [3] is described.

A. Constrained optimization problems

The general constrained optimization problem (P) with inequality, equality, upper-bound and lower-bound constraints is defined as follows:

(P):  minimize f(x)
      subject to g_j(x) ≤ 0, j = 1, ..., q
                 h_j(x) = 0, j = q+1, ..., m
                 l_i ≤ x_i ≤ u_i, i = 1, ..., n                    (1)

where x = (x_1, x_2, ..., x_n) is an n-dimensional vector of decision variables, f(x) is the objective function, the g_j(x) ≤ 0 are q inequality constraints and the h_j(x) = 0 are m − q equality constraints. The functions f, g_j and h_j are linear or nonlinear real-valued functions. The values u_i and l_i are the upper and lower bounds of x_i, respectively; together they define the search space S. The inequality and equality constraints define the feasible region F, and feasible solutions exist in F ⊆ S.

B. The constraint violation

We introduce the constraint violation φ(x) to indicate by how much a search point x violates the constraints; φ(x) is any function satisfying

φ(x) = 0  ⟺  x ∈ F,    φ(x) > 0  ⟺  x ∉ F.                        (2)

Some types of constraint violation, which are also adopted as penalties in penalty function methods, can be defined as follows:

φ(x) = max { max_j max{0, g_j(x)},  max_j |h_j(x)| }              (3)

φ(x) = Σ_{j=1..q} max{0, g_j(x)}^p + Σ_{j=q+1..m} |h_j(x)|^p      (4)

where p is a positive number. In this paper, the simple sum of constraint violations with p = 1 is used.
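As a concrete illustration, the sum form of Eq. (4) with p = 1 amounts to only a few lines of C. The sketch below assumes the caller has already evaluated the q inequality constraints into an array g and the m − q equality constraints into an array h; the function name and array layout are ours, not the paper's.

    #include <math.h>

    /* Constraint violation of Eq. (4) with p = 1: the simple sum of the
       inequality violations max(0, g_j(x)) and the equality magnitudes
       |h_j(x)|.  g[0..q-1] and h[0..mq-1] hold the constraint values at x,
       computed by the caller. */
    double violation(const double *g, int q, const double *h, int mq)
    {
        double phi = 0.0;
        for (int j = 0; j < q; j++)    /* g_j(x) <= 0 violated by max(0, g_j(x)) */
            phi += fmax(0.0, g[j]);
        for (int j = 0; j < mq; j++)   /* h_j(x) = 0 violated by |h_j(x)| */
            phi += fabs(h[j]);
        return phi;                    /* phi(x) = 0 exactly when x is feasible */
    }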
C. The ε level comparison

The ε level comparison is defined as an order relation on the set of pairs (f(x), φ(x)). The ε level comparisons are defined basically as a lexicographic order in which φ(x) precedes f(x), because the feasibility of x is more important than the minimization of f(x). Let f_1 (f_2) and φ_1 (φ_2) be the function value and the constraint violation at a point x_1 (x_2). Then, for any ε satisfying ε ≥ 0, the ε level comparisons <_ε and ≤_ε between (f_1, φ_1) and (f_2, φ_2) are defined as follows:

(f_1, φ_1) <_ε (f_2, φ_2)  ⟺  f_1 < f_2,  if φ_1, φ_2 ≤ ε
                               f_1 < f_2,  if φ_1 = φ_2
                               φ_1 < φ_2,  otherwise                (5)

(f_1, φ_1) ≤_ε (f_2, φ_2)  ⟺  f_1 ≤ f_2,  if φ_1, φ_2 ≤ ε
                               f_1 ≤ f_2,  if φ_1 = φ_2
                               φ_1 < φ_2,  otherwise                (6)

In the case of ε = 0, <_0 and ≤_0 are equivalent to the lexicographic order in which the constraint violation φ(x) precedes the function value f(x). In the case of ε = ∞, the ε level comparisons <_∞ and ≤_∞ are equivalent to the ordinal comparisons < and ≤ between function values.
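The ε level comparison itself is cheap to implement. The following C predicate is a minimal sketch of Eq. (5); the non-strict order of Eq. (6) differs only in comparing the objective values with <= instead of <.

    /* epsilon level comparison <_eps of Eq. (5): a lexicographic order in
       which the violation phi precedes the objective f, except that any
       violation at or below eps is treated as feasible.  Returns nonzero
       iff (f1, phi1) <_eps (f2, phi2).  Illustrative sketch only. */
    int eps_less(double f1, double phi1, double f2, double phi2, double eps)
    {
        if ((phi1 <= eps && phi2 <= eps) || phi1 == phi2)
            return f1 < f2;   /* both eps-feasible, or equally infeasible */
        return phi1 < phi2;   /* otherwise the smaller violation wins */
    }

Note that when φ alone decides the order, f is never inspected, so an implementation can skip evaluating the objective function for such trial vectors; this is the source of the reduced "substantial" evaluation counts reported in Section V.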

D. The properties of the ε constrained method

An optimization problem solved by the ε constrained method, that is, a problem in which the ordinary comparisons are replaced with the ε level comparisons, (P_ε), is defined as follows:

(P_ε):  minimize_ε f(x)                                            (7)

where minimize_ε denotes minimization based on the ε level comparison. Also, a problem (P^ε) is defined in which the constraint of (P), that is, φ(x) = 0, is relaxed and replaced with φ(x) ≤ ε:

(P^ε):  minimize f(x)  subject to φ(x) ≤ ε                         (8)

It was shown that a constrained optimization problem (P^ε) can be transformed into an equivalent unconstrained optimization problem (P_ε) using the ε level comparisons. So, if the ε level comparisons are incorporated into an existing unconstrained optimization method, constrained optimization problems can be solved: the ε constrained method converts an algorithm for unconstrained optimization into an algorithm for constrained optimization by replacing the ordinary comparisons with the ε level comparisons. Also, it was shown that, in the ε constrained method, an optimal solution of the original problem can be obtained by converging the ε level to 0, in a fashion similar to increasing the penalty coefficient to infinity in the penalty function method.

IV. THE IMPROVED εDE

A. Differential Evolution

Differential evolution is a variant of evolution strategies proposed by Storn and Price [4]. DE is a stochastic direct search method using a population of multiple search points. DE has been successfully applied to optimization problems including non-linear, non-differentiable, non-convex and multi-modal functions, and it has been shown that DE is fast and robust on such functions.

In DE, initial individuals are randomly generated within the search space and form the initial population. Each individual contains n genes as decision variables, i.e., a decision vector. At each generation or iteration, all individuals are selected as parents, and each parent is processed as follows. The mutation process begins by choosing 1 + 2·num individuals from the parents, excluding the parent being processed. The first individual is the base vector; all subsequent individuals are paired to create num difference vectors. The difference vectors are scaled by the scaling factor F and added to the base vector. The resulting vector is then mated, or recombined, with the parent; the probability of recombination at each element is controlled by the crossover factor CR. This crossover process produces a trial vector. Finally, for survivor selection, the trial vector is accepted for the next generation if it is better than the parent.

In this study, the DE/rand/1/exp variant, which adopts exponential crossover and num = 1 difference vector, is used. The other variant, DE/rand/1/bin, which adopts binomial crossover, has been studied well. However, in our experience exponential crossover can optimize more types of problems with the same value of the crossover factor than binomial crossover can, so we decided to adopt exponential crossover.

B. The εDE with gradient-based mutation and feasible elites

The εDE is the algorithm in which the ε constrained method is applied to DE, that is, the ordinary comparisons in DE are replaced with the ε level comparisons. The gradient-based mutation is an operation similar to the gradient-based repair method proposed by Chootinan and Chen [26]. The vector of constraint functions C(x), the vector of constraint violations ΔC(x) and the increment Δx of a point x needed to satisfy the constraints are related as follows:

∇C(x) Δx = −ΔC(x)                                                  (9)
Δx = −∇C(x)^{−1} ΔC(x)                                             (10)

where C(x) = (g_1(x), ..., g_q(x), h_{q+1}(x), ..., h_m(x))ᵀ, ΔC(x) = (Δg_1(x), ..., Δg_q(x), h_{q+1}(x), ..., h_m(x))ᵀ and Δg_j(x) = max{0, g_j(x)}. Although the gradient matrix ∇C is not invertible in general, the Moore-Penrose inverse, or pseudoinverse, ∇C(x)⁺ [27], which gives an approximate or best (least-squares) solution to a system of linear equations, can be used instead in Eq. (10). The pseudoinverse of a matrix A can be obtained using the singular value decomposition A = UΣVᵀ as A⁺ = VΣ⁺Uᵀ, where Σ⁺ is given by inverting each non-zero element on the diagonal of Σ.
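A sketch of one repair step of Eq. (10) in C is shown below. It assumes an SVD-based routine pseudo_inverse() (for instance, a wrapper around a linear-algebra library such as LAPACK's dgesvd); that routine and all the identifier names are our placeholders, not part of the paper's code.

    #include <stdlib.h>

    /* Computes the n-by-k Moore-Penrose inverse of the k-by-n matrix A
       (row-major).  Hypothetical helper, e.g., built on an SVD routine. */
    extern void pseudo_inverse(const double *A, int rows, int cols, double *Aplus);

    /* One gradient-based mutation step of Eq. (10): x <- x - pinv(gradC)*dC,
       where gradC is the k-by-n Jacobian of the considered constraints
       (violated inequalities plus all equalities) and dC holds their
       violations Delta-C(x). */
    void gradient_mutation(double *x, int n, const double *gradC,
                           const double *dC, int k)
    {
        double *pinv = malloc(sizeof(double) * (size_t)n * k);  /* n-by-k */
        pseudo_inverse(gradC, k, n, pinv);
        for (int i = 0; i < n; i++) {
            double dxi = 0.0;
            for (int j = 0; j < k; j++)
                dxi -= pinv[i * k + j] * dC[j];   /* dx = -pinv(gradC) * dC */
            x[i] += dxi;
        }
        free(pinv);
    }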
This mutation, or repair, operation x_new = x + Δx is executed with the gradient-based mutation probability Pg. In [26], only the non-zero elements of ΔC(x) are repaired, and the repair operation is repeated with some probability while the amount of repair is not small. In this study, however, the violated inequality constraints and all equality constraints are considered, in order to keep the feasibility of the equality constraints, and the mutation operation is repeated a fixed number of times Rg while the point remains infeasible.

In the feasible elites preserving strategy, the Ne most feasible points, i.e., those with the minimum constraint violation, are preserved as elites in the initial generation. At each generation, the base vector and the difference vectors are selected not only from the population but also from the feasible elites. When a trial vector is generated, it is compared with the feasible elites: if the trial vector has a smaller constraint violation than the worst feasible elite, that elite is replaced with the vector. This strategy is applied only while the ε level is not zero; when the ε level is zero, more feasible individuals survive naturally and the elites preserving strategy is not needed.

C. Controlling the ε level

Usually, the ε level does not need to be controlled; many constrained problems can be solved based on the lexicographic order, in which the ε level is constantly 0. For problems with equality constraints, however, the ε level should be controlled properly to obtain high-quality solutions. In this study, a simple way of controlling the ε level is defined according to Eq. (11): the ε level is updated until the number of iterations t reaches the control generation Tc, and after t exceeds Tc the ε level is set to 0 to obtain solutions with minimum constraint violation:

ε(0) = φ(x_θ)
ε(t) = ε(0) (1 − t/Tc)^{cp},  0 < t < Tc
ε(t) = 0,                     t ≥ Tc                               (11)

where x_θ is the top θ-th individual ranked by constraint violation, θ = 0.2N, and cp is a parameter that controls the speed of reducing the relaxation of the constraints.
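In C, the schedule of Eq. (11) reduces to the following small function, a direct transcription under the notation above:

    #include <math.h>

    /* epsilon level schedule of Eq. (11).  eps0 = phi(x_theta) is the
       violation of the top theta-th individual of the initial population
       (theta = 0.2 N), Tc is the control generation and cp controls how
       fast the relaxation of the constraints is reduced. */
    double eps_level(double eps0, int t, int Tc, double cp)
    {
        if (t >= Tc)
            return 0.0;   /* pure lexicographic order from generation Tc on */
        return eps0 * pow(1.0 - (double)t / Tc, cp);
    }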

D. The algorithm of the improved εDE

The algorithm of the improved εDE based on the DE/rand/1/exp variant, which is used in this study, is as follows:

Step 0: Initialization. N initial individuals x_i are generated as the initial search points, forming the initial population P(0) = {x_i | i = 1, 2, ..., N}. The initial ε level is given by the ε level control function ε(0). If ε(0) is not zero, the Ne most feasible points, which have the minimum constraint violation, are preserved as the elites P_e(0) ⊆ P(0).
Step 1: Termination condition. If the number of generations (iterations) exceeds the maximum generation Tmax, the algorithm is terminated.
Step 2: Mutation. For each individual x_i in P(t), three different individuals x_p1, x_p2 and x_p3, none of which overlaps x_i, are chosen from the population P(t) ∪ P_e(t). A new vector x' is generated from the base vector x_p1 and the difference vector x_p2 − x_p3 as follows:

x' = x_p1 + F (x_p2 − x_p3)                                        (12)

where F is the scaling factor.
Step 3: Crossover. The vector x' is recombined with the parent x_i. A crossover point j is chosen randomly from all dimensions [1, n]. The element at the j-th dimension of the trial vector x_new is inherited from the j-th element of x', and the elements of the subsequent dimensions are inherited from x' with exponentially decreasing probability defined by the crossover factor CR. In actual processing, Step 2 and Step 3 are integrated into one operation, as in the sketch below.
Step 4: Gradient-based mutation. If x_new is not ε-feasible, i.e., φ(x_new) > ε(t), x_new is changed by the gradient-based mutation with probability Pg: x_new is replaced by x_new − ∇C(x_new)⁺ ΔC(x_new) until the number of replacements reaches Rg or x_new becomes ε-feasible, i.e., φ(x_new) ≤ ε(t).
Step 5: Survivor selection. The trial vector x_new is accepted for the next generation if it is better than the parent x_i under the ε level comparison. Also, if the trial vector has a smaller constraint violation than the worst feasible elite in P_e(t), that elite is replaced with the vector.
Step 6: Controlling the ε level. The ε level is updated by the ε level control function ε(t). If ε(t) is zero, the population of elites is emptied: P_e(t) = ∅.
Step 7: Go back to Step 1.
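Steps 2 and 3 correspond to the standard DE/rand/1/exp trial generation. The following C sketch integrates them as the text suggests; rand01() is our stand-in for whatever uniform random number generator the real implementation uses.

    #include <stdlib.h>

    /* Uniform random number in [0, 1); placeholder generator. */
    static double rand01(void) { return rand() / (RAND_MAX + 1.0); }

    /* DE/rand/1/exp trial generation (Steps 2 and 3): base vector xp1 plus
       the F-scaled difference xp2 - xp3, recombined with the parent by
       exponential crossover with factor CR.  Starting at a random dimension,
       mutant genes are copied while the CR test keeps passing. */
    void de_rand_1_exp(const double *parent, const double *xp1,
                       const double *xp2, const double *xp3,
                       double *trial, int n, double F, double CR)
    {
        for (int i = 0; i < n; i++)
            trial[i] = parent[i];                 /* inherit from the parent */
        int j = rand() % n;                       /* random crossover point */
        int L = 0;
        do {
            trial[j] = xp1[j] + F * (xp2[j] - xp3[j]);  /* Eq. (12) element */
            j = (j + 1) % n;
            L++;
        } while (L < n && rand01() < CR);
    }

Because the do-while body runs at least once, at least one element always comes from the mutant vector, so the trial vector never duplicates its parent exactly.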
V. EXPERIMENTAL RESULTS

The twenty-four benchmark problems and the format for reporting results are specified in "Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization" [28].

A. PC Configuration

The ε constrained differential evolution with gradient-based mutation and feasible elites was executed on the following system. System: Cygwin on a Dynabook SS3500; CPU: Mobile Pentium III 1.33GHz; RAM: 240MB; Language: C.

B. Parameter Settings

a) All parameters to be adjusted: The parameters for DE are the population size N, the scaling factor F and the crossover factor CR. The parameters for the ε constrained method are the control generation Tc and the control factor cp. The others are the gradient-based mutation rate Pg, the number of mutation repetitions Rg and the number of feasible elites Ne.
b) Corresponding dynamic ranges: The dynamic ranges have not been studied thoroughly, but we recommend N ∈ [2n, 20n], cp ∈ [2, 10], Pg ∈ [0.01, 0.1] and Ne ∈ [1, 0.2N], with Tc starting from about 0.1 Tmax, F and CR taken below 1, and Rg a small number of repetitions starting from 1.
c) Guidelines on how to adjust the parameters: Higher N, higher F, higher CR, higher Tc, lower cp, higher Pg, higher Rg and lower Ne make the search process more robust, but slower.
d) Estimated cost of parameter tuning in terms of the number of function evaluations (FES): The cost of parameter tuning is not high, because the following settings work well on many problems.
e) Actual parameter values used: N = 40, F = 0.7, CR = 0.9, Tc = 0.2 Tmax = 2500 (Tmax = 12500), cp = 5 and Pg = 0.01; Rg and Ne were fixed at small single-digit values.

C. Results Achieved

Tables I, II, III and IV show the best, worst, median, mean and standard deviation of the difference between the best value found f(x_best) and the optimal value f(x*), i.e., f(x_best) − f(x*), after 5×10³, 5×10⁴ and 5×10⁵ FES over 25 independent runs. Numbers in parentheses after the objective function values show the corresponding number of violated constraints. The numbers of constraints that are infeasible at the median solution by more than 1, 0.01 and 0.0001 are shown in c, and the mean violation of the median solution is shown in v̄. Table V shows the number of FES needed to satisfy the success condition, f(x) − f(x*) ≤ 0.0001 with x feasible. The ratio of runs in which feasible solutions or successful solutions were found, and the estimated FES needed to find successful solutions, are shown as well. In the εDE, the objective function and the constraint violation

are treated separately, and the evaluation of the objective function can often be omitted when the ε level comparison can be decided by the constraint violation alone. Numbers in parentheses after the estimated FES show, respectively, the substantial number of evaluations of the objective function and the number of evaluations of the gradient matrix needed to find successful solutions.

Tables I-IV show that the εDE succeeded in finding feasible and (near-)optimal solutions for all problems in all runs, except for g20 and g22. For g22, the εDE found feasible solutions in all runs and found better objective values than the old best known objective value in all runs. Although the εDE could not find the new best known solution, it found objective values less than 240 in 4 runs and less than 250 in 10 runs; also, the new best known objective value for g22 reported in [28] was found by the εDE. For g20, the εDE could not find any feasible solutions. However, the εDE improved on the mean constraint violation (v̄) of the old best known solution and found a more feasible solution, which appears in [28]. So the εDE succeeded in finding more feasible solutions.

Figures 1, 2, 3 and 4 show the graphs of f(x) − f(x*) and v̄ over FES at the median run. Table V and Figures 1-4 show that the εDE could find near-optimal solutions very fast: the success performance is less than 5,000 FES for 3 problems, less than 50,000 for 9 problems, less than 100,000 for 16 problems and less than 150,000 for 20 problems. In particular, the substantial number of evaluations of the objective function is less than 5,000 for 5 problems and less than 50,000 for 21 problems. Also, although the εDE needs the calculation of the gradient matrix, the number of evaluations of the matrix is fairly small.

Table VI shows the time for 10,000 evaluations of all functions (T1), the time of the algorithm for 10,000 FES (T2) and their ratio. The value (T2 − T1)/T1 shows that the algorithm of the εDE is very efficient and that the evaluation of the gradient matrix is not time-consuming.

TABLE VI. COMPUTATIONAL COMPLEXITY
T1        T2        (T2 − T1)/T1
0.7554    1.1093    0.4685

VI. CONCLUSIONS

Differential evolution is a recently proposed variant of an evolution strategy. In this study, we proposed the εDE, obtained by applying the ε constrained method to DE, and showed that the εDE can solve constrained optimization problems. Also, to solve problems with many equality constraints faster, which are very difficult problems for numerical optimization, we proposed the gradient-based mutation and the feasible elites preserving strategy. We showed that the εDE could solve 23 out of the 24 benchmark problems very fast. In the future, we will apply the εDE to various real-world problems that have large numbers of decision variables and constraints.

ACKNOWLEDGMENTS

This research is supported in part by Grants-in-Aid for Scientific Research (C) (No. 16500083, 17510139) of the Japan Society for the Promotion of Science.

TABLE I. ERROR VALUES ACHIEVED WHEN FES = 5×10³, 5×10⁴ AND 5×10⁵ FOR PROBLEMS g01-g06. [Tabulated values not recoverable from this transcription.]

TABLE II. ERROR VALUES ACHIEVED WHEN FES = 5×10³, 5×10⁴ AND 5×10⁵ FOR PROBLEMS g07-g12. [Tabulated values not recoverable from this transcription.]

TABLE III. ERROR VALUES ACHIEVED WHEN FES = 5×10³, 5×10⁴ AND 5×10⁵ FOR PROBLEMS g13-g18. [Tabulated values not recoverable from this transcription.]

TABLE IV. ERROR VALUES ACHIEVED WHEN FES = 5×10³, 5×10⁴ AND 5×10⁵ FOR PROBLEMS g19-g24. [Tabulated values not recoverable from this transcription.]

TABLE V. NUMBER OF FES TO ACHIEVE THE FIXED ACCURACY LEVEL (f(x) − f(x*) ≤ 0.0001), SUCCESS RATE, FEASIBLE RATE AND SUCCESS PERFORMANCE. [Per-problem FES counts not recoverable from this transcription. The feasible rate and the success rate are 100% for all problems except g20 (feasible rate 0%, success rate 0%) and g22 (feasible rate 100%, success rate 0%).]

[Fig. 1. Convergence graphs of f(x) − f(x*) and v̄ over FES for problems g01-g06.]

[Fig. 2. Convergence graphs of f(x) − f(x*) and v̄ over FES for problems g07-g12.]

[Fig. 3. Convergence graphs of f(x) − f(x*) and v̄ over FES for problems g13-g18.]

[Fig. 4. Convergence graphs of f(x) − f(x*) and v̄ over FES for problems g19-g24.]

REFERENCES

[1] Z. Michalewicz, "A survey of constraint handling techniques in evolutionary computation methods," in Proceedings of the 4th Annual Conference on Evolutionary Programming. Cambridge, MA: MIT Press, 1995, pp. 135-155.
[2] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245-1287, Jan. 2002.
[3] T. Takahama and S. Sakai, "Constrained optimization by ε constrained particle swarm optimizer with ε-level control," in Proc. of the 4th IEEE International Workshop on Soft Computing as Transdisciplinary Science and Technology (WSTST'05), Muroran, Japan, May 2005, pp. 1019-1029.
[4] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[5] Z. Michalewicz and G. Nazhiyath, "Genocop III: A co-evolutionary algorithm for numerical optimization problems with nonlinear constraints," in Proceedings of the Second IEEE International Conference on Evolutionary Computation. IEEE Press, 1995, pp. 647-651.
[6] A. I. El-Gallad, M. E. El-Hawary, and A. A. Sallam, "Swarming of intelligent particles for solving the nonlinear constrained optimization problem," Engineering Intelligent Systems for Electrical Engineering and Communications, vol. 9, no. 3, pp. 155-163, Sept. 2001.
[7] K. Deb, "An efficient constraint handling method for genetic algorithms," Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2/4, pp. 311-338, 2000.
[8] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284-294, Sept. 2000.
[9] E. Mezura-Montes and C. A. C. Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1-17, Feb. 2005.
[10] S. Venkatraman and G. G. Yen, "A generic framework for constrained optimization using genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 424-435, Aug. 2005.
[11] T. Takahama and S. Sakai, "Learning fuzzy control rules by α constrained Powell's method," in Proceedings of the 1999 IEEE International Fuzzy Systems Conference, vol. 2, Seoul, Korea, Aug. 1999, pp. 650-655.
[12] T. Takahama and S. Sakai, "Tuning fuzzy control rules by the α constrained method which solves constrained nonlinear optimization problems," Electronics and Communications in Japan, Part 3: Fundamental Electronic Science, vol. 83, no. 9, pp. 1-12, Sept. 2000.
[13] E. Camponogara and S. N. Talukdar, "A genetic algorithm for constrained and multiobjective optimization," in 3rd Nordic Workshop on Genetic Algorithms and Their Applications (3NWGA), J. T. Alander, Ed. Vaasa, Finland: University of Vaasa, Aug. 1997, pp. 49-62.
[14] P. D. Surry and N. J. Radcliffe, "The COMOGA method: Constrained optimisation by multiobjective genetic algorithms," Control and Cybernetics, vol. 26, no. 3, pp. 391-412, 1997.
[15] T. P. Runarsson and X. Yao, "Evolutionary search and constraint violations," in Proceedings of the 2003 Congress on Evolutionary Computation, vol. 2. Piscataway, NJ: IEEE Service Center, Dec. 2003, pp. 1414-1419.
[16] A. H. Aguirre, S. B. Rionda, C. A. C. Coello, G. L. Lizárraga, and E. M. Montes, "Handling constraints using multiobjective optimization concepts," International Journal for Numerical Methods in Engineering, vol. 59, no. 15, pp. 1989-2017, Apr. 2004.
[17] J. Joines and C. Houck, "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs," in Proceedings of the First IEEE Conference on Evolutionary Computation, D. Fogel, Ed. Orlando, FL: IEEE Press, 1994, pp. 579-584.
[18] S. Kazarlis and V. Petridis, "Varying fitness functions in genetic algorithms: Studying the rate of increase of the dynamic penalty terms," in Proceedings of the 5th Parallel Problem Solving from Nature (PPSN V), Amsterdam, The Netherlands. Heidelberg, Germany: Springer-Verlag, Sept. 1998, pp. 211-220.
[19] R. Farmani and J. A. Wright, "Self-adaptive fitness formulation for constrained optimization," IEEE Transactions on Evolutionary Computation, vol. 7, no. 5, pp. 445-455, Oct. 2003.
[20] C. Reeves, "Genetic algorithms for the operations researcher," INFORMS Journal on Computing, vol. 9, no. 3, pp. 231-250, 1997.
[21] T. Takahama and S. Sakai, "Learning fuzzy control rules by α constrained simplex method," Systems and Computers in Japan, vol. 34, no. 6, pp. 80-90, June 2003.
[22] T. Takahama and S. Sakai, "Constrained optimization by applying the α constrained method to the nonlinear simplex method with mutations," IEEE Transactions on Evolutionary Computation, vol. 9, no. 5, pp. 437-451, Oct. 2005.
[23] T. Takahama and S. Sakai, "Constrained optimization by α constrained genetic algorithm (αGA)," Systems and Computers in Japan, vol. 35, no. 5, pp. 11-22, May 2004.
[24] T. Takahama and S. Sakai, "Constrained optimization by the α constrained particle swarm optimizer," Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 9, no. 3, pp. 282-289, May 2005.
[25] T. Takahama, S. Sakai, and N. Iwane, "Constrained optimization by the ε constrained hybrid algorithm of particle swarm optimization and genetic algorithm," in Proc. of the 18th Australian Joint Conference on Artificial Intelligence. Springer-Verlag, Dec. 2005, pp. 389-400.
[26] P. Chootinan and A. Chen, "Constraint handling in genetic algorithms using a gradient-based repair method," Computers & Operations Research, vol. 33, no. 8, pp. 2263-2281, Aug. 2006.
[27] S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations. Dover Publications, 1979.
[28] J. J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. N. Suganthan, C. A. C. Coello, and K. Deb, "Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization," 2006. [Online]. Available: http://www.ntu.edu.sg/home/epnsugan/cec2006/technical report.pdf