Self-adaptive Differential Evolution Algorithm for Constrained Real-Parameter Optimization

Similar documents
Differential Evolution Algorithm with a Modified Archiving-based Adaptive Tradeoff Model for Optimal Power Flow

The Study of Teaching-learning-based Optimization Algorithm

Solving of Single-objective Problems based on a Modified Multiple-crossover Genetic Algorithm: Test Function Study

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

Markov Chain Monte Carlo Lecture 6

Multi-agent system based on self-adaptive differential evolution for solving dynamic optimization problems

Computing Correlated Equilibria in Multi-Player Games

Winter 2008 CS567 Stochastic Linear/Integer Programming Guest Lecturer: Xu, Huan

Kernel Methods and SVMs Extension

Chapter 2 Real-Coded Adaptive Range Genetic Algorithm

Solving Nonlinear Differential Equations by a Neural Network Method

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

Particle Swarm Optimization with Adaptive Mutation in Local Best of Particles

COS 521: Advanced Algorithms Game Theory and Linear Programming

MMA and GCMMA two methods for nonlinear optimization

MODIFIED PREDATOR-PREY (MPP) ALGORITHM FOR CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

Utilizing cumulative population distribution information in differential evolution

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

Global Optimization Using Hybrid Approach

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

An Adaptive Learning Particle Swarm Optimizer for Function Optimization

On a direct solver for linear least squares problems

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

Real Parameter Single Objective Optimization using Self-Adaptive Differential Evolution Algorithm with more Strategies

A Hybrid Co-evolutionary Particle Swarm Optimization Algorithm for Solving Constrained Engineering Design Problems

Thin-Walled Structures Group

Using Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,*

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

Lecture 14: Bandits with Budget Constraints

VQ widely used in coding speech, image, and video

Some modelling aspects for the Matlab implementation of MMA

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

Difference Equations

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances

Feature Selection: Part 1

Lecture Notes on Linear Regression

The Minimum Universal Cost Flow in an Infeasible Flow Network

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

A new Approach for Solving Linear Ordinary Differential Equations

ADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING

DETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM

Numerical Heat and Mass Transfer

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

4DVAR, according to the name, is a four-dimensional variational method.

Supporting Information

Lecture 20: November 7

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

Structure and Drive Paul A. Jensen Copyright July 20, 2003

A Novel Evolutionary Algorithm for Capacitor Placement in Distribution Systems

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

Second Order Analysis

A Hybrid Differential Evolution Algorithm Game Theory for the Berth Allocation Problem

The Convergence Speed of Single- And Multi-Objective Immune Algorithm Based Optimization Problems

Appendix B: Resampling Algorithms

Errors for Linear Systems

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

Lecture 10 Support Vector Machines II

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

Journal of Applied Research and Technology ISSN: Centro de Ciencias Aplicadas y Desarrollo Tecnológico.

COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN

A New Evolutionary Computation Based Approach for Learning Bayesian Network

Copyright 2017 by Taylor Enterprises, Inc., All Rights Reserved. Adjusted Control Limits for P Charts. Dr. Wayne A. Taylor

THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS

Chapter - 2. Distribution System Power Flow Analysis

Economic dispatch solution using efficient heuristic search approach

Optimum Design of Steel Frames Considering Uncertainty of Parameters

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)

A HYBRID DIFFERENTIAL EVOLUTION -ITERATIVE GREEDY SEARCH ALGORITHM FOR CAPACITATED VEHICLE ROUTING PROBLEM

Maximizing Overlap of Large Primary Sampling Units in Repeated Sampling: A comparison of Ernst s Method with Ohlsson s Method

CHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY

Homework Assignment 3 Due in class, Thursday October 15

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Curve Fitting with the Least Square Method

An Admission Control Algorithm in Cloud Computing Systems

Linear Approximation with Regularization and Moving Least Squares

10.34 Fall 2015 Metropolis Monte Carlo Algorithm

Primer on High-Order Moment Estimators

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

Research Article Green s Theorem for Sign Data

Constrained Evolutionary Programming Approaches to Power System Economic Dispatch

Artificial neural network regression as a local search heuristic for ensemble strategies in differential evolution

DUE: WEDS FEB 21ST 2018

CSC 411 / CSC D11 / CSC C11

On the Multicriteria Integer Network Flow Problem

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010

Which Separator? Spring 1

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

A discrete differential evolution algorithm for multi-objective permutation flowshop scheduling

THE general problem tackled using an optimization

Notes on Frequency Estimation in Data Streams

Module 2. Random Processes. Version 2 ECE IIT, Kharagpur

Transcription:

26 IEEE Congress on Evolutonary Computaton Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 26 Self-adaptve Dfferental Evoluton Algorthm for Constraned Real-Parameter Optmzaton V. L. Huang, A. K. Qn, Member, IEEE and P. N. Suganthan, Senor Member, IEEE Abstract In ths paper, we propose an extenson of Self-adaptve Dfferental Evoluton algorthm (SaDE) to solve optmzaton problems wth constrants. In comparson wth the orgnal SaDE algorthm, the replacement crteron was modfed for handlng constrants. The performance of the proposed method s reported on the set of 24 benchmark problems provded by CEC26 specal sesson on constraned real parameter optmzaton. M I. INTRODUCTION ANY optmzaton problems n scence and engneerng have a number of constrants. Evolutonary algorthms have been successful n a wde range of applcatons. However, evolutonary algorthms naturally perform unconstraned search. Therefore, when used for solvng constraned optmzaton problems, they requre addtonal mechansms to handle constrants n ther ftness functon. In the lterature, several constrants handlng technques have been suggested for solvng constraned optmzaton by usng evolutonary algorthms. Mchalewcz and Schoenauer [3] grouped the methods for handlng constrants by evolutonary algorthm nto four categores: ) preservng feasblty of solutons, ) penalty functons, ) make a separaton between feasble and nfeasble solutons, and v) other hybrd methods. The most common approach to deal wth constrants s the method based on penalty functons, whch penalze nfeasble solutons. However, penalty functons have, n general, several lmtatons. They requre a careful tunng to determne the most approprate penalty factors. Also, they tend to behave ll when tryng to solve a problem n whch the optmum s at the boundary between the feasble and the nfeasble regons or when the feasble regon s dsjont. For some dffcult problems n whch t s extremely dffcult to locate a feasble soluton due to napproprate representaton scheme, researchers desgned specal representatons and operators to preserve the feasblty of solutons at all the tme. Recently there are a few methods whch emphasze the dstncton between feasble and nfeasble solutons n the search space, such as behavoral memory method, superorty of feasble solutons to nfeasble solutons, and reparng nfeasble solutons. Also researchers developed hybrd methods whch combne evolutonary algorthm wth another technque (normally a numercal optmzaton approach) to handle constrants. The Self-adaptve Dfferental Evoluton algorthm (SaDE) was ntroduced n [1], n whch the choce of learnng strategy and the two control parameters F and CR are not requred to be pre-specfed. Durng evoluton, the sutable learnng strategy and parameter settngs are gradually self-adapted accordng to the learnng experence. In [1], SaDE was tested on a set of benchmark functons wthout constrants. In ths work, we generalze SaDE to handle problems wth constrants and nvestgate the performance on 24 constraned problems. II. DIFFERENTIAL EVOLUTION ALORITHM Dfferental evoluton (DE) algorthm, proposed by Storn and Prce [4], s a smple but powerful populaton-based stochastc search technque for solvng global optmzaton problems. The orgnal DE algorthm s descrbed n detal as n follows: Let S R be the n-dmensonal search space of the problem under consderaton. The DE evolves a populaton of NP n-dmensonal ndvdual vectors,.e. soluton canddates, X = ( x,, x ) S, = 1,, NP, 1 n from one generaton to the next. The ntal populaton should deally cover the entre parameter space by randomly dstrbutng each parameter of an ndvdual vector wth unform dstrbuton between the prescrbed upper and lower u parameter bounds x and x l. j j At each generaton, DE employs mutaton and crossover operatons to produce a tral vector U for each, ndvdual vector X, also called target vector, n the current, populaton. A. Mutaton operaton For each target vector X at, generaton, an assocated mutated vector V =, { v, v,..., v 1, 2, n, } can usually be generated by usng one of the followng strateges as shown n the onlne codes avalable at http://www.cs.berkeley.edu/~storn/code.html: The authors are wth School of Electrcal and Electronc Engneerng, Nanyang Technologcal Unversty, Nanyang Ave., 639798 Sngapore (emal: huanglng@pmal.ntu.edu.sg, qnka@pmal.ntu.edu.sg, epnsugan@ntu.edu.sg) -783-9487-9/6/$2./ 26 IEEE 17 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

DE/rand/1 : V, = Xr ( ) 1, + F Xr 2, X r3, DE/best/1 : V ( ), = Xbest, + F Xr 1, X r2, DE/current to best/1 : ( ) F ( ) 1 2, V = X + F X X + X X DE/best/2 :,, best,, r, r V = X + F ( X X, best, r ) + F ( X X ) 1, r2, r3, r4, DE/rand/2 : V = X + F ( X X, ) + F r ( X X ) 1, r2, r3, r4, r, where ndces r, r, r, r, r are random and 1 2 3 4 mutually dfferent ntegers generated n the range [ 1, NP ], whch should also be dfferent from the current tral vector s ndex. F s a factor n (, 1+) for scalng dfferental vectors and X s the ndvdual vector wth best ftness best, value n the populaton at generaton. B. Crossover operaton After the mutaton phase, the bnomnal crossover operaton s appled to each par of the generated mutant vector V, and ts correspondng target vector X, to U = u, u,..., u. generate a tral vector:, ( 1, 2, n, ) vj,,, f ( rand j[, 1] CR) or ( j = jrand) u j,, = x j,,, otherwse j = 1, 2,..., n where CR s a user-specfed crossover constant n the range [,1) and j rand s a randomly chosen nteger n the range [ 1, n ] to ensure that the tral vector U, wll dffer from ts correspondng target vector X, by at least one parameter. C. Selecton operaton If the values of some parameters of a newly generated tral vector exceed the correspondng upper and lower bounds, we randomly and unformly rentalze t wthn the search range. Then the ftness values of all tral vectors are evaluated. After that, a selecton operaton s performed. The f U s compared to f X n the ftness value of each tral vector (, ) that of ts correspondng target vector ( ) current populaton. If the tral vector has smaller or equal ftness value (for mnmzaton problem) than the correspondng target vector, the tral vector wll replace the target vector and enter the populaton of the next generaton. Otherwse, the target vector wll reman n the populaton for the next generaton. The operaton s expressed as follows: X, + 1 {,, f f ( ) f ( ),, = U U X X otherwse,, The above 3 steps are repeated generaton after generaton untl some specfc stoppng crtera are satsfed. III. SADE ALORITHM To acheve good performance on a specfc problem by usng the orgnal DE algorthm, we need to try all avalable (usually ) learnng strateges n the mutaton phase and fne-tune the correspondng crtcal control parameters CR, F and NP. The performance of the orgnal DE algorthm s hghly dependent on the strateges and parameter settngs. Although we may fnd the most sutable strategy and the correspondng control parameters for a specfc problem, t may requre a huge amount of computaton tme. Also, durng dfferent evoluton stages, dfferent strateges and dfferent parameter settngs wth dfferent global and local search capabltes mght be preferred. Therefore, we developed SaDE algorthm that can automatcally adapt the learnng strateges and the parameters settngs durng evoluton. The man deas of the SaDE algorthm are summarzed below. A. Strategy Adaptaton SaDE probablstcally selects one out of several avalable learnng strateges for each ndvdual n the current populaton. Hence, we should have several canddate learnng strateges avalable to be chosen and also we need to develop a procedure to determne the probablty of applyng each learnng strategy. In the prelmnary SaDE verson [1], only two canddate strateges are employed,.e. rand/1/bn and current to best/2/bn. Our recent work suggests that ncorporatng more strateges can further mprove the performance of the SaDE. Here, we use 4 strateges nstead of the orgnal two to enhance the SaDE. DE/rand/1: V = X + F, r ( X X ) 1, r2, r 3, DE/current to best/2: ( ) ( ) ( ) 1 2, 3 4, V = X + F X X + F X X + F X X,, best,, r, r r, r DE/rand/2: ( ) F ( ) V = X + F X X + X X DE/current-to-rand/1:, r1, r2, r3, r4, r, ( ) ( ) U = X + K X X + F X X, r1, r3,, r1, r2, In strategy DE/current-to-rand/1, K s the coeffcent of combnaton n[.,1.]. Snce here we have four canddate strateges nstead of two strateges n [1], assumng that the probablty of applyng the four dfferent strateges to each ndvdual n the current populaton s p, = 1, 2,3, 4. The ntal probabltes are set to be equal to.2,.e., p = p = p = p =.2. 1 2 3 4 Therefore, each strategy has equal probablty to be appled 18 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

to every ndvdual n the ntal populaton. Accordng to the probablty, we apply Roulette Wheel selecton to select the strategy for each ndvdual n the current populaton. After evaluaton of all newly generated tral vectors, the number of tral vectors successfully enterng the next generaton whle generated by each strategy s recorded as ns, = 1,2,3,4 respectvely, and the numbers of tral vectors dscarded whle generated by each strategy s recorded as nf, = 1, 2,3, 4. ns and nf are accumulated wthn a specfed number of generatons (2 n our experments), called the learnng perod. Then, the probablty of p s updated as: ns p = ns + nf The above expresson represents the percentage of the success rate of tral vectors generated by each strategy durng the learnng perod. Therefore, the probabltes of applyng those four strateges are updated every generaton, after the learnng perod. We only accumulate the value of ns and nf n recent 2 generatons to avod the possble sde-effect accumulated n the far prevous learnng stage. Ths adaptaton procedure can gradually evolve the most sutable learnng strategy at dfferent stages durng the evoluton for the problem under consderaton. B. Parameter Adaptaton In the orgnal DE, the 3 control parameters CR, F and NP are closely related to the problem under consderaton. Here, we keep NP as a user-specfed value as n the orgnal DE, so as to deal wth problems wth dfferent dmensonaltes. Between the two parameters CR and F, CR s much more senstve to the problem s property and complexty such as the mult-modalty, whle F s more related to the convergence speed. Here, we allow F to take dfferent random values n the range (, 2] wth normal dstrbutons of mean. and standard devaton.3 for dfferent ndvduals n the current populaton. Ths scheme can keep both local (wth small F values) and global (wth large F values) search ablty to generate the potental good mutant vector throughout the evoluton process. For the control parameter K n strategy DE/current-to-rand/1, experments show that t s always successful to optmze a functon usng a normally dstrbuted random value for K. So here we set K = F to reduce one more tunng parameter. The control parameter CR plays an essental role n the orgnal DE algorthm. The proper choce of CR may lead to good performance under several learnng strateges whle a wrong choce may result n performance deteroraton under any learnng strategy. Also, the good CR parameter value usually falls wthn a small range, n whch the algorthm can perform consstently well on a complex problem. Therefore, we consder accumulatng the prevous learnng experence wthn a certan generatonal nterval so as to dynamcally adapt the value of CR to a sutable range. We assume that CR s normally dstrbuted n a range wth mean CRm and standard devaton.1. Intally, CRm s set at. and dfferent CR values conformng ths normal dstrbuton are generated for each ndvdual n the current populaton. These CR values for all ndvduals reman for generatons and then a new set of CR values s generated under the same normal dstrbuton. Durng every generaton, the CR values assocated wth tral vectors successfully enterng the next generaton are recorded. After a specfed number of generatons (2 n our experments), CR has been changed for several tmes (2/=4 tmes n our experments) under the same normal dstrbuton wth center CRm and standard devaton.1, and we recalculate the mean of normal dstrbuton of CR accordng to all the recorded CR values correspondng to successful tral vectors durng ths perod. Wth ths new normal dstrbuton s mean and the standard devdaton.1, we repeat the above procedure. As a result, the proper CR value range for the current problem can be learned to sut the partcular problem. Note that we wll reset the record of the successful CR values to zero once we recalculate the normal dstrbuton s mean to avod the possble napproprate long-term accumulaton effects. We ntroduce the above learnng strategy adaptaton schemes nto the orgnal DE algorthm and develop a Self-adaptve Dfferental Evoluton algorthm (SaDE) algorthm. The SaDE does not requre the choce of a certan learnng strategy and the settng of specfc values to crtcal control parameters CR and F. The learnng strategy and control parametercr, whch are hghly dependent on the problem s characterstc and complexty, are self-adapted by usng the prevous learnng experence. Therefore, the SaDE algorthm can demonstrate consstently good performance on problems wth dfferent propertes, such as unmodal and multmodal problems. The nfluence on the performance of SaDE by the number of generatons durng whch prevous learnng nformaton s collected s not sgnfcant. C. Local search To speed up the convergence of the SaDE algorthm, we apply a local search procedure once every generatons, on % of ndvduals ncludng the best ndvdual found so far and the randomly selected ndvduals out of the best % of the ndvduals n the current populaton. Here, we employ the Sequental Quadratc Programmng (SQP) method as the local search method. IV. HANDLIN CONSTRAINTS In real world applcatons, most optmzaton problems have complex constrants. A constraned optmzaton problem s usually wrtten as a nonlnear programmng 19 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

problem of the followng form: Mnmze: f( x), x = ( x, x,, x ) 1 2 n and x S Subject to: g ( x), = 1,, q h ( x) =, j= q+ 1,, m j S s the whole search space. q s the number of nequalty constrants. The number of equalty constrants s m-q. For convenence, the equalty constrants are always transformed nto the nequalty form, and then we can combne all the constrants as max{ g ( x),} = 1, q ( x) = h ( x) = q+ 1,, m Therefore, the objectve of our algorthm s to mnmze the ftness functon f ( x ), at the same tme the optmum solutons obtaned must satsfy all the constrants ( x ). Among varous constrants handlng methods mentoned n the ntroducton, some methods based on superorty of feasble solutons, such as the approach proposed by Deb [], has demonstrated promsng performance, as ndcated n [6][7] whch deal wth constrants usng DE. Besdes ths, Deb s selecton crteron [] has no parameter to fne-tune, whch s also the motvaton of our SaDE too - no fne-tunng of parameters as much possble. Hence, we ncorporate ths constrants handlng technque as follows: Durng the selecton procedure, the tral vector s compared to that of ts correspondng target vector n the current populaton consderng both the ftness value and constrants. The tral vector wll replace the target vector and enter the populaton of the next generaton f any of the followng condtons s true. 1) The tral vector s feasble and the target vector s not. 2) The tral vector and target vector are both feasble and tral vector has smaller or equal ftness value (for mnmzaton problem) than the correspondng target vector. 3) The tral vector and target vector are both nfeasble, but tral vector has a smaller overall constran volaton. The overall constran volaton s a weghted mean value of all the constrants, whch s expressed as followng, m = 1 v ( x ) = m = w( ( x)) 1 w where w 1 = s a weghted parameter, max s the max maxmum volaton of the constrants () x obtaned so far. Here, we set w as 1 whch vares durng the max evoluton n order to accurately normalze the constrants of the problem, thus the overall constran volaton can represent all constrants more equally. V. EXPERIMENTAL RESULTS We evaluate the performance of the SaDE algorthm on 24 benchmark functons wth constrants [2], whch nclude lnear, nonlnear, quadratc, cubc, polynomal constrants. The populaton sze s set at. For each functon, the SaDE algorthm runs 2 tmes. We * use the ftness value of best known solutons ( f ( x )) newly updated n [2]. The error values acheved when =e+3, =e+4, =e+ for the 24 test functons are lsted n Tables I-IV. We record the needed n each run for fndng a soluton satsfyng the successful condton [2] n Table V. The success rate, feasble rate, and success performance are also lsted. The convergence maps of SaDE on functons 1-6, functons 7-12, functons 13-18, and functons 19-24 are plotted n Fgures 1-4 respectvely. log(v) log(f(x)-f(x*)) 8 6 4 2-2 -4-6 -8 1 1 - -1-2 -2-3 g1 g2 g3 g4 g g6-3. 1 1. 2 2. x 1 (1-a) log(f(x)-f(x*)) vs g1 g2 g3 g4 g g6-12. 1 1. 2 2. 3 x 1 4 (1-b) log(v) vs Fgure 1: Convergence raph for Functon 1-6 2 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

2 1 g7 g8 g9 g1 g11 g12 6 4 2 g13 g14 g1 g16 g17 g18 log(f(x)-f(x*)) log(v) -2-4 -6-2 -8-3 -12-4. 1 1. 2 2. x 1 (2-a) log(f(x)-f(x*)) vs -14. 1 1. 2 2. 3 x 1 4 (3-b) log(v) vs Fgure 3 Convergence raph for Functon 13-18 1 1 g7 g8 g9 g1 g11 g12 1 g19 g2 g21 g22 g23 g24 log(v) - log(f(x)-f(x*)) - -1-2 1 2 3 4 6 (2-b) log(v) vs Fgure 2 Convergence raph for Functon 7-12 -2-3. 1 1. 2 2. x 1 (4-a) log(f(x)-f(x*)) vs 1 g13 g14 g1 g16 g17 g18 log(v) 2 2 1 1 g19 g2 g21 g22 g23 g24 log(f(x)-f(x*)) - -1 - -2-2. 1 1. 2 2. x 1 (3-a) log(f(x)-f(x*)) vs -1. 1 1. 2 2. x 1 (4-b) log(v) vs Fgure 4 Convergence raph for Functon 19-24 From the results, we could observe that, for all problems, the SaDE algorthm could reach the newly updated best known solutons except problems 2 and 22. As shown n Table V, the feasble rates of all problems are 21 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

1%, except problem 2 and 22. Problem 2 s hghly constraned and no algorthm n the lterature found feasble solutons. The successful rates are very encouragng, as most problems have 1%. Problems 2, 3, 1, 14, 18, 21 and 23 have 84%, 96%, 8%, 92%, 6% and 88% respectvely. Problem 17 has 4%, wth successfully fndng better soluton only once. Although the successful rate of problem 22 s, the result we obtaned ndeed s much better than prevous best known solutons, and approxmates the newly updated best known solutons. We set MAX_ as e+, however from the experment results we could fnd that SaDE actually acheved the best known solutons wthn e+4 for many problems. We calculate the algorthm complexty accordng to [2] show n Table VI. We use Matlab 6. to mplement the algorthm and the system confguratons are: [7] R. Landa-Becerra and C. A. C. Coello. Optmzaton wth Constrants usng a Cultured Dfferental Evoluton Approach. In Proceedngs of the enetc and Evolutonary ComputatonConference (ECCO'2), volume 1, pages 27-34, New York, June 2. Washngton DC, USA, ACM Press. Intel Pentum 4 CPU 3. HZ 2 B of memory Wndows XP Professonal Verson 22 TABLE VI: COMPUTATIONAL COMPLEXITY T1 T2 (T2-T1)/T1 4.9182 8.34.698 VI. CONCLUSION In ths paper, we generalzed the Self-adaptve Dfferental Evoluton algorthm for handlng optmzaton problem wth multple constrants, wthout ntroducng any addtonal parameters. The performance of our approach was evaluated on the testbed for CEC26 specal sesson on constraned real parameter optmzaton. The SaDE algorthm demonstrated effectveness and robustness. REFERENCES [1] A. K. Qn and P. N. Suganthan, Self-adaptve Dfferental Evoluton Algorthm for Numercal Optmzaton In: IEEE Congress on Evolutonary Computaton (CEC 2) Ednburgh, Scotland, Sep 2-, 2. [2] J. J. Lang, T. P. Runarsson, E. Mezura-montes, M. Clerc, P.N.Suganthan, C. A. C. Coello, and K.Deb, Problem Defntons and Evaluaton Crtera for the CEC 26 Specal Sesson on Constraned Real-Paremeter Optmzaton, Techncal Report, 2. http://www.ntu.edu.sg/home/epnsugan [3] Z. Mchalewcz and M. Schoenauer, Evolutonary Algorthms for Constraned Parameter Optmzaton Problems, Evolutonary Computaton, 4(1):1 32, 1996. [4] R. Storn and K. V. Prce, Dfferental evoluton-a smple and Effcent Heurstc for lobal Optmzaton over Contnuous Spaces, Journal of lobal Optmzaton 11:341-39. 1997. [] K. Deb. An Effcent Constrant Handlng Method for enetc Algorthms, Computer Methods n Appled Mechancs and Engneerng, 186(2/4):311 338, 2. [6] J. Lampnen. A Constrant Handlng Approach for the Dfferental Evoluton Algorthm. In Proceedngs of the Congress on Evolutonary Computaton 22 (CEC 22), volume 2, pages 1468 1473, Pscataway, New Jersey, May 22. 22 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

TABLE I ERROR VALUES ACHIEVED WHEN = 1 3, = 1 4, = 1 FOR PROBLEMS 1-6 1 3 1 4 1 Prob. g1 g2 g3 g4 g g6 Best 2.92e+() 3.298e-1() 3.3141e-1() 3.714e+1() 2.278e+1(3) 6.448e+1() Medan 4.2828e+() 3.727e-1() 8.4884e-1() 7.864e+1() 2.7489e+2(3) 3.4622e+2() Worst.2143e+() 4.3843e-1() 8.81e-1(1) 1.1877e+2() 1.948e+2(3) 1.72e+3() c,,,,,,,,, 1, 3,, v 4.7389e-3 Mean 4.1779e+ 3.792e-1 7.8931e-1 7.797e+1 1.98e+2 4.3473e+2 Std.227e-1 2.9111e-2 1.394e-1 2.146e+1 1.1344e+2 3.6649e+2 Best 2.8414e-1() 4.24e-3().973e-() 2.43e-7() 1.4e-11() 4.47e-11() Medan 2.967e-1() 2.233e-2() 9.9222e-4() 2.983e-7().2481e-4() 4.47e-11() Worst 3.e-1() 3.8493e-2() 7.1664e-1() 3.379e-7() 1.39e-3() 4.47e-11() c,,,,,,,,,,,, v Mean 2.9488e-1 2.182e-2 6.242e-2 2.979e-7 6.3977e-4 4.47e-11 Std 4.926e-12 8.273e-3 1.421e-1 2.148e-8.4286e-4 Best () 8.719e-1() 1.3749e-1() 2.1667e-7() () 4.47e-11() Medan () 3.8e-9() 1.777e-8() 2.1667e-7() () 4.47e-11() Worst () 1.833e-2() 1.3389e-4() 2.1667e-7() () 4.47e-11() c,,,,,,,,,,,, v Mean 2.6e-3 1.332e- 2.1667e-7 4.47e-11 Std 4.9786e-3 3.4743e- 1.8e-12 1.819e-13 TABLE II ERROR VALUES ACHIEVED WHEN = 1 3, = 1 4, = 1 FOR PROBLEMS 7-12 1 3 1 4 1 Prob. g7 g8 g9 g1 g11 g12 Best 3.884e+() 8.1964e-11() 4.14e-1() 9.2363e+2() 3.3473e-4() 1.961e-14() Medan 6.7238e+() 8.1964e-11() 7.42e-1() 1.473e+3() 9.684e-2() 1.22e-12() Worst 1.248e+1() 8.1964e-11() 1.8411e+() 2.997e+3() 2.164e-1() 1.2816e-1() c,,,,,,,,,,,, v Mean 7.128e+ 8.1964e-11 8.134e-1 1.463e+3 1.76e-1 1.822e-11 Std 2.64e+ 1.99e-17 3.36e-1 2.7973e+2 6.847e-2 2.6763e-11 Best 6.8221e-8() 8.1964e-11() 3.744e-7() 1.991e-6() () () Medan 2.76e-3() 8.1964e-11() 7.133e-7() 1.3888e-1() () () Worst 2.29e-2() 8.1964e-11() 3.8943e-() 3.494e+() 9.998e-() () c,,,,,,,,,,,, v Mean 4.9297e-3 8.1964e-11 4.44e-6 3.978e-1 9.3997e-6 Std.929e-3 6.492e-18 8.783e-6 7.429e-1 2.749e- Best 6.818e-8() 8.1964e-11() 3.744e-7() 6.6393e-11() () () Medan 1.468e-7() 8.1964e-11() 3.744e-7() 1.812e-6() () () Worst 6.3431e-() 8.1964e-11() 3.744e-7() 7.83e-6() () () c,,,,,,,,,,,, v Mean 4.7432e-6 8.1964e-11 3.744e-7 1.697e-6 Std 1.4993e- 3.8426e-18 7.981e-14 1.244e-6 TABLE III ERROR VALUES ACHIEVED WHEN = 1 3, = 1 4, = 1 FOR PROBLEMS 13-18 1 3 1 4 Prob. g13 g14 g1 g16 g17 g18 Best 9.4326e-1(3) 4.391e+(3) 6.6347e-2(2) 2.8438e-2() 9.1761e+1(4) 1.2814e-1() Medan 8.33e-1(3).4314e+(3) 9.442e-1(2).7669e-2() 9.3311e+1(4) 2.6913e-1() Worst.214e-1(3) 2.4726e+(3) 1.6126e+(2) 1.187e-1() 9.9e+1(4) 4.1448e-1() c, 2, 3, 3, 3, 1, 2,,, 3, 4,, v.6738e-2 3.732e-2 1.324e-2.918e-2 Mean 1.1713e+ 4.9e+ 1.4324e+.973e-2 1.664e+2 2.7886e-1 Std 1.211e+ 1.7242e+ 1.74e+ 1.9716e-2 6.1893e+1 7.746e-2 Best.312e-6() 1.331e-() 6.822e-11() 6.214e-11() 3.9816e+1() 2.781e-11() Medan 8.33e-6() 1.446e-4() 6.81e-() 6.21e-11() 7.48e+1() 1.178e-1() Worst 3.8491e-1() 4.416e-4() 1.463e-4() 6.4e-11() 7.48e+1() 1.9163e-1() c,,,,,,,,,,,, v Mean 1.778e-1 1.68e-4 6.343e- 6.277e-11 7.2688e+1 1.312e-2 Std 1.7638e-1 1.91e-4 6.8e- 6.794e-14 6.8484e+.2992e-2 23 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.

1 Best 4.1898e-11() 2.9e-6() 6.822e-11() 6.214e-11() 8.188e-11() 1.61e-11() Medan 4.1898e-11() 1.4793e-() 6.822e-11() 6.214e-11() 7.48e+1() 1.61e-11() Worst 1.696e-6() 2.233e-4() 6.822e-11() 6.214e-11() 7.48e+1() 1.914e-1() c,,,,,,,,,,,, v Mean 8.263e-8 4.6979e- 6.822e-11 6.214e-11 6.967e+1 1.284e-2 Std 2.7832e-7 6.4986e- 1.6168e+1.2898e-2 1 3 1 4 1 Prob. TABLE IV ERROR VALUES ACHIEVED WHEN = 1 3, = 1 4, = 1 FOR PROBLEMS 19-24 g19 g2 g21 g22 g23 g24 Best 9.9842e+1() 1.296e+1(2) 4.61e+2() 1.6363e+4(19) 6.119e+2(4) 1.7199e-() Medan 1.38e+2() 1.63e+1(2) 3.9781e+2() 7.8714e+3(19) 2.8993e+2(4) 9.3936e-() Worst 1.926e+2() 1.369e+1(2) 4.739e+2() 1.348e+4(19) 4.429e+2() 1.713e-4() c,, 2, 18, 2, 3, 14, 19, 19, 2, 4,, v.6112e+ 6.3e-2 1.7472e+7 6.69e-3 Mean 1.4462e+2 1.426e+1 4.12e+2 1.234e+4 4.6e+2 8.44e- Std 2.26e+1 1.769e+ 1.2e+2 2.9316e+3 8.627e+1 4.396e- Best.869e-7() 1.82e-2(6) 4.688e-2() 3.199e+1() 7.6219e-3() 4.6372e-12() Medan 8.69e-4().172e-1(19) 6.1e-2() 1.372e+2().9e-2() 4.6372e-12() Worst 1.133e-1() 4.7e+(19) 6.2397e-2() 1.1282e+4(16).144e-2() 4.6372e-12() c,, 1, 1, 19,,,,,,,, v 1.836e-1 Mean 6.66e-3 2.1193e+ 6.34e-2.3477e+3.237e-2 4.6372e-12 Std 2.2429e-2 2.47e+ 3.89e-3 6.2993e+3 1.3633e-2 Best.446e-11() 1.82e-2(6) () 1.9483e+() () 4.6372e-12() Medan 1.3868e-1() 2.377e-1(2) 2.78e-8() 4.697e+1() 3.979e-13() 4.6372e-12() Worst 3.1141e-9().397e-1(2) 6.712e-3() 1.224e+2() 1.3788e-3() 4.6372e-12() c, 16, 2,,,,,,,, v 8.82e-2 Mean 4.22e-1 2.64e-1 7.673e-4.23e+1 1.261e-4 4.6372e-12 Std 7.2976e-1 1.638e-1 1.8e-3 3.41e+1 3.4116e-4 TABLE VI NUMBER OF TO ACHIEVE THE FIXED ACCURACY LEVEL ( ( f(x) - f(x*) ).1), SUCCESS RATE, FEASIBLE RATE AND SUCCESS PERFORMANCE Prob. Best Medan Worst Mean Std Feasble Success Success Rate Rate Performance g1 211 211 211 211 1% 1% 211 g2 7691 12897-18899 1427 1% 84% 1838 g3 3 261-2432 1399 1% 96% 29896 g4 217 217 2113 217 2 1% 1% 217 g 3 6 12 7434 3484 1% 1% 73 g6 1246 1444 18347 14394 1242 1% 1% 1246 g7 219 1124 42286 1439 1248 1% 1% 27637 g8 782 1272 177 1268 242 1% 1% 1323 g9 1296 16787 33166 186 17 1% 1% 21446 g1 26 2 13 876 33968 1% 1% 44167 g11 12643 2111 212 2333 3482 1% 1% 2111 g12 463 1717 276 1611 82 1% 1% 276 g13 2161 2219 1268 42372 2977 1% 1% 2168 g14 32 77-1432 1794 1% 8% 4 g1 2 41 97 424 18928 1% 1% 27 g16 13144 14433 1797 144 748 1% 1% 14948 g17 443 - - 49772 114 1% 4% 12 g18 26 26-64 1387 1% 92% 28261 g19 231 188 7848 48733 132 1% 1% 216 g2 - - - - - % % - g21 98 32-32766 16618 1% 6% 16417 g22 - - - - - 1% % - g23 82 298-2944 1379 1% 88% 129 g24 428 4843 67 4847 36 1% 1% 4624 24 Authorzed lcensed use lmted to: Nanyang Technologcal Unversty. Downloaded on March 24,21 at 21:39:19 EDT from IEEE Xplore. Restrctons apply.