Modified Differential Evolution for Nonlinear Optimization Problems with Simple Bounds


Md. Abul Kalam Azad (Assistant Researcher) and Edite M.G.P. Fernandes (Professor)
Algoritmi R&D Centre, Department of Production and Systems, School of Engineering, University of Minho, Portugal
E-mail: akazad@dps.uminho.pt
URL: www.norg.uminho.pt/nsos/
Presented by M. A. K. Azad at CMS2009, May 2, 2009

Outline of the Presentation
1 Introduction
2 Motivation
3 Differential Evolution
4 Modified Differential Evolution
5 Algorithm of Proposed Modified Differential Evolution
6 Experimental Results
7 Conclusions

Introduction

Problems involving global optimization over continuous spaces are ubiquitous throughout the scientific community. The task is to optimize certain properties of a system by pertinently choosing its variables. Global optimization aims to find a point where the objective function attains its smallest value.

We consider nonlinear optimization problems with simple bounds:

min f(x) subject to x ∈ Ω,    (1)

where f : R^n → R, Ω = {x ∈ R^n : lb_j ≤ x_j ≤ ub_j, j = 1, ..., n}, and lb, ub ∈ R^n.

Introduction

There exist many methods to solve (1). If the objective function is not differentiable, or no derivative information is available, two classes of methods apply:
- Deterministic methods, which guarantee finding a global optimum to a required accuracy: DIRECT, MCS, Pattern Search, Simplex Search, etc.
- Stochastic methods, which find the global minimum only with high probability: GA, SA, PSO, DE, EM, etc.

Motivation

- Differential Evolution (DE) was proposed by Storn and Price in 1997.
- DE is a population-based heuristic approach.
- DE has only three control parameters.
- DE has been shown to be very efficient.
- DE is applicable to derivative-free nonlinear optimization problems.

Proposed Method
A modified differential evolution (mde) that introduces self-adaptive parameters and an inversion operator for nonlinear optimization problems with simple bounds.

Differential Evolution (DE)

DE creates new candidate solutions by combining points of the same population. A candidate replaces a current point only if it has a better objective value.

DE has three control parameters:
1 Amplification factor of the difference vector, F.
2 Crossover control parameter, CR.
3 Population size, NP.

DE's operators:
1 Mutation
2 Crossover
3 Selection

Outline of Differential Evolution: Target Vector

Let n be the problem dimension and NP the population size. A target point is x_{p,t} = (x_{p1,t}, x_{p2,t}, ..., x_{pn,t}), where t is the generation index and p = 1, 2, ..., NP.

The target points at t = 1 are generated randomly with uniform distribution:

x_{p,1} = lb + r (ub - lb),    (2)

with r ~ U[0, 1].
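As a concrete illustration, a minimal C sketch of this initialization step; the urand() helper, the array layout, and the per-component draw of r are assumptions for illustration, not the authors' code.

```c
#include <stdlib.h>

/* Illustrative helper: uniform random number in [0, 1]. */
static double urand(void) { return (double) rand() / RAND_MAX; }

/* Initialize the NP target points inside the box [lb, ub], as in eq. (2).
   Here r is drawn per component, a common reading of the formula. */
void de_init(double **x, int NP, int n, const double *lb, const double *ub)
{
    for (int p = 0; p < NP; p++)
        for (int j = 0; j < n; j++)
            x[p][j] = lb[j] + urand() * (ub[j] - lb[j]);
}
```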

Outline of Differential Evolution: Mutation

DE creates new points (mutant points) by adding the weighted difference between two population points to a third point:

v_{p,t+1} = x_{r1,t} + F (x_{r2,t} - x_{r3,t}),    (3)

where x_{r1,t} is the base point and F (x_{r2,t} - x_{r3,t}) the differential variation. The indices r1, r2, r3 ~ U{1, 2, ..., NP} are random integers with r1 ≠ r2 ≠ r3 ≠ p. F ∈ [0, 2] is a constant parameter that controls the amplification of the differential variation (x_{r2,t} - x_{r3,t}). NP must be greater than or equal to 4.
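A minimal C sketch of this DE/rand/1 mutation step (eq. (3)); the pick3 helper and the names are illustrative, and urand()/rand() and the array layout follow the earlier sketch.

```c
/* Draw three mutually distinct indices in {0, ..., NP-1}, all different from p. */
static void pick3(int p, int NP, int *r1, int *r2, int *r3)
{
    do { *r1 = rand() % NP; } while (*r1 == p);
    do { *r2 = rand() % NP; } while (*r2 == p || *r2 == *r1);
    do { *r3 = rand() % NP; } while (*r3 == p || *r3 == *r1 || *r3 == *r2);
}

/* DE/rand/1 mutation, eq. (3): v = x[r1] + F * (x[r2] - x[r3]). */
void de_mutate(double *v, double **x, int p, int NP, int n, double F)
{
    int r1, r2, r3;
    pick3(p, NP, &r1, &r2, &r3);
    for (int j = 0; j < n; j++)
        v[j] = x[r1][j] + F * (x[r2][j] - x[r3][j]);
}
```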

Outline of Differential Evolution: Crossover (Trial Point)

The mutant point's components are then mixed with the target point's components to yield the so-called trial point u_{p,t+1}:

u_{pj,t+1} = v_{pj,t+1}  if (r_j ≤ CR) or j = z_p,
             x_{pj,t}    if (r_j > CR) and j ≠ z_p,    j = 1, 2, ..., n.    (4)

The random number r_j ~ U[0, 1] performs the mixing of the jth component of the points, and CR ∈ [0, 1] is a constant parameter. The random integer z_p ~ U{1, 2, ..., n} ensures that u_{p,t+1} gets at least one component from v_{p,t+1}.
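A short C sketch of this binomial crossover, assuming the urand() helper from the earlier sketch; the function name and argument order are illustrative.

```c
/* Binomial crossover, eq. (4): build the trial point u from the mutant v
   and the target point xp; z is the index that always copies from v. */
void de_crossover(double *u, const double *v, const double *xp, int n, double CR)
{
    int z = rand() % n;
    for (int j = 0; j < n; j++)
        u[j] = (urand() <= CR || j == z) ? v[j] : xp[j];
}
```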

Outline of Differential Evolution: Bounds Check and Selection

Bounds check. After crossover, the bounds of each component must be checked:

u_{pj,t+1} = lb_j        if u_{pj,t+1} < lb_j,
             ub_j        if u_{pj,t+1} > ub_j,
             u_{pj,t+1}  otherwise.    (5)

Selection. The trial point u_{p,t+1} is compared with the target point x_{p,t} to decide whether or not it should become a member of generation t + 1:

x_{p,t+1} = u_{p,t+1}  if f(u_{p,t+1}) ≤ f(x_{p,t}),
            x_{p,t}    otherwise.    (6)
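A minimal C sketch of these two steps, continuing the earlier illustrative helpers; storing the objective value alongside each point is an assumption made for convenience.

```c
/* Bounds check, eq. (5): project the trial point onto the box [lb, ub]. */
void de_bounds(double *u, int n, const double *lb, const double *ub)
{
    for (int j = 0; j < n; j++) {
        if (u[j] < lb[j])      u[j] = lb[j];
        else if (u[j] > ub[j]) u[j] = ub[j];
    }
}

/* Selection, eq. (6): the trial point replaces the target if it is not worse.
   xp and fp are the target point and its stored objective value. */
void de_select(double *xp, double *fp, const double *u, double fu, int n)
{
    if (fu <= *fp) {
        for (int j = 0; j < n; j++) xp[j] = u[j];
        *fp = fu;
    }
}
```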

Modified Differential Evolution

According to Storn and Price, DE is sensitive to the choice of its three control parameters. They suggested (i) F ∈ [0.5, 1]; (ii) CR ∈ [0.8, 1]; and (iii) NP = 10 n.

Modification by Brest et al.
- Self-adaptive control parameters for F and CR.
- Point-dependent parameters (F_{p,1}, CR_{p,1}), p = 1, ..., NP.
- The control parameters for the next generation, F_{p,t+1} and CR_{p,t+1}, are calculated as follows.

Modification by Brest et al. (continued)

F_{p,t+1}  = F_l + λ_1 F_u  if λ_2 < τ_1,
             F_{p,t}        otherwise,    (7)

CR_{p,t+1} = λ_3            if λ_4 < τ_2,
             CR_{p,t}       otherwise.    (8)

The λ_1, ..., λ_4 ~ U[0, 1] are random numbers, and τ_1 = τ_2 = 0.1 represent the probabilities of adjusting the parameters F_p and CR_p, respectively. With F_l = 0.1 and F_u = 1.0, F_{p,t+1} ∈ [0.1, 1.0] and CR_{p,t+1} ∈ [0, 1]. F_{p,t+1} and CR_{p,t+1} are obtained before the mutation is performed.
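A minimal C sketch of this self-adaptive update in the style of Brest et al., reusing the urand() helper; the per-point parameters are assumed to be stored in arrays indexed by p.

```c
/* Self-adaptive update of F and CR, eqs. (7)-(8).
   Fl = 0.1, Fu = 1.0 and tau1 = tau2 = 0.1 are the values quoted in the talk. */
void jde_update(double *Fp, double *CRp)
{
    const double Fl = 0.1, Fu = 1.0, tau1 = 0.1, tau2 = 0.1;
    if (urand() < tau1) *Fp  = Fl + urand() * Fu;  /* eq. (7): resample F  */
    if (urand() < tau2) *CRp = urand();            /* eq. (8): resample CR */
}
```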

Modification by Kaelo et al.

Kaelo et al. proposed an alternative mutation technique. After choosing three points, the best of them is selected as the base point and the remaining two are used for the differential variation:

v_{p,t+1} = x_{1,t} + F (x_{2,t} - x_{3,t}),

where the base point is x_{1,t} = arg min{f(x_{r1,t}), f(x_{r2,t}), f(x_{r3,t})} and F (x_{2,t} - x_{3,t}) is the differential variation.
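A C sketch of this variant, built on the pick3 helper from the earlier mutation sketch; the array f of stored objective values is an assumption, and the order of the two remaining points is left arbitrary.

```c
/* Kaelo et al. mutation variant: of the three random points, the best one
   becomes the base; the other two form the differential variation.
   f[] holds the objective values of the current population. */
void de_mutate_kaelo(double *v, double **x, const double *f,
                     int p, int NP, int n, double F)
{
    int r1, r2, r3, tmp;
    pick3(p, NP, &r1, &r2, &r3);
    if (f[r2] < f[r1]) { tmp = r1; r1 = r2; r2 = tmp; }
    if (f[r3] < f[r1]) { tmp = r1; r1 = r3; r3 = tmp; }   /* r1 is now the best */
    for (int j = 0; j < n; j++)
        v[j] = x[r1][j] + F * (x[r2][j] - x[r3][j]);
}
```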

Our Modified Differential Evolution (mde)

Our mde combines the modifications proposed by Kaelo et al. and Brest et al. We made a small change to the Kaelo et al. mutation: after every B generations we use the best point found so far as the base point, with two randomly chosen points for the differential variation:

v_{p,t+1} = x_{best,t} + F (x_{r1,t} - x_{r2,t}),

where x_{best,t} is the base point and F (x_{r1,t} - x_{r2,t}) the differential variation.

Inversion operator
Our mde applies an inversion operator, with some inversion probability p_inv ∈ [0, 1], to the points after crossover.
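One plausible C sketch of this periodic best-base mutation; the schedule test t % B == 0 is an assumption (the talk only says "after every B generations"), and the fallback to the Kaelo et al. rule reuses the earlier sketch.

```c
/* mde mutation: every B generations use the best point found so far as the
   base point; otherwise apply the Kaelo et al. best-of-three rule. */
void mde_mutate(double *v, double **x, const double *f, const double *xbest,
                int p, int NP, int n, double F, int t, int B)
{
    if (t % B == 0) {                      /* assumed reading of "every B generations" */
        int r1, r2;
        do { r1 = rand() % NP; } while (r1 == p);
        do { r2 = rand() % NP; } while (r2 == p || r2 == r1);
        for (int j = 0; j < n; j++)
            v[j] = xbest[j] + F * (x[r1][j] - x[r2][j]);
    } else {
        de_mutate_kaelo(v, x, f, p, NP, n, F);   /* earlier sketch */
    }
}
```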

Our Modified Differential Evolution (mde)

Illustrative example of inversion (two cut points h and k; the segment between them, here components 3 to 6, is reversed):

u_{p,t} = (u_{p1,t}, u_{p2,t}, u_{p3,t}, u_{p4,t}, u_{p5,t}, u_{p6,t}, u_{p7,t}, u_{p8,t})
becomes
u_{p,t} = (u_{p1,t}, u_{p2,t}, u_{p6,t}, u_{p5,t}, u_{p4,t}, u_{p3,t}, u_{p7,t}, u_{p8,t}).

Termination condition
Let t be the current generation and G_max the maximum number of generations. The termination condition is

(t > G_max) OR (f_max,t - f_min,t ≤ ε), with ε = 10^-6,

where f_max,t and f_min,t are the maximum and minimum function values in the population at generation t.
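A minimal C sketch of the inversion operator; drawing the two cut points uniformly at random is an assumption, since the slides only show the reversal itself.

```c
/* Inversion operator: with probability p_inv, reverse the segment of the
   trial point between two randomly chosen cut positions h <= k. */
void mde_invert(double *u, int n, double p_inv)
{
    if (urand() >= p_inv) return;
    int h = rand() % n, k = rand() % n, tmp;
    if (h > k) { tmp = h; h = k; k = tmp; }
    while (h < k) {
        double s = u[h]; u[h] = u[k]; u[k] = s;
        h++; k--;
    }
}
```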

Algorithm of Proposed Modified Differential Evolution

1 Set the values of the parameters.
2 Set t = 1. Initialize the population x_{p,1}.
3 Calculate f_max,1 and f_min,1 and set f_best = f_min,1 and x_best = x_min,1.
4 If the termination condition is met, stop. Otherwise set t = t + 1.
5 Compute the mutant point v_{p,t} using mde.
6 Perform crossover using mde to make the point u_{p,t}.
7 Perform inversion using mde to make the trial point u_{p,t}.
8 Check the domains (bounds) of the trial point.
9 Perform selection: if f_{p,t} = f(u_{p,t}) ≤ f_{p,t-1}, set x_{p,t} = u_{p,t}; otherwise set f_{p,t} = f_{p,t-1} and x_{p,t} = x_{p,t-1}.
10 Calculate f_max,t and f_min,t and set f_best = f_min,t and x_best = x_min,t.
11 Go to step 4.
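Putting the earlier sketches together, one plausible shape for the main loop in C. This is not the authors' implementation: the objective-function signature, the initial per-point values F = 0.5 and CR = 0.9 (those used by Brest et al.), the memory handling, and taking the current population best as x_best are all assumptions.

```c
#include <stdlib.h>

/* Skeleton of the mde loop built from the earlier sketches.
   fn() is the objective; eps, Gmax, B and p_inv are the talk's parameters. */
void mde_run(double (*fn)(const double *, int), double **x, double *f,
             int NP, int n, const double *lb, const double *ub,
             int Gmax, int B, double p_inv, double eps)
{
    double *v = malloc(n * sizeof *v), *u = malloc(n * sizeof *u);
    double *Fp = malloc(NP * sizeof *Fp), *CRp = malloc(NP * sizeof *CRp);
    de_init(x, NP, n, lb, ub);
    for (int p = 0; p < NP; p++) { f[p] = fn(x[p], n); Fp[p] = 0.5; CRp[p] = 0.9; }

    for (int t = 1; t <= Gmax; t++) {
        double fmin = f[0], fmax = f[0]; int best = 0;
        for (int p = 1; p < NP; p++) {
            if (f[p] < fmin) { fmin = f[p]; best = p; }
            if (f[p] > fmax) fmax = f[p];
        }
        if (fmax - fmin <= eps) break;               /* termination condition */
        for (int p = 0; p < NP; p++) {
            jde_update(&Fp[p], &CRp[p]);             /* self-adaptive F, CR  */
            mde_mutate(v, x, f, x[best], p, NP, n, Fp[p], t, B);
            de_crossover(u, v, x[p], n, CRp[p]);
            mde_invert(u, n, p_inv);
            de_bounds(u, n, lb, ub);
            de_select(x[p], &f[p], u, fn(u, n), n);
        }
    }
    free(v); free(u); free(Fp); free(CRp);
}
```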

Experimental Results

We coded all the DE variants in C with an AMPL interface. The solvers are: DE_Original, DE_Kaelo, DE_Brest, mde[1], and mde[2] (with inversion).

- We tested all solvers on a set of 64 nonlinear optimization problems with simple bounds.
- We used the same parameters for all solvers.
- We used the same termination condition for all solvers.
- We ran each solver 30 times per problem.
- After 30 runs we report f_best, f_w, f_avg, the standard deviation of f, the number of function evaluations, and the number of generations.

Experimental Results: Example Test Problems

Ackley's Problem
min_x f(x) = -20 exp(-0.02 sqrt((1/n) Σ_{j=1}^{n} x_j^2)) - exp((1/n) Σ_{j=1}^{n} cos(2π x_j)) + 20 + exp(1)
subject to -30.0 ≤ x_j ≤ 30.0, j = 1, 2, ..., n.

Griewank Problem
min_x f(x) = 1 + (1/4000) Σ_{j=1}^{n} x_j^2 - Π_{j=1}^{n} cos(x_j / √j)
subject to -600.0 ≤ x_j ≤ 600.0, j = 1, 2, ..., n.
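For reference, straightforward C implementations of these two test functions, written directly from the formulas above; the function names are illustrative.

```c
#include <math.h>

/* Ackley's function; -30 <= x_j <= 30, f_opt = 0 at x = 0. */
double ackley(const double *x, int n)
{
    const double pi = 3.14159265358979323846;
    double s1 = 0.0, s2 = 0.0;
    for (int j = 0; j < n; j++) {
        s1 += x[j] * x[j];
        s2 += cos(2.0 * pi * x[j]);
    }
    return -20.0 * exp(-0.02 * sqrt(s1 / n)) - exp(s2 / n) + 20.0 + exp(1.0);
}

/* Griewank function; -600 <= x_j <= 600, f_opt = 0 at x = 0. */
double griewank(const double *x, int n)
{
    double s = 0.0, prod = 1.0;
    for (int j = 0; j < n; j++) {
        s += x[j] * x[j];
        prod *= cos(x[j] / sqrt((double)(j + 1)));
    }
    return 1.0 + s / 4000.0 - prod;
}
```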

Experimental Results: Shekel 5 Problem

min_x f(x) = - Σ_{i=1}^{5} 1 / ( Σ_{j=1}^{n} (x_j - a_ij)^2 + c_i )
subject to 0.0 ≤ x_j ≤ 10.0, j = 1, 2, ..., n.

For the above three test problems, for all solvers we:
- Used the same parameters.
- Used the same termination condition.
- Plotted the profile of the objective function value over the generations.

Experimental Results: Ackley's Problem, n = 10, G_max = 1000, f_opt = 0.0

[Figure: profile of the objective function value of ack after a single run (objective function value vs. number of generations) for DE_Original, DE_Kaelo, DE_Brest, mde[1], and mde[2].]

Experimental Results: Griewank Problem, n = 10, G_max = 1000, f_opt = 0.0

[Figure: profile of the objective function value of gw after a single run (objective function value vs. number of generations) for the five solvers.]

Experimental Results: Shekel 5 Problem, n = 4, G_max = 250, f_opt = -10.1499

[Figure: profile of the objective function value of s3 after a single run (objective function value vs. number of generations) for the five solvers.]

Experimental Results: Neumaier 3 Problem

min_x f(x) = Σ_{j=1}^{n} (x_j - 1)^2 - Σ_{j=2}^{n} x_j x_{j-1}
subject to -n^2 ≤ x_j ≤ n^2, j = 1, 2, ..., n.

For the above test problem, for all solvers we:
- Used n = 10, 20, 30, 40.
- Used the same parameters.
- Used the same termination condition.
- Ran 30 times.
- Recorded f_best at every G_max/50 generations and averaged over the runs.
- Plotted the profile of the mean best objective function value at every G_max/50 generations.
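A direct C implementation of this test function, again written from the formula above as a sketch.

```c
/* Neumaier 3 (Trid) function, -n^2 <= x_j <= n^2; its known minimum is
   -n(n+4)(n-1)/6, matching the f_opt values quoted on the next slides
   (-210 for n = 10, -1520 for n = 20, -4930 for n = 30, -11440 for n = 40). */
double neumaier3(const double *x, int n)
{
    double s = 0.0;
    for (int j = 0; j < n; j++) s += (x[j] - 1.0) * (x[j] - 1.0);
    for (int j = 1; j < n; j++) s -= x[j] * x[j - 1];
    return s;
}
```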

Experimental Results: Neumaier 3 Problem, n = 10, G_max = 500, f_opt = -210.0

[Figure: profiles of the mean best objective function value of nf3_10 after 30 runs (mean best objective value vs. number of generations) for the five solvers; panel (a) covers the full G_max generations, panel (b) zooms in on the early generations.]

Experimental Results: Neumaier 3 Problem, n = 20, G_max = 1500, f_opt = -1520.0

[Figure: profiles of the mean best objective function value of nf3_20 after 30 runs for the five solvers; panel (a) covers the full G_max generations, panel (b) zooms in on the early generations.]

Experimental Results: Neumaier 3 Problem, n = 30, G_max = 3000, f_opt = -4930.0

[Figure: profiles of the mean best objective function value of nf3_30 after 30 runs for the five solvers; panel (a) covers the full G_max generations, panel (b) zooms in on the early generations.]

Experimental Results: Neumaier 3 Problem, n = 40, G_max = 4000, f_opt = -11440.0

[Figure: profiles of the mean best objective function value of nf3_40 after 30 runs for the five solvers; panel (a) covers the full G_max generations, panel (b) zooms in on the early generations.]

Performance Profile

We compared all solvers using the performance profiles proposed by Dolan and Moré. Let P be the set of all problems and S the set of all solvers. The performance metric of solver s ∈ S on problem p ∈ P is

m_(p,s) = (f_(p,s) - f_opt) / (f_w - f_opt),

where f_(p,s) is the average/best function value after 30 runs, f_opt the optimum value of problem p, and f_w the worst function value after 30 runs (over all solvers). Thus

m_(p,s) = 0 if f_(p,s) = f_opt,
          1 if f_(p,s) = f_w,
          (f_(p,s) - f_opt) / (f_w - f_opt) otherwise.

[Small figure: m rising from 0 at f_opt to 1 at f_w as a function of f.]

Performance Profile

Since min{m_(p,s) : s ∈ S} can be 0, the performance ratios are

r_(p,s) = 1 + m_(p,s) - min{m_(p,s) : s ∈ S}  if min{m_(p,s) : s ∈ S} < ε,
          m_(p,s) / min{m_(p,s) : s ∈ S}      otherwise,

for p ∈ P, s ∈ S, and ε = 10^-5.

The overall assessment of the performance of a particular solver s is

ρ_s(τ) = n_{P_τ} / n_P,

where n_{P_τ} is the number of problems in P with r_(p,s) ≤ τ and n_P is the total number of problems in P.
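A C sketch of this computation; the m[p][s] array layout and the function names are assumptions for illustration.

```c
/* Performance ratios r_(p,s) from the metric values m_(p,s), as defined above. */
void perf_ratios(double **m, double **r, int nP, int nS)
{
    const double eps = 1e-5;
    for (int p = 0; p < nP; p++) {
        double mmin = m[p][0];
        for (int s = 1; s < nS; s++)
            if (m[p][s] < mmin) mmin = m[p][s];
        for (int s = 0; s < nS; s++)
            r[p][s] = (mmin < eps) ? 1.0 + m[p][s] - mmin : m[p][s] / mmin;
    }
}

/* rho_s(tau): fraction of problems on which solver s has ratio r_(p,s) <= tau. */
double rho(double **r, int nP, int s, double tau)
{
    int count = 0;
    for (int p = 0; p < nP; p++)
        if (r[p][s] <= tau) count++;
    return (double) count / nP;
}
```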

Performance Profile

ρ_s(τ) is the probability, for solver s ∈ S, that the performance ratio r_(p,s) is within a factor τ ∈ R of the best possible ratio. The function ρ_s is the cumulative distribution function of the performance ratio. The value ρ_s(1) gives the probability that solver s wins over the others in the set, while for large values of τ, ρ_s(τ) measures the solver's robustness: the solver with the largest ρ_s(τ) is the one that solves the most problems in the set P.

We plotted performance profiles based on f_avg and f_best after 30 runs, for all problems and all solvers.

Performance Profile

[Figure: performance profile based on f_avg after 30 runs (ρ(τ) vs. τ ∈ [0, 10]) for DE_Original, DE_Kaelo, DE_Brest, mde[1], and mde[2].]

Performance Profile

[Figure: performance profile based on f_best after 30 runs (ρ(τ) vs. τ ∈ [0, 10]) for the five solvers.]

Conclusions

- A modified differential evolution (mde) for nonlinear optimization problems with simple bounds is presented.
- A comparative study based on performance profiles is presented.
- It is shown that our mde outperformed the other DE variants.
- Our mde wins over the other DE variants in terms of robustness.
- For particular problems our mde also wins over the other DE variants.

Future Study
We are now extending our mde to general constrained nonlinear optimization problems, and will consider mixed-integer problems in the future.

References

[Ali2004] M.M. Ali and A. Törn, Population set based global optimization algorithms: Some modifications and numerical studies, Computers & Operations Research, vol. 31, no. 10, pp. 1703-1725, 2004.
[Ali2005] M.M. Ali, C. Khompatraporn and Z.B. Zabinsky, A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, Journal of Global Optimization, vol. 31, pp. 635-672, 2005.
[Boender1982] C.G.E. Boender, A.H.G. Rinnooy Kan, L. Stougie and G.T. Timmer, A stochastic method for global optimization, Mathematical Programming, vol. 22, pp. 125-140, 1982.
[Brest2006] J. Brest, S. Greiner, B. Bošković, M. Mernik and V. Žumer, Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646-657, 2006.
[Dolan2002] E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, Series A, vol. 91, pp. 201-213, 2002.
[Kaelo2006] P. Kaelo and M.M. Ali, A numerical study of some modified differential evolution algorithms, European Journal of Operational Research, vol. 169, pp. 1176-1184, 2006.
[Kim2007] H.-K. Kim, J.-K. Chong, K.-Y. Park and D.A. Lowther, Differential evolution strategy for constrained global optimization and application to practical engineering problems, IEEE Transactions on Magnetics, vol. 43, no. 4, pp. 1565-1568, 2007.
[Price1997] K. Price and R. Storn, Differential evolution: a simple evolution strategy for fast optimization, Dr. Dobb's Journal, vol. 22, no. 4, pp. 18-21 and 78, 1997.
[Storn1997] R. Storn and K. Price, Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, vol. 11, pp. 341-359, 1997.

Supported under the grant C2007-UMINHO-ALGORITMI-04.

Thank You Very Much