Global Optimization Using Hybrid Approach

Similar documents
The Study of Teaching-learning-based Optimization Algorithm

Self-adaptive Differential Evolution Algorithm for Constrained Real-Parameter Optimization

A Hybrid Variational Iteration Method for Blasius Equation

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

Some modelling aspects for the Matlab implementation of MMA

SPECTRAL ANALYSIS USING EVOLUTION STRATEGIES

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Solving of Single-objective Problems based on a Modified Multiple-crossover Genetic Algorithm: Test Function Study

Markov Chain Monte Carlo Lecture 6

MODIFIED PREDATOR-PREY (MPP) ALGORITHM FOR CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

OPTIMISATION. Introduction Single Variable Unconstrained Optimisation Multivariable Unconstrained Optimisation Linear Programming

A New Evolutionary Computation Based Approach for Learning Bayesian Network

Chapter 2 Real-Coded Adaptive Range Genetic Algorithm

arxiv: v1 [math.oc] 3 Aug 2010

An improved multi-objective evolutionary algorithm based on point of reference

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

A new Approach for Solving Linear Ordinary Differential Equations

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

Particle Swarm Optimization with Adaptive Mutation in Local Best of Particles

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CHAPTER 7 CONSTRAINED OPTIMIZATION 2: SQP AND GRG

Differential Evolution Algorithm with a Modified Archiving-based Adaptive Tradeoff Model for Optimal Power Flow

Optimum Design of Steel Frames Considering Uncertainty of Parameters

An Adaptive Learning Particle Swarm Optimizer for Function Optimization

Lecture Notes on Linear Regression

Solving Nonlinear Differential Equations by a Neural Network Method

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

MMA and GCMMA two methods for nonlinear optimization

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros

Using Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,*

An Interactive Optimisation Tool for Allocation Problems

Lecture 14: Bandits with Budget Constraints

The Minimum Universal Cost Flow in an Infeasible Flow Network

Constrained Evolutionary Programming Approaches to Power System Economic Dispatch

Chapter - 2. Distribution System Power Flow Analysis

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

An Admission Control Algorithm in Cloud Computing Systems

Computing Correlated Equilibria in Multi-Player Games

4DVAR, according to the name, is a four-dimensional variational method.

Economic dispatch solution using efficient heuristic search approach

Thin-Walled Structures Group

Credit Card Pricing and Impact of Adverse Selection

THE ROBUSTNESS OF GENETIC ALGORITHMS IN SOLVING UNCONSTRAINED BUILDING OPTIMIZATION PROBLEMS

Problem Set 9 Solutions

Exercises of Chapter 2

A HYBRID DIFFERENTIAL EVOLUTION -ITERATIVE GREEDY SEARCH ALGORITHM FOR CAPACITATED VEHICLE ROUTING PROBLEM

A Hybrid Co-evolutionary Particle Swarm Optimization Algorithm for Solving Constrained Engineering Design Problems

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

Entropy Generation Minimization of Pin Fin Heat Sinks by Means of Metaheuristic Methods

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

Chapter Newton s Method

Simultaneous Optimization of Berth Allocation, Quay Crane Assignment and Quay Crane Scheduling Problems in Container Terminals

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

CHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY

On the Multicriteria Integer Network Flow Problem

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Lecture 16 Statistical Analysis in Biomaterials Research (Part II)

A Novel Evolutionary Algorithm for Capacitor Placement in Distribution Systems

CHAPTER 7 STOCHASTIC ECONOMIC EMISSION DISPATCH-MODELED USING WEIGHTING METHOD

A SEPARABLE APPROXIMATION DYNAMIC PROGRAMMING ALGORITHM FOR ECONOMIC DISPATCH WITH TRANSMISSION LOSSES. Pierre HANSEN, Nenad MLADENOVI]

Which Separator? Spring 1

Very Large Scale Continuous and Discrete Variable. Woptimization,

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Evolutionary Computational Techniques to Solve Economic Load Dispatch Problem Considering Generator Operating Constraints

Erratum: A Generalized Path Integral Control Approach to Reinforcement Learning

CS-433: Simulation and Modeling Modeling and Probability Review

HYBRID FUZZY MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM: A NOVEL PARETO-OPTIMIZATION TECHNIQUE

Lecture 13 APPROXIMATION OF SECOMD ORDER DERIVATIVES

Optimal Solution to the Problem of Balanced Academic Curriculum Problem Using Tabu Search

Self-Adaptive Simulated Binary Crossover for Real-Parameter Optimization

Multi-Objective Evolutionary Programming for Economic Emission Dispatch Problem

SOLVING CAPACITATED VEHICLE ROUTING PROBLEMS WITH TIME WINDOWS BY GOAL PROGRAMMING APPROACH

Kernel Methods and SVMs Extension

ECE559VV Project Report

DETERMINATION OF TEMPERATURE DISTRIBUTION FOR ANNULAR FINS WITH TEMPERATURE DEPENDENT THERMAL CONDUCTIVITY BY HPM

= z 20 z n. (k 20) + 4 z k = 4

Transient Stability Constrained Optimal Power Flow Using Improved Particle Swarm Optimization

Generalized Linear Methods

Comparative Analysis of SPSO and PSO to Optimal Power Flow Solutions

MULTI-OBJECTIVE OPTIMUM DESIGN OF 3D STRUCTURES UNDER STATIC AND SEISMIC LOADING CONDITIONS

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

Solving Fractional Nonlinear Fredholm Integro-differential Equations via Hybrid of Rationalized Haar Functions

Resource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Economics 101. Lecture 4 - Equilibrium and Efficiency

Avoiding Premature Convergence in a Mixed-Discrete Particle Swarm Optimization (MDPSO) Algorithm

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement

Why BP Works STAT 232B

Annexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances

Research on Route guidance of logistic scheduling problem under fuzzy time window

MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression

Pattern Classification

The Expectation-Maximization Algorithm

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests

Maximizing Overlap of Large Primary Sampling Units in Repeated Sampling: A comparison of Ernst s Method with Ohlsson s Method

Transcription:

Tng-Yu Chen, Y Lang Cheng Global Optmzaton Usng Hybrd Approach TING-YU CHEN, YI LIANG CHENG Department of Mechancal Engneerng Natonal Chung Hsng Unversty 0 Kuo Kuang Road Tachung, Tawan 07 tyc@dragon.nchu.edu.tw http://wwwme.nchu.edu.tw Abstract: -The paper deals wth a global optmzaton algorthm usng hybrd approach. To take the advantage of global search capablty the evoluton strategy (ES) wth some modfcatons n recombnaton formulas and eltes keepng s used frst to fnd the near-optmal solutons. The sequental quadratc programmng(sqp) s then used to fnd the eact soluton from the solutons found by ES. One mert of the algorthm s that the solutons for multmodal problems can be found n a sngle run. Eght popular test problems are used to test the proposed algorthm. The results are satsfactory n qualty and effcency. Key-Words: -Global optmzaton algorthm, hybrd approach, evoluton strategy Introducton The global optmzaton has been a hot research topc for a long tme. Wth the progress of evolutonary computaton, many global optmzaton algorthms have been developed usng varous evolutonary methods. Tu and Lu[] proposed a stochastc genetc algorthm(stga) to solve global optmzaton problems. They dvded the search space dynamcally and eplored each regon by generatng fve offsprng. The method was clamed to be effcent and robust. Toksar[] developed an algorthm based on ant colony optmzaton(aco) to fnd the global soluton. In hs method each ant searches the neghborhood of the best soluton n the prevous teraton. Lang et al.[] used partcle swarm optmzaton (PSO) to fnd global solutons for multmodal functons. Ther method modfed the orgnal PSO by usng other partcles hstorcal best data to update the velocty of a partcle. In dong so, the premature convergence can be avoded. Zhang et al.[] proposed a method called estmaton of dstrbuton algorthm wth local search(eda/l). Ths method used unform desgn to generate ntal populaton n the feasble regon. The offsprng are produced by usng statstcal nformaton obtaned from parent populaton. The local search s used to fnd the fnal soluton. In general the evolutonary algorthms are though to have a better chance to fnd the global soluton from multple search ponts. However, the evolutonary algorthms also have some drawbacks. The frst one s that t takes sgnfcant number of functon evaluatons. Ths may consumes a lot of computatonal tmes especally used n structural optmzaton. The second drawback s that sometmes t only fnds near-optmal soluton. To reduce the effect of the frst drawback some appromate analyss methods such as artfcal neural network and response surface methodology may be employed to replace the tme-consumng eact analyses. To overcome the second drawback some gradent-based local search method may be use to locate the eact soluton. Takng the advantage of evolutonary algorthms and avodng ts dsadvantage, a new hybrd global optmzaton algorthm GOES(global optmzaton wth evoluton strategy) s developed n ths paper. Ths algorthm ntegrates evoluton strategy wth the sequental quadratc programmng(sqp) to fnd the eact global soluton. Eght wdely used test problems are employed to test aganst the algorthm. The global solutons for all test problems are found. Bref Revew of ES The evoluton strategy(es) was developed by Rechenburg[] and etended later by Schwefel[6]. There are three evolutonary steps n ES. The frst one s recombnaton and t s eecuted by one of the followng formulas. ISSN: 09-769 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng a, (A) no recombnaton a, or b, (B) dscrete 0.(,, ) = a+ b (C) ntermedate () a, or, (D) global, dscrete b 0.( a,, ) (E) global, ntermedate + b where s the new th desgn varable after recombnaton. a, and b, are the th desgn varables of two ndvduals a and b randomly chosen from μ parent ndvduals, respectvely. These two parents are used to generate a specfc new ndvdual usng formulas (B) and (C)., a, and b are also the th desgn varables of two ndvduals randomly chosen from μ parent ndvduals. However, n formulas (D) and (E) each new desgn varable may come from two dfferent parents. The number of so generated new ndvduals s λ and ths value s usually several tmes of μ. To further refne the search space, Chen[7] developed another three formulas for recombnaton as follows: = ( t ) a, + t b,, () t [0,] = ( a, + t or ( b, + t t [-0.,0.] = m [, μ] + + a, b, ) ), + ( a, b, m, t )/ m, () () Where s a unformly dstrbuted random number between 0 and. t s a unformly dstrbuted random number between -0. and 0.. m s an arbtrary nteger between and μ. The purpose of addng formula () s to provde the chance of generatng any value between and. Formula () gves the chance to a, b, generate a value neghborng a or b. Formula () fnds the centrod of some randomly selected,, ndvduals. The addng of the three formulas to the orgnal fve formulas can ncrease the search area n the desgn space. The second step n ES s the mutaton operaton. The mutaton s done by the followng formulas. = + zσ () and ( τ z+ τz ) σ = σ e τ = n (6) τ = n where s the mutated th desgn varable from. s the th desgn varable of an ndvdual after recombnaton. z σ s the change for the th desgn varable of that ndvdual. s the updated self-adaptve varable assocated wth the th desgn varable. σ s the self-adaptve varable used for the prevous mutaton step n the last generaton. The varable σ s also subjected to the same recombnaton operaton. n s the number of desgn varables. z and z are two random numbers from a normal dstrbuton N (0,) wth mean zero and standard devaton one. Equaton (7) s the probablty densty functon of the normal dstrbuton. ( z 0) P( z ) = e (7) π σ where the mean value of the normal dstrbuton s 0 and the standard devaton s. The last step n ES s the selecton operaton whch s used to choose some best ndvduals resulted from mutaton operaton to enter the net generaton. Two approaches are avalable. One s called ( μ, λ) selecton and the other one s named ( μ + λ) selecton. For ( μ, λ) selecton, the best μ ndvduals are chosen from the λ offsprng to enter the net generaton. The ( μ + λ) selecton combnes λ offsprng wth μ parents n current generaton frst and then chooses the best μ ndvduals from the combned pool to be parents n the net generaton. The ( μ, λ) selecton may have better chance to fnd the global soluton whle the ( μ + λ) selecton may accelerate the ISSN: 09-769 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng convergence rate. The flow chart of ES s shown n Fg.. Fg. Flow chart of evoluton strategy GOES Algorthm The GOES algorthm s developed n ths paper to fnd the global optmum soluton(s). The algorthm can be dvded nto three phases. The frst phase s bascally ES wth some modfcatons. The second phase s SQP search. The last phase s to determne the global solutons(s) from prevous two phases. The followngs are the steps of GOES algorthm. () Use random numbers to generate μ ndvduals n the desgn space as the ntal populaton. Establsh an eternal elte pool that contans some best ndvduals. () Perform recombnaton operaton usng equaton () to produce λ temporary offsprng. () Perform mutaton operaton usng equaton(). ()Compute objectve functon values for all λ ndvduals. () Compute constrant functon values. If the problem has constrants, compute all constrant functon values for all λ ndvduals. For unconstraned problems skp ths step. (6) Select eltes usng ( μ, λ) approach and update the eternal elte pool. For unconstraned optmzaton problems f the ndvdual wth smallest objectve functon value s better than the one n the elte pool, replace the one n the pool by the best one obtaned n ths generaton. For constraned optmzaton problems, choose the best feasble soluton and update the one n the eternal pool f necessary. If no feasble soluton s found, no updatng s performed. For multmodal problems multple global solutons may est. In order to fnd these solutons n a sngle run, several dfferent eltes are saved n the eternal pool. To dentfy these global solutons durng ES search, a crteron to dfferentate dfferent solutons s establshed as follows. n el el el, k j, k d (, j) = ( ) ε U L el (8) k = k k where d el (, j) s the normalzed dstance el el between elte and elte j., k and j, k are the kth desgn varable for the th and the jth eltes, U L respectvely. and are the upper and lower k bound for the kth desgn varable, respectvely. k ε el s a small value gven by users. If the nequalty s satsfed, the two ndvduals and j are though to be two dfferent solutons and saved n the eternal pool separately. Otherwse, they are the same soluton. (7) Based on objectve functon value and constrant volaton, select the best μ ndvduals to enter the net generaton. For unconstraned mnmzaton optmzaton problems, put the λ ndvduals n ascendng order based on ther objectve functon values. The frst μ ndvduals are chosen to enter the net generaton. For constraned optmzaton problems, the selecton rules wll be dscussed n the net secton. (8) If the mamum number of generaton s reached, go to step (9). Otherwse, go to step (). (9)Use the sequental quadratc programmng(sqp) to fnd the eact solutons. The startng ponts for SQP are those ndvduals saved n the eternal elte pool. (0)Determne the global soluton(s). The best soluton or solutons resulted from SQP or ES search are taken as the global solutons. Selecton Steps for Constraned Problems The selecton rules for constraned problems n GOES are eecuted n the followng order. ()Select feasble soluton to enter the net generaton frst. If the number of feasble soluton s greater than μ, select the best μ ndvduals accordng to ther objectve functon values. If the number of feasble soluton s less than μ, select all feasble solutons frst and go to step (). ISSN: 09-769 6 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng ()For nfeasble solutons compute the normalzed volaton for each volated constrant. Dvde the nfeasble solutons nto several ranks based on the domnaton check of constrant volaton. The domnaton check proceeds as follows: For any two ndvduals A and B, f every constrant volaton of A s less than that of B, then B s domnated by A. Otherwse, A and B do not domnate each other. Perform domnaton check on all nfeasble solutons usng the normalzed volatons to fnd the non-domnated ones. These nfeasble solutons are assgned to the frst rank. Repeat the domnaton check for the rest nfeasble solutons to allocate them to other ranks. The hgher the rank s, the less the overall constrant volaton. Go to step (). ()Select nfeasble ndvduals from rank one frst. If the number of ndvduals n rank one s less than the requred number to fll up μ, go to rank two and repeat ths process untl the requred number μ s reached. If the number of ndvduals n the lowest rank used to fll up μ s greater than the requred number, use objectve functon values to determne the ones to be selected. Fg. s the flow chart of GOES algorthm. mn. subject to ) = a( + e( 0 f )cos( ) + e b 0 + c d) (9) where a =, b =. (π ), c = π, d = 6, e =0, f = (8π ) Fg. shows the contour of the objectve functon. Clearly t has three global solutons. Table shows the solutons of ths problem. In order to understand the capablty of GOES to fnd all global solutons n a sngle run, the algorthm s run 00 tmes wth dfferent ntal populaton. In Table the fracton wthn the parentheses under ES n the second column means that n 9 tmes GOES algorthm fnds all three global solutons. For the other 7 tmes two of the three global solutons are found. That s for ths partcular problem GOES has 9% of chance to fnd all three global solutons n a sngle run and 7% of chance to fnd two of the three solutons. The reason for falng to fnd all three solutons s sometmes ES search fal to cover all three areas that contan the global solutons. The rest data n the table gve results obtaned by usng SQP only. The SQP solver s eecuted 00 tmes wth dfferent ntal pnts. The SQP solver successfully fnds the global soluton from any ntal pont. However for any sngle run of SQP t can only fnd one of the three global solutons. Although GOES can not guarantee fndng all global solutons n any sngle run, ts advantage over gradent-based method s apparent. Fg. Flow chart of GOES Numercal Eamples Eght test problems ncludng four unconstraned and four constraned problems are used to test the proposed algorthm. The global solutons are found for all test problems. Problem : Brann RCOS functon[8] Ths unconstraned optmzaton problem s formulated as follows: Fg. Brann RCOS functon ISSN: 09-769 7 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng Table Global solutons of Brann RCOS functon Eact Soluton ES Global (9/00) -π π 9. -.9. 9.67.7.7.7.6.76.7 OBJ 0.98 0.98 0.98 0.07 0.98 0. Table (contnued) GOES SQP (0/00) (/00) (6/00) -.0. 9. -π π 9..68.76.7.7.7.7 OBJ 0.98 0.98 0.98 0.98 0.98 0.98 Problem : Bumpy functon[9] Ths s another unconstraned optmzaton problem. The mathematcal formulaton of ths problem s gven below. ma. subjectto ) = (cos + cos 0 0 0 0 cos cos ) + (0) Fg. shows the contour of the functon. Although t has only one global soluton, t also has many local solutons. To solve ths type of problem usng gradent-based solver only, at most of tme local solutons wll be found. Table lsts the solutons by GOES and other papers. In ths table Lee s approach was called reproducng kernel appromaton method usng genetc algorthms. The GA soluton at the last column n Table was also provded by Lee s paper. The hardware used by Lee was personal computer wth Pentum CPU GHz and DDR Ram GB whch s the same as we use. It s clear that the soluton found by GOES s closest to the eact soluton. The number of functon evaluatons and the CPU tme s also the least one of the three. Fg. Bumpy functon Table Global solutons of Bumpy functon Eact Soluton[9] GOES Lee[0] GA [0].9.9.888.9 0 0 0.0008 0.000 OBJ 0.6767 0.6766 0.676 0.6766 No.e NA* 80 0 00 tme(s) NA* < No.e s number of functon evaluatons tme(s) s CPS tme(sec) NA* s not avalable Problem : Ackley functon[] Ths unconstraned optmzaton problem s defned as mn. subject to. ) = 0e 0 0 0. 0.( + ) 0.[cos(π ) + cos(π ) 0 0 e () Fg. s the contour of ths functon. It s clear that ths problem has a sngle global soluton surrounded by many local solutons. Ths ncreases the dffculty of fndng the global soluton. Table gves the solutons of ths problem. Agan the soluton by GOES s closest to the eact soluton and the CPU tme s the least one compared wth the other two solutons. + 0 + e ISSN: 09-769 8 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng Fg. 6 Rastrgn functon Fg. Ackley functon Table Global solutons of Ackley functon Eact Soluton[] GOES Lee[0] GA [0] 0-0.00000-0.00899 0.00088 0 0-0.009-0.007 OBJ 0 0.00000 0.0096 0.0069 No.e NA* 00 00 00 tme(s) NA* < 86 Problem : Rastrgn functon[] Ths unconstraned optmzaton problem s formulated as follows. Table Global solutons of Rastrgn Functon Eact Soluton[] GOES Lee[0] GA [0] 0 -.0E-09-0.0067-0.000 0.70E-08-0.000 0.0008 OBJ 0 0.00E+00 0.0009 0.00007 No.e NA* 000 00 00 tme(s) NA* < 09 Problem : Ths constraned optmzaton problem s formulated as follows: 9 mn. F ( ) = + + + y = = mn. ) = 0+ subject to + 0[cos(π ) + cos(π () Fg. 6 shows the multmodal nature of the problem. Table gves the solutons found by GOES and other papers. It s clear that GOES fnds the best soluton compared wth other methods. Also the CPU tme spent by GOES s less than those of the other two methods. )] subject to g( ) + + y6 + y7 g( ) 8+ y6 g( ) + + y6 + y8 g( ) 8 + y7 g( ) + + y7 + y8 g6( ) 8+ y8 g7( ) y+ y6 g8( ) y y+ y7 g ( ) y y + y 9 8 0, =,,,, () Ths problem contanng desgn varables and 9 constrants s from Floudas and Pardalos s book[]. Two sets of solutons are gven n Table. ISSN: 09-769 9 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng Part (A) n Table shows the eact soluton of the problem and the soluton by GOES. Apparently GOES ddn t fnd the global soluton of ths problem. However, f the recombnaton formula used n GOES s changed from equaton () to equaton () or (H) n equaton (), GOES stll can fnd the global soluton shown n Part (B). Therefore the recombnaton formulas n evoluton strategy may produce dfferent results for dfferent problems. Further researches on recombnaton formulas may be needed. Table Global solutons of problem (A) (B) Eact ES GOES ES GOES Soluton[6] GLOBAL Eqn() Eqn(B) Eqn() Eqn(B) Eqn() 0.0 0.99 0.69 0.978 0.07 0.98 0 0 0.98 y 0.896 0.989 y 0.78 0.96 y 0.996 0.97 y 0.67 0.878 y 0.8 0.96 y 6 0.0.7 y 7 0.7.8 y 8 0.99.679 y 9 0.99 0.77 OBJ - -.7 - -. - - - No.e NA* 00 00 000 000 00 00 tme(s) < < Problem 6: The formulaton of the problem s gven below. Mn ) = e Subject () to h ( ) + + + h ( ) = 0, h ( ) + =,.., =,.., =,, + = 0, Ths problem was provded by Hock and Schttkows[]. Three recombnaton formulas (), () and (B) are used to test GOES and the results are shown n Table 6. Part (A) n Table 6 contans the eact global soluton and the soluton from GOES by usng recombnaton formula (). Part (B) lsts the results by usng recombnaton formulas (B) and (). It s seen that all three formulas fnd the global soluton. The man dfference between ths problem and other test problems s t has three equalty constrants. In general equalty constrants are hard to satsfy. Therefore t s observed that at the end of ES search none of the three solutons are close to the global soluton. But SQP search eventually manages to lead the way to the global soluton. Ths eample problem further proves that the ntegraton of evolutonary computaton wth gradent-based search method can have a better chance to fnd the eact global soluton. Table 6 Global solutons of problem 6 (A) (B) Eact Soluton[] ES GOES ES GOES GLOBAL Eqn() Eqn(B) Eqn() Eqn (B) Eqn().0.9-0.78.67-0.6 6.08 7.9 OBJ 680.6 No.e tme(s) NA*.77.0.0609.9 0.07-0.778.9.67-0.07-0.6 0.0.08 0.90.9.7.999-0.68. -0.609.8.8..9-0.76.66-0.6.08.9 70.8 680.6 68.8 680.6-0.86.0.87.9-0.7-0.786.70.68-0.0-0.6 0.6.08.8.9 7.8 680.6 00 00 000 6800 000 6800 ISSN: 09-769 60 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng Problem 7: C-Bumpy functon[9] The objectve functon of ths problem s the same as problem. But two constrants are added. The optmzaton problem s defned as ma. ) = cos subject to 0 0 + cos 0, 0, cos + g( ) > 0.7, g ( ) +, cos () Fg. 7 shows the global and local solutons of the problem. Table 7 gves the solutons obtaned by varous approaches. Agan GOES yelds better soluton than solutons by other methods. The computatonal tme s also the least one. mn. ) =.787 + 0.8689 + 7.99 079. subject to g ( ) 8.07+ 0.00688 g ( ) 80.9+ 0.0077 + 0.0099 + 0.008 g( ) 9.0096+ 0.00706 + 0.007 + 0.00908 0 g ( ) 9, 90 g ( ) 0 g ( ) 78 (6), 7 + 0.00066, 7 0.000, 7 The optmum solutons are lsted n Table 8. The objectve functon value from Coello s soluton s the smallest one. But one of the constrants s not satsfed, the soluton s an nfeasble soluton. The best feasble soluton s obtaned by Homafar. Hs approach used genetc algorthm wth penalty functon approach. The soluton by GOES s the second best one and the result s very close to Homafar s soluton. The CPU tme for GOES s also the least one n the known data. Table 8 Global solutons of Hmmeblau functon GOES Lee[0] DPF[0] APF[0] Homafar[6] 78 79.9 8.68 79.7 78 Fg. 7 C-Bumpy functon Table 7 Global solutons of Bumpy functon Eact GOES Lee[0] DPF[0] APF[0] Soluton[9].9.60.69.6.6 0.7 0.68 0.9 0.6 0.8 OBJ 0.6 0.6 0.6 0.6 0.6 No.e NA* 900 900 00 00 tme(s) NA* < 6.86.0.6 9.99.86.7.76 9.99 9.9 0.07.67 6.776 6.9.78.86 6.776 OBJ -066. -0.7-00.6-07. -066.6 No.e 800 60 000 000 NA* tme(s) < 8 NA* Problem 8: Hmmeblau problem[] Ths constraned optmzaton problem havng fve desgn varables s defned as ISSN: 09-769 6 Issue, Volume 7, May 008

Tng-Yu Chen, Y Lang Cheng Table 8(contnued) Gen [7] Hmmelblau[] Coello[8] 8.9 78.6 78.0.09..007..07 7.08..8.7..9 OBJ -08. -07.9-00.9 No.e NA* NA* NA* tme(s) NA* NA* NA* 6 Concluson The proposed global optmzaton algorthm GOES usng hybrd approach of ES plus SQP has been proved to be successful n solvng 8 test problems. For most test problems the proposed method not only fnds the best soluton compared wth other methods but also spends the least computatonal tme. References: []Z. Tu and Y. Lu, A robust stochastc genetc algorthm(stga) for global numercal optmzaton, IEEE Transactons on Evolutonary Computaton, Vol. 8, No., 00, pp. 6-70. []M. D. Toksar, Ant colony optmzaton for fndng the global mnmum, Appled Mathematcs and Computaton, vol. 76, 006, pp. 08-6. []J.J. Lang, A.K. Qn, P.N. Suganthan and S. Baskar, Comprehensve learnng partcle swarm optmzer for global optmzaton of multmodal functons, IEEE transactons on Evolutonary Computaton, Vol. 0, No., 006, pp. 8-9. [] Q. Zhang, J. Sun, E. Tsang and J, Ford, Hybrd estmaton of dstrbuton algorthm for global optmzaton, Engneerng Computatons, Vol., No., 00, pp. 9-07. []I. Rechenberg, Evolutonsstratege: Optmerung technsher systeme nach prnzpen der bologschen evoluton, Frommann-Holzboog Verlag, Stuttgart, 97. [6]H.-P. Schwefel, Numercal optmzaton of computer models, John Wley & sons Ltd, Chchester, U.K, 977. [7]H.C. Chen, Dscrete and med-varable evoluton strategy, Master s thess, Department of Mechancal Engneerng, Natonal Chung Hsng Unversty, Tawan, 006. [8] Z. Mchalewcz, Genetc algorthms + Data structures = Evoluton program, thrd edton, Sprnger-Verlag, 996. [9] A. J. Keane, Eperences wth optmzers n structural desgn, Proc. Conf. Adaptve Computng n Engneerng Desgn and Control, 99, pp. -7, [0]C.C. Lee, Reproducng kernel appromaton method for structural optmzaton usng genetc algorthms, Ph D dssertaton, Natonal Tawan Unversty, 006. [] J. Branke and C.Schmdt, Faster convergence by means of ftness estmaton, Soft Computng, Vol. 9, 00, pp. -0. []D. Buche, N.N. Schraudolph and P. Koumoutsakos, Acceleratng evolutonary algorthms wth Guassan process ftness functon models, IEEE transactons on Systems, Man, and cybernetcs-part C: Applcatons and Revews, Vol., No., 00, pp. 8-9. [] C. A. Floudas and P. M. Pardalos, Recent advances n global optmzaton, Prnceton Seres n Computer Scence, Prnceton Unversty Press, Prnceton, NJ, 99. [] W. Hock and K. Schttkows, Test eamples for nonlnear programmng codes, Lecture Notes n Economcs and Mathematcal Systems, Vol. 87, Sprnger-Verlag, 98. []D.M. Hmmebleau, Appled nonlnear programmng, McGraw-Hll, New York, 97. [6]A. Homafar, S.H.Y. La and X. Q, Constraned optmzaton va genetc algorthms, Smulaton, Vol. 6, No., 99, pp. -. [7] M. Gen and R. Cheng, A survey of penalty technques n genetc algorthms, In Tosho Fukuda and Takesh Furuhash, edtors, Proceedngs of the 996 Internatonal Conference on Evolutonary Computaton IEEE, Nagoya, Japan, 996, pp. 80-809. [8] C. A. C. Coello, Self-adaptve penaltes for GA-based optmzaton, Evolutonary Computaton, Vol., 999, pp. 7-80. ISSN: 09-769 6 Issue, Volume 7, May 008