Pure Strategy or Mixed Strategy?

Jun He, Feidun He, Hongbin Dong

Jun He is with the Department of Computer Science, Aberystwyth University, Ceredigion, SY23 3DB, UK (email: junhe@aber.ac.uk). Feidun He is with the School of Information Science and Technology, Southwest Jiaotong University, Chengdu, Sichuan, China. Hongbin Dong is with the College of Computer Science and Technology, Harbin Engineering University, Harbin, China.

arXiv preprint (cs.NE), 2014.

Abstract: Mixed strategy evolutionary algorithms (EAs) aim at integrating several mutation operators into a single algorithm. However, no analysis has been made to answer the theoretical question: whether and when is the performance of mixed strategy EAs better than that of pure strategy EAs? In this paper, the asymptotic convergence rate and asymptotic hitting time are proposed to measure the performance of EAs. It is proven that the asymptotic convergence rate and asymptotic hitting time of any mixed strategy (1+1) EA consisting of several mutation operators are not worse than those of the worst pure strategy (1+1) EA using only one of these mutation operators. Furthermore, it is proven that if these mutation operators are mutually complementary, then it is possible to design a mixed strategy (1+1) EA whose performance is better than that of any pure strategy (1+1) EA using only one mutation operator.

I. INTRODUCTION

Different search operators have been proposed and applied in EAs [1]. Each search operator has its own advantages. Therefore, an interesting research issue is to combine the advantages of different operators and design more efficient hybrid EAs. Currently, hybridization of evolutionary algorithms is popular due to its capability of handling some real-world problems [2]. Mixed strategy EAs, inspired by strategies and games [3], aim at integrating several mutation operators into a single algorithm [4]. At each generation, an individual chooses one mutation operator according to a strategy probability distribution. Mixed strategy evolutionary programming has been implemented for continuous optimization, and experimental results show that it performs better than its rival, i.e., pure strategy evolutionary programming, which utilizes a single mutation operator [5], [6]. However, no analysis has been made to answer the theoretical question: whether and when is the performance of mixed strategy EAs better than that of pure strategy EAs?
This paper aims at providing an initial answer. In theory, many EAs can be regarded as a matrix iteration procedure. Following matrix iteration analysis [7], the performance of an EA is measured by the asymptotic convergence rate, i.e., the spectral radius of a probability transition sub-matrix associated with the EA. Alternatively, the performance of an EA can be measured by the asymptotic hitting time [8], which approximately equals the reciprocal of the asymptotic convergence rate. A theoretical analysis is then made to compare the performance of mixed strategy and pure strategy EAs.

The rest of this paper is organized as follows. Section II describes pure strategy and mixed strategy EAs. Section III defines the asymptotic convergence rate and asymptotic hitting time. Section IV compares pure strategy and mixed strategy EAs. Section V concludes the paper.

II. PURE STRATEGY AND MIXED STRATEGY EAS

Before starting a theoretical analysis of mixed strategy EAs, we first demonstrate the result of a computational experiment.

Example 1: Consider an instance of the average capacity 0-1 knapsack problem [9], [10]:

    maximize Σ_{i=1}^{10} v_i b_i, b_i ∈ {0, 1},
    subject to Σ_{i=1}^{10} w_i b_i ≤ C,   (1)

where v_1 = 10 and v_i = 1 for i = 2, ..., 10; w_1 = 9 and w_i = 1 for i = 2, ..., 10; and C = 9. The fitness function is, for x = (b_1, ..., b_10):

    f(x) = Σ_{i=1}^{10} v_i b_i, if Σ_{i=1}^{10} w_i b_i ≤ C;
    f(x) = 0, if Σ_{i=1}^{10} w_i b_i > C.

We consider two types of mutation operators:

s1: flip each bit b_i with probability 0.1;
s2: flip each bit b_i with probability 0.9.

The selection operator accepts a better offspring only. Three (1+1) EAs are compared in the computational experiment: (1) EA(s1), which adopts s1 only; (2) EA(s2), which adopts s2 only; and (3) EA(s1, s2), which chooses either s1 or s2 with probability 0.5 at each generation. Each of these three EAs runs 100 times independently. The computational experiment shows that EA(s1, s2) always finds the optimal solution more quickly than the other two.
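To make Example 1 concrete, the following is a minimal Python sketch of the experiment, not the authors' original code. It assumes the reconstructed bit-flip probabilities 0.1 and 0.9 for s1 and s2, strict elitist selection as defined below, and a generation cap so that runs which never find the optimum still terminate.

```python
import random

# Data of Example 1: item 1 has value 10 and weight 9, items 2..10 have
# value 1 and weight 1, and the capacity is C = 9.  The unique optimum
# packs item 1 alone (fitness 10).
V = [10] + [1] * 9
W = [9] + [1] * 9
C = 9
OPT = 10

def fitness(x):
    """Penalty-style fitness: total value if feasible, otherwise 0."""
    if sum(w * b for w, b in zip(W, x)) > C:
        return 0
    return sum(v * b for v, b in zip(V, x))

def flip_each_bit(x, p):
    """Mutation: flip every bit independently with probability p."""
    return [1 - b if random.random() < p else b for b in x]

def run_ea(flip_probs, max_gens=10_000):
    """(1+1) EA with strict elitist selection.  `flip_probs` lists the
    available bit-flip probabilities; one is picked uniformly at random
    each generation (a pure strategy EA if the list has length one).
    Returns the generation at which the optimum is found, or the cap."""
    x = [random.randint(0, 1) for _ in range(10)]
    fx = fitness(x)
    for t in range(1, max_gens + 1):
        if fx == OPT:
            return t - 1
        y = flip_each_bit(x, random.choice(flip_probs))
        fy = fitness(y)
        if fy > fx:                    # strict elitist selection
            x, fx = y, fy
    return max_gens                    # EA(s1) is usually cut off here

for name, strat in [("EA(s1)", [0.1]), ("EA(s2)", [0.9]),
                    ("EA(s1,s2)", [0.1, 0.9])]:
    runs = [run_ea(strat) for _ in range(100)]
    print(name, "mean generations:", sum(runs) / len(runs))
```

With these settings, EA(s1) tends to climb onto the nine light items and then stall, since accepting the heavy item requires flipping many bits at once, while EA(s1, s2) escapes quickly; this is consistent with the comparison reported above.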

This is a simple case study showing that a mixed strategy EA may perform better than a pure strategy EA. In general, we need to answer the following theoretical question: whether or when are mixed strategy EAs better than pure strategy EAs?

Consider an instance of the discrete optimization problem of maximizing an objective function f(x):

    max{ f(x); x ∈ S },   (2)

where S is a finite set. For convenience of analysis, suppose that all constraints have been removed through an appropriate penalty function method. Under this scenario, all points in S are viewed as feasible solutions. In evolutionary computation, f(x) is called a fitness function.

The following notation is used in the algorithms and the text thereafter. x, y, z ∈ S are called points in S, individuals in EAs, or states in Markov chains. The optimal set S_opt ⊆ S is the set consisting of all optimal solutions to Problem (2), and the non-optimal set is S_non := S \ S_opt. t is the generation counter. A random variable Φ_t represents the state of the t-th generation parent; Φ_{t+1/2} the state of the child, which is generated through mutation.

The mutation and selection operators are defined as follows. A mutation operator is a probability transition from S to S. It is defined by a mutation probability transition matrix P_m whose entries are given by

    P_m(x, y), x, y ∈ S.   (3)

A strict elitist selection operator is a mapping from S × S to S, that is, for x ∈ S and y ∈ S,

    z = x, if f(y) ≤ f(x);
    z = y, if f(y) > f(x).   (4)

A pure strategy (1+1) EA, which utilizes only one mutation operator, is described in Algorithm 1.

Algorithm 1 Pure Strategy Evolutionary Algorithm EA(s)
1: input: fitness function;
2: generation counter t ← 0;
3: initialize Φ_0;
4: while the stopping criterion is not satisfied do
5:   Φ_{t+1/2} ← mutate Φ_t by mutation operator s;
6:   evaluate the fitness of Φ_{t+1/2};
7:   Φ_{t+1} ← select one individual from {Φ_t, Φ_{t+1/2}} by strict elitist selection;
8:   t ← t + 1;
9: end while
10: output: the maximal value of the fitness function.

The stopping criterion is that the run stops once an optimal solution is found. If an EA cannot find an optimal solution, then it never stops and the running time is infinite. This is common in the theoretical analysis of EAs.

Let s1, ..., sκ be κ mutation operators (called strategies). Algorithm 2 describes the procedure of a mixed strategy (1+1) EA. At the t-th generation, one mutation operator is chosen from the κ strategies according to a strategy probability distribution

    q_s1(x), ..., q_sκ(x),   (5)

subject to 0 ≤ q_sk(x) ≤ 1 and Σ_k q_sk(x) = 1. Write this probability distribution in short as a vector q(x) = [q_s(x)]. Pure strategy EAs can be regarded as the special case of mixed strategy EAs with only one strategy.

EAs can be classified into two types. A homogeneous EA applies the same mutation operators and the same strategy probability distribution at all generations. An inhomogeneous EA does not apply the same mutation operators or the same strategy probability distribution at all generations. This paper discusses only homogeneous EAs, mainly for the following reason: the probability transition matrices of an inhomogeneous EA may be totally different at different generations, which makes the theoretical analysis of an inhomogeneous EA extremely hard.
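To make the strategy probability distribution (5) concrete, here is a small illustrative sketch (names and numbers are invented, not from the paper) of how one generation of a mixed strategy EA samples its mutation operator from a state-dependent distribution q(x). Algorithm 2 in the next section is Algorithm 1 with its mutation step replaced by exactly this kind of sampled choice, and a pure strategy EA corresponds to a q(x) that puts probability 1 on a single operator for every x.

```python
import random

def choose_operator(x, operators, q):
    """Sample one mutation operator for the current parent x according to
    the strategy probability distribution q(x) = [q_s1(x), ..., q_sk(x)]."""
    weights = q(x)                      # state-dependent probabilities, sum to 1
    return random.choices(operators, weights=weights, k=1)[0]

def s1(x):                              # flip one randomly chosen bit
    y = x[:]
    i = random.randrange(len(x))
    y[i] = 1 - y[i]
    return y

def s2(x):                              # flip each bit with probability 1/n
    n = len(x)
    return [1 - b if random.random() < 1.0 / n else b for b in x]

def q(x):                               # hypothetical (q_s1(x), q_s2(x))
    return (0.2, 0.8) if sum(x) <= len(x) // 2 else (0.9, 0.1)

x = [0, 1, 0, 0, 1, 0, 1, 0]
op = choose_operator(x, [s1, s2], q)
child = op(x)
```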

Algorithm 2 Mixed Strategy Evolutionary Algorithm EA(s1, ..., sκ)
1: input: fitness function;
2: generation counter t ← 0;
3: initialize Φ_0;
4: while the stopping criterion is not satisfied do
5:   choose a mutation operator sk from s1, ..., sκ according to the strategy probability distribution q(Φ_t);
6:   Φ_{t+1/2} ← mutate Φ_t by mutation operator sk;
7:   evaluate Φ_{t+1/2};
8:   Φ_{t+1} ← select one individual from {Φ_t, Φ_{t+1/2}} by strict elitist selection;
9:   t ← t + 1;
10: end while
11: output: the maximal value of the fitness function.

III. ASYMPTOTIC CONVERGENCE RATE AND ASYMPTOTIC HITTING TIME

Suppose that a homogeneous EA is applied to maximize a fitness function f(x); then the population sequence {Φ_t, t = 0, 1, ...} can be modelled by a homogeneous Markov chain [11], [12]. Let P be the probability transition matrix, whose entries are given by

    P(x, y) = P(Φ_{t+1} = y | Φ_t = x), x, y ∈ S.

Starting from an initial state x, the mean number m(x) of generations needed to find an optimal solution is called the hitting time to the set S_opt [13]:

    τ(x) := min{ t; Φ_t ∈ S_opt | Φ_0 = x },
    m(x) := E[τ(x)] = Σ_{t=0}^{+∞} t · P(τ(x) = t).

Arrange all individuals in the order of their fitness from high to low: x_1, x_2, ...; then their hitting times are m(x_1), m(x_2), .... Denote them in short by the vector m = [m(x)].

Write the transition matrix P in the canonical form [14]:

    P = ( I  0
          ∗  T ),   (6)

where I is a unit matrix, 0 a zero matrix, and T the probability transition sub-matrix among non-optimal states, whose entries are given by P(x, y), x ∈ S_non, y ∈ S_non. The part ∗ plays no role in the analysis. Since m(x) = 0 for x ∈ S_opt, it is sufficient to consider m(x) on the non-optimal states x ∈ S_non. For simplicity of notation, the vector m will also denote the hitting times for all non-optimal states: [m(x)], x ∈ S_non.

The Markov chain associated with an EA can be viewed as a matrix iterative procedure, where the iterative matrix is the probability transition sub-matrix T. Let p_0 be the vector [p_0(x)] representing the probability distribution of the initial individual, p_0(x) := P(Φ_0 = x), x ∈ S_non, and p_t the vector [p_t(x)] representing the probability distribution of the t-th generation individual, p_t(x) := P(Φ_t = x), x ∈ S_non. If the spectral radius ρ(T) of the matrix T satisfies ρ(T) < 1, then we know [7] that

    lim_{t→+∞} p_t = 0.

Following matrix iterative analysis [7], the asymptotic convergence rate of an EA is defined as below.

Definition 1: The asymptotic convergence rate of an EA for maximizing f(x) is

    R(T) := -ln ρ(T),   (7)

where T is the probability transition sub-matrix restricted to the non-optimal states and ρ(T) its spectral radius.

The asymptotic convergence rate is different from previous definitions of the convergence rate based on matrix norms or probability distributions [12].
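As a quick numerical reading of Definition 1, the sketch below builds a small, made-up transition matrix in the canonical form (6), with states listed in fitness order (optimal state first) so that elitist selection never moves to a lower-fitness state, extracts the sub-matrix T over the non-optimal states, and computes ρ(T) and R(T) = -ln ρ(T).

```python
import numpy as np

# Hypothetical 3-state chain: state 0 is optimal (absorbing), states 1 and 2
# are non-optimal and listed in decreasing order of fitness, so strict
# elitist selection never moves from state 1 down to state 2.
P = np.array([
    [1.0, 0.0, 0.0],   # optimal state: absorbing
    [0.3, 0.7, 0.0],   # non-optimal state 1 (higher fitness)
    [0.1, 0.2, 0.7],   # non-optimal state 2 (lower fitness)
])

T = P[1:, 1:]                          # sub-matrix over non-optimal states
rho = max(abs(np.linalg.eigvals(T)))   # spectral radius rho(T)
R = -np.log(rho)                       # asymptotic convergence rate (7)
print(rho, R)                          # 0.7 and about 0.357
```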

[Fig. 1: The relationship between the asymptotic hitting time and the asymptotic convergence rate: 1/R(T) < T(T) < 1.5/R(T) if ρ(T) ≥ 0.5.]

Note that the asymptotic convergence rate depends on both the probability transition sub-matrix T and the fitness function f(x). Because the spectral radius of the probability transition matrix is ρ(P) = 1, ρ(P) cannot be used to measure the performance of EAs. Because the mutation probability transition matrix P_m is the same for all fitness functions f(x) and ρ(P_m) = 1, ρ(P_m) cannot be used to measure the performance of EAs either.

If ρ(T) < 1, then the hitting time vector satisfies (see Theorem 3.2 in [14])

    m = (I - T)^{-1} 1,   (8)

where 1 denotes the all-ones vector. The matrix N := (I - T)^{-1} is called the fundamental matrix of the Markov chain, where T is the probability transition sub-matrix restricted to the non-optimal states. The spectral radius ρ(N) of the fundamental matrix can also be used to measure the performance of EAs.

Definition 2: The asymptotic hitting time of an EA for maximizing f(x) is

    T(T) := ρ(N) = ρ((I - T)^{-1}), if ρ(T) < 1;
    T(T) := +∞, if ρ(T) = 1,

where T is the probability transition sub-matrix restricted to the non-optimal states and N is the fundamental matrix.

From Lemma 5 in [8], we know that the asymptotic hitting time lies between the best-case and worst-case hitting times, i.e.,

    min{ m(x); x ∈ S_non } ≤ T(T) ≤ max{ m(x); x ∈ S_non }.   (9)

From Lemma 3 in [8], we know:

Lemma 1: For any homogeneous (1+1) EA using strictly elitist selection, it holds that

    ρ(T) = max{ P(x, x); x ∈ S_non },  and  ρ(N) = 1/(1 - ρ(T)) if ρ(T) < 1.

From Lemma 1 and a Taylor series expansion, we get

    R(T) · T(T) = Σ_{k=1}^{+∞} (1/k) (1/T(T))^{k-1}.

If we make the mild assumption that T(T) ≥ 2 (i.e., the asymptotic hitting time is at least two generations), then the asymptotic hitting time approximately equals the reciprocal of the asymptotic convergence rate (see Figure 1).

Example 2: Consider the problem of maximizing the One-Max function f(x) = |x|, where x = (b_1 ... b_n) is a binary string, n the string length and |x| := Σ_{i=1}^{n} b_i. The mutation operator used in the (1+1) EA is to choose one bit randomly and then flip it. Then the asymptotic convergence rate and asymptotic hitting time are

    1/n < R(T) < 1/(n - 1),   T(T) = n.
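The identities in Lemma 1 and the numbers in Example 2 are easy to check numerically. The sketch below (a verification aid, not part of the paper) builds the transition sub-matrix of the one-bit-flip (1+1) EA on One-Max, lumped by the number of one-bits, and confirms ρ(T) = 1 - 1/n, T(T) = ρ((I - T)^{-1}) = n, and R(T)·T(T) ≈ 1.

```python
import numpy as np

# (1+1) EA with one-bit-flip mutation on One-Max, lumped by the number i of
# one-bits (states i = 0, ..., n-1 are non-optimal; i = n is optimal).
n = 8
T = np.zeros((n, n))
for i in range(n):
    p_improve = (n - i) / n        # flip one of the n - i zero-bits: accepted
    T[i, i] = 1 - p_improve        # flip a one-bit: rejected, stay at i
    if i + 1 < n:
        T[i, i + 1] = p_improve    # from i = n-1 the EA moves to the optimum

rho_T = max(abs(np.linalg.eigvals(T)))
N = np.linalg.inv(np.eye(n) - T)          # fundamental matrix N = (I - T)^{-1}
rho_N = max(abs(np.linalg.eigvals(N)))

print(rho_T, 1 - 1 / n)        # Lemma 1: rho(T) = max_x P(x,x) = 1 - 1/n
print(rho_N)                   # Definition 2 / Example 2: T(T) = n
print(-np.log(rho_T) * rho_N)  # R(T) * T(T) is close to 1
```

For n = 8 this prints ρ(T) = 0.875, T(T) = 8, and R(T)·T(T) ≈ 1.07, matching Example 2.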

IV. A COMPARISON OF PURE STRATEGY AND MIXED STRATEGY

In this section, subscripts q and s are added to distinguish between a mixed strategy EA using a strategy probability distribution q and a pure strategy EA using a pure strategy s. For example, T_q denotes the probability transition sub-matrix of a mixed strategy EA, and T_s that of a pure strategy EA.

Theorem 1: Let s1, ..., sκ be κ mutation operators. (1) The asymptotic convergence rate of any mixed strategy EA consisting of these κ mutation operators is not smaller than that of the worst pure strategy EA using only one of these mutation operators; (2) the asymptotic hitting time of any mixed strategy EA is not larger than that of the worst pure strategy EA using only one of these mutation operators.

Proof: (1) From Lemma 1 we know that

    ρ(T_q) = max{ Σ_{k=1}^{κ} q_sk(x) P_sk(x, x); x ∈ S_non }
           ≤ max{ Σ_{k=1}^{κ} q_sk(x) ρ(T_sk); x ∈ S_non }
           ≤ max{ ρ(T_sk); k = 1, ..., κ }.

Thus we get

    R(T_q) := -ln ρ(T_q) ≥ -ln max{ ρ(T_sk); k = 1, ..., κ } = min{ R(T_sk); k = 1, ..., κ }.

(2) From Lemma 1 we know that ρ(N) = 1/(1 - ρ(T)); then we get

    ρ(N_q) ≤ max{ ρ(N_sk); k = 1, ..., κ }.

In the following we investigate whether and when the performance of a mixed strategy EA is better than that of a pure strategy EA.

Definition 3: A mutation operator s2 is called complementary to a mutation operator s1 on a fitness function f(x) if, for any x such that

    P_s1(x, x) = ρ(T_s1),   (10)

it holds that

    P_s2(x, x) < ρ(T_s1).   (11)

Theorem 2: Let f(x) be a fitness function and EA(s1) a pure strategy EA. If a mutation operator s2 is complementary to s1, then it is possible to design a mixed strategy EA(s1, s2) which satisfies: (1) its asymptotic convergence rate is larger than that of EA(s1); (2) its asymptotic hitting time is shorter than that of EA(s1).

Proof: (1) Design a mixed strategy EA(s1, s2) as follows. For any x such that P_s1(x, x) = ρ(T_s1), let the strategy probability distribution satisfy q_s2(x) = 1. For any other x, let the strategy probability distribution satisfy q_s1(x) = 1. Because s2 is complementary to s1, we get

    ρ(T_q) < ρ(T_s1),

and then

    -ln ρ(T_q) > -ln ρ(T_s1),

which proves the first conclusion of the theorem. (2) From Lemma 1, ρ(N) = 1/(1 - ρ(T)), and we get ρ(N_q) < ρ(N_s1), which proves the second conclusion of the theorem.
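As a toy numerical illustration of Definition 3 and of the construction used in the proof of Theorem 2 (the self-loop probabilities below are invented, not taken from the paper), Lemma 1 reduces everything to the diagonal entries P_sk(x, x) over the non-optimal states:

```python
import math

# Hypothetical self-loop probabilities P_sk(x, x) on four non-optimal states.
# By Lemma 1, every spectral radius here is a maximal diagonal entry.
p_s1 = [0.90, 0.95, 0.40, 0.30]   # operator s1 stalls hardest on state 1
p_s2 = [0.50, 0.60, 0.85, 0.80]   # operator s2 is good exactly where s1 stalls

rho_s1, rho_s2 = max(p_s1), max(p_s2)

# Definition 3: s2 is complementary to s1 if P_s2(x,x) < rho(T_s1) on every
# state where s1 attains its spectral radius.
complementary = all(p2 < rho_s1
                    for p1, p2 in zip(p_s1, p_s2) if p1 == rho_s1)

# Theorem 2's mixed strategy: use s2 exactly on those states, s1 elsewhere.
p_q = [p2 if p1 == rho_s1 else p1 for p1, p2 in zip(p_s1, p_s2)]
rho_q = max(p_q)

print(complementary)                          # True
print(rho_q, rho_s1)                          # 0.90 < 0.95
print(-math.log(rho_q), -math.log(rho_s1))    # R(T_q) > R(T_s1)
print(1 / (1 - rho_q), 1 / (1 - rho_s1))      # T(T_q) < T(T_s1)
```

Here ρ(T_s1) = 0.95 while the constructed mixed strategy gives ρ(T_q) = 0.90, so R(T_q) > R(T_s1) and T(T_q) < T(T_s1), as Theorem 2 predicts.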

Definition 4: κ mutation operators s1, ..., sκ are called mutually complementary on a fitness function f(x) if, for any x ∈ S_non and any sl ∈ {s1, ..., sκ} such that

    P_sl(x, x) ≥ min{ ρ(T_s1), ..., ρ(T_sκ) },   (12)

it holds, for any sk ≠ sl, that

    P_sk(x, x) < min{ ρ(T_s1), ..., ρ(T_sκ) }.   (13)

Theorem 3: Let f(x) be a fitness function and s1, ..., sκ be κ mutation operators. If these mutation operators are mutually complementary, then it is possible to design a mixed strategy EA which satisfies: (1) its asymptotic convergence rate is larger than that of any pure strategy EA using only one of these mutation operators; (2) its asymptotic hitting time is shorter than that of any pure strategy EA using only one of these mutation operators.

Proof: (1) We design a mixed strategy EA(s1, ..., sκ) as follows. For any x and any strategy sl ∈ {s1, ..., sκ} such that P_sl(x, x) ≥ min{ρ(T_s1), ..., ρ(T_sκ)}, the mutually complementary condition tells us that, for any sk ≠ sl, it holds that P_sk(x, x) < min{ρ(T_s1), ..., ρ(T_sκ)}; let the strategy probability distribution satisfy q_sk(x) = 1 for one such sk. For any other x, we may assign the strategy probability distribution in any way. Because the mutation operators are mutually complementary, we get

    ρ(T_q) < min{ ρ(T_s1), ..., ρ(T_sκ) },

and then

    -ln ρ(T_q) > -ln ρ(T_sk), i.e., R(T_q) > R(T_sk), for k = 1, ..., κ,

which proves the first conclusion of the theorem. (2) From Lemma 1, ρ(N) = 1/(1 - ρ(T)), and we get

    ρ(N_q) < ρ(N_sk), k = 1, ..., κ,

which proves the second conclusion of the theorem.

Example 3: Consider the problem of maximizing the following fitness function f(x) (see Figure 2):

    f(x) = |x|, if |x| < 0.5n and |x| is even;
    f(x) = |x| + 2, if |x| < 0.5n and |x| is odd;
    f(x) = |x|, if |x| ≥ 0.5n,

where x = (b_1 ... b_n) is a binary string, n the string length and |x| := Σ_{i=1}^{n} b_i.

[Fig. 2: The shape of the function f(x) in Example 3 when n = 16.]

Consider two common mutation operators:

s1: choose one bit randomly and then flip it;
s2: flip each bit independently with probability 1/n.

EA(s1) uses the mutation operator s1 only. Then ρ(T_s1) = 1 (any x with |x| odd and |x| < 0.5n is a local optimum: every one-bit flip lowers the fitness and is rejected by strict elitist selection), and so the asymptotic convergence rate is R(T_s1) = 0. EA(s2) utilizes the mutation operator s2 only. Then

    ρ(T_s2) = 1 - (1/n)(1 - 1/n)^{n-1}.
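For illustration (not from the paper), the sketch below simulates EA(s1), EA(s2) and the mixed strategy EA(s1, s2) constructed at the end of this example, which chooses s2 when |x| ≤ 0.5n and s1 otherwise, on this fitness function with n = 16. Since EA(s1) can stall forever on a local optimum with |x| odd and |x| < 0.5n, a generation cap is used; the cap and the number of repeated runs are arbitrary choices.

```python
import random

n = 16
CAP = 50_000      # generation cap: EA(s1) can stall forever on a local optimum

def f(x):
    ones = sum(x)
    if ones < 0.5 * n and ones % 2 == 1:
        return ones + 2
    return ones

def s1(x):                            # choose one bit randomly and flip it
    y = x[:]
    i = random.randrange(n)
    y[i] = 1 - y[i]
    return y

def s2(x):                            # flip each bit with probability 1/n
    return [1 - b if random.random() < 1.0 / n else b for b in x]

def run(strategy):
    """(1+1) EA with strict elitist selection; `strategy(x)` returns the
    mutation operator to apply to the current parent x."""
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(1, CAP + 1):
        y = strategy(x)(x)
        if f(y) > f(x):
            x = y
        if sum(x) == n:               # the optimum is the all-ones string
            return t
    return CAP

pure_s1 = lambda x: s1
pure_s2 = lambda x: s2
mixed   = lambda x: s2 if sum(x) <= 0.5 * n else s1   # q_s1(x) = 0 iff |x| <= 0.5n

for name, strategy in [("EA(s1)", pure_s1), ("EA(s2)", pure_s2),
                       ("EA(s1,s2)", mixed)]:
    times = [run(strategy) for _ in range(20)]
    print(name, "mean generations:", sum(times) / len(times))
```

Runs of EA(s1) that start with fewer than 0.5n one-bits typically hit the cap, while the mixed strategy EA finishes quickly.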

We have:

1) For any x such that P_s1(x, x) ≥ min{ρ(T_s1), ρ(T_s2)} = 1 - (1/n)(1 - 1/n)^{n-1}, we have P_s1(x, x) = 1, and we know that P_s2(x, x) < 1 - (1/n)(1 - 1/n)^{n-1}.

2) For any x such that P_s2(x, x) = ρ(T_s2) = 1 - (1/n)(1 - 1/n)^{n-1}, we know that P_s1(x, x) = 1 - 1/n < ρ(T_s2) = 1 - (1/n)(1 - 1/n)^{n-1}.

Hence these two mutation operators are mutually complementary. We design a mixed strategy EA(s1, s2) as follows: let the strategy probability distribution satisfy

    q_s1(x) = 0, if |x| ≤ 0.5n;
    q_s1(x) = 1, if |x| > 0.5n.

According to Theorem 3, the asymptotic convergence rate of this mixed strategy EA(s1, s2) is larger than that of either EA(s1) or EA(s2).

V. CONCLUSION AND DISCUSSION

The results of this paper can be summarized in three points. First, the asymptotic convergence rate and asymptotic hitting time are proposed to measure the performance of EAs; they have seldom been used to evaluate the performance of EAs before. Second, it is proven that the asymptotic convergence rate and asymptotic hitting time of any mixed strategy (1+1) EA consisting of several mutation operators are not worse than those of the worst pure strategy (1+1) EA using only one of these mutation operators. Third, if these mutation operators are mutually complementary, then it is possible to design a mixed strategy EA whose performance (asymptotic convergence rate and asymptotic hitting time) is better than that of any pure strategy EA using only one mutation operator.

An argument is that several mutation operators can be applied simultaneously, e.g., in a population-based EA where different individuals adopt different mutation operators. However, in this case the number of fitness evaluations at each generation is larger than that of a (1+1) EA. Therefore a fair comparison should be between a population-based mixed strategy EA and a population-based pure strategy EA. Due to the length restriction, this issue is not discussed in this paper.

Acknowledgement: J. He is partially supported by the EPSRC under Grant EP/I009809/1. H. Dong is partially supported by the National Natural Science Foundation of China under Grant No. 60973075 and the Natural Science Foundation of Heilongjiang Province of China under Grant No. F200937.

REFERENCES

[1] D. B. Fogel and Z. Michalewicz. Handbook of Evolutionary Computation. Oxford Univ. Press, 1997.
[2] C. Grosan, A. Abraham, and H. Ishibuchi. Hybrid Evolutionary Algorithms. Springer Verlag, 2007.
[3] P. K. Dutta. Strategies and Games: Theory and Practice. MIT Press, 1999.
[4] J. He and X. Yao. A game-theoretic approach for designing mixed mutation strategies. In L. Wang, K. Chen, and Y.-S. Ong, editors, Proceedings of the 1st International Conference on Natural Computation, LNCS 3612, pages 279-288, Changsha, China, August 2005. Springer.
[5] H. Dong, J. He, H. Huang, and W. Hou. Evolutionary programming using a mixed mutation strategy. Information Sciences, 177(1):312-327, 2007.
[6] L. Shen and J. He. A mixed strategy for evolutionary programming based on local fitness landscape. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation, pages 350-357, Barcelona, Spain, July 2010. IEEE Press.
[7] R. S. Varga. Matrix Iterative Analysis. Springer, 2009.
[8] J. He and T. Chen. Population scalability analysis of abstract population-based random search: Spectral radius. arXiv preprint, 2011.
[9] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer Verlag, New York, 1996.
[10] J. He and Y. Zhou. A comparison of GAs using penalizing infeasible solutions and repairing infeasible solutions II: Average capacity knapsack. In L. Kang, Y. Liu, and S. Y. Zeng, editors, Proceedings of the 2nd International Symposium on Intelligence Computation and Applications, LNCS 4683, pages 102-110, Wuhan, China, September 2007. Springer.

[11] G. Rudolph. Convergence analysis of canonical genetic algorithms. IEEE Transactions on Neural Networks, 5(1):96-101, 1994.
[12] J. He and L. Kang. On the convergence rates of genetic algorithms. Theoretical Computer Science, 229(1-2):23-39, 1999.
[13] J. He and X. Yao. Towards an analytic framework for analysing the computation time of evolutionary algorithms. Artificial Intelligence, 145(1-2):59-97, 2003.
[14] M. Iosifescu. Finite Markov Processes and Their Applications. Wiley, Chichester, 1980.