Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms. Yong Wang and Zhi-Zhong Liu, School of Information Science and Engineering, Central South University, ywang@csu.edu.cn
Outline: Nature-Inspired Optimization Algorithms (NIOAs); Eigen Coordinate System; I: Stochastic Tuning + Single Population Distribution Information; II: Deterministic Tuning + Cumulative Population Distribution Information; III: Adaptive Tuning + Cumulative Population Distribution Information; Conclusion
Main Paradigms of NIOAs: Genetic algorithm (GA), Evolution strategy (ES), Evolutionary programming (EP), Ant colony optimization (ACO), Differential evolution (DE), and Particle swarm optimization (PSO). DE and PSO are the most popular paradigms in current studies.
Differential Evolution (1/4): Differential evolution (DE) was proposed by Storn and Price in 1995. DE includes three main operators, i.e., mutation, crossover, and selection. R. Storn and K. Price. Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. International Computer Science Institute, Berkeley, CA, Tech. Rep. TR-95-012, 1995. K. Price, R. Storn, and J. Lampinen. Differential Evolution: A Practical Approach to Global Optimization. Berlin, Germany: Springer-Verlag, 2005.
Differential Evolution (2/4): DE has three main operators; mutation + crossover constitute the trial vector generation strategy.
Differential Evolution (3/4): A classic DE version: DE/rand/1/bin. Mutation: v_i^g = x_r1^g + F * (x_r2^g - x_r3^g), where x_r1^g is the base vector, (x_r2^g - x_r3^g) is the differential vector, and F is the mutation scaling factor. Crossover: u_{i,j}^g = v_{i,j}^g if rand_j <= CR or j = j_rand, and u_{i,j}^g = x_{i,j}^g otherwise, where CR is the crossover control parameter. Selection: x_i^{g+1} = u_i^g if f(u_i^g) <= f(x_i^g), and x_i^{g+1} = x_i^g otherwise. Remark: in the crossover, each variable in DE is updated independently.
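The three operators can be sketched in a few lines of code (a minimal pure-Python illustration of DE/rand/1/bin; the sphere objective, the bounds, the population size, and the settings F = 0.5 and CR = 0.9 are our own example choices, not the talk's):

```python
import random

def sphere(x):
    """Objective to be minimized: f(x) = sum of squares."""
    return sum(v * v for v in x)

def de_rand_1_bin(f, dim=5, NP=20, F=0.5, CR=0.9, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(NP)]
    for g in range(generations):
        for i in range(NP):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # Binomial crossover: each variable is updated independently
            jrand = rng.randrange(dim)  # guarantees at least one component from v
            u = [v[j] if (rng.random() <= CR or j == jrand) else pop[i][j]
                 for j in range(dim)]
            # Selection: the trial vector replaces the target vector if not worse
            if f(u) <= f(pop[i]):
                pop[i] = u
    return min(pop, key=f)

best = de_rand_1_bin(sphere)
```

On the sphere function this converges quickly toward the origin; note that the crossover operates coordinate by coordinate, which is exactly the property the later slides revisit.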
Differential Evolution (4/4): Schematic diagram to illustrate DE/rand/1/bin. The mutant vector v_i^g = x_r1^g + F * (x_r2^g - x_r3^g) is obtained by adding the scaled difference of the two perturbed vectors x_r2^g and x_r3^g to the base vector x_r1^g; the triangle in the diagram denotes the trial vector u_i^g produced by crossing v_i^g with the target vector x_i^g.
Particle Swarm Optimization (1/3): The movement equations of the classic PSO. Velocity update equation: v_{i,j}^{g+1} = v_{i,j}^g + c1 * r1,j * (pbest_{i,j}^g - x_{i,j}^g) + c2 * r2,j * (gbest_j^g - x_{i,j}^g), or v_{i,j}^{g+1} = v_{i,j}^g + c1 * r1,j * (pbest_{i,j}^g - x_{i,j}^g) + c2 * r2,j * (lbest_{i,j}^g - x_{i,j}^g), where pbest is the personal historical best experience (the cognition part), and gbest (the entire swarm's best experience) or lbest (the neighborhood's best experience) is the social part. Position update equation: x_{i,j}^{g+1} = x_{i,j}^g + v_{i,j}^{g+1}. Remark: in the velocity update, each variable in PSO is updated independently. J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proc. IEEE Int. Conf. Neural Networks, 1995, pp. 1942-1948.
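The global-version updates can be sketched as follows (a minimal pure-Python example; the sphere objective and the parameter values are our own choices, and we include an inertia weight w as in the PSO-w variant mentioned later in the talk - the classic 1995 equations correspond to w = 1):

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=5, NP=20, w=0.729, c1=1.49445, c2=1.49445,
        generations=200, seed=1):
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(NP)]
    v = [[0.0] * dim for _ in range(NP)]
    pbest = [xi[:] for xi in x]          # personal historical best experience
    gbest = min(pbest, key=f)[:]         # the entire swarm's best experience
    for g in range(generations):
        for i in range(NP):
            for j in range(dim):         # each variable is updated independently
                r1, r2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])   # cognition part
                           + c2 * r2 * (gbest[j] - x[i][j]))     # social part
                x[i][j] += v[i][j]       # position update
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

gbest = pso(sphere)
```

As in DE's crossover, the inner loop over j makes the coordinate-wise nature of the update explicit.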
Particle Swarm Optimization (2/3): The principle of the movement equations (the global version). The new velocity v_i^{g+1} combines the previous velocity v_i^g with attractions toward pbest_i^g and gbest^g, and the new position is x_i^{g+1} = x_i^g + v_i^{g+1}.
Particle Swarm Optimization (3/3): The framework of PSO [flowchart omitted].
Eigen Coordinate System
What is the Original Coordinate System? The original coordinate system is the fixed Cartesian coordinate system whose axes correspond one-to-one to the decision variables.
What is the Eigen Coordinate System? The Eigen coordinate system is established by the columns of an orthogonal matrix B, which comes from the Eigen decomposition of the covariance matrix C: C = B D^2 B^T, where B is an orthogonal matrix, D is a diagonal matrix, and B^T is the transpose of B. Each column of B is an eigenvector of C, and each diagonal element of D is the square root of an eigenvalue of C.
How to Establish the Eigen Coordinate System: pop is the NP x D population matrix whose i-th row (x_{i,1}, x_{i,2}, ..., x_{i,D}) is the i-th of the NP individuals; the Matlab code below operates on its transpose (one individual per column, as indicated by the transpose symbol on the slide). Matlab Code:
pop = pop';  % D-by-NP: one individual per column
xmean = mean(pop, 2);
C = 1/(NP-1) * (pop - xmean(:, ones(NP, 1))) * (pop - xmean(:, ones(NP, 1)))';
C = triu(C) + transpose(triu(C, 1));  % enforce symmetry
[B, D] = eig(C);
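An equivalent pure-Python sketch for readers without MATLAB (the 2-D sample points are arbitrary, and the closed-form eigendecomposition used here applies only to symmetric 2 x 2 matrices):

```python
import math

# NP individuals in D = 2 dimensions (one individual per row), arbitrary sample
pop = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1], [5.0, 9.8]]
NP, D = len(pop), len(pop[0])

# Sample mean of each variable
mean = [sum(ind[j] for ind in pop) / NP for j in range(D)]

# Unbiased sample covariance matrix C (D x D), as in the MATLAB snippet
C = [[sum((ind[a] - mean[a]) * (ind[b] - mean[b]) for ind in pop) / (NP - 1)
      for b in range(D)] for a in range(D)]

# Closed-form eigendecomposition of the symmetric 2x2 matrix [[c11, c12], [c12, c22]]
c11, c12, c22 = C[0][0], C[0][1], C[1][1]
tr, det = c11 + c22, c11 * c22 - c12 * c12
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc      # eigenvalues, lam1 >= lam2

# Eigenvector for lam1; the columns of B are the axes of the Eigen system
vx, vy = (lam1 - c22, c12) if abs(c12) > 1e-12 else (1.0, 0.0)
n = math.hypot(vx, vy)
b1 = (vx / n, vy / n)                          # first axis of the Eigen system
b2 = (-b1[1], b1[0])                           # second axis, orthogonal to b1

# Sanity check: C should be recovered as lam1*b1*b1^T + lam2*b2*b2^T
recon = [[lam1 * b1[r] * b1[c] + lam2 * b2[r] * b2[c] for c in range(2)]
         for r in range(2)]
```

For D > 2 one would call a numerical eigensolver instead; the point here is only that B and the eigenvalues come straight from the population's covariance.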
Advantages of the Covariance Matrix (1/6): Covariance matrices have an appealing geometrical interpretation: a covariance matrix C can be uniquely identified with the (hyper-)ellipsoid {x | x^T C^-1 x = 1}. N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, vol. 9, no. 2, pp. 159-195, 2001.
Advantages of the Covariance Matrix (2/6): The Eigen decomposition is denoted by C = B D^2 B^T. If D = delta * I, where delta is a positive number and I is the identity matrix, then C = delta^2 * I and the ellipsoid is isotropic. If B = I, then C = D^2 is a diagonal matrix and the ellipsoid is axis-parallel oriented. In the general case, the principal axes of the ellipsoid correspond to the columns of B, and the ellipsoid can be adapted to suit the contour lines of the objective function.
Advantages of the Covariance Matrix (3/6): For a convex quadratic function f(x) = (1/2) x^T H x, where H is the Hessian matrix, setting C = H^-1 is equivalent to optimizing the isotropic function h(y) = (1/2) y^T y with y = H^(1/2) x. Remark 1: The optimal covariance matrix equals the inverse Hessian matrix. Remark 2: The objective of covariance matrix adaptation is to approximate the inverse Hessian matrix.
Advantages of the Covariance Matrix (4/6): minimize f(x1, x2) = x1^2 + x2^2, -5 < x1 < 5, -5 < x2 < 5. Since f is an isotropic function, this is equivalent to minimizing x1^2 and x2^2 independently: minimize x1^2 + minimize x2^2, -5 < x1 < 5, -5 < x2 < 5.
Advantages of the Covariance Matrix (5/6): Covariance matrix adaptation on an isotropic function [figure omitted].
Advantages of the Covariance Matrix (6/6): In general, the covariance matrix C is constructed and adapted according to the feedback information resulting from the search process. Therefore, unlike the original coordinate system, the Eigen coordinate system is dynamic throughout the search process, with the aim of suiting the function landscape.
I: Stochastic Tuning + Single Population Distribution Information
Motivation: The commonly used crossover operators of DE are implemented in the original coordinate system. Y. Wang, H.-X. Li, T. Huang, and L. Li. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Applied Soft Computing, vol. 18, pp. 232-247, 2014. (CoBiDE)
CoBiDE (1/5): Covariance matrix learning [figure omitted].
CoBiDE (2/5): An explanation. Through covariance matrix learning, the target vector x_i^g and the mutant vector v_i^g are transformed from the original coordinate system (x1, x2) into the Eigen coordinate system, the crossover is performed there, and the resulting trial vector is transformed back into the original coordinate system.
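In code, this crossover amounts to a change of basis wrapped around the ordinary binomial crossover (a sketch; B is the D x D orthogonal matrix obtained from the Eigen decomposition of the covariance matrix, and the helper names below are our own):

```python
import random

def matvec(M, x):
    """Multiply a D x D matrix by a length-D vector."""
    D = len(x)
    return [sum(M[a][b] * x[b] for b in range(D)) for a in range(D)]

def transpose(M):
    return [list(col) for col in zip(*M)]

def binomial_crossover(target, mutant, CR, rng):
    D = len(target)
    jrand = rng.randrange(D)
    return [mutant[j] if (rng.random() <= CR or j == jrand) else target[j]
            for j in range(D)]

def eigen_crossover(target, mutant, B, CR, rng):
    """Crossover carried out in the Eigen coordinate system, CoBiDE-style."""
    Bt = transpose(B)
    t_eig = matvec(Bt, target)          # transform into the Eigen system
    m_eig = matvec(Bt, mutant)
    u_eig = binomial_crossover(t_eig, m_eig, CR, rng)
    return matvec(B, u_eig)             # transform back to the original system

# With B = identity the Eigen crossover reduces to the ordinary one
rng = random.Random(0)
B = [[1.0, 0.0], [0.0, 1.0]]
u = eigen_crossover([1.0, 2.0], [3.0, 4.0], B, CR=1.0, rng=rng)
```

With a non-identity B the coordinate-wise exchange happens along the eigenvector directions, which is the whole point of the method.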
CoBiDE (3/5): [figure omitted; it illustrates the search behavior around the optimal solution].
CoBiDE (4/5): The first question: which individuals should be chosen for computing the covariance matrix? CoBiDE computes the covariance matrix C via the top ps*NP individuals in the current population, i.e., it utilizes single population distribution information. By contrast, CMA-ES estimates the covariance matrix from the λ offspring produced from μ parents and the center of the μ best individuals from the λ offspring (N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, vol. 9, no. 2, pp. 159-195, 2001). Remark: the variance will decrease significantly.
CoBiDE (5/5): The second question: how to determine the probability that the crossover is implemented in the Eigen coordinate system? CoBiDE adopts stochastic tuning: the crossover is performed in the Eigen coordinate system with a given probability, and in the original coordinate system otherwise.
II: Deterministic Tuning + Cumulative Population Distribution Information
Motivation: A single population fails to contain enough information to reliably estimate the covariance matrix. Moreover, some extra parameters have been introduced. Y. Wang, H.-X. Li, T. Huang, and L. Li. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Applied Soft Computing, vol. 18, pp. 232-247, 2014. S. Guo and C. Yang. Enhancing differential evolution utilizing eigenvector-based crossover operator. IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 31-49, 2015.
CPI-DE (1/3): We make use of the cumulative distribution information of the population to establish an appropriate coordinate system for DE's crossover. The algorithmic framework adopts deterministic tuning of the two coordinate systems. Y. Wang, Z.-Z. Liu, J. Li, H.-X. Li, and G. G. Yen. Utilizing cumulative population distribution information in differential evolution. Applied Soft Computing, vol. 48, pp. 329-346, 2016. (CPI-DE)
CPI-DE (2/3): Rank-NP-update of the covariance matrix in DE:
C_NP^(g+1) = sum_{i=1}^{NP} w_i * (x_{i:2NP}^(g+1) - m^g) * (x_{i:2NP}^(g+1) - m^g)^T
C^(g+1) = (1 - c_NP) * C^g + c_NP * (1/sigma^g)^2 * C_NP^(g+1)
where x_{i:2NP}^(g+1) is the i-th best individual among the combined 2*NP parents and offspring, m^g is the mean of the population at generation g, the w_i are recombination weights, c_NP is the learning rate, and sigma^g is the step size. Accumulating C over generations in this way utilizes cumulative population distribution information.
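Under our reading of the equations above, one update step can be sketched as follows (pure Python in two dimensions; the weights, the learning rate c_np = 0.3, and the step size sigma = 1.0 are illustrative values, not CPI-DE's exact settings):

```python
import math

def rank_np_update(C, sorted_best, m, weights, c_np, sigma):
    """One rank-NP-update step:
    C_new = (1 - c_np) * C + c_np / sigma^2 * C_NP, where C_NP is the
    weighted scatter of the NP best of the combined 2*NP parents and
    offspring around the old mean m."""
    D = len(m)
    C_np = [[0.0] * D for _ in range(D)]
    for w, x in zip(weights, sorted_best):
        d = [x[j] - m[j] for j in range(D)]
        for a in range(D):
            for b in range(D):
                C_np[a][b] += w * d[a] * d[b]
    return [[(1 - c_np) * C[a][b] + c_np * C_np[a][b] / sigma ** 2
             for b in range(D)] for a in range(D)]

# Illustrative call: NP = 3 best individuals, log-decreasing weights summing to 1
best = [[1.0, 1.2], [0.8, 0.9], [1.3, 1.1]]
raw = [math.log(3 + 1) - math.log(i + 1) for i in range(3)]
weights = [r / sum(raw) for r in raw]
C0 = [[1.0, 0.0], [0.0, 1.0]]
m = [0.0, 0.0]
C1 = rank_np_update(C0, best, m, weights, c_np=0.3, sigma=1.0)
```

Because each term is an outer product of a deviation with itself, the result stays symmetric and positive semi-definite, so it remains a valid covariance matrix.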
CPI-DE (3/3): The relationship between rank-NP-update in CPI-DE and rank-μ-update in CMA-ES: rank-NP-update in CPI-DE is a natural extension of rank-μ-update in CMA-ES.
III: Adaptive Tuning + Cumulative Population Distribution Information
Motivation: It is necessary to strike a balance between the accuracy of estimation and the convergence performance. The current methods adjust the original and Eigen coordinate systems in either a random way or a deterministic way; how to exploit the feedback information from the search process to adaptively tune them has not yet been investigated. In addition, it is an interesting topic to extend the research on coordinate systems to other NIOA paradigms.
A New Point of View (1/4): How to describe some common nature-inspired operators in the original coordinate system? In these operators, each variable is updated independently. Z.-Z. Liu, Y. Wang*, S. Yang, and K. Tang. An adaptive framework to tune the coordinate systems in nature-inspired optimization algorithms. IEEE Transactions on Cybernetics, in press. (ACoS)
A New Point of View (2/4): A convenient transformation from a nature-inspired operator in the original coordinate system to the corresponding nature-inspired operator in the Eigen coordinate system: the individuals involved are first transformed into the Eigen coordinate system by B^T, the operator is applied unchanged there, and the result is transformed back into the original coordinate system by B.
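This transformation can be written as a generic wrapper around any such operator (a sketch; the function names and the operator signature are our own convention):

```python
def to_eigen_system(op):
    """Lift an operator defined in the original coordinate system into the
    Eigen coordinate system given by an orthogonal matrix B."""
    def eigen_op(B, *vectors, **kwargs):
        D = len(B)
        Bt = [[B[a][b] for a in range(D)] for b in range(D)]  # B transpose
        mv = lambda M, x: [sum(M[a][b] * x[b] for b in range(D))
                           for a in range(D)]
        transformed = [mv(Bt, v) for v in vectors]            # into the Eigen system
        return mv(B, op(*transformed, **kwargs))              # apply op, map back
    return eigen_op

# Example operator: DE mutation v = x1 + F * (x2 - x3)
def de_mutation(x1, x2, x3, F=0.5):
    return [x1[j] + F * (x2[j] - x3[j]) for j in range(len(x1))]

eigen_mutation = to_eigen_system(de_mutation)

# DE mutation is rotation-equivariant, so both versions agree; coordinate-wise
# operators such as binomial crossover genuinely differ between the two systems.
B = [[0.6, -0.8], [0.8, 0.6]]   # an orthogonal 2x2 matrix
v = eigen_mutation(B, [1.0, 0.0], [0.0, 1.0], [1.0, 1.0])
```

The agreement for mutation and the difference for crossover is exactly why the talk focuses on the crossover (DE) and velocity update (PSO) steps.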
A New Point of View (3/4): An example: DE, with the crossover operator written in the original coordinate system and in the Eigen coordinate system [equations omitted].
A New Point of View (4/4): Another example: PSO, with the velocity update equation written in the original coordinate system and in the Eigen coordinate system [equations omitted].
ACoS (1/9): [the overall framework; figure omitted].
ACoS (2/9): Each individual selects an appropriate coordinate system, between the original coordinate system and the Eigen coordinate system, according to a probability.
ACoS (3/9): The probability is adaptively updated based on the information collected from the offspring.
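As an illustration only (the simplified success-rate rule below is our own, not the exact update used in ACoS), the select-and-adapt loop might look like:

```python
import random

def choose_and_adapt(trials=1000, seed=0):
    """Each individual picks the Eigen system with probability p; p is then
    nudged toward whichever coordinate system produced a successful offspring.
    Schematic rule for illustration only."""
    rng = random.Random(seed)
    p = 0.5                                   # probability of the Eigen system
    for _ in range(trials):
        use_eigen = rng.random() < p
        # Stand-in for "offspring improved over its parent": pretend the Eigen
        # system succeeds 60% of the time and the original system 40%.
        success = rng.random() < (0.6 if use_eigen else 0.4)
        if success:
            target = 1.0 if use_eigen else 0.0
            p += 0.05 * (target - p)          # move p toward the winning system
        p = min(max(p, 0.1), 0.9)             # keep both systems in play
    return p

p = choose_and_adapt()
```

Clamping p away from 0 and 1 keeps both coordinate systems available, so the feedback loop can recover if the landscape changes during the run.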
ACoS (4/9): The Eigen coordinate system is coupled with the original coordinate system.
ACoS (5/9): The Eigen coordinate system is updated by making use of an additional archiving mechanism and the rank-μ-update strategy. An external archive A is used to store the offspring of both the current generation and the past several generations. A does not undergo any nature-inspired operators; therefore, it provides sufficient information to estimate the Eigen coordinate system without affecting the population size or the number of generations, striking a balance between the accuracy of estimation and the convergence performance.
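A minimal sketch of such an archive (our own reading: a bounded first-in-first-out buffer over recent offspring; the capacity of a few generations and the eviction policy are illustrative assumptions):

```python
from collections import deque

class OffspringArchive:
    """External archive A: stores the offspring of the current generation and
    the past several generations, without undergoing any operators itself."""

    def __init__(self, NP, history=4):
        # Capacity = offspring of `history` generations of size NP each
        self.buf = deque(maxlen=history * NP)

    def add_generation(self, offspring):
        """Deposit one generation of offspring; the oldest are evicted."""
        self.buf.extend(offspring)

    def snapshot(self):
        """Individuals available for estimating the Eigen coordinate system."""
        return list(self.buf)

# Usage: after each generation, deposit the NP offspring
A = OffspringArchive(NP=3, history=2)
A.add_generation([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])   # generation g
A.add_generation([[3.0, 3.0], [4.0, 4.0], [5.0, 5.0]])   # generation g+1
A.add_generation([[6.0, 6.0], [7.0, 7.0], [8.0, 8.0]])   # generation g+2: gen g evicted
```

Because the archive only observes offspring, the covariance estimate gets several generations' worth of samples without enlarging the population or slowing the generational cycle.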
ACoS (6/9): The covariance matrix behind the Eigen coordinate system is updated from the archive via the rank-μ-update strategy, i.e., utilizing cumulative population distribution information.
ACoS (7/9): Apply ACoS to DE [details omitted].
ACoS (8/9): Apply ACoS to PSO [details omitted].
ACoS (9/9): ACoS can be used to improve the performance of three famous DE variants (JADE, jDE, and SaDE) and two of the most popular PSO variants (PSO-w and PSO-cf). We also apply ACoS to the bat algorithm (BA) and teaching-learning-based optimization (TLBO). The structure of ACoS is simple.
Conclusion
Conclusion (1/2): The Eigen coordinate system should be coupled with the original coordinate system, because in the early stage the Eigen coordinate system is inaccurate due to the random distribution of the population. The Eigen coordinate system can significantly accelerate the convergence. However, based on our observation and analysis, it is hard for the Eigen coordinate system to change the search pattern of a NIOA: if a NIOA tends to converge to a local optimum of an optimization problem, its capability to jump out of that local optimum remains limited even when implemented in the Eigen coordinate system.
Conclusion (2/2): Because the theoretical study of NIOAs is very difficult (in particular, the theoretical analysis of population cooperation is very challenging), tuning the coordinate systems in NIOAs is an important and promising step toward understanding why NIOAs work or do not work, and it deserves in-depth investigation in the future. The source codes of CoBiDE, CPI-DE, and ACoS can be downloaded from: http://www.escience.cn/people/yongwang1/index.html