Population Variance Based Empirical Analysis of the Behavior of Differential Evolution Variants
Applied Mathematical Sciences, Vol. 9, 2015, no. 66
HIKARI Ltd

S. Thangavelu, G. Jeyakumar and C. Shunmuga Velayutham
Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India

Copyright 2014 S. Thangavelu, G. Jeyakumar and C. Shunmuga Velayutham. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Differential Evolution (DE) is a simple but efficient Evolutionary Algorithm (EA) for stochastic real-parameter optimization. With various types of mutation and crossover applicable to DE, there exist many variants of DE. Empirical comparisons of the performance of these variants on chosen benchmark problems are well reported in the literature. However, attempts to analyze the reasons for the observed behavior of the variants are scarce. As a step in this direction, this paper empirically analyzes the performance of 14 classical DE variants on 4 benchmark functions with different modality and decomposability, as well as the reasons for that performance. The empirical analysis is carried out by measuring the mean objective function value (MOV), success rate (Sr), probability of convergence (Pc), quality measure (Qm) and the empirical evolution of the population variance (Evar). The study also reports evidence for the variants suffering from stagnation and/or premature convergence.

Keywords: Differential Evolution, Population Variance, Stagnation and Premature Convergence

1 Introduction

Differential Evolution (DE) [1] is a well-known Evolutionary Algorithm (EA) for solving optimization problems in continuous spaces.
The superiority of DE has been tested and proved on many benchmarking problems and real-world applications [2, 3]. Like other EAs, DE also employs mutation, recombination and
selection operations during evolution. However, DE has some unique characteristics which make it different from other algorithms in the EA family. The mutation operation in DE, differential mutation, adds the weighted difference between a pair of parent vectors to a target vector to produce a mutant vector. Between the target vector and the mutant vector, a recombination operation is carried out to produce a trial vector. This is followed by a one-to-one greedy selection between the target vector and the trial vector. With various types of mutation and recombination there exist many strategies for generating trial vectors, and consequently many DE variants in the literature. However, no variant has turned out to be superior in solving a wide range of problems. Even though various studies [5, 6, 7, 8, 9] on the performance of DE variants are reported in the literature, the reasons for the performance differences among the variants have not been properly addressed. A few works in the literature study the convergence nature and explorative power of DE variants [10, 11]. This necessitates an investigation of the inherent evolution mechanisms of the variants, based on their mutation and recombination operations. This paper is an attempt to identify the reasons for the observed performance of the variants, in light of the population diversity during the course of evolution.

The remainder of the paper is organized as follows. Section 2 describes the algorithmic structure of DE, Section 3 briefs the exploration and exploitation processes in DE and Section 4 presents the related works. Section 5 presents the design of experiments. The results are presented and discussed in Section 6, and finally Section 7 concludes the paper.
2 Differential Evolution

    Population Initialization X(0) <- {x_1(0), ..., x_NP(0)}
    g <- 0
    Compute {f(x_1(g)), ..., f(x_NP(g))}
    while the stopping condition is false do
        for i = 1 to NP do
            MutantVector: y_i <- generate_mutant(X(g))
            TrialVector: z_i <- crossover(x_i(g), y_i)
            if f(z_i) < f(x_i(g)) then
                x_i(g+1) <- z_i
            else
                x_i(g+1) <- x_i(g)
            end if
        end for
        g <- g + 1
        Compute {f(x_1(g)), ..., f(x_NP(g))}
    end while

Figure 1. Description of the DE algorithm

As depicted in Figure 1, the DE algorithm starts with the initialization of NP D-dimensional vectors, called the initial population, denoted as X(0). After population initialization, an iterative process comprising mutation, recombination and selection is started. At each generation g a new population X(g) is generated, until the stopping criterion is satisfied.
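The loop of Figure 1 can be sketched as runnable Python for the classic DE/rand/1/bin variant. The function name, the parameter defaults and the omission of bound handling are illustrative assumptions, not details drawn from the paper.

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=60, F=0.5, Cr=0.9, max_gen=3000, tol=1e-6, seed=0):
    """Minimal DE/rand/1/bin loop (no bound handling, for brevity)."""
    D = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    rng = np.random.default_rng(seed)
    X = lo + rng.random((NP, D)) * (hi - lo)      # population initialization X(0)
    fX = np.array([f(x) for x in X])
    for g in range(max_gen):
        for i in range(NP):
            # differential mutation (rand/1): weighted difference of two
            # randomly chosen vectors added to a third
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            y = X[r1] + F * (X[r2] - X[r3])
            # binomial recombination between target X[i] and mutant y
            mask = rng.random(D) < Cr
            mask[rng.integers(D)] = True          # take at least one mutant component
            z = np.where(mask, y, X[i])
            # one-to-one greedy selection
            fz = f(z)
            if fz < fX[i]:
                X[i], fX[i] = z, fz
        if fX.min() <= tol:                       # stopping condition
            break
    return X[fX.argmin()], float(fX.min())
```

On a low-dimensional sphere function, for example, this loop reaches the tolerance well before the generation limit.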
We choose seven different mutation operations (rand/1, best/1, rand/2, best/2, current-to-rand/1, current-to-best/1 and rand-to-best/1) and two recombination operations (bin and exp), which results in 14 different DE variants, viz. DE/rand/1/bin, DE/rand/1/exp, DE/best/1/bin, DE/best/1/exp, DE/rand/2/bin, DE/rand/2/exp, DE/best/2/bin, DE/best/2/exp, DE/current-to-rand/1/bin, DE/current-to-rand/1/exp, DE/current-to-best/1/bin, DE/current-to-best/1/exp, DE/rand-to-best/1/bin and DE/rand-to-best/1/exp. The performance of these fourteen variants on a test suite of 14 benchmark problems has been reported in [5, 6]. As an initial attempt, this paper focuses on identifying the possible reasons for the reported performance of the above DE variants.

3 Exploration and Exploitation of DE

The search behavior of any EA (and hence DE) depends on its exploration and exploitation processes. The former explores the search space to find better candidates. On the other hand, the exploitation process focuses the search on the desired local region. As stated by Beyer [13] and Feoktistov [14], the ability of an EA to find a global optimal solution depends on its ability to find the right relation between exploitation of the elements found so far and exploration of the search space. In DE, the variation operators, viz. the mutation and crossover operators, are often credited with the exploration process, and the selection operator with the exploitation process. The variation operators produce new candidate solutions and the selection operator selects suitable candidate(s) among the existing candidates. It is fairly understandable that the exploration process increases the population variability and the exploitation process decreases it. Population variance can be considered a good measure of population variability [15].
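The seven mutation strategies and the two recombination schemes listed above can be sketched as follows. The helper names, and the use of a separate coefficient K for the arithmetic-recombination-style strategies, are my assumptions rather than notation fixed by the paper.

```python
import numpy as np

def mutate(strategy, i, pop, best, F=0.5, K=0.5, rng=None):
    """Produce a mutant vector for target index i under the named strategy."""
    rng = rng or np.random.default_rng()
    NP = len(pop)
    # five mutually distinct random indices, all different from i
    r = rng.choice([j for j in range(NP) if j != i], 5, replace=False)
    x = pop[i]
    if strategy == "rand/1":
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    if strategy == "best/1":
        return best + F * (pop[r[0]] - pop[r[1]])
    if strategy == "rand/2":
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])
    if strategy == "best/2":
        return best + F * (pop[r[0]] - pop[r[1]]) + F * (pop[r[2]] - pop[r[3]])
    if strategy == "current-to-rand/1":
        return x + K * (pop[r[0]] - x) + F * (pop[r[1]] - pop[r[2]])
    if strategy == "current-to-best/1":
        return x + K * (best - x) + F * (pop[r[0]] - pop[r[1]])
    if strategy == "rand-to-best/1":
        return pop[r[0]] + K * (best - pop[r[0]]) + F * (pop[r[1]] - pop[r[2]])
    raise ValueError(f"unknown strategy: {strategy}")

def crossover_bin(x, y, Cr, rng):
    """Binomial (uniform) crossover: each component comes from the mutant with prob. Cr."""
    D = len(x)
    mask = rng.random(D) < Cr
    mask[rng.integers(D)] = True            # guarantee at least one mutant component
    return np.where(mask, y, x)

def crossover_exp(x, y, Cr, rng):
    """Exponential crossover: a contiguous (circular) run of mutant components."""
    D = len(x)
    z = x.copy()
    j = rng.integers(D)
    L = 0
    while True:
        z[j] = y[j]
        j = (j + 1) % D
        L += 1
        if L >= D or rng.random() >= Cr:
            break
    return z
```

Pairing each of the seven `mutate` strategies with either crossover yields the fourteen variants studied here.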
It serves as a measure of the diversity in the population at every generation. Since each variation operator has a distinctive way of exploring the search space, the inherent evolutionary behavior of each DE variant is different. The empirical evolution of a DE variant, for a given random initial population, ends in one of three circumstances: (1) successful convergence, (2) premature convergence and (3) stagnation. This paper is a preliminary attempt to identify the circumstances under which each of the DE variants descends in the search space (assuming a minimization problem) during its evolution, and to depict suitable empirical evidence for such cases.

4 Related Works

With the existence of many variants of the mutation and crossover operators in DE, choosing an appropriate mutation/crossover operator for a given problem is crucial. Obviously, the performance of DE is largely affected by the strategies chosen for mutation and crossover. However, this choice is purely problem dependent. No DE variant, till now, has proven to be best suited for all kinds of
problems, which is quite understandable from the No Free Lunch Theorem [12]. Extensive empirical analyses of the performance of DE variants reported in the literature provide insight into the efficacy of different DE variants. Mezura-Montes et al. in [7] reported the performance of eight DE variants, and identified DE/rand/1/bin, DE/best/1/bin, DE/current-to-rand/1/bin and DE/rand/2/dir as the competitive variants. A comparison of ten different variants of DE for solving an optimal design problem was reported in [8]. This study concluded the DE/best/*/* variants to be better than the DE/rand/*/* variants. Mezura-Montes et al. in [9] presented an extensive analysis of different variants of DE on 24 benchmark problems.

There have also been a few theoretical investigations in this direction. The relationship between the control parameters of DE and the evolution of its population diversity is analyzed in [20], and based on the analysis, the control parameters are set so as to avoid premature convergence. A parameter adaptation technique based on the population variability is proposed in [21]. An idea to modify the selection operator to improve the balance between exploration and exploitation is proposed in [17]. Some mechanisms for diversity enhancement in DE are proposed and tested in [16, 22]. An empirical comparative study of DE/rand/1/bin and DE/best/1/bin by their population convergence measure and by their empirical evolution of population variance is presented in [10] and [11], respectively. Theoretical expressions to measure the population diversity of the DE variants have also been derived in the literature [19, 24]. All the above studies on the population diversity of DE have been carried out on only one or two variants of DE. This necessitates an extensive empirical analysis of the evolutionary behavior of different DE variants. This work is an attempt in this direction.
5 Experimental Design

This paper investigates the performance of the above-mentioned 14 classical DE variants on benchmark functions with 30 dimensions [7, 23], viz.:

1. f1 - Step Function (Unimodal Separable)
2. f2 - Schwefel's Function 1.2 (Unimodal Nonseparable)
3. f3 - Generalized Rastrigin's Function (Multimodal Separable)
4. f4 - Generalized Griewank's Function (Multimodal Nonseparable)

Classical DE has only three parameters to be set: NP (population size), F (mutation scale factor) and Cr (crossover rate). The NP has been set a moderate
value of 60. The F value is set based on [7]. To decide the Cr values, a bootstrap test was conducted for every variant-function pair with Cr varying over {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}. The maximum number of generations (MaxGen) is set as 3000, and hence the maximum number of function evaluations (MaxnFE) is 180,000 (i.e., 3000 * 60). The stopping criterion has been fixed as a tolerance error of 1 x. Since DE is a stochastic algorithm, for each variant-function pair 100 independent runs were performed, with random initialization of the initial population for every run. In each run, a DE variant stops its evolution either upon reaching the stopping criterion or upon exhausting the maximum allotted function evaluations.

The performance analysis of the DE variants has been carried out by measuring their Mean Objective function Value (MOV), Success rate (Sr), Probability of Convergence (Pc) and Q-measure (Qm). The success rate is calculated as the percentage of successful runs out of the total runs for each function, Sr = (nc_f / nt_t) x 100, where nc_f is the total number of successful runs made by a variant for a function and nt_t is the total number of runs; in the current work nt_t = 100. A run is considered successful when the tolerance error is reached before the maximum number of generations. Pc% is measured as Pc = (nc / nt) x 100, where nc is the total number of successful runs made by a variant over all the benchmark functions and nt is the total number of runs (for our experiment nt = 4 functions * 100 runs = 400 runs) [5, 6, 14]. The quality measure is calculated as Qm = C / Pc, where C is the mean number of function evaluations over the successful runs, C = (1/nc) * sum_{j=1..nc} FEj, with j = 1, ..., nc indexing the successful runs and FEj the number of function evaluations in the j-th run. The variants with a good convergence rate and a higher probability of convergence will have lower values of the quality measure. Subsequently, the empirical analysis of the DE variants has been carried out by observing their population variance (Evar).
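Given a per-run record of whether the tolerance was reached and how many function evaluations were spent, the three success measures can be computed as below. The Qm expression follows the Q-measure of [14] as read from the surrounding text, and the data layout is my assumption.

```python
def success_rate(runs):
    """Sr = (nc_f / nt_t) x 100 for one variant on one function.
    runs: list of (solved, function_evaluations) tuples, one per run."""
    nc_f = sum(1 for solved, _ in runs if solved)
    return 100.0 * nc_f / len(runs)

def probability_of_convergence(runs_per_function):
    """Pc% = (nc / nt) x 100, pooled over all benchmark functions."""
    flat = [r for runs in runs_per_function for r in runs]
    nc = sum(1 for solved, _ in flat if solved)
    return 100.0 * nc / len(flat)

def q_measure(runs_per_function):
    """Qm = C / Pc, with C the mean evaluations over successful runs;
    lower Qm means faster and more reliable convergence."""
    flat = [r for runs in runs_per_function for r in runs]
    succ = [fe for solved, fe in flat if solved]
    if not succ:
        return float("inf")    # no successful run at all
    C = sum(succ) / len(succ)
    Pc = 100.0 * len(succ) / len(flat)
    return C / Pc
```

For instance, one success out of two runs on a function gives Sr = 50; pooling over all functions gives the Pc and Qm figures of Table 1.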
Analyzing the evolution pattern of the population variance, generation by generation, provides insight into the exploration capability of the underlying variation operators of a given DE variant. In the DE algorithm, the variation operators independently perturb each component of the candidates in the population [11]. Hence, the population variance is measured independently for all D components of the candidates in each of the 100 runs, and the average is used for the study.

6 Experimental Results and Discussion

Table 1 presents the MOV, standard deviation, Sr, Pc% and Qm values obtained by all 14 variants in solving the 4 benchmark functions. As can be seen from the results, no variant has turned out to be the best in solving all the functions. But,
DE/rand/2/bin and DE/rand-to-best/1/bin managed to solve three functions; DE/best/2/bin and DE/rand/1/bin solved two functions. It is worth observing that the top four positions in terms of both higher probability of convergence and lower quality measure are secured by the same set of variants: DE/rand-to-best/1/bin, DE/rand/1/bin, DE/best/2/bin and DE/rand/2/bin. The variants DE/current-to-rand/1/exp, DE/current-to-best/1/exp, DE/best/1/exp and DE/best/1/bin did not solve any of the 4 functions. All the remaining variants solved at most one function each. In terms of Sr, between the extremes (0 and 100), many variants, such as DE/rand/1/bin and DE/best/1/bin, produce both successful and unsuccessful runs on a given function.

Based on the performance analysis above, a convergence analysis of the different DE variants on the different benchmark functions has been carried out. This involved observing the best objective function value, as well as the cumulative mean and standard deviation, obtained by a DE variant at every 300 generations for a random run. Table 2 presents the convergence behavior of DE/rand/2/bin and DE/rand-to-best/1/bin on functions f1, f3 and f4 for a random run. The convergence behavior of the DE/best/2/bin and DE/best/2/exp variants on function f2 is shown in Table 3. It is evident from Tables 2 and 3 that the objective function value converges faster (within a smaller number of generations) to the global optimum. Table 1.
The Pc(%), Qm, MOV, standard deviation and Sr measured for the 14 variants on f1-f4
Table 2. The evolution of the best objective function value, its mean and standard deviation for the functions f1, f3 and f4 by the variants DE/rand/2/bin and DE/rand-to-best/1/bin (columns G, ObjValue, Mean, Stddev per function)

The DE/rand/2/bin and DE/rand-to-best/1/bin variants took more generations to converge on f3 than on f1 and f4. Thus the best performing variants are characterized by fast convergence to the global optimum of a given function within the maximum number of generations. The convergence behavior of the worst performing variants, i.e. DE/current-to-rand/1/exp and DE/current-to-best/1/exp (with 0 success rates), on the functions f1 and f2 is depicted in Table 4. The results show that on all the functions these variants converge very slowly. This slow convergence of the worst performing variants often results in stagnation of the evolution. In spite of enough diversity being available in the population, as can be seen in Table 4, these variants often stagnate and find it difficult to reach better solutions in successive generations. That these variants fail to produce a single successful run on any of the functions due to stagnation is apparent from the results. On the other hand, the variants DE/best/1/bin and DE/best/1/exp, which also did not solve any of the functions, displayed a different convergence behavior by virtue of their greedy mutation strategy. As shown in Table 5, for function f1, these variants converge very early to a point in the search space, with no subsequent progress, leading to premature convergence.
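The three outcomes discussed here (successful convergence, premature convergence, stagnation) can be told apart mechanically from a run's recorded history of best objective values and population variances. The thresholds and window below are illustrative assumptions, not values used in the paper.

```python
def classify_run(best_history, var_history, tol=1e-6,
                 eps_var=1e-12, eps_progress=1e-12):
    """Label the outcome of one run from its per-generation records."""
    if best_history[-1] <= tol:
        return "successful convergence"
    # improvement of the best objective value over the last stretch
    window = max(2, len(best_history) // 10)
    progress = best_history[-window] - best_history[-1]
    if var_history[-1] < eps_var:
        # population collapsed to a point away from the optimum
        return "premature convergence"
    if progress < eps_progress:
        # population still diverse, but the search makes no headway
        return "stagnation"
    return "still converging"
```

This mirrors the distinction drawn above: DE/best/1/* runs end with collapsed variance (premature convergence), while DE/current-to-*/1/exp runs keep a diverse population without improving the best value (stagnation).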
Table 3. The evolution of the best objective function value, its mean and standard deviation for the function f2 by the variants DE/best/2/bin and DE/best/2/exp

Table 4. The evolution of the best objective function value, its mean and standard deviation by the variants DE/current-to-rand/1/exp and DE/current-to-best/1/exp on f1 and f2

Table 6 shows the convergence of the objective function values for a sample successful and a sample unsuccessful run of the variants DE/rand/1/bin and DE/best/1/bin, on one function each, by way of example. The results show that both DE/rand/1/bin and DE/best/1/bin suffer from premature convergence in solving the functions f1 and f4. It can be observed from the above results that the successful variants show slow/fast but steady convergence to the global optimum. The unsuccessful variants either fall into a local optimum due to fast, unsteady convergence or into
stagnation due to slow convergence. The partially successful variants sometimes show steady convergence and at other times show fast or slow convergence.

Table 5. The evolution of the best objective function value, its mean and standard deviation for f1 by the variants DE/best/1/bin and DE/best/1/exp

Table 6. The evolution of the best objective function value, its mean and standard deviation during a successful and an unsuccessful run of the variants DE/rand/1/bin (on f2) and DE/best/1/bin (on f4)

The empirical evolution of the population variance measured for all 14 variants on all four functions is presented in Tables 7, 8, 9 and 10. It is interesting to observe from the results that the variants reaching the global optimum maintained a slow and steady convergence of the population variance. On f1, the most successful variants, viz. DE/rand/1/exp, DE/rand/2/exp, DE/current-to-rand/1/bin, DE/current-to-best/1/bin, DE/rand-to-best/1/bin and DE/rand-to-best/1/exp, show a steady maintenance of the population variance until they reach the global optimum. This is also observed for the best performing DE variants on the other functions, except f2. Interestingly, in the case of f2, the population variance increased with the generations. Even with this increased variability in the population, the DE/best/2/* variants could reach Pc% = 100.
Table 7. Comparison of the evolution of the population variance, recorded at each generation over the 100 runs, by all the variants for f1

Table 8. Comparison of the evolution of the population variance, recorded at each generation over the 100 runs, by all the variants for f2
Table 9. Comparison of the evolution of the population variance, recorded at each generation over the 100 runs, by all the variants for f3

Table 10. Comparison of the evolution of the population variance, recorded at each generation over the 100 runs, by all the variants for f4

In contrast, the DE/rand/1/bin and DE/rand-to-best/1/bin variants showed a sudden increase in the population variance. In the case of the DE/best/1/exp, DE/current-to-rand/1/exp and DE/current-to-best/1/exp variants, which experienced stagnation, the population remained diverse but made no search progress.
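The per-component population variance underlying Tables 7-10 reduces to a few lines. Averaging over the D components (and, in the paper, over the 100 runs) is my reading of the measurement described in Section 5.

```python
import numpy as np

def population_variance(X):
    """Evar for one generation: variance of each of the D components
    across the NP candidates, averaged over the components.
    X: array of shape (NP, D), one row per candidate."""
    return float(np.var(X, axis=0).mean())
```

Tracking this value generation by generation gives curves of the kind summarized in Tables 7-10: roughly constant under stagnation, and collapsing towards zero under premature convergence.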
Variants with exponential crossover are more prone to this problem than their binomial counterparts. By virtue of the population being spread in a non-interesting region of the search space, the trial vectors arising from the exploration process are not capable of replacing the candidates in the current population, leading to stagnation [18]. In the case of DE/rand/1/bin, by virtue of its explorative nature, the population variance is high. On the other hand, the greedy DE/best/1/bin loses its population diversity soon.

7 Conclusion

This paper is an attempt to empirically analyze the performance of classical DE variants. Fourteen different DE variants are benchmarked on 4 test functions with different features. Initially, the performance of the variants is identified by measuring MOV, Sr, Pc% and Qm. The empirical evolution of the population variance is then observed to identify the reasons for the observed performance of the variants. The results are analyzed with the intention of identifying how the exploration and exploitation capabilities are balanced in the variants. It is found, generally, that the variants failing to balance exploration and exploitation suffer from premature convergence and stagnation. Suitable empirical evidence for such cases is reported. Early detection of premature convergence and stagnation during evolution, and redirecting the search to the desired vicinity of the search space to improve the search capability of the DE variants, is a promising avenue for further research.

References

[1] R. Storn and K. Price, Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report TR, ICSI, (1995).

[2] K. Price, R. M. Storn and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer-Verlag, (2005).

[3] J.
Vesterstrom and R. Thomsen, A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems, Proceedings - IEEE Congress on Evolutionary Computation (CEC 2004), 3 (2004).

[4] R. Storn and K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, 11 (1997).
[5] G. Jeyakumar and C. Shunmuga Velayutham, An empirical comparison of differential evolution variants on different classes of unconstrained global optimization problems, Proceedings - International Conference on Computer Information Systems and Industrial Management Applications, (2009).

[6] G. Jeyakumar and C. Shunmuga Velayutham, An empirical performance analysis of differential evolution variants on unconstrained global optimization problems, International Journal of Computer Information Systems and Industrial Management Applications, 2 (2010).

[7] E. Mezura-Montes, J. Velazquez-Reyes and C. A. Coello Coello, A comparative study of differential evolution variants for global optimization, Proceedings - Genetic and Evolutionary Computation Conference, (2006).

[8] B. V. Babu and S. A. Munawar, Optimal design of shell-and-tube heat exchangers by different strategies of differential evolution, Technical Report, Pilani, (2001).

[9] E. Mezura-Montes, M. E. Miranda-Varela and R. D. C. Gomez-Ramon, Differential evolution in constrained numerical optimization: an empirical study, Information Sciences, 180 (2010).

[10] G. Jeyakumar and C. Shunmuga Velayutham, Empirical measurements on the convergence nature of differential evolution variants, Communications in Computer and Information Science (CCIS-131), Springer-Verlag Berlin Heidelberg, (2011).

[11] G. Jeyakumar and C. Shunmuga Velayutham, Analyzing the explorative power of differential evolution variants on different classes of problems, Lecture Notes in Computer Science (LNCS-6466), Springer-Verlag Berlin Heidelberg, (2010).

[12] David H. Wolpert and William G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation, 1 (1997).

[13] H. G. Beyer, On the explorative power of ES/EP-like algorithms, In: Porto, V. W., Saravanan, N., Waagen, D. E. and Eiben, A. E.
(eds.), Proceedings - 7th Annual Conference on Evolutionary Programming, Lecture Notes in Computer Science, Springer, 1447 (1998),
[14] V. Feoktistov, Differential Evolution: In Search of Solutions, Springer Optimization and Its Applications, Springer, (2006).

[15] D. Zaharie, Statistical properties of differential evolution and related random search algorithms, In: Brito, P. (ed.), Proceedings of the International Conference on Computational Statistics, Porto, (2008).

[16] D. Zaharie, Control of population diversity and adaptation in differential evolution algorithms, In: Matousek, R. and Osmera, P. (eds.), Proceedings of the Mendel 9th International Conference on Soft Computing, (2003).

[17] A. R. Angela, S. Adriano, O. Andrade and A. B. Soares, Exploration vs. exploitation in differential evolution, Proceedings of the AISB 2008 Symposium on Swarm Intelligence Algorithms and Applications, Aberdeen, Scotland, (2008).

[18] J. Lampinen and I. Zelinka, On stagnation of the differential evolution algorithm, In: Osmera, P. (ed.), Proceedings of the Mendel 6th International Conference on Soft Computing, (2003).

[19] D. Zaharie, On the explorative power of differential evolution algorithms, Proceedings - 3rd International Workshop on Symbolic and Numeric Algorithms on Scientific Computing, SYNASC-2001, (2001).

[20] D. Zaharie, Critical values for the control parameters of differential evolution algorithms, Proceedings - 8th International Conference on Soft Computing, (2002).

[21] D. Zaharie, Parameter adaptation in differential evolution by controlling the population diversity, In: Petcu, D. et al. (eds.), Proceedings - 4th International Workshop on Symbolic and Numeric Algorithms for Scientific Computing, (2002).

[22] D. Zaharie and F. Zamfirache, Diversity enhancing mechanisms for evolutionary optimization in static and dynamic environments, Proceedings - 3rd Romanian-Hungarian Joint Symposium on Applied Computational Intelligence, (2006).

[23] X. Yao, Y. Liu, K. H. Liang and G. Lin, Fast evolutionary algorithms, In: G. Rozenberg, T. Back, and A.
Eiben (eds.), Advances in Evolutionary Computing: Theory and Applications, Springer-Verlag, New York, NY, USA, (2003).
[24] G. Jeyakumar and C. Shunmuga Velayutham, A comparative study on theoretical and empirical evolution of the population variance of the differential evolution variants, Lecture Notes in Computer Science (LNCS-6457), Springer-Verlag Berlin Heidelberg, (2010).

Received: December 15, 2015; Published: April 20, 2015
More informationImproving Differential Evolution Algorithm by Synergizing Different Improvement Mechanisms
Improving Differential Evolution Algorithm by Synergizing Different Improvement Mechanisms M. ALI and M. PANT Indian Institute of Technology Roorkee, India AND A. ABRAHAM Machine Intelligence Research
More informationResearch Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems
Journal of Applied Mathematics Volume 2013, Article ID 757391, 18 pages http://dx.doi.org/10.1155/2013/757391 Research Article A Novel Differential Evolution Invasive Weed Optimization for Solving Nonlinear
More informationThree Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms
Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn
More informationThree Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms
Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn
More informationCompetitive Self-adaptation in Evolutionary Algorithms
Competitive Self-adaptation in Evolutionary Algorithms Josef Tvrdík University of Ostrava josef.tvrdik@osu.cz Ivan Křivý University of Ostrava ivan.krivy@osu.cz Abstract Heuristic search for the global
More informationA Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems
A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems Jakob Vesterstrøm BiRC - Bioinformatics Research Center University
More informationAn Improved Differential Evolution Trained Neural Network Scheme for Nonlinear System Identification
International Journal of Automation and Computing 6(2), May 2009, 137-144 DOI: 10.1007/s11633-009-0137-0 An Improved Differential Evolution Trained Neural Network Scheme for Nonlinear System Identification
More informationCOMPETITIVE DIFFERENTIAL EVOLUTION
COMPETITIVE DIFFERENTIAL EVOLUTION Josef Tvrdík University of Ostrava, Department of Computer Science 30. dubna 22, 701 03 Ostrava, Czech Republic phone: +420/596160231, fax: +420/596120478 e-mail: tvrdik@osu.cz
More informationMulti-start JADE with knowledge transfer for numerical optimization
Multi-start JADE with knowledge transfer for numerical optimization Fei Peng, Ke Tang,Guoliang Chen and Xin Yao Abstract JADE is a recent variant of Differential Evolution (DE) for numerical optimization,
More informationA Restart CMA Evolution Strategy With Increasing Population Size
Anne Auger and Nikolaus Hansen A Restart CMA Evolution Strategy ith Increasing Population Size Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2005 c IEEE A Restart CMA Evolution Strategy
More informationDifferential evolution with an individual-dependent mechanism
Loughborough University Institutional Repository Differential evolution with an individualdependent mechanism This item was submitted to Loughborough University's Institutional Repository by the/an author.
More informationOptimization of Threshold for Energy Based Spectrum Sensing Using Differential Evolution
Wireless Engineering and Technology 011 130-134 doi:10.436/wet.011.3019 Published Online July 011 (http://www.scirp.org/journal/wet) Optimization of Threshold for Energy Based Spectrum Sensing Using Differential
More informationA COMPARISON OF PARTICLE SWARM OPTIMIZATION AND DIFFERENTIAL EVOLUTION
A COMPARISON OF PARTICLE SWARM OPTIMIZATION AND DIFFERENTIAL EVOLUTION Vu Truong Vu Ho Chi Minh City University of Transport, Faculty of Civil Engineering No.2, D3 Street, Ward 25, Binh Thanh District,
More informationA Mixed Strategy for Evolutionary Programming Based on Local Fitness Landscape
WCCI 200 IEEE World Congress on Computational Intelligence July, 8-23, 200 - CCIB, Barcelona, Spain CEC IEEE A Mixed Strategy for Evolutionary Programming Based on Local Fitness Landscape Liang Shen and
More informationMulti-objective Emission constrained Economic Power Dispatch Using Differential Evolution Algorithm
Multi-objective Emission constrained Economic Power Dispatch Using Differential Evolution Algorithm Sunil Kumar Soni, Vijay Bhuria Abstract The main aim of power utilities is to provide high quality power
More informationAn Empirical Study of Control Parameters for Generalized Differential Evolution
An Empirical tudy of Control Parameters for Generalized Differential Evolution aku Kukkonen Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute of Technology Kanpur Kanpur, PIN 8 6, India saku@iitk.ac.in,
More informationAn Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization
> REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < 1 An Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization Yuan
More informationInteger weight training by differential evolution algorithms
Integer weight training by differential evolution algorithms V.P. Plagianakos, D.G. Sotiropoulos, and M.N. Vrahatis University of Patras, Department of Mathematics, GR-265 00, Patras, Greece. e-mail: vpp
More informationToward Effective Initialization for Large-Scale Search Spaces
Toward Effective Initialization for Large-Scale Search Spaces Shahryar Rahnamayan University of Ontario Institute of Technology (UOIT) Faculty of Engineering and Applied Science 000 Simcoe Street North
More informationCenter-based initialization for large-scale blackbox
See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/903587 Center-based initialization for large-scale blackbox problems ARTICLE FEBRUARY 009 READS
More informationEvolutionary Programming Using a Mixed Strategy Adapting to Local Fitness Landscape
Evolutionary Programming Using a Mixed Strategy Adapting to Local Fitness Landscape Liang Shen Department of Computer Science Aberystwyth University Ceredigion, SY23 3DB UK lls08@aber.ac.uk Jun He Department
More informationGaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress
Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague
More informationEvolutionary Functional Link Interval Type-2 Fuzzy Neural System for Exchange Rate Prediction
Evolutionary Functional Link Interval Type-2 Fuzzy Neural System for Exchange Rate Prediction 3. Introduction Currency exchange rate is an important element in international finance. It is one of the chaotic,
More informationTHE DIFFERENTIAL evolution (DE) [1] [4] algorithm
482 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 42, NO. 2, APRIL 2012 An Adaptive Differential Evolution Algorithm With Novel Mutation and Crossover Strategies for Global
More informationA New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms
A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms Yang Yu and Zhi-Hua Zhou National Laboratory for Novel Software Technology Nanjing University, Nanjing 20093, China
More informationDifferential Evolution Based Particle Swarm Optimization
Differential Evolution Based Particle Swarm Optimization Mahamed G.H. Omran Department of Computer Science Gulf University of Science and Technology Kuwait mjomran@gmail.com Andries P. Engelbrecht Department
More informationEFFECT OF STRATEGY ADAPTATION ON DIFFERENTIAL EVOLUTION IN PRESENCE AND ABSENCE OF PARAMETER ADAPTATION: AN INVESTIGATION
JAISCR, 2018, Vol. 8, No. 3, pp. 211 235 10.1515/jaiscr-2018-0014 EFFECT OF STRATEGY ADAPTATION ON DIFFERENTIAL EVOLUTION IN PRESENCE AND ABSENCE OF PARAMETER ADAPTATION: AN INVESTIGATION Deepak Dawar
More informationSearch. Search is a key component of intelligent problem solving. Get closer to the goal if time is not enough
Search Search is a key component of intelligent problem solving Search can be used to Find a desired goal if time allows Get closer to the goal if time is not enough section 11 page 1 The size of the search
More informationMetaheuristics and Local Search
Metaheuristics and Local Search 8000 Discrete optimization problems Variables x 1,..., x n. Variable domains D 1,..., D n, with D j Z. Constraints C 1,..., C m, with C i D 1 D n. Objective function f :
More informationQuantum-Inspired Differential Evolution with Particle Swarm Optimization for Knapsack Problem
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 31, 1757-1773 (2015) Quantum-Inspired Differential Evolution with Particle Swarm Optimization for Knapsack Problem DJAAFAR ZOUACHE 1 AND ABDELOUAHAB MOUSSAOUI
More informationLooking Under the EA Hood with Price s Equation
Looking Under the EA Hood with Price s Equation Jeffrey K. Bassett 1, Mitchell A. Potter 2, and Kenneth A. De Jong 1 1 George Mason University, Fairfax, VA 22030 {jbassett, kdejong}@cs.gmu.edu 2 Naval
More informationMultiobjective Optimization of Cement-bonded Sand Mould System with Differential Evolution
DOI: 10.7763/IPEDR. 013. V63. 0 Multiobjective Optimization of Cement-bonded Sand Mould System with Differential Evolution T. Ganesan 1, I. Elamvazuthi, Ku Zilati Ku Shaari 3, and P. Vasant + 1, 3 Department
More informationAN ADAPTIVE DIFFERENTIAL EVOLUTION ALGORITHM FOR SOLVING SECOND-ORDER DIRICHLET PROBLEMS
Vol. 12, No. 1, pp. 143-161 ISSN: 1646-3692 AN ADAPTIVE DIFFERENTIAL EVOLUTION ALGORITHM FOR SOLVING SECOND-ORDER Hasan Rashaideh Department of Computer Science, Prince Abdullah Ben Ghazi Faculty of Information
More informationARTIFICIAL NEURAL NETWORKS REGRESSION ON ENSEMBLE STRATEGIES IN DIFFERENTIAL EVOLUTION
ARTIFICIAL NEURAL NETWORKS REGRESSION ON ENSEMBLE STRATEGIES IN DIFFERENTIAL EVOLUTION Iztok Fister Jr. 1,Ponnuthurai Nagaratnam Suganthan 2, Damjan Strnad 1, Janez Brest 1,Iztok Fister 1 1 University
More informationAn Introduction to Differential Evolution. Kelly Fleetwood
An Introduction to Differential Evolution Kelly Fleetwood Synopsis Introduction Basic Algorithm Example Performance Applications The Basics of Differential Evolution Stochastic, population-based optimisation
More informationWORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY
WORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY Kiyoharu Tagawa School of Science and Engineering, Kindai University, Japan tagawa@info.kindai.ac.jp Abstract In real-world optimization problems, a wide
More informationMorphisms Between the Groups of Semi Magic Squares and Real Numbers
International Journal of Algebra, Vol. 8, 2014, no. 19, 903-907 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ija.2014.212137 Morphisms Between the Groups of Semi Magic Squares and Real Numbers
More informationTruncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces
Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces Petr Pošík Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics
More informationViability Principles for Constrained Optimization Using a (1+1)-CMA-ES
Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Andrea Maesani and Dario Floreano Laboratory of Intelligent Systems, Institute of Microengineering, Ecole Polytechnique Fédérale de
More informationMetaheuristics and Local Search. Discrete optimization problems. Solution approaches
Discrete Mathematics for Bioinformatics WS 07/08, G. W. Klau, 31. Januar 2008, 11:55 1 Metaheuristics and Local Search Discrete optimization problems Variables x 1,...,x n. Variable domains D 1,...,D n,
More informationRunning time analysis of a multi-objective evolutionary algorithm on a simple discrete optimization problem
Research Collection Working Paper Running time analysis of a multi-objective evolutionary algorithm on a simple discrete optimization problem Author(s): Laumanns, Marco; Thiele, Lothar; Zitzler, Eckart;
More informationVerification of a hypothesis about unification and simplification for position updating formulas in particle swarm optimization.
nd Workshop on Advanced Research and Technology in Industry Applications (WARTIA ) Verification of a hypothesis about unification and simplification for position updating formulas in particle swarm optimization
More informationLecture 9 Evolutionary Computation: Genetic algorithms
Lecture 9 Evolutionary Computation: Genetic algorithms Introduction, or can evolution be intelligent? Simulation of natural evolution Genetic algorithms Case study: maintenance scheduling with genetic
More informationSolving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.15(2013) No.3,pp.212-219 Solving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm
More informationBlack Box Search By Unbiased Variation
Black Box Search By Unbiased Variation Per Kristian Lehre and Carsten Witt CERCIA, University of Birmingham, UK DTU Informatics, Copenhagen, Denmark ThRaSH - March 24th 2010 State of the Art in Runtime
More informationOn the Pathological Behavior of Adaptive Differential Evolution on Hybrid Objective Functions
On the Pathological Behavior of Adaptive Differential Evolution on Hybrid Objective Functions ABSTRACT Ryoji Tanabe Graduate School of Arts and Sciences The University of Tokyo Tokyo, Japan rt.ryoji.tanabe@gmail.com
More informationFuzzy adaptive catfish particle swarm optimization
ORIGINAL RESEARCH Fuzzy adaptive catfish particle swarm optimization Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang. Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung, Taiwan
More informationarxiv: v1 [cs.ne] 9 May 2016
Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES) arxiv:1605.02720v1 [cs.ne] 9 May 2016 ABSTRACT Ilya Loshchilov University of Freiburg Freiburg, Germany ilya.loshchilov@gmail.com
More informationBio-inspired Continuous Optimization: The Coming of Age
Bio-inspired Continuous Optimization: The Coming of Age Anne Auger Nikolaus Hansen Nikolas Mauny Raymond Ros Marc Schoenauer TAO Team, INRIA Futurs, FRANCE http://tao.lri.fr First.Last@inria.fr CEC 27,
More informationWeight minimization of trusses with natural frequency constraints
th World Congress on Structural and Multidisciplinary Optimisation 0 th -2 th, June 20, Sydney Australia Weight minimization of trusses with natural frequency constraints Vu Truong Vu Ho Chi Minh City
More informationAn Improved Quantum Evolutionary Algorithm with 2-Crossovers
An Improved Quantum Evolutionary Algorithm with 2-Crossovers Zhihui Xing 1, Haibin Duan 1,2, and Chunfang Xu 1 1 School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191,
More informationResearch Article A Novel Ranking Method Based on Subjective Probability Theory for Evolutionary Multiobjective Optimization
Mathematical Problems in Engineering Volume 2011, Article ID 695087, 10 pages doi:10.1155/2011/695087 Research Article A Novel Ranking Method Based on Subjective Probability Theory for Evolutionary Multiobjective
More informationNew Generalized Sub Class of Cyclic-Goppa Code
International Journal of Contemporary Mathematical Sciences Vol., 206, no. 7, 333-34 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/0.2988/ijcms.206.6632 New Generalized Sub Class of Cyclic-Goppa Code
More informationModified Differential Evolution for Nonlinear Optimization Problems with Simple Bounds
Modified Differential Evolution for Nonlinear Optimization Problems with Simple Bounds Md. Abul Kalam Azad a,, Edite M.G.P. Fernandes b a Assistant Researcher, b Professor Md. Abul Kalam Azad Algoritmi
More informationFractional Order PID Controller with an Improved Differential Evolution Algorithm
2016 International Conference on Micro-Electronics and Telecommunication Engineering Fractional Order PID Controller with an Improved Differential Evolution Algorithm Rinki Maurya a, Manisha Bhandari b
More informationOptimization of Mechanical Design Problems Using Improved Differential Evolution Algorithm
International Journal of Recent Trends in Enineerin Vol. No. 5 May 009 Optimization of Mechanical Desin Problems Usin Improved Differential Evolution Alorithm Millie Pant Radha Thanaraj and V. P. Sinh
More informationRuntime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights
Runtime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights Frank Neumann 1 and Andrew M. Sutton 2 1 Optimisation and Logistics, School of Computer Science, The
More informationResearch Article Algorithmic Mechanism Design of Evolutionary Computation
Computational Intelligence and Neuroscience Volume 2015, Article ID 591954, 17 pages http://dx.doi.org/10.1155/2015/591954 Research Article Algorithmic Mechanism Design of Evolutionary Computation Yan
More informationGDE3: The third Evolution Step of Generalized Differential Evolution
GDE3: The third Evolution Step of Generalized Differential Evolution Saku Kukkonen Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute of Technology Kanpur Kanpur, PIN 28 16, India saku@iitk.ac.in,
More informationOn Monitoring Shift in the Mean Processes with. Vector Autoregressive Residual Control Charts of. Individual Observation
Applied Mathematical Sciences, Vol. 8, 14, no. 7, 3491-3499 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.12988/ams.14.44298 On Monitoring Shift in the Mean Processes with Vector Autoregressive Residual
More informationA PARAMETER CONTROL SCHEME FOR DE INSPIRED BY ACO
A PARAMETER CONTROL SCHEME FOR DE INSPIRED BY ACO Dražen Bajer, Goran Martinović Faculty of Electrical Engineering, Josip Juraj Strossmayer University of Osijek, Croatia drazen.bajer@etfos.hr, goran.martinovic@etfos.hr
More informationResearch Article A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization
Mathematical Problems in Engineering, Article ID 135652, 11 pages http://dx.doi.org/10.1155/2014/135652 Research Article A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for
More informationParameter Sensitivity Analysis of Social Spider Algorithm
Parameter Sensitivity Analysis of Social Spider Algorithm James J.Q. Yu, Student Member, IEEE and Victor O.K. Li, Fellow, IEEE Department of Electrical and Electronic Engineering The University of Hong
More informationOPTIMIZATION OF MODEL-FREE ADAPTIVE CONTROLLER USING DIFFERENTIAL EVOLUTION METHOD
ABCM Symposium Series in Mechatronics - Vol. 3 - pp.37-45 Copyright c 2008 by ABCM OPTIMIZATION OF MODEL-FREE ADAPTIVE CONTROLLER USING DIFFERENTIAL EVOLUTION METHOD Leandro dos Santos Coelho Industrial
More informationThe Expected Opportunity Cost and Selecting the Optimal Subset
Applied Mathematical Sciences, Vol. 9, 2015, no. 131, 6507-6519 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2015.58561 The Expected Opportunity Cost and Selecting the Optimal Subset Mohammad
More informationOn the Usefulness of Infeasible Solutions in Evolutionary Search: A Theoretical Study
On the Usefulness of Infeasible Solutions in Evolutionary Search: A Theoretical Study Yang Yu, and Zhi-Hua Zhou, Senior Member, IEEE National Key Laboratory for Novel Software Technology Nanjing University,
More informationAnalysis of Crossover Operators for Cluster Geometry Optimization
Analysis of Crossover Operators for Cluster Geometry Optimization Francisco B. Pereira Instituto Superior de Engenharia de Coimbra Portugal Abstract We study the effectiveness of different crossover operators
More informationRestarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census
Restarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census Anton V. Eremeev 1,2 1 Dostoevsky Omsk State University, Omsk, Russia 2 The Institute of Scientific Information for Social Sciences
More informationGenetic Algorithm for Solving the Economic Load Dispatch
International Journal of Electronic and Electrical Engineering. ISSN 0974-2174, Volume 7, Number 5 (2014), pp. 523-528 International Research Publication House http://www.irphouse.com Genetic Algorithm
More informationGeometric Semantic Genetic Programming (GSGP): theory-laden design of semantic mutation operators
Geometric Semantic Genetic Programming (GSGP): theory-laden design of semantic mutation operators Andrea Mambrini 1 University of Birmingham, Birmingham UK 6th June 2013 1 / 33 Andrea Mambrini GSGP: theory-laden
More informationBehaviour of the UMDA c algorithm with truncation selection on monotone functions
Mannheim Business School Dept. of Logistics Technical Report 01/2005 Behaviour of the UMDA c algorithm with truncation selection on monotone functions Jörn Grahl, Stefan Minner, Franz Rothlauf Technical
More informationCSC 4510 Machine Learning
10: Gene(c Algorithms CSC 4510 Machine Learning Dr. Mary Angela Papalaskari Department of CompuBng Sciences Villanova University Course website: www.csc.villanova.edu/~map/4510/ Slides of this presenta(on
More informationPoincaré`s Map in a Van der Pol Equation
International Journal of Mathematical Analysis Vol. 8, 014, no. 59, 939-943 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.1988/ijma.014.411338 Poincaré`s Map in a Van der Pol Equation Eduardo-Luis
More informationDE [39] PSO [35] ABC [7] AO k/maxiter e-20
3. Experimental results A comprehensive set of benchmark functions [18, 33, 34, 35, 36, 37, 38] has been used to test the performance of the proposed algorithm. The Appendix A (Table A1) presents the functions
More informationAMULTIOBJECTIVE optimization problem (MOP) can
1 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION 1 Letters 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Decomposition-Based Multiobjective Evolutionary Algorithm with an Ensemble of Neighborhood Sizes Shi-Zheng
More informationApproximations to the t Distribution
Applied Mathematical Sciences, Vol. 9, 2015, no. 49, 2445-2449 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2015.52148 Approximations to the t Distribution Bashar Zogheib 1 and Ali Elsaheli
More informationBinary Particle Swarm Optimization with Crossover Operation for Discrete Optimization
Binary Particle Swarm Optimization with Crossover Operation for Discrete Optimization Deepak Singh Raipur Institute of Technology Raipur, India Vikas Singh ABV- Indian Institute of Information Technology
More informationResearch Article An Auxiliary Function Method for Global Minimization in Integer Programming
Mathematical Problems in Engineering Volume 2011, Article ID 402437, 13 pages doi:10.1155/2011/402437 Research Article An Auxiliary Function Method for Global Minimization in Integer Programming Hongwei
More informationPerformance Assessment of Generalized Differential Evolution 3 (GDE3) with a Given Set of Problems
Perormance Assessment o Generalized Dierential Evolution (GDE) with a Given Set o Problems Saku Kukkonen, Student Member, IEEE and Jouni Lampinen Abstract This paper presents results or the CEC 007 Special
More informationA (1+1)-CMA-ES for Constrained Optimisation
A (1+1)-CMA-ES for Constrained Optimisation Dirk Arnold, Nikolaus Hansen To cite this version: Dirk Arnold, Nikolaus Hansen. A (1+1)-CMA-ES for Constrained Optimisation. Terence Soule and Jason H. Moore.
More informationImproving on the Kalman Swarm
Improving on the Kalman Swarm Extracting Its Essential Characteristics Christopher K. Monson and Kevin D. Seppi Brigham Young University, Provo UT 84602, USA {c,kseppi}@cs.byu.edu Abstract. The Kalman
More informationGeneralization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms
Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Tadahiko Murata 1, Shiori Kaige 2, and Hisao Ishibuchi 2 1 Department of Informatics, Kansai University 2-1-1 Ryozenji-cho,
More information