Competitive Self-adaptation in Evolutionary Algorithms


Josef Tvrdík, University of Ostrava
Ivan Křivý, University of Ostrava

Abstract

Heuristic search for the global minimum is studied. This paper focuses on the adaptation of control parameters in differential evolution (DE) and in controlled random search (CRS). A competition of different control-parameter settings is used to ensure the self-adaptation of parameter values within the search process. In the generalized CRS, self-adaptation is ensured by several competing local-search heuristics for generating a new trial point. The DE variant was compared experimentally with other adaptive algorithms on a benchmark; the self-adaptive CRS was compared in the estimation of regression parameters on the NIST nonlinear regression datasets. The competitive algorithms outperformed the other algorithms both in reliability and in convergence rate.

Keywords: Global optimization, Differential evolution, Controlled random search, Self-adaptation.

1 Introduction

We deal with heuristic search for the solution of the global optimization problem defined as follows: for a given objective function f : S -> R, S subset of R^D, find the point x* such that x* = arg min_{x in S} f(x). The point x* is called the global minimum point and S is the search space. We focus on problems where the objective function is continuous and the search space is a compact set, S = prod_{d=1}^{D} [a_d, b_d], a_d < b_d, d = 1, 2, ..., D. This specification of S is called box constraints.

The global optimization problem is hard, which is why stochastic (heuristic) algorithms are used for its solution, see e.g. [2, 12]. The authors of many stochastic algorithms claim both efficiency and reliability of the search for the global minimum. Reliability means that the point with the minimum function value found in the search is sufficiently close to the global minimum point; efficiency means that the algorithm finds such a point in reasonable time. However, when using these algorithms, we face the problem of setting their control parameters: the efficiency and reliability of many algorithms depend strongly on the control-parameter values. The user is expected to adjust the parameter values according to the results of trial-and-error preliminary experiments with the search process, which is time-consuming. Adaptive robust algorithms, reliable enough at reasonable time consumption and without the need to fine-tune their input parameters, have therefore been studied in recent years. The theoretical analysis by Wolpert and Macready [21] implies that no heuristic search algorithm can outperform all the others over all objective functions. In spite of this, there is empirical evidence that some algorithms outperform others on a relatively wide range of problems, both in convergence rate and in the reliability of finding the global minimum point. This paper presents an adaptive procedure for setting the control parameters in the differential evolution algorithm, and a self-adaptive controlled random search algorithm whose adaptation is based on the competition of local-search heuristics.
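Both algorithms studied below work on a population of points sampled in the box-constrained search space. As a point of reference, a minimal Python sketch of such a uniformly random initial population is given here; it is only an illustration of the problem setting (the function name and NumPy usage are ours, not the authors' code).

```python
import numpy as np

def random_population(a, b, NP, rng=None):
    """Generate NP points uniformly distributed in the box S = prod_d [a_d, b_d]."""
    rng = rng or np.random.default_rng()
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return a + rng.random((NP, a.size)) * (b - a)

# example: 20 points in [-30, 30]^5 (the Ackley search space at D = 5, see Section 2)
P = random_population([-30] * 5, [30] * 5, NP=20)
```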

2 Differential Evolution and Adaptation of its Control Parameters

Differential evolution (DE), introduced by Storn and Price [13], is a simple but powerful evolutionary algorithm for global optimization over a box-constrained search space. The DE algorithm in pseudo-code is shown as Algorithm 1.

Algorithm 1. Differential evolution
 1 generate P = (x_1, x_2, ..., x_NP);  (points in S)
 2 repeat
 3   for i := 1 to NP do
 4     compute a mutant vector u;
 5     create y by the crossover of u and x_i;
 6     if f(y) < f(x_i) then insert y into Q
 7     else insert x_i into Q
 8     endif;
 9   endfor;
10   P := Q;
11 until stopping condition;

There are several strategies for generating the mutant point u. One of the most popular variants (called DE/rand/1/bin in the literature [3, 10, 13]) generates the point u by adding the weighted difference of two points,

    u = r_1 + F (r_2 - r_3),                                        (1)

where r_1, r_2, r_3 are mutually distinct points taken at random from P, none of them coinciding with the current x_i, and F > 0 is an input parameter. Another variant, called DE/best/2/bin, generates the point u according to the formula

    u = x_min + F (r_1 + r_2 - r_3 - r_4),                          (2)

where r_1, r_2, r_3, r_4 are mutually distinct points taken at random from P (not coinciding with the current x_i), x_min is the point of P with the minimum function value, and F > 0 is an input parameter.

In both strategies, the elements y_d, d = 1, 2, ..., D, of the trial point y are built up by the crossover of its parents x_i and u using the rule

    y_d = u_d      if U_d <= CR or d = l,
    y_d = x_{id}   if U_d > CR and d != l,                          (3)

where l in {1, 2, ..., D} is a randomly chosen integer, U_1, U_2, ..., U_D are independent random variables uniformly distributed in [0, 1), and CR in [0, 1] is an input parameter influencing the number of elements exchanged by the crossover. Rule (3) ensures that at least one element x_{id} of x_i is replaced by u_d, even if CR = 0.

Differential evolution has become one of the most frequently used algorithms for solving continuous global optimization problems in recent years [10]. However, it is also known that the efficiency of the search for the global minimum is very sensitive to the setting of F and CR. The recommended values are F = 0.8 and CR = 0.5, but even Storn and Price in their principal paper [13] use 0.5 <= F <= 1 and 0 <= CR <= 1 depending on the results of preliminary tuning. They also used a population size smaller than the recommended NP = 10 D in many test tasks.

Many papers deal with the setting of control parameters for differential evolution. Ali and Törn [1] suggested adapting the value of the scaling factor F within the search process according to the equation

    F = max(F_min, 1 - |f_max / f_min|)   if |f_max / f_min| < 1,
    F = max(F_min, 1 - |f_min / f_max|)   otherwise,                (4)

where f_min, f_max are the minimum and maximum function values in the population and F_min is an input parameter ensuring F in [F_min, 1]. According to [1], this calculation of F reflects the demand to make the search more diversified at the early stage and more intensified at later stages, i.e. to produce rather larger values of F when the difference f_max - f_min is large, and rather smaller values of F otherwise. Rule (4) works properly only for f_min > 0. When f_max > 0 and f_min < 0, especially if |f_max| < |f_min|, the values of F fluctuate very rapidly in [F_min, 1], even if the changes in f_max or f_min are small. From a practical point of view, however, this occurs only as a short episode of the search process in most optimization tasks.

Zaharie [22] derived a critical interval for the control parameters of DE that keeps the mean of the population variance non-decreasing, which results in the relationship

    2 p F^2 - 2 p / NP + p^2 / NP > 0,                              (5)

where p = max(1/D, CR) is the probability of differential perturbation according to (3).
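For concreteness, the mutation rule (1) and the binomial crossover (3) of the DE/rand/1/bin strategy can be sketched in a few lines of Python. This is a minimal illustration of one trial-point construction, not the authors' implementation; the array layout and function name are assumptions.

```python
import numpy as np

def de_rand_1_bin_trial(P, i, F, CR, rng):
    """Build one trial point y for target index i from population P (NP x D),
    using DE/rand/1/bin mutation (1) and binomial crossover (3)."""
    NP, D = P.shape
    # choose r1, r2, r3 mutually distinct and different from the current index i
    candidates = [j for j in range(NP) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    u = P[r1] + F * (P[r2] - P[r3])          # mutant vector, eq. (1)
    # binomial crossover, eq. (3): at least one element is taken from u
    l = rng.integers(D)
    cross = rng.random(D) <= CR
    cross[l] = True
    return np.where(cross, u, P[i])
```

A whole generation of Algorithm 1 then loops over i = 1, ..., NP and keeps the trial point y whenever f(y) < f(x_i).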
The relationship (5) implies that the mean of the population variance is non-decreasing if F > sqrt((2 - p)/(2 NP)), but the practical value of this result is limited, because it brings no new information when compared with the minimum value F = 0.5 used in [13] and in other applications of differential evolution.

Several other approaches to the adaptation of the DE control parameters have appeared; the recent state of adaptive parameter control in differential evolution is summarized by Liu and Lampinen [5]. A new idea of self-adaptation of the control parameters F and CR in differential evolution was proposed by Brest et al. [3]. The values of F and CR may be changed in each generation with probabilities tau_1 and tau_2, respectively. Successful values are thus used more frequently, because individuals with lower function values survive longer. New values of F are distributed uniformly in [F_l, F_u], and new values of CR are uniform random values from [0, 1].
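A hedged sketch of this resampling rule, as we read it from [3]: each individual keeps its own pair (F_i, CR_i), which is regenerated with probabilities tau_1 and tau_2 before producing its trial point. The function below is only an illustration of the update; the names are ours, and the defaults follow the settings quoted later in this section.

```python
def update_f_cr(F_i, CR_i, rng, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9):
    """Regenerate an individual's F and CR in the style of Brest et al. [3]:
    with probability tau1 draw F uniformly from [F_l, F_u],
    with probability tau2 draw CR uniformly from [0, 1]."""
    if rng.random() < tau1:
        F_i = F_l + rng.random() * (F_u - F_l)
    if rng.random() < tau2:
        CR_i = rng.random()
    return F_i, CR_i
```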

The setting of the control parameters can also be made self-adaptive by implementing a competition into the algorithm. This idea, similar to the competition of local-search heuristics in evolutionary algorithms [14] or in controlled random search [15], was proposed recently [17]. Let us have H settings (different values of F and CR used in the statements on lines 4 and 5 of Algorithm 1) and choose among them at random with probabilities q_h, h = 1, 2, ..., H. The probabilities are changed according to the success rate of the settings in the preceding steps of the search process. The h-th setting is successful if it generates a trial point y such that f(y) < f(x_i). When n_h is the current number of successes of the h-th setting, the probability q_h can be evaluated simply as the relative frequency

    q_h = (n_h + n_0) / sum_{j=1}^{H} (n_j + n_0),                  (6)

where n_0 > 0 is a constant. Setting n_0 >= 1 prevents a dramatic change in q_h caused by a single random success of the h-th parameter setting. In order to avoid degeneration of the process, the current values of q_h are reset to their starting values (q_h = 1/H) if any probability q_h drops below a given limit delta > 0. It is supposed that such a competition of different settings will prefer the successful ones. The competition thus provides a self-adaptive mechanism for setting the control parameters to values appropriate for the problem actually being solved.
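A minimal sketch of the competition bookkeeping around eq. (6), including the reset safeguard, is given below. The class name and data structure are our own, not taken from the paper's implementation; the defaults mirror the values quoted later for DEBR18.

```python
import numpy as np

class SettingCompetition:
    """Choose among H control-parameter settings with probabilities (6)."""

    def __init__(self, H, n0=2, delta=None, rng=None):
        self.H, self.n0 = H, n0
        self.delta = delta if delta is not None else 1.0 / (5 * H)
        self.successes = np.zeros(H)
        self.rng = rng or np.random.default_rng()

    def probabilities(self):
        w = self.successes + self.n0
        return w / w.sum()

    def choose(self):
        """Pick the index of the setting to use for the next trial point."""
        return self.rng.choice(self.H, p=self.probabilities())

    def report_success(self, h):
        """Record a success of setting h; reset to uniform if any q_h < delta."""
        self.successes[h] += 1
        if self.probabilities().min() < self.delta:
            self.successes[:] = 0
```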
Four variants of such competitive differential evolution were implemented and tested on a benchmark in [19]. The benchmark consists of six commonly used functions, see e.g. [1, 10, 13], at four levels of dimension D of the search space, namely D = 2, D = 5, D = 10 and D = 30:

Ackley's function - multimodal, separable
    f(x) = -20 exp(-0.02 sqrt((1/D) sum_{d=1}^{D} x_d^2)) - exp((1/D) sum_{d=1}^{D} cos 2 pi x_d) + 20 + exp(1)
    x_d in [-30, 30], x* = (0, 0, ..., 0), f(x*) = 0

First De Jong's function (sphere model) - unimodal, continuous, convex
    f(x) = sum_{d=1}^{D} x_d^2
    x_d in [-5.12, 5.12], x* = (0, 0, ..., 0), f(x*) = 0

Griewank's function - multimodal, nonseparable
    f(x) = sum_{d=1}^{D} x_d^2 / 4000 - prod_{d=1}^{D} cos(x_d / sqrt(d)) + 1
    x_d in [-400, 400], x* = (0, 0, ..., 0), f(x*) = 0

Rastrigin's function - multimodal, separable
    f(x) = 10 D + sum_{d=1}^{D} [x_d^2 - 10 cos(2 pi x_d)]
    x_d in [-5.12, 5.12], x* = (0, 0, ..., 0), f(x*) = 0

Rosenbrock's function (banana valley) - unimodal, nonseparable
    f(x) = sum_{d=1}^{D-1} [100 (x_d^2 - x_{d+1})^2 + (1 - x_d)^2]
    x_d in [-2.048, 2.048], x* = (1, 1, ..., 1), f(x*) = 0

Schwefel's function - multimodal, the global minimum distant from the next best local minima
    f(x) = - sum_{d=1}^{D} x_d sin(sqrt(|x_d|))
    x_d in [-500, 500], x* = (s, s, ..., s), s = 420.97, f(x*) = -418.9829 D

The most reliable results (and at the same time the second best in time consumption) were provided by the algorithm DEBR18 with 18 competing settings of the control parameters: all combinations of (CR = 0, CR = 0.5, CR = 1) and (F = 0.5, F = 0.8, F = 1) in both the DE/rand/1/bin and the DE/best/2/bin strategy. The parameters of the competition of settings were set to n_0 = 2 and delta = 1/(5 H).

Here we present the comparison of this algorithm with standard DE and three other adaptive DE algorithms published recently. The tests were performed for all the functions at the four levels of dimension D of the search space, as in [19]. One hundred independent runs were carried out for each function and each level of D. The search for the global minimum was stopped if f_max - f_min < 1e-07 or if the number of objective function evaluations exceeded an input upper limit proportional to D. The population size was set to NP = max(20, 2 D) in all the tested algorithms except BREST2, where the population size recommended in [3], i.e. NP = 10 D, was used. The input parameters for the self-adaptation in the BREST1 and BREST2 algorithms were set as follows: tau_1 = tau_2 = 0.1, F_l = 0.1, and F_u = 0.9. In the standard DE, the recommended control parameter values F = 0.8 and CR = 0.5 were used. In the adaptive DE-Ali, the control parameters were set to CR = 0.5 and F_min = 0.45.
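As an illustration of the benchmark, two of the test functions listed above (Rastrigin and Schwefel) are transcribed directly into Python below. These follow the formulas as reconstructed above and are not taken from the authors' test suite.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin's function: multimodal, separable; f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def schwefel(x):
    """Schwefel's function: global minimum near x_d = 420.97 for every d."""
    x = np.asarray(x, dtype=float)
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))
```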

Table 1: Comparison of DE algorithms. For each of the six benchmark functions (ackley, dejong, griewank, rastrig, rosen, schwefel) and each dimension D in {2, 5, 10, 30}, the table reports ne and RP for DEBR18 and RP and rne for DER, DE-Ali, BREST1 and BREST2 (numerical entries omitted).

Figure 1: Comparison of DE algorithms. Boxplots of RP and rne for DEBR18, DER, DE-Ali, BREST1 and BREST2.

The results of the comparison of the algorithms are summarized in Fig. 1; the values for all the test tasks are given in more detail in Table 1. The reliability of the search, reported in the columns RP, is the percentage of runs in which the minimum objective function value found by the search agrees with the true result in at least four digits. The time consumption is expressed as the average number (ne) of objective function evaluations needed to reach the stopping condition. For easier comparison of the algorithms, the ne values are given only for the DEBR18 algorithm; for the other algorithms, the relative change of ne in percent with respect to DEBR18 is presented in the columns denoted rne. Negative values of rne therefore mean a smaller time consumption than DEBR18, positive values a larger one. For example, rne = -50 means half of ne, while rne = 100 means that ne is twice as large as for DEBR18. The number of objective function evaluations can be recalculated as ne = ne_0 (1 + rne/100), where ne_0 is the corresponding number of objective function evaluations for DEBR18.

As is apparent from Fig. 1, DEBR18 is the most reliable algorithm in the test. The second best is BREST1, both in reliability and in convergence rate. BREST2 is highly reliable except for two tasks with D = 30, but its convergence is much slower than that of DEBR18. The competitive setting of control parameters thus proved to be an efficient adaptive scheme.
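The rne-to-ne conversion is a one-line calculation; a small worked example, with illustrative numbers of our own choosing rather than values from Table 1:

```python
def ne_from_rne(ne_debr18, rne_percent):
    """Recover an algorithm's average evaluation count from its rne column."""
    return ne_debr18 * (1.0 + rne_percent / 100.0)

# illustrative values only (not taken from Table 1):
print(ne_from_rne(10000, -50))   # 5000.0  -> half the evaluations of DEBR18
print(ne_from_rne(10000, 100))   # 20000.0 -> twice the evaluations of DEBR18
```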

3 Controlled Random Search with Competing Heuristics

In an additive nonlinear regression model, the elements of the random vector Y are expressed as

    Y_i = g(x_i, beta) + eps_i,   i = 1, 2, ..., n,                 (7)

where x_i^T = (x_{i1}, x_{i2}, ..., x_{ik}) is the i-th row of the regressor matrix X, beta is the vector of parameters, g is a given function nonlinear in the parameters, and the eps_i are iid random variables with zero means. Estimating the parameters by the least squares method means finding estimates of beta that minimize the residual sum of squares

    Q(beta) = sum_{i=1}^{n} [Y_i - g(x_i, beta)]^2.                 (8)

Because the function Q(beta) need not be unimodal, the estimation of beta is a global optimization problem. Iterative deterministic algorithms (e.g. Levenberg-Marquardt) used in standard statistical packages often fail when searching for the true solution of the problem. Several statistical packages were tested [16] on the NIST tasks of higher-level difficulty [8]. For approximately one half of the tasks, the algorithms either failed completely or gave results in significant disagreement with the true parameter values.

A stochastic algorithm based on controlled random search (CRS) was proposed for the estimation of nonlinear-regression parameters. The CRS algorithm was published originally by Price [9]; the reflection known from the simplex method [7] was used for generating a new trial point. There are several modifications of the CRS algorithm that have been used successfully for solving global optimization problems [1]. The algorithm can be written in pseudo-code as follows.

Algorithm 2. Controlled random search
 1 generate P (population of N points in S);
 2 find x_max (the point with the maximum function value);
 3 repeat
 4   generate a new trial point y using a heuristic;
 5   if f(y) < f(x_max) then
 6     x_max := y;
 7     find the new x_max;
 8   endif
 9 until stopping condition;

The role of the heuristic mentioned on line 4 can be played by any non-deterministic rule generating a new trial point y in S. Many different heuristics can be used and, moreover, the heuristics can alternate during the course of the search.

Four competing heuristics were used in the implementation of the CRS algorithm for nonlinear-regression parameter estimation. Three of them are based on a randomized reflection in a simplex Sigma (d + 1 points chosen from P), proposed in [4]. A new trial point y is generated from the simplex by the relation

    y = g + U (g - x_H),                                            (9)

where x_H = arg max_{x in Sigma} f(x), g is the centroid of the remaining d points of the simplex, and the multiplication factor U is a random variable distributed uniformly in [s, alpha - s), with alpha > 0 and s, 0 < s < alpha/2, being input parameters. In two of the heuristics, all d + 1 points of the simplex are chosen at random from P: the first heuristic uses alpha = 2 and s = 0.5, the second one alpha = 5 and s = 1.5. In the third heuristic, one point of the simplex is the point of P with the minimum objective function value and the remaining d points of the simplex are chosen at random from the remaining points of P; its input parameters are set to alpha = 2 and s = 0.5. The fourth competing heuristic is based on differential evolution, see (3) and (4), with F_min = 0.4 and CR = 0.9. The algorithm based on the competition of all four heuristics is denoted CRS4.

The rules for the competition of heuristics are similar to those described in Section 2, but the success is weighted by its relative change in the objective function value,

    w_h = (f_max - max(f(y), f_min)) / (f_max - f_min).             (10)
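A small Python sketch of the randomized reflection (9) used by the first three heuristics follows; the sampling of the simplex from the population and the names are our own simplification under the assumption that P is an N x d array and f_values holds the corresponding function values.

```python
import numpy as np

def randomized_reflection(P, f_values, alpha=2.0, s=0.5, rng=None):
    """Generate a trial point by the randomized reflection (9):
    reflect the worst point of a random simplex over the centroid
    of the remaining points, with a random step U ~ U[s, alpha - s)."""
    rng = rng or np.random.default_rng()
    f_values = np.asarray(f_values, dtype=float)
    N, d = P.shape
    idx = rng.choice(N, size=d + 1, replace=False)   # simplex of d + 1 points
    worst = idx[np.argmax(f_values[idx])]            # x_H: worst point of the simplex
    rest = idx[idx != worst]
    g = P[rest].mean(axis=0)                         # centroid of the remaining d points
    U = rng.uniform(s, alpha - s)
    return g + U * (g - P[worst])
```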
Thus w_h in (0, 1], and the corresponding probability q_h is evaluated as

    q_h = (W_h + w_0) / sum_{j=1}^{H} (W_j + w_0),                  (11)

where W_h is the sum of the w_h values obtained in the previous search steps and w_0 > 0 is an input parameter of the algorithm.

The collection of NIST datasets [8] was used as a benchmark. This collection contains 27 nonlinear regression datasets (tasks), ordered according to their level of difficulty (lower 8 tasks, average 11 tasks, and higher 8 tasks). The corresponding nonlinear regression models are of exponential or rational type, with the number of parameters ranging from 2 to 9. Most of the datasets (18 of the 27 tasks) come from experimental studies; the remaining ones were generated artificially. The total number of observations varies in a wide range, from 6 to 250.

Several algorithms for the estimation of parameters were compared. One of them is a modification of the Levenberg-Marquardt algorithm implemented in the nlinfit procedure of the Statistics Toolbox [6], used with the default values of its control parameters. The other algorithms in the tests are population-based: differential evolution (DER), two variants of the CRS with competition (CRS4 and CRS4e), and one variant without competition consisting of a single local-search heuristic, the randomized reflection with alpha = 2 and s = 0.5 (REFL1).
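The weighted variant of the competition, eqs. (10)-(11), differs from the simple success count (6) only in what is accumulated. A hedged sketch with our own naming (the value of w_0 is left as an argument, since it is an input parameter of the algorithm):

```python
import numpy as np

def success_weight(f_y, f_min, f_max):
    """Relative improvement w_h in (0, 1] according to eq. (10)."""
    return (f_max - max(f_y, f_min)) / (f_max - f_min)

def heuristic_probabilities(W, w0):
    """Probabilities q_h from the accumulated weights W_h, eq. (11)."""
    W = np.asarray(W, dtype=float)
    return (W + w0) / np.sum(W + w0)
```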

Table 2: Comparison of algorithms in the estimation of nonlinear-regression parameters. For each task the table reports the nlinfit result, RP and ne for CRS4, and RP and rne for CRS4e, DER and REFL1; only the nlinfit result column is reproduced here.

  Task       Level    nlinfit result
  chwirut1   lower    OK
  chwirut2   lower    OK
  danwood    lower    OK
  gauss1     lower    OK
  gauss2     lower    OK
  lanczos3   lower    OK
  misra1a    lower    OK
  misra1b    lower    OK
  enso       middle   OK
  gauss3     middle   OK
  hahn1      middle   OK
  kirby2     middle   OK
  lanczos1   middle   L
  lanczos2   middle   OK
  mgh17      middle   X
  misra1c    middle   OK
  misra1d    middle   OK
  nelson     middle   OK
  roszman1   middle   OK
  bennett5   higher   L
  boxbod     higher   X
  eckerle4   higher   L
  mgh09      higher   L
  mgh10      higher   L
  rat42      higher   OK
  rat43      higher   OK
  thurber    higher   OK

Figure 2: Comparison of the stochastic algorithms used in nonlinear regression. Boxplots of RP and rne for CRS4, CRS4e, DER and REFL1.

These algorithms were tested in 100 repeated runs for each task. The search spaces S for the individual tasks are given in [18]. The common control parameters of all the population-based algorithms were set up as follows: population size N = 10 d, and stopping condition R^2_max - R^2_min < eps or ne exceeding an upper limit proportional to d, where eps is an input value fixed for all algorithms except CRS4e (see later), and R^2_max and R^2_min are the maximum and minimum values in the population of the determination index

    R^2 = 1 - Q(beta_hat) / sum_{i=1}^{n} (Y_i - Ybar)^2.

The algorithm CRS4e (described in [18]) contains the same four heuristics as CRS4 and, moreover, provides a procedure for adapting the value of eps in the stopping condition. The value of eps starts from an input value eps_0 and can be decreased if 1 - R^2_max < gamma * eps, where gamma >= 1 is another input parameter.
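A sketch of the R^2-based stopping test and of the CRS4e adaptation trigger as described above; this is our reading of the rule with hypothetical names, not the published code, and the actual decrement of eps is not specified here.

```python
import numpy as np

def determination_index(Q_beta, Y):
    """Determination index R^2 = 1 - Q(beta_hat) / sum_i (Y_i - Ybar)^2."""
    Y = np.asarray(Y, dtype=float)
    return 1.0 - Q_beta / np.sum((Y - Y.mean()) ** 2)

def stop_search(r2_max, r2_min, eps):
    """Common stopping test of the population-based algorithms."""
    return (r2_max - r2_min) < eps

def may_decrease_eps(r2_max, eps, gamma):
    """CRS4e adaptation trigger: eps may be decreased when 1 - R^2_max < gamma * eps."""
    return (1.0 - r2_max) < gamma * eps
```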

The values of the input parameters of CRS4e were set to eps_0 = ... and gamma = ....

The results of our experiments are summarized in Table 2. The quantities reported in this table have a similar meaning as in Section 2, rne denoting the relative change in ne when compared with CRS4. The overall comparison of all the stochastic algorithms used in the testing is shown in Fig. 2. The algorithm CRS4 is reliable enough for most tasks except lanczos1 and lanczos2 (which is caused by their extremely low values of 1 - R^2). As regards the other tasks, RP is less than 100 in only four tasks, but it always exceeds 85 %. The best results in both RP and ne were achieved by CRS4e, where the adaptive stopping condition decreases ne by one tenth to one third in most cases and increases RP in the tasks with a small value of 1 - R^2.

4 Conclusions

Several self-adaptive evolutionary algorithms were described and the experimental results were briefly presented. The results show that the algorithms whose search strategy is adapted by competition (using different control-parameter settings or several local-search heuristics) outperform the other algorithms in most test tasks.

Some of these self-adaptive evolutionary algorithms are implemented in a Matlab program library [20] available on the authors' website. This program library is free software: any user of Matlab can download a selected function of the library and use and/or modify it under the terms of the GNU General Public License. Although all the algorithms included in the Matlab program library were tested extensively and proved to be highly reliable, often with significantly lower time consumption in comparison with other stochastic algorithms, there is no guarantee of their correct performance in other global optimization problems. The advantage of these algorithms, however, is that they can run with the default values of their control parameters for a wide range of optimization tasks.

The presentation of this non-fuzzy topic at a forum of fuzzy experts pursues two objectives. The first is to pose a challenge for a fuzzy approach to self-adaptation in evolutionary algorithms. The second is to offer a new robust optimization procedure for applications. The self-adaptive evolutionary algorithms described in this paper can also be applied to the optimization of Takagi-Sugeno fuzzy models. When optimizing them, it is desirable (see [11]):

- to realize the partitioning of the premise space of the models, preferably from real data by using an efficient fuzzy clustering algorithm,
- to optimize the model successively by combining an evolutionary algorithm with fuzzy rule base reduction and simplification,
- to check that both partition and search space constraints are met during the optimization process.

Acknowledgement

The research was supported by the grant 201/05/0284 of the Czech Grant Agency and by the research scheme MSM of the Institute for Research and Applications of Fuzzy Modeling.

References

[1] M. M. Ali, A. Törn, Population set based global optimization algorithms: Some modifications and numerical studies, Computers and Operations Research 31 (2004).
[2] T. Bäck, Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York.
[3] J. Brest, S. Greiner, B. Bošković, M. Mernik, V. Žumer, Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems, IEEE Transactions on Evolutionary Computation 10 (2006).
[4] I. Křivý, J. Tvrdík, The controlled random search algorithm in optimizing regression models, Comput. Statist. and Data Anal. 20 (1995).
[5] J. Liu, J. Lampinen, A Fuzzy Adaptive Differential Evolution Algorithm, Soft Computing 9 (2005).
[6] MATLAB, version 2006b, The MathWorks, Inc.
[7] J. A. Nelder, R. Mead, A simplex method for function minimization, Computer J. 7 (1964).
[8] NIST, Statistical Reference Datasets: nonlinear regression, NIST Information Technology Laboratory.
[9] W. L. Price, A controlled random search procedure for global optimization, Computer J. 20 (1977).
[10] K. V. Price, R. Storn, J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer-Verlag, 2005.

[11] H. Roubos, M. Setnes, Compact Fuzzy Models and Classifiers through Model Reduction and Evolutionary Optimization, in: L. Chambers (Ed.), The Practical Handbook of Genetic Algorithms Applications, chapter 2, Chapman & Hall/CRC, New York, 2000.
[12] J. C. Spall, Introduction to Stochastic Search and Optimization, Wiley-Interscience.
[13] R. Storn, K. V. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optimization 11 (1997).
[14] J. Tvrdík, L. Mišík, I. Křivý, Competing Heuristics in Evolutionary Algorithms, in: P. Sinčák et al. (Eds.), 2nd Euro-ISCI, Intelligent Technologies - Theory and Applications, IOS Press, Amsterdam, 2002.
[15] J. Tvrdík, Generalized controlled random search and competing heuristics, in: R. Matoušek and P. Ošmera (Eds.), MENDEL 2004, 10th International Conference on Soft Computing, Technical University Press, Brno, 2004.
[16] J. Tvrdík, I. Křivý, Comparison of algorithms for nonlinear regression estimates, in: J. Antoch (Ed.), COMPSTAT, Physica-Verlag, Heidelberg, 2004.
[17] J. Tvrdík, Competitive Differential Evolution, in: R. Matoušek and P. Ošmera (Eds.), MENDEL 2006, 12th International Conference on Soft Computing, Technical University Press, Brno, 2006.
[18] J. Tvrdík, I. Křivý, L. Mišík, Adaptive Population-based Search: Application to Estimation of Nonlinear Regression Parameters, Computational Statistics and Data Analysis (2007), in press, corrected proof available online since 9 November 2006.
[19] J. Tvrdík, Differential Evolution with Competitive Setting of its Control Parameters, TASK Quarterly 11 (2007), in press.
[20] J. Tvrdík, H. Habiballa, V. Pavliska, Matlab Program Library for Box-Constrained Global Optimization, in: APLIMAT 2007, 7th International Conference on Applied Mathematics, Slovak Technical University, Bratislava, 2007.
[21] D. H. Wolpert, W. G. Macready, No Free Lunch Theorems for Optimization, IEEE Transactions on Evolutionary Computation 1 (1997).
[22] D. Zaharie, Critical Values for the Control Parameter of the Differential Evolution Algorithms, in: R. Matoušek and P. Ošmera (Eds.), MENDEL 2002, 8th International Conference on Soft Computing, Technical University Press, Brno, 2002.


More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Comparison between cut and try approach and automated optimization procedure when modelling the medium-voltage insulator

Comparison between cut and try approach and automated optimization procedure when modelling the medium-voltage insulator Proceedings of the 2nd IASME / WSEAS International Conference on Energy & Environment (EE'7), Portoroz, Slovenia, May 15-17, 27 118 Comparison between cut and try approach and automated optimization procedure

More information

The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis The Parameters Selection of Algorithm influencing On performance of Fault Diagnosis Yan HE,a, Wei Jin MA and Ji Ping ZHANG School of Mechanical Engineering and Power Engineer North University of China,

More information

Looking Under the EA Hood with Price s Equation

Looking Under the EA Hood with Price s Equation Looking Under the EA Hood with Price s Equation Jeffrey K. Bassett 1, Mitchell A. Potter 2, and Kenneth A. De Jong 1 1 George Mason University, Fairfax, VA 22030 {jbassett, kdejong}@cs.gmu.edu 2 Naval

More information

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine

Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Song Li 1, Peng Wang 1 and Lalit Goel 1 1 School of Electrical and Electronic Engineering Nanyang Technological University

More information

Constrained Real-Parameter Optimization with Generalized Differential Evolution

Constrained Real-Parameter Optimization with Generalized Differential Evolution 2006 IEEE Congress on Evolutionary Computation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 2006 Constrained Real-Parameter Optimization with Generalized Differential Evolution

More information

Nonlinear Regression using Particle Swarm Optimization and Genetic Algorithm

Nonlinear Regression using Particle Swarm Optimization and Genetic Algorithm International Journal Computer Applications (5 ) Volume 153 No, November 1 Nonlinear Regression using Particle Swarm Optimization and Genetic Pakize Erdoğmuş Düzce University, Computer Engineering Department,

More information

Runtime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights

Runtime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights Runtime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights Frank Neumann 1 and Andrew M. Sutton 2 1 Optimisation and Logistics, School of Computer Science, The

More information

Available online at ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95

Available online at  ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95 Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 20 (2013 ) 90 95 Complex Adaptive Systems, Publication 3 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri

More information

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier Seiichi Ozawa 1, Shaoning Pang 2, and Nikola Kasabov 2 1 Graduate School of Science and Technology,

More information