Unsupervised Fuzzy Tournament Selection


Applied Mathematical Sciences, Vol. 5, 2011, no. 58

Unsupervised Fuzzy Tournament Selection

Khalid Jebari, Abdelaziz El moujahid, Amina Dik, Abdelaziz Bouroumi and Aziz Ettouhami

Laboratoire Conception et Systèmes (Microélectronique et Informatique), Faculty of Sciences, Mohamed V-Agdal University, UM5, Rabat, Morocco

Abdelaziz Bouroumi: Modeling and Simulation Laboratory, Ben Msik Faculty of Sciences, Hassan II Mohammedia-Casablanca University, UH2MC, Casablanca, Morocco

Abstract

Tournament selection has been widely used and studied in evolutionary algorithms. The tournament size is a crucial parameter of this method: it influences the convergence of the algorithm, the diversity of the population and the quality of the solution. This paper presents a new technique for adjusting this parameter dynamically by unsupervised fuzzy learning. The efficiency of the proposed technique is demonstrated on several multimodal benchmark test functions.

Keywords: Tournament Selection, Tournament Size, Fuzzy Clustering, Unsupervised Learning, Genetic Algorithms, Evolutionary Algorithms, Fuzzy C-means

1 Introduction

Genetic algorithms (GA) have proven, over recent decades, their effectiveness at solving many complicated real-world problems. However, their performance depends heavily on certain parameters, such as the crossover probability, the mutation probability, the population size and the selection pressure. The ultimate goal is a good choice of these parameters [10] so as to maintain a dynamic balance between exploration and exploitation. This balance can be adjusted by:

- the crossover probability and the mutation probability, which explore other areas of the search space;
- the population size, which influences the diversity of the population;
- the selection pressure, which controls the selection of individuals from the current population to produce the population of the next generation [22].

Research has therefore turned towards techniques that adjust these parameters dynamically in order to improve the quality of the solution. In this paper we are interested in tournament selection, owing to its wide use in optimization with genetic algorithms [1]. The standard tournament selection method consists of [12]:

1. randomly selecting k individuals from the population (k > 1, k: tournament size);
2. selecting, among the individuals chosen in step 1, the one with the best fitness value;
3. repeating steps 1 and 2 N times (N: population size).

Tournament selection depends largely on the parameter k. Several studies [1, 5, 6, 12, 13, 19, 20, 21] have shown the importance of its choice. The difficulty lies in determining k while taking into account the state of the GA's evolution. A large value of k leads to a strong selection pressure, and the GA may converge to a local optimum. If k is small, individuals with a better fitness value may never become candidates for tournament selection [21]. The purpose of this paper is therefore to determine k dynamically through an unsupervised fuzzy clustering process that takes the state of the GA's evolution into account: the value of k is the number of clusters reported by the clustering algorithm. Section 2 is devoted to tournament selection. Our method is described in Section 3. Numerical results are given and discussed in Section 4. Finally, Section 5 contains our concluding remarks.
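The three steps above can be sketched in Python (a minimal illustration with names of our own choosing, not the authors' implementation):

```python
import random

def tournament_select(population, fitness, k, rng=random):
    """Standard tournament selection (maximization).

    population -- list of candidate solutions
    fitness    -- function mapping an individual to its fitness value
    k          -- tournament size (k > 1)
    Returns a new population of the same size N.
    """
    selected = []
    for _ in range(len(population)):
        # Step 1: randomly pick k individuals from the population.
        contestants = [rng.choice(population) for _ in range(k)]
        # Step 2: keep the contestant with the best fitness.
        selected.append(max(contestants, key=fitness))
    # Step 3 (the N repetitions) is the loop above.
    return selected
```

Raising k makes it more likely that each tournament contains a top-ranked individual, which is why the tournament size directly controls the selection pressure.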
2 Tournament selection

Tournament selection is attributed to Wetzel in an unpublished work; it was then studied by Brindle in his doctoral thesis in 1981 [12]. Subsequently, several researchers took an interest in this method [5, 6, 11, 16, 18]; more recent contributions include [14, 15, 23, 24, 25]. The method can be described by the algorithm in Fig. 1:

// N : population size
// t_rand : array of integers holding the indices of the individuals in the population
// t_ind_selected : array of the indices of the individuals that will be selected
l = 0
For (i = 0; i < k; i++)
{
    Shuffle the elements of t_rand;
    For (j = 0; j < N; j += k)
    {
        I1 = t_rand(j);
        For (m = 1; m < k; m++)
        {
            I2 = t_rand(j + m);
            if (f(I1) < f(I2))  I1 = I2;    // f(Ii): fitness of individual Ii
        }
        t_ind_selected(l) = I1;
        l += 1;
    }
}

Figure 1: Tournament Selection

Several studies have looked at the behaviour of tournament selection and at the concept of selection pressure. Goldberg introduced the takeover time as a comparison criterion [12]; it is defined as the number of generations required for a single best individual to fill the entire population when only the selection operator is used. Mühlenbein and Schlierkamp-Voosen, in the Breeder Genetic Algorithm (BGA), used the term selection pressure to measure progress in the population [20]. Blickle studied several selection methods [5] and introduced several comparison criteria based on the fitness distribution: reproduction rate, loss of diversity and selection intensity [6]. Several techniques exist to evaluate the selection pressure; we have collected the most used ones. The takeover time is [12]:

t = (1/ln(k)) [ln(N) + ln(ln(N))]   (1)

where k is the tournament size and N the population size.

Figure 2: Takeover Time t(k) for Tournament Selection

Figure 3: Loss of Diversity P_d(k) for Tournament Selection

The loss of diversity [5] is the proportion of individuals not selected during the selection phase:

P_d = k^{-1/(k-1)} - k^{-k/(k-1)}   (2)

Motoki [19] recalculated the loss of diversity and showed that:

P_d = (1/N) Σ_{j=1}^{N} (1 - (j^k - (j-1)^k)/N^k)^N   (3)

The selection intensity, used by Bäck [2], Mühlenbein [20] and Blickle [6], is:

I ≈ √(2 (ln(k) - ln(√(4.14 ln(k)))))   (4)

Selection intensity, takeover time, reproduction rate and loss of diversity all depend on the size of the tournament. This shows the importance of this

Figure 4: Selection Intensity I(k) for Tournament Selection

parameter in adjusting the selection pressure. Miller and Goldberg stressed its importance even in the presence of noise [18]. We now present some techniques for choosing k:

Goldberg and Deb [12] studied values of k of the form k = 2^i (i = 1, ..., 5). They started with k = 2 and showed that tournament selection with size 2 is equivalent to rank selection.

Julstrom and Robinson [16] introduced a weighted tournament selection. The probability that the individual of rank j is selected is

P_j = w^{j-1} (1 - w) / (1 - w^k)   (5)

where w is the weight, and

k = N (1 - r) / (1 - r^N)   (6)

where r is the exponential normalization factor.

Filipović et al. [11] studied fine-grained tournament selection. They showed that standard tournament selection does not allow a precise setting of the balance between exploration and exploitation; in their method the tournament size is not fixed, but takes a value from a set of values.

Sokolov and Whitley [23] developed another variant of tournament selection, which they named unbiased tournament selection, to address the problem of individuals that are never sampled. They proposed a function that modifies the ranking of individuals, but they did not discuss the problem of choosing the parameter k.
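As a numerical illustration of the pressure measures above, Eqs. (1), (2) and (4) can be evaluated directly for given k and N (a sketch; the function names are ours):

```python
import math

def takeover_time(k, N):
    # Goldberg's estimate, Eq. (1)
    return (math.log(N) + math.log(math.log(N))) / math.log(k)

def loss_of_diversity(k):
    # Blickle's closed form, Eq. (2)
    return k ** (-1.0 / (k - 1)) - k ** (-k / (k - 1.0))

def selection_intensity(k):
    # Blickle's approximation, Eq. (4)
    return math.sqrt(2.0 * (math.log(k) - math.log(math.sqrt(4.14 * math.log(k)))))
```

For N = 100, increasing k from 2 to 4 roughly halves the takeover time while raising both the loss of diversity and the selection intensity, which is exactly the trade-off this paper sets out to control.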

P. Vajda et al. [24] proposed deterministic tournament-size control (DTC), where the tournament size is a function of the generation t; their empirical formula depends on two parameters p_1 and p_2. The tournament size increases linearly from p_1 to p_2 over the first 1,000 generations and is fixed at p_2 afterwards:

k(t) = { t (p_2 - p_1)/1000 + p_1   if t ∈ {1, 2, ..., 1000};   p_2   otherwise }   (7)

They chose p_1, p_2 in {2, 7, 17}. They also proposed fuzzy tournament selection, where k is fuzzified using:

k_t = (μ + 0.5) k_{t-1}   (8)

where the parameter μ is computed by fuzzy-logic rules.

3 Proposed method

To choose k dynamically, our idea is to precede the selection phase with a classification stage that detects the presence of homogeneous groupings and determines their number. The idea comes from the fact that knowing the number of clusters in the population gives an idea of its diversity, and hence of how much to increase or decrease the selection pressure and how to adjust the proportion of individuals not sampled. This explains the use of the number of detected clusters as the value of k. Moreover, given the lack of information about the population, fuzzy clustering offers a better way of modelling and managing overlapping clusters. Our approach therefore introduces a fuzzy clustering based on two existing algorithms. The first, unsupervised fuzzy learning (UFL) [7], is a learning phase that automatically detects the number of clusters c present in the population (Fig. 5). The second is the fuzzy c-means (FCM) proposed by Bezdek [4], an optimization phase that improves the distribution of the individuals among the c clusters. A third phase is a validation (VAL) using the normalized partition entropy, defined as [4]:

h(U) = -(1/(n log(c))) Σ_{i=1}^{c} Σ_{j=1}^{n} u_{ij} log(u_{ij})   (9)

where U is the matrix of membership degrees and c the number of clusters; it evaluates the quality of the partitions obtained in order to choose the best distribution.
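The validation measure of Eq. (9) is straightforward to compute; the sketch below assumes U is stored as a c x n list of lists (h(U) is 0 for a crisp partition and 1 for the maximally fuzzy one):

```python
import math

def normalized_partition_entropy(U):
    """Normalized partition entropy h(U) of a fuzzy partition, Eq. (9).

    U[i][j] is the degree to which individual j belongs to cluster i,
    for a c x n membership matrix U.
    """
    c, n = len(U), len(U[0])
    h = 0.0
    for i in range(c):
        for j in range(n):
            u = U[i][j]
            if u > 0.0:          # u log u -> 0 as u -> 0
                h -= u * math.log(u)
    return h / (n * math.log(c))
```

Lower h(U) means a crisper, better-separated partition, which is why the validation phase keeps the cluster count that minimizes it.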

In each generation of the GA we applied our selection technique, which we named Unsupervised Fuzzy Tournament selection (FUT). FUT uses UFL-FCM-VAL, described in Fig. 6, to find the number of clusters in the population, used as the tournament size. The pseudo-code of FUT is presented in Fig. 7, and the modified GA is given in Fig. 8.

// I_i : individual, 1 ≤ i ≤ N, of dimension p; N: population size
Choose a similarity threshold S_th
Initialization: c = 1, V_1 = I_1   // V_1: cluster center of the first class
The similarity between I_i and I_j is measured by

S(i, j) = 1 - d(I_i, I_j)/√p   (10)

where d(I_i, I_j) is the Euclidean distance.

For i = 2 to N do
{
    If ( max_{1≤j≤c} S(V_j, I_i) < S_th )
    {
        c = c + 1;
        V_c = I_i;
    }
    Else update the nearest center V_j using

        V_j = Σ_{k=1}^{n} (u_jk)^m I_k / Σ_{k=1}^{n} (u_jk)^m,   1 ≤ j ≤ c, m = 2   (11)

    where u_jk, the membership degree of I_k to class V_j, is

        u_jk = 1 - d(V_j, I_k)/√p   (12)
}
return c, U and V

Figure 5: UFL
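A minimal sketch of the cluster-counting core of UFL (Fig. 5) is given below; it assumes coordinates are normalized so that distances stay below √p, keeps only the cluster centers, and omits the FCM refinement and the center update of Eqs. (11)-(12):

```python
import math

def ufl_cluster_count(individuals, s_th):
    """One-pass UFL sketch: count clusters in a population.

    individuals -- list of p-dimensional points (lists of floats)
    s_th        -- similarity threshold in (0, 1)
    Similarity per Eq. (10): S(x, v) = 1 - d(x, v)/sqrt(p).
    """
    p = len(individuals[0])
    centers = [list(individuals[0])]            # V1 = I1
    for x in individuals[1:]:
        sims = [1.0 - math.dist(x, v) / math.sqrt(p) for v in centers]
        if max(sims) < s_th:                    # no center is similar enough
            centers.append(list(x))             # open a new cluster
        # else: the nearest center would be updated (Eq. (11)); omitted here
    return len(centers)
```

A higher threshold s_th splits the population into more clusters; in FUT this count becomes the tournament size, so the threshold sweep of Fig. 6 is what ties cluster granularity to selection pressure.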

C_optimum = 2              // C_optimum: number of clusters for a minimal entropy
h_min = 1                  // h_min: minimal entropy
// S_th: similarity threshold
S_th_min = 0.1; S_th_max = ...; S_th_step = ...

For (S_th = S_th_min; S_th < S_th_max; S_th += S_th_step)
{
    Apply UFL to the population of individuals
    Apply FCM to the population of individuals
    Calculate h(U) using U and V (Eq. (9))
    if (h_min > h(U)) { h_min = h(U); C_optimum = c }
}
return C_optimum, to be used as the tournament size k

Figure 6: UFL-FCM-VAL

4 Numerical Results and Discussions

We compared our technique (FUT) with:
- standard tournament selection (FTS), with the values k = 4, 8 and 16;
- deterministic tournament-size control (DTC), where k is given by Eq. (7).

We present numerical results for genetic optimization on a set of well-known test functions commonly used with genetic algorithms (see Appendix A). For the three methods (FTS, DTC and FUT) we calculated the relative error and the takeover time. We chose the following parameters: random initialization with population size N = 100, and real coding. For the crossover operator we used simulated binary crossover (SBX) [8]; for the mutation, we considered the polynomial mutation [9]. To simulate other tests we also used the Gaussian mutation [17], which adds Gaussian noise to each gene of the chromosome (candidate solution), and the uniform mutation, which selects random genes and mutates them, i.e., changes their values within the domain of the test function.

k = c   // k: tournament size; c: number of clusters given by UFL-FCM-VAL (Fig. 6)
// N : population size
// t_rand : array of integers holding the indices of the individuals in the population
// t_ind_selected : array of the indices of the individuals that will be selected
l = 0
For (i = 0; i < k; i++)
{
    Shuffle the elements of t_rand;
    For (j = 0; j < N; j += k)
    {
        I1 = t_rand(j);
        For (m = 1; m < k; m++)
        {
            I2 = t_rand(j + m);
            if (f(I1) < f(I2))  I1 = I2;    // f(Ii): fitness of individual Ii
        }
        t_ind_selected(l) = I1;
        l = l + 1;
    }
}

Figure 7: FUT

As a stopping criterion we chose a maximum number of generations t_max = 100. Another stopping criterion is also used, the percentage of optima in the population (Eq. (13)):

(number of optima / N) × 100 < 95   (13)

A summary of these parameters is given in Table 1.

Table 2 shows the results of one of the aspects considered in this paper, evaluating the quality of the optimum provided by the three techniques. We propose, in this work, to measure this quality by the relative error in percentage (PER, Eq. (14)):

PER = |f̄ - f*| / |f*|   (14)

where f̄ is the optimum provided by the algorithm for each technique and f* is the actual optimum, which is known a priori.
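Eq. (14), expressed in percent, can be computed as follows (a sketch of our own; it assumes the known optimum f* is nonzero):

```python
def relative_error_percent(f_found, f_true):
    """Relative error (PER) of Eq. (14), in percent.

    f_found -- optimum returned by the GA
    f_true  -- known optimum of the test function (assumed nonzero)
    """
    return 100.0 * abs(f_found - f_true) / abs(f_true)
```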

t = 0   // t: generation number
Random initialization of the individuals P(t)
Evaluate P(t)
condition1 = Eq. (13)
t_max = 100   // maximum number of generations
While (condition1 AND t < t_max) do
{
    t += 1
    Select P(t) from P(t-1) by FUT
    Apply crossover to P(t)
    Apply mutation to P(t)
    Create the new population P'(t)   // population after crossover and mutation
    P(t) = P'(t)
    Evaluate P(t)
    Calculate the proportion of optima in the population
}
Return the best individual(s)

Figure 8: GA modified

Population size                   100
Crossover                         SBX
Crossover probability             0.97
Mutation                          polynomial
Mutation probability              0.03
Coding                            real
Maximum number of generations     100

Table 1: GA Parameters

Table 2: Relative Errors in Percentage of the Optima provided by the GA for each technique (columns: Function, FTS, DTC, FUT, Best Technique; in every row the best technique is FUT, alone or tied with DTC and/or FTS)

We note that our technique provides significantly better results than FTS, the standard selection method, and than DTC. Our second comparison concerns the takeover time. For FTS this criterion can be computed directly from Goldberg's equation, but we determined the takeover time experimentally for each technique. Table 3 gives, for each technique and test function, the number of generations the GA needs to converge using only the selection operator. We note that the results found for FTS and DTC are independent of the function. We conclude that these two techniques do not respect the characteristics of the functions, which differ in their distributions and in their unimodal or multimodal nature, whereas our technique has a takeover time that depends on the nature of the test function. Initially the population is diverse, so a high number of clusters is produced; the takeover time is therefore low and the selection pressure is high. Over the generations, when diversity is high the takeover time decreases, and vice versa. For DTC, whose takeover time depends only on the generation number, the values are almost identical across the different test functions, while for FTS the takeover time is the same for every test function and every generation. Fig. 9 shows the variation of the takeover time for three functions: the Ackley function (f1), a multimodal function; the cosine mixture function (f2), which is unimodal on the interval [-1, 1] but whose optimum value depends on its

Table 3: Takeover Time for the test functions (columns: Function, FTS(4), FTS(8), FTS(16), DTC, FUT)

Table 4: Loss of Diversity for f1, f2, f4 (columns: Generation, then FTS(4), DTC and FUT for each of the Ackley, cosine mixture and Griewank functions)

size; and the Griewank function (f4), a unimodal, non-convex function. The selection pressure is the same whatever the test function for FTS; it is a function of the generation for DTC; whereas for FUT it depends on the nature of the function. The loss of diversity (Eq. (3)) is a function of k and N. In our simulations N is fixed; k is variable for DTC and FUT and fixed for FTS. For FUT, the value of k is the number of clusters present in the population across the generations. Thus, as shown in Table 4, the FUT column varies, reflecting a high diversity at the beginning of the evolution that decreases over the generations. As an indication, the successive values of k over the generations for three test functions were:

f1 = [16, 10, 8, 6, 6, 5, 4, 3, 2, 2, 2, 2, 3, 3, 2, 2];
f2 = [16, 10, 5, 4, 4, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2];
f4 = [16, 8, 5, 4, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2];

We note that the loss of diversity for FTS is the same whatever the function. For DTC, k varies with the number of generations, but

Figure 9: Takeover Time for the 3 techniques

in the case of the three chosen functions, the GA had already converged before the tournament size changed its value, whereas our technique respects the nature of the distribution and the nature of the problem. To understand how FUT varies with the nature of the function, we studied the proportion of individuals having the best fitness value for three functions: the sphere function (f13), a unimodal function; the Weierstrass function (f16), a multimodal function of dimension 1; and the Goldstein-Price problem, a multimodal function of dimension 2. For the two types of functions studied, unimodal and multimodal, Bäck and Hoffmeister [3] made the following remarks:

- a multimodal function requires an explorative character of the search; to achieve this behaviour, a low selection pressure can be used in order to maintain a large genotypic diversity;
- a unimodal function requires a high selection pressure, forcing the search into the gradient direction and exploiting the search space better.

Unfortunately, for a real-world problem we have no idea of the properties of the fitness function. To address this, FUT provides a powerful mechanism that respects the topology of the function. This mechanism is clarified by the experimental results of Fig. 10. For the sphere function, shown in Fig. 10a, which is unimodal, we note that in the first generations of the GA process FUT has

Figure 10: Proportion of the best individual for FTS, DTC and FUT

Table 5: Probability of individuals not sampled for the functions f1, f2, f4 (columns: Generation, then FTS(4), DTC and FUT for each function)

a significant diversity compared with the other techniques. Afterwards, during the GA's evolution, the deflection angle of the FUT curve shows that FUT exerts a strong selection pressure. For the two multimodal functions, in Fig. 10b and Fig. 10c, we note that FUT has a lower selection pressure than FTS and DTC. We then considered the problem of individuals not sampled by tournament selection; the probability that an individual is not sampled is closely related to the loss of diversity (Eq. (3)). For our technique this probability is almost null at the beginning of the algorithm and grows over the generations. Our technique is thus able to balance the selection pressure against the probability of individuals not being sampled: when the tournament size is high, the selection pressure is high too, and the probability of individuals not being sampled is low. This balance cannot be regulated by a parameter schedule based on the generation number, as with DTC, but only by a study based on the clustering of the population,

such as FUT.

5 Conclusion

In this paper we have shown the importance of the tournament size and proposed a technique, based on unsupervised fuzzy clustering, to determine this parameter. We have shown that our technique not only solves the problem of choosing the tournament size but also fills a gap in tournament selection, namely the percentage of individuals not sampled. Using several benchmark functions, we have shown that our technique dynamically adjusts, across the generations, the selection pressure and the diversity of the population, and can solve the problem of individuals not sampled. Our technique was thus able to strike a balance between exploitation and exploration. It can also be implemented in genetic programming for solving difficult problems that require a method to dynamically adjust the selection pressure. In future work we propose a similar technique to dynamically adjust other parameters of genetic algorithms, namely the mutation probability and the crossover probability.

Appendix A: Test Functions

f1 (Ackley problem), n = 5; multimodal, high dimension, separable, regular:
min f(x) = 20 + e - 20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2π x_i)),
-30 ≤ x_i ≤ 30; x* = (0, 0, ..., 0); f(x*) = 0.

f2 (Cosine mixture problem), n = 10; unimodal, local solution = global solution:
min f(x) = Σ_{i=1}^{n} x_i^2 - 0.1 Σ_{i=1}^{n} cos(5π x_i),
-1 ≤ x_i ≤ 1; x* = (0, 0, ..., 0); f(x*) = -0.1 n.

f3 (Goldstein-Price problem), n = 2; several local minima:
min f(x) = [1 + (x_0 + x_1 + 1)^2 (19 - 14x_0 + 3x_0^2 - 14x_1 + 6x_0 x_1 + 3x_1^2)] × [30 + (2x_0 - 3x_1)^2 (18 - 32x_0 + 12x_0^2 + 48x_1 - 36x_0 x_1 + 27x_1^2)],
-2 ≤ x_i ≤ 2; x* = (0, -1); f(x*) = 3.

f4 (Griewank problem), n = 5; unimodal, high dimension, non-convex:
min f(x) = 1 + (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i/√i),
-600 ≤ x_i ≤ 600; x* = (0, 0, ..., 0); f(x*) = 0.

f5 (Levy and Montalvo problem 1), n = 5; multimodal:
min f(x) = (π/n) [10 sin^2(π y_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 (1 + 10 sin^2(π y_{i+1})) + (y_n - 1)^2],
y_i = 1 + (x_i + 1)/4; -5 ≤ x_i ≤ 5; x* = (-1, -1, ..., -1); f(x*) = 0.

f6 (Levy and Montalvo problem 2), n = 5; multimodal:
min f(x) = 0.1 [sin^2(3π x_1) + Σ_{i=1}^{n-1} (x_i - 1)^2 (1 + sin^2(3π x_{i+1})) + (x_n - 1)^2 (1 + sin^2(2π x_n))],
-5 ≤ x_i ≤ 5; x* = (1, 1, ..., 1); f(x*) = 0.

f7 (Paviani problem), n = 10; several local minima and one global minimum:
min f(x) = Σ_{i=1}^{10} [(ln(x_i - 2))^2 + (ln(10 - x_i))^2] - (Π_{i=1}^{10} x_i)^{0.2},
2 ≤ x_i ≤ 10; x* = (..., ...); f(x*) = ...

f8 (Rastrigin problem), n = 10; multimodal, complex, with a very large number of regularly distributed optima:
min f(x) = 10n + Σ_{i=1}^{n} [x_i^2 - 10 cos(2π x_i)],
-5.12 ≤ x_i ≤ 5.12; x* = (0, 0, ..., 0); f(x*) = 0.

f9 (Rosenbrock problem), n = 2; unimodal, non-convex:
min f(x) = Σ_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2],
-30 ≤ x_i ≤ 30; x* = (1, 1, ..., 1); f(x*) = 0.

f10 (Schwefel problem), n = 2; multimodal:
min f(x) = 418.9829 n - Σ_{i=1}^{n} x_i sin(√|x_i|),
-500 ≤ x_i ≤ 500; x* = (420.97, 420.97, ..., 420.97); f(x*) = 0.

f11 (Easom problem), n = 3; unimodal:
min f(x) = -(Π_{i=1}^{n} cos(x_i)) exp(-Σ_{i=1}^{n} (x_i - π)^2),
-100 ≤ x_i ≤ 100; x* = (π, π, ..., π); f(x*) = -1.

f12 (Ellipsoidal problem), n = 10; unimodal:
min f(x) = Σ_{i=1}^{n} (x_i - i)^2,
-n ≤ x_i ≤ n; x* = (1, 2, ..., n); f(x*) = 0.

f13 (Sphere problem), n = 5; unimodal, local solution = global solution:
min f(x) = Σ_{i=1}^{n} x_i^2,
-5.12 ≤ x_i ≤ 5.12; x* = (0, 0, ..., 0); f(x*) = 0.

f14 (Generalized penalized problem 1), n = 5; multimodal:
min f(x) = (π/n) [10 sin^2(π y_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 (1 + 10 sin^2(π y_{i+1})) + (y_n - 1)^2] + Σ_{i=1}^{n} u(x_i, 10, 100, 4),
y_i = 1 + (x_i + 1)/4; -5 ≤ x_i ≤ 5; x* = (-1, -1, ..., -1); f(x*) = 0.

f15 (Generalized penalized problem 2), n = 5; multimodal:
min f(x) = 0.1 [sin^2(3π x_1) + Σ_{i=1}^{n-1} (x_i - 1)^2 (1 + sin^2(3π x_{i+1})) + (x_n - 1)^2 (1 + sin^2(2π x_n))] + Σ_{i=1}^{n} u(x_i, 10, 100, 4),
-5 ≤ x_i ≤ 5; x* = (1, 1, ..., 1); f(x*) = 0,
with u(x, a, k, m) = k(x - a)^m if x > a; k(-x - a)^m if x < -a; 0 otherwise.

f16 (Weierstrass function), n = 1; multimodal, continuous, non-differentiable:
w_{b,s}(x) = Σ_i b^{(s-2)i} sin(b^i x), b > 1, 1 < s < 2; in our case 1 ≤ i ≤ 30, s = 1.7, b = 5.

References

[1] T. Bäck. Selective pressure in evolutionary algorithms: a characterization of selection mechanisms. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (June 1994), vol. 1.

[2] T. Bäck. Generalized convergence models for tournament- and (μ, λ)-selection. In ICGA (1995), L. J. Eshelman, Ed., Morgan Kaufmann.

[3] T. Bäck and F. Hoffmeister. Extended selection mechanisms in genetic algorithms. In Proceedings of the Fourth International Conference on Genetic Algorithms (1991), Morgan Kaufmann.

[4] J. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.

[5] T. Blickle and L. Thiele. A comparison of selection schemes used in genetic algorithms. TIK-Report 11, Institut für Technische Informatik und Kommunikationsnetze (TIK), Computer Engineering and Networks Laboratory, ETH, Swiss Federal Institute of Technology, Gloriastrasse 35, 8092 Zurich, Switzerland.

[6] T. Blickle and L. Thiele. A mathematical analysis of tournament selection. In Proceedings of the Sixth International Conference on Genetic Algorithms (San Francisco, CA, 1995), L. Eshelman, Ed., Morgan Kaufmann.

[7] A. Bouroumi, M. Limouri, and A. Essaïd. Unsupervised fuzzy learning and cluster seeking. Intelligent Data Analysis 4 (September 2000).

[8] K. Deb and R. Agrawal. Simulated binary crossover for continuous search space. Complex Systems 9, 2 (1995).

[9] K. Deb and A. Kumar. Real-coded genetic algorithms with simulated binary crossover: studies on multimodal and multiobjective problems. Complex Systems 9, 6 (1995).

[10] A. E. Eiben, R. Hinterding, and Z. Michalewicz. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation 3, 2 (July 1999).

[11] V. Filipović, J. Kratica, D. Tošić, and I. Ljubić. Fine-grained tournament selection for the simple plant location problem. In Proceedings of the 5th Online World Conference on Soft Computing Methods in Industrial Applications (WSC5) (2000).

[12] D. E. Goldberg and K. Deb. A comparative analysis of selection schemes used in genetic algorithms. In Foundations of Genetic Algorithms (San Mateo, 1991), G. J. E. Rawlins, Ed., Morgan Kaufmann.

[13] P. Hancock. A comparison of selection mechanisms. In Handbook of Evolutionary Computation, T. Bäck, D. B. Fogel, and Z. Michalewicz, Eds., IOP Publishing and Oxford University Press, Bristol, UK.

[14] K. Hingee and M. Hutter. Equivalence of probabilistic tournament and polynomial ranking selection. In 2008 IEEE Congress on Evolutionary Computation (CEC, IEEE World Congress on Computational Intelligence) (2008).

[15] R. Huber and T. Schell. Mixed size tournament selection. Soft Computing - A Fusion of Foundations, Methodologies and Applications 6 (2002).

[16] B. Julstrom and D. Robinson. Simulating exponential normalization with weighted k-tournaments. In Proceedings of the 2000 Congress on Evolutionary Computation (2000), vol. 1.

[17] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs (3rd ed.). Springer-Verlag, London, UK, 1996.

[18] B. L. Miller and D. E. Goldberg. Genetic algorithms, tournament selection, and the effects of noise. Urbana 51.

[19] T. Motoki. Calculating the expected loss of diversity of selection schemes. Evolutionary Computation 10, 4 (2002).

[20] H. Mühlenbein and D. Schlierkamp-Voosen. Predictive models for the breeder genetic algorithm, I: Continuous parameter optimization. Evolutionary Computation 1, 1 (1993).

[21] R. Poli and W. B. Langdon. Backward-chaining evolutionary algorithms. Artificial Intelligence 170, 11 (2006).

[22] C. R. Reeves and J. E. Rowe. Genetic Algorithms: Principles and Perspectives: A Guide to GA Theory. Kluwer Academic Publishers, Norwell, MA, USA.

[23] A. Sokolov and D. Whitley. Unbiased tournament selection. In Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05) (New York, NY, USA, 2005), ACM.

[24] P. Vajda, Á. E. Eiben, and W. Hordijk. Parameter control methods for selection operators in genetic algorithms. In PPSN (2008), Lecture Notes in Computer Science, Springer.

[25] H. Xie, M. Zhang, and P. Andreae. Another investigation on tournament selection: modelling and visualisation. In GECCO '07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (London, 7-11 July 2007), vol. 2, ACM Press.

Received: October, 2010


More information

Differential Evolution Based Particle Swarm Optimization

Differential Evolution Based Particle Swarm Optimization Differential Evolution Based Particle Swarm Optimization Mahamed G.H. Omran Department of Computer Science Gulf University of Science and Technology Kuwait mjomran@gmail.com Andries P. Engelbrecht Department

More information

Fitness Inheritance in Multi-Objective Optimization

Fitness Inheritance in Multi-Objective Optimization Fitness Inheritance in Multi-Objective Optimization Jian-Hung Chen David E. Goldberg Shinn-Ying Ho Kumara Sastry IlliGAL Report No. 2002017 June, 2002 Illinois Genetic Algorithms Laboratory (IlliGAL) Department

More information

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague

More information

Adaptive Generalized Crowding for Genetic Algorithms

Adaptive Generalized Crowding for Genetic Algorithms Carnegie Mellon University From the SelectedWorks of Ole J Mengshoel Fall 24 Adaptive Generalized Crowding for Genetic Algorithms Ole J Mengshoel, Carnegie Mellon University Severinio Galan Antonio de

More information

Finding Multiple Global Optima Exploiting Differential Evolution s Niching Capability

Finding Multiple Global Optima Exploiting Differential Evolution s Niching Capability Finding Multiple Global Optima Exploiting Differential Evolution s Niching Capability Michael G. Epitropakis Computational Intelligence Laboratory, Department of Mathematics, University of Patras, Greece.

More information

Dynamic Optimization using Self-Adaptive Differential Evolution

Dynamic Optimization using Self-Adaptive Differential Evolution Dynamic Optimization using Self-Adaptive Differential Evolution IEEE Congress on Evolutionary Computation (IEEE CEC 2009), Trondheim, Norway, May 18-21, 2009 J. Brest, A. Zamuda, B. Bošković, M. S. Maučec,

More information

Matrix Analysis of Genetic Programming Mutation

Matrix Analysis of Genetic Programming Mutation See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/255568988 Matrix Analysis of Genetic Programming Mutation Article in Lecture Notes in Computer

More information

When to Use Bit-Wise Neutrality

When to Use Bit-Wise Neutrality When to Use Bit-Wise Neutrality Tobias Friedrich Department 1: Algorithms and Complexity Max-Planck-Institut für Informatik Saarbrücken, Germany Frank Neumann Department 1: Algorithms and Complexity Max-Planck-Institut

More information

Program Evolution by Integrating EDP and GP

Program Evolution by Integrating EDP and GP Program Evolution by Integrating EDP and GP Kohsuke Yanai and Hitoshi Iba Dept. of Frontier Informatics, Graduate School of Frontier Science, The University of Tokyo. 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654,

More information

Burden and Benefits of Redundancy

Burden and Benefits of Redundancy Burden and Benefits of Redundancy Karsten Weicker Institute of Computer Science University of Stuttgart Breitwiesenstr. 7565 Stuttgart, Germany Nicole Weicker Institute of Computer Science University of

More information

Mating Restriction and Niching Pressure: Results from Agents and Implications for General EC

Mating Restriction and Niching Pressure: Results from Agents and Implications for General EC Mating Restriction and Niching Pressure: Results from Agents and Implications for General EC R.E. Smith and Claudio Bonacina The Intelligent Computer Systems Centre, Faculty of Computing Engineering, and

More information

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms Yang Yu and Zhi-Hua Zhou National Laboratory for Novel Software Technology Nanjing University, Nanjing 20093, China

More information

Fuzzy adaptive catfish particle swarm optimization

Fuzzy adaptive catfish particle swarm optimization ORIGINAL RESEARCH Fuzzy adaptive catfish particle swarm optimization Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang. Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung, Taiwan

More information

Evolution Strategies for Constants Optimization in Genetic Programming

Evolution Strategies for Constants Optimization in Genetic Programming Evolution Strategies for Constants Optimization in Genetic Programming César L. Alonso Centro de Inteligencia Artificial Universidad de Oviedo Campus de Viesques 33271 Gijón calonso@uniovi.es José Luis

More information

Fitness distributions and GA hardness

Fitness distributions and GA hardness Fitness distributions and GA hardness Yossi Borenstein and Riccardo Poli Department of Computer Science University of Essex Abstract. Considerable research effort has been spent in trying to formulate

More information

Convergence Time for Linkage Model Building in Estimation of Distribution Algorithms

Convergence Time for Linkage Model Building in Estimation of Distribution Algorithms Convergence Time for Linkage Model Building in Estimation of Distribution Algorithms Hau-Jiun Yang Tian-Li Yu TEIL Technical Report No. 2009003 January, 2009 Taiwan Evolutionary Intelligence Laboratory

More information

A Restart CMA Evolution Strategy With Increasing Population Size

A Restart CMA Evolution Strategy With Increasing Population Size Anne Auger and Nikolaus Hansen A Restart CMA Evolution Strategy ith Increasing Population Size Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2005 c IEEE A Restart CMA Evolution Strategy

More information

An Evolutionary Programming Based Algorithm for HMM training

An Evolutionary Programming Based Algorithm for HMM training An Evolutionary Programming Based Algorithm for HMM training Ewa Figielska,Wlodzimierz Kasprzak Institute of Control and Computation Engineering, Warsaw University of Technology ul. Nowowiejska 15/19,

More information

Crossover Gene Selection by Spatial Location

Crossover Gene Selection by Spatial Location Crossover Gene Selection by Spatial Location ABSTRACT Dr. David M. Cherba Computer Science Department Michigan State University 3105 Engineering Building East Lansing, MI 48823 USA cherbada@cse.msu.edu

More information

Geometric Semantic Genetic Programming (GSGP): theory-laden design of semantic mutation operators

Geometric Semantic Genetic Programming (GSGP): theory-laden design of semantic mutation operators Geometric Semantic Genetic Programming (GSGP): theory-laden design of semantic mutation operators Andrea Mambrini 1 University of Birmingham, Birmingham UK 6th June 2013 1 / 33 Andrea Mambrini GSGP: theory-laden

More information

Adaptive Differential Evolution and Exponential Crossover

Adaptive Differential Evolution and Exponential Crossover Proceedings of the International Multiconference on Computer Science and Information Technology pp. 927 931 ISBN 978-83-60810-14-9 ISSN 1896-7094 Adaptive Differential Evolution and Exponential Crossover

More information

A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems Jakob Vesterstrøm BiRC - Bioinformatics Research Center University

More information

Egocentric Particle Swarm Optimization

Egocentric Particle Swarm Optimization Egocentric Particle Swarm Optimization Foundations of Evolutionary Computation Mandatory Project 1 Magnus Erik Hvass Pedersen (971055) February 2005, Daimi, University of Aarhus 1 Introduction The purpose

More information

Evolutionary Algorithms How to Cope With Plateaus of Constant Fitness and When to Reject Strings of The Same Fitness

Evolutionary Algorithms How to Cope With Plateaus of Constant Fitness and When to Reject Strings of The Same Fitness Evolutionary Algorithms How to Cope With Plateaus of Constant Fitness and When to Reject Strings of The Same Fitness Thomas Jansen and Ingo Wegener FB Informatik, LS 2, Univ. Dortmund, 44221 Dortmund,

More information

Bounded Approximation Algorithms

Bounded Approximation Algorithms Bounded Approximation Algorithms Sometimes we can handle NP problems with polynomial time algorithms which are guaranteed to return a solution within some specific bound of the optimal solution within

More information

Preferred citation style for this presentation

Preferred citation style for this presentation Preferred citation style for this presentation Vitins, B.J. (2010) Grammar-Based Network Construction, presented at the Seminar Modeling Complex Socio-Economic Systems and Crises 5, ETH Zurich, Zurich,

More information

The particle swarm optimization algorithm: convergence analysis and parameter selection

The particle swarm optimization algorithm: convergence analysis and parameter selection Information Processing Letters 85 (2003) 317 325 www.elsevier.com/locate/ipl The particle swarm optimization algorithm: convergence analysis and parameter selection Ioan Cristian Trelea INA P-G, UMR Génie

More information

Stochastic Velocity Threshold Inspired by Evolutionary Programming

Stochastic Velocity Threshold Inspired by Evolutionary Programming Stochastic Velocity Threshold Inspired by Evolutionary Programming Zhihua Cui Xingjuan Cai and Jianchao Zeng Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and

More information

Genotype-Fitness Correlation Analysis for Evolutionary Design of Self-Assembly Wang Tiles

Genotype-Fitness Correlation Analysis for Evolutionary Design of Self-Assembly Wang Tiles Genotype-Fitness Correlation Analysis for Evolutionary Design of Self-Assembly Wang Tiles Germán Terrazas and Natalio Krasnogor Abstract In a previous work we have reported on the evolutionary design optimisation

More information

Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 2015 Learning Based Competition Problems

Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 2015 Learning Based Competition Problems Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 05 Learning Based Competition Problems Chao Yu, Ling Chen Kelley,, and Ying Tan, The Key Laboratory of Machine Perception

More information

Looking Under the EA Hood with Price s Equation

Looking Under the EA Hood with Price s Equation Looking Under the EA Hood with Price s Equation Jeffrey K. Bassett 1, Mitchell A. Potter 2, and Kenneth A. De Jong 1 1 George Mason University, Fairfax, VA 22030 {jbassett, kdejong}@cs.gmu.edu 2 Naval

More information

CLASSICAL gradient methods and evolutionary algorithms

CLASSICAL gradient methods and evolutionary algorithms IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 2, NO. 2, JULY 1998 45 Evolutionary Algorithms and Gradient Search: Similarities and Differences Ralf Salomon Abstract Classical gradient methods and

More information

Improving on the Kalman Swarm

Improving on the Kalman Swarm Improving on the Kalman Swarm Extracting Its Essential Characteristics Christopher K. Monson and Kevin D. Seppi Brigham Young University, Provo UT 84602, USA {c,kseppi}@cs.byu.edu Abstract. The Kalman

More information

Robust Multi-Objective Optimization in High Dimensional Spaces

Robust Multi-Objective Optimization in High Dimensional Spaces Robust Multi-Objective Optimization in High Dimensional Spaces André Sülflow, Nicole Drechsler, and Rolf Drechsler Institute of Computer Science University of Bremen 28359 Bremen, Germany {suelflow,nd,drechsle}@informatik.uni-bremen.de

More information

Behaviour of the UMDA c algorithm with truncation selection on monotone functions

Behaviour of the UMDA c algorithm with truncation selection on monotone functions Mannheim Business School Dept. of Logistics Technical Report 01/2005 Behaviour of the UMDA c algorithm with truncation selection on monotone functions Jörn Grahl, Stefan Minner, Franz Rothlauf Technical

More information

Black Box Search By Unbiased Variation

Black Box Search By Unbiased Variation Black Box Search By Unbiased Variation Per Kristian Lehre and Carsten Witt CERCIA, University of Birmingham, UK DTU Informatics, Copenhagen, Denmark ThRaSH - March 24th 2010 State of the Art in Runtime

More information

Crossover and the Different Faces of Differential Evolution Searches

Crossover and the Different Faces of Differential Evolution Searches WCCI 21 IEEE World Congress on Computational Intelligence July, 18-23, 21 - CCIB, Barcelona, Spain CEC IEEE Crossover and the Different Faces of Differential Evolution Searches James Montgomery Abstract

More information

Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data

Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data Application of a GA/Bayesian Filter-Wrapper Feature Selection Method to Classification of Clinical Depression from Speech Data Juan Torres 1, Ashraf Saad 2, Elliot Moore 1 1 School of Electrical and Computer

More information

Evolutionary Functional Link Interval Type-2 Fuzzy Neural System for Exchange Rate Prediction

Evolutionary Functional Link Interval Type-2 Fuzzy Neural System for Exchange Rate Prediction Evolutionary Functional Link Interval Type-2 Fuzzy Neural System for Exchange Rate Prediction 3. Introduction Currency exchange rate is an important element in international finance. It is one of the chaotic,

More information

arxiv: v1 [cs.ne] 29 Jul 2014

arxiv: v1 [cs.ne] 29 Jul 2014 A CUDA-Based Real Parameter Optimization Benchmark Ke Ding and Ying Tan School of Electronics Engineering and Computer Science, Peking University arxiv:1407.7737v1 [cs.ne] 29 Jul 2014 Abstract. Benchmarking

More information

Gradient-based Adaptive Stochastic Search

Gradient-based Adaptive Stochastic Search 1 / 41 Gradient-based Adaptive Stochastic Search Enlu Zhou H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology November 5, 2014 Outline 2 / 41 1 Introduction

More information

ACTA UNIVERSITATIS APULENSIS No 11/2006

ACTA UNIVERSITATIS APULENSIS No 11/2006 ACTA UNIVERSITATIS APULENSIS No /26 Proceedings of the International Conference on Theory and Application of Mathematics and Informatics ICTAMI 25 - Alba Iulia, Romania FAR FROM EQUILIBRIUM COMPUTATION

More information

Numerical Optimization: Basic Concepts and Algorithms

Numerical Optimization: Basic Concepts and Algorithms May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some

More information

Principles of Pattern Recognition. C. A. Murthy Machine Intelligence Unit Indian Statistical Institute Kolkata

Principles of Pattern Recognition. C. A. Murthy Machine Intelligence Unit Indian Statistical Institute Kolkata Principles of Pattern Recognition C. A. Murthy Machine Intelligence Unit Indian Statistical Institute Kolkata e-mail: murthy@isical.ac.in Pattern Recognition Measurement Space > Feature Space >Decision

More information

Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness

Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness Evolving more efficient digital circuits by allowing circuit layout evolution and multi-objective fitness Tatiana Kalganova Julian Miller School of Computing School of Computing Napier University Napier

More information

Research Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems

Research Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems Journal of Applied Mathematics Volume 2013, Article ID 757391, 18 pages http://dx.doi.org/10.1155/2013/757391 Research Article A Novel Differential Evolution Invasive Weed Optimization for Solving Nonlinear

More information

A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization

A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization A Non-Parametric Statistical Dominance Operator for Noisy Multiobjective Optimization Dung H. Phan and Junichi Suzuki Deptartment of Computer Science University of Massachusetts, Boston, USA {phdung, jxs}@cs.umb.edu

More information

Plateaus Can Be Harder in Multi-Objective Optimization

Plateaus Can Be Harder in Multi-Objective Optimization Plateaus Can Be Harder in Multi-Objective Optimization Tobias Friedrich and Nils Hebbinghaus and Frank Neumann Max-Planck-Institut für Informatik, Campus E1 4, 66123 Saarbrücken, Germany Abstract In recent

More information

Toward Effective Initialization for Large-Scale Search Spaces

Toward Effective Initialization for Large-Scale Search Spaces Toward Effective Initialization for Large-Scale Search Spaces Shahryar Rahnamayan University of Ontario Institute of Technology (UOIT) Faculty of Engineering and Applied Science 000 Simcoe Street North

More information

THE objective of global optimization is to find the

THE objective of global optimization is to find the Large Scale Global Optimization Using Differential Evolution With Self-adaptation and Cooperative Co-evolution Aleš Zamuda, Student Member, IEEE, Janez Brest, Member, IEEE, Borko Bošković, Student Member,

More information

Representation and Hidden Bias II: Eliminating Defining Length Bias in Genetic Search via Shuffle Crossover

Representation and Hidden Bias II: Eliminating Defining Length Bias in Genetic Search via Shuffle Crossover Representation and Hidden Bias II: Eliminating Defining Length Bias in Genetic Search via Shuffle Crossover Abstract The traditional crossover operator used in genetic search exhibits a position-dependent

More information

Large Scale Continuous EDA Using Mutual Information

Large Scale Continuous EDA Using Mutual Information Large Scale Continuous EDA Using Mutual Information Qi Xu School of Computer Science University of Birmingham Email: qxx506@student.bham.ac.uk Momodou L. Sanyang School of Computer Science University of

More information

An Evolution Strategy for the Induction of Fuzzy Finite-state Automata

An Evolution Strategy for the Induction of Fuzzy Finite-state Automata Journal of Mathematics and Statistics 2 (2): 386-390, 2006 ISSN 1549-3644 Science Publications, 2006 An Evolution Strategy for the Induction of Fuzzy Finite-state Automata 1,2 Mozhiwen and 1 Wanmin 1 College

More information

An Analysis of Diploidy and Dominance in Genetic Algorithms

An Analysis of Diploidy and Dominance in Genetic Algorithms An Analysis of Diploidy and Dominance in Genetic Algorithms Dan Simon Cleveland State University Department of Electrical and Computer Engineering Cleveland, Ohio d.j.simon@csuohio.edu Abstract The use

More information

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract Fast Evolution Strategies Xin Yao and Yong Liu Computational Intelligence Group, School of Computer Science University College, The University of New South Wales Australian Defence Force Academy, Canberra,

More information

Genetic Algorithms. Seth Bacon. 4/25/2005 Seth Bacon 1

Genetic Algorithms. Seth Bacon. 4/25/2005 Seth Bacon 1 Genetic Algorithms Seth Bacon 4/25/2005 Seth Bacon 1 What are Genetic Algorithms Search algorithm based on selection and genetics Manipulate a population of candidate solutions to find a good solution

More information

Relation between Pareto-Optimal Fuzzy Rules and Pareto-Optimal Fuzzy Rule Sets

Relation between Pareto-Optimal Fuzzy Rules and Pareto-Optimal Fuzzy Rule Sets Relation between Pareto-Optimal Fuzzy Rules and Pareto-Optimal Fuzzy Rule Sets Hisao Ishibuchi, Isao Kuwajima, and Yusuke Nojima Department of Computer Science and Intelligent Systems, Osaka Prefecture

More information

Available online at ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95

Available online at  ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95 Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 20 (2013 ) 90 95 Complex Adaptive Systems, Publication 3 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri

More information

Particle Swarm Optimization with Velocity Adaptation

Particle Swarm Optimization with Velocity Adaptation In Proceedings of the International Conference on Adaptive and Intelligent Systems (ICAIS 2009), pp. 146 151, 2009. c 2009 IEEE Particle Swarm Optimization with Velocity Adaptation Sabine Helwig, Frank

More information

Evolutionary Computation Theory. Jun He School of Computer Science University of Birmingham Web: jxh

Evolutionary Computation Theory. Jun He School of Computer Science University of Birmingham Web:   jxh Evolutionary Computation Theory Jun He School of Computer Science University of Birmingham Web: www.cs.bham.ac.uk/ jxh Outline Motivation History Schema Theorem Convergence and Convergence Rate Computational

More information

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Karol Opara 1 and Jaros law Arabas 2 1 Systems Research Institute, Polish Academy of Sciences 2 Institute of Electronic

More information

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Anne Auger & Nikolaus Hansen INRIA Research Centre Saclay Île-de-France Project team TAO University Paris-Sud, LRI (UMR 8623), Bat.

More information

Binary Particle Swarm Optimization with Crossover Operation for Discrete Optimization

Binary Particle Swarm Optimization with Crossover Operation for Discrete Optimization Binary Particle Swarm Optimization with Crossover Operation for Discrete Optimization Deepak Singh Raipur Institute of Technology Raipur, India Vikas Singh ABV- Indian Institute of Information Technology

More information

Efficient Discretization Scheduling in Multiple Dimensions. Laura A. Albert and David E. Goldberg. IlliGAL Report No February 2002

Efficient Discretization Scheduling in Multiple Dimensions. Laura A. Albert and David E. Goldberg. IlliGAL Report No February 2002 Efficient Discretization Scheduling in Multiple Dimensions Laura A. Albert and David E. Goldberg IlliGAL Report No. 2002006 February 2002 Illinois Genetic Algorithms Laboratory University of Illinois at

More information

Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems

Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems Thore Graepel and Nicol N. Schraudolph Institute of Computational Science ETH Zürich, Switzerland {graepel,schraudo}@inf.ethz.ch

More information

Performance Assessment of Generalized Differential Evolution 3 with a Given Set of Constrained Multi-Objective Test Problems

Performance Assessment of Generalized Differential Evolution 3 with a Given Set of Constrained Multi-Objective Test Problems Performance Assessment of Generalized Differential Evolution 3 with a Given Set of Constrained Multi-Objective Test Problems Saku Kukkonen, Student Member, IEEE and Jouni Lampinen Abstract This paper presents

More information

Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces

Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces Petr Pošík Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics

More information

When to use bit-wise neutrality

When to use bit-wise neutrality Nat Comput (010) 9:83 94 DOI 10.1007/s11047-008-9106-8 When to use bit-wise neutrality Tobias Friedrich Æ Frank Neumann Published online: 6 October 008 Ó Springer Science+Business Media B.V. 008 Abstract

More information

Program Evolution by Integrating EDP and GP

Program Evolution by Integrating EDP and GP Program Evolution by Integrating EDP and GP Kohsuke Yanai, Hitoshi Iba Dept. of Frontier Informatics, Graduate School of Frontier Science, The University of Tokyo. 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654,

More information

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions Chao Qian,2, Yang Yu 2, and Zhi-Hua Zhou 2 UBRI, School of Computer Science and Technology, University of

More information

Why Topology Matters. Spatial Evolutionary Algorithms Evolution in Space and Time. Island Population Topologies. Main Population Topologies

Why Topology Matters. Spatial Evolutionary Algorithms Evolution in Space and Time. Island Population Topologies. Main Population Topologies Why Topology Matters Spatial Evolutionary Algorithms Evolution in Space and Time Marco Tomassini marco.tomassini@unil.ch University of Lausanne, Switzerland The spatial structure of a population will be

More information

Multiobjective Evolutionary Algorithms. Pareto Rankings

Multiobjective Evolutionary Algorithms. Pareto Rankings Monografías del Semin. Matem. García de Galdeano. 7: 7 3, (3). Multiobjective Evolutionary Algorithms. Pareto Rankings Alberto, I.; Azcarate, C.; Mallor, F. & Mateo, P.M. Abstract In this work we present

More information

Busy Beaver The Influence of Representation

Busy Beaver The Influence of Representation Busy Beaver The Influence of Representation Penousal Machado *, Francisco B. Pereira *, Amílcar Cardoso **, Ernesto Costa ** Centro de Informática e Sistemas da Universidade de Coimbra {machado, xico,

More information

Problem Statement Continuous Domain Search/Optimization. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms.

Problem Statement Continuous Domain Search/Optimization. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms Anne Auger & Nikolaus Hansen INRIA Saclay - Ile-de-France, project team TAO Universite Paris-Sud, LRI, Bat. 49 945 ORSAY

More information

Center-based initialization for large-scale blackbox

Center-based initialization for large-scale blackbox See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/903587 Center-based initialization for large-scale blackbox problems ARTICLE FEBRUARY 009 READS

More information

The Dispersion Metric and the CMA Evolution Strategy

The Dispersion Metric and the CMA Evolution Strategy The Dispersion Metric and the CMA Evolution Strategy Monte Lunacek Department of Computer Science Colorado State University Fort Collins, CO 80523 lunacek@cs.colostate.edu Darrell Whitley Department of

More information

Convergence Rates for the Distribution of Program Outputs

Convergence Rates for the Distribution of Program Outputs Convergence Rates for the Distribution of Program Outputs W. B. Langdon Computer Science, University College, London, Gower Street, London, WCE 6BT, UK W.Langdon@cs.ucl.ac.uk http://www.cs.ucl.ac.uk/staff/w.langdon

More information

On the Effects of Locality in a Permutation Problem: The Sudoku Puzzle

On the Effects of Locality in a Permutation Problem: The Sudoku Puzzle On the Effects of Locality in a Permutation Problem: The Sudoku Puzzle Edgar Galván-López and Michael O Neill Abstract We present an analysis of an application of Evolutionary Computation to the Sudoku

More information

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Tadahiko Murata 1, Shiori Kaige 2, and Hisao Ishibuchi 2 1 Department of Informatics, Kansai University 2-1-1 Ryozenji-cho,

More information

Machine Learning Lecture 5

Machine Learning Lecture 5 Machine Learning Lecture 5 Linear Discriminant Functions 26.10.2017 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Course Outline Fundamentals Bayes Decision Theory

More information

Chapter 8: Introduction to Evolutionary Computation

Chapter 8: Introduction to Evolutionary Computation Computational Intelligence: Second Edition Contents Some Theories about Evolution Evolution is an optimization process: the aim is to improve the ability of an organism to survive in dynamically changing

More information

Landscapes and Other Art Forms.

Landscapes and Other Art Forms. Landscapes and Other Art Forms. Darrell Whitley Computer Science, Colorado State University Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted

More information

Artificial Neural Networks. MGS Lecture 2

Artificial Neural Networks. MGS Lecture 2 Artificial Neural Networks MGS 2018 - Lecture 2 OVERVIEW Biological Neural Networks Cell Topology: Input, Output, and Hidden Layers Functional description Cost functions Training ANNs Back-Propagation

More information

Behavior of EMO Algorithms on Many-Objective Optimization Problems with Correlated Objectives

Behavior of EMO Algorithms on Many-Objective Optimization Problems with Correlated Objectives H. Ishibuchi N. Akedo H. Ohyanagi and Y. Nojima Behavior of EMO algorithms on many-objective optimization problems with correlated objectives Proc. of 211 IEEE Congress on Evolutionary Computation pp.

More information