ε Constrained CCPSO with Different Improvement Detection Techniques for Large-Scale Constrained Optimization
2016 49th Hawaii International Conference on System Sciences

Chen Peng, Qing Hui
Department of Electrical and Computer Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA

Abstract: Although there are many studies on large-scale unconstrained optimization (e.g., with 100 to 1000 variables) and small-scale constrained optimization (e.g., with 10 to 30 variables) using nature-inspired algorithms (e.g., evolutionary algorithms and swarm intelligence algorithms), no publicly available nature-inspired algorithm has been developed for large-scale constrained optimization. In this paper, we combine a cooperative coevolutionary particle swarm optimization (CCPSO) algorithm with the ε constrained method to solve large-scale real-valued constrained optimization problems. The εCCPSO framework is proposed, and three different algorithms based on the framework, i.e., εCCPSOε, εCCPSOw, and εCCPSOa, are developed. The proposed algorithms compare favorably to the state-of-the-art constrained optimization algorithm εDEag on large-scale problems. The experimental results further suggest that εCCPSOa with the adaptive improvement detection technique is highly competitive compared with the other algorithms considered in this work for solving large-scale real-valued constrained optimization problems.

I. INTRODUCTION

Nature-inspired metaheuristics such as evolutionary algorithms and swarm intelligence algorithms have been demonstrated to be effective optimization techniques [1], [2], especially for complicated problems such as nonconvex nonlinear optimizations with an unknown objective function. Hence, in recent years there have been many studies on nature-inspired algorithms for large-scale (e.g., 100-1000D, with D as the number of dimensions) unconstrained optimization [3]-[8], e.g., the MLCC method [8], the CCPSO2 algorithm [6], and methods based on variable interactions [3], [7].
There has also been a lot of attention on algorithms for constrained optimization at smaller scales (e.g., 10-30D) [9]-[11]. However, large-scale constrained optimization using nature-inspired algorithms is still a relatively new and under-explored area, and to the authors' best knowledge, by far there is no nature-inspired algorithm known to be capable of solving a broad spectrum of large-scale real-valued constrained optimization problems. Clearly, for solving general large-scale constrained optimization problems, we must face both the difficulty of searching for the minimum of the high-dimensional objective function and that of locating the feasible region defined by the high-dimensional constraint functions. Many practical applications, however, need such optimization techniques. For example, in power grid systems, identifying highly critical lines is crucial for system safety and security, and such vulnerability analysis usually involves solving optimization problems with a large number of variables [12]. A typical IEEE Three-Area RTS-96 system has a total of 73 buses and 185 transmission lines, and the standard IEEE 118-bus system has a total of 185 lines and 19 generation buses [12], [13]. Here we combine an algorithm from the large-scale unconstrained optimization area, i.e., CCPSO2, with an effective constrained optimization technique, i.e., the ε constrained method, and propose a new framework, i.e., εCCPSO, to solve large-scale constrained optimization problems. On one hand, CCPSO2 is a cooperative coevolutionary particle swarm optimization (CCPSO) algorithm using a random grouping technique, based on the cooperative coevolution (CC) strategy [14], [15], and has shown promising solution capability for large-scale unconstrained optimization.
On the other hand, in [9] for 10-30D constrained optimization, the ε constrained method is adopted by a differential evolution (DE) algorithm, i.e., εDEag, which shows competitive results in the CEC 2010 constrained real-parameter optimization competition [16], [17]. Experimental results in later sections will show that our proposed algorithms compare favorably to the εDEag algorithm on large-scale constrained optimization problems. In summary, the contributions of this paper are listed as follows. 1) This paper is, to the authors' best knowledge, the first published work on large-scale constrained optimization using nature-inspired algorithms (e.g., evolutionary algorithms and particle swarm optimization) tested on a comprehensive benchmark problem set. 2) A hybridized optimization algorithm framework, i.e., εCCPSO, is proposed for large-scale constrained optimization by combining the CCPSO2 algorithm and the ε constrained method. 3) Three different algorithms based on the εCCPSO framework are proposed for large-scale constrained optimization, i.e., εCCPSOε with ε-criteria based improvement detection, εCCPSOw with a fixed-size improvement detection window, and εCCPSOa with an adaptive improvement detection window. 2016 IEEE. DOI 10.1109/HICSS
4) Eleven benchmark problems proposed for the CEC 2010 special session on constrained real-parameter optimization (i.e., CEC'10 CRPO) are extended from a maximum of 30D to a maximum of 1000D for our computer experiments. 5) A comprehensive study is carried out comparing the proposed algorithms with the state-of-the-art constrained optimization algorithm εDEag on the eleven large-scale benchmark problems, with an emphasis on εCCPSOa among the three εCCPSO algorithms. This paper is organized as follows. Section II introduces and formulates the general real-valued constrained optimization problems. Section III introduces several relevant techniques, i.e., particle swarm optimization (PSO), cooperative coevolution, and the ε constrained method, as bases for our algorithms. In Section IV, the εCCPSO framework and the ε level controlling method in εCCPSO are described. Furthermore, three different algorithms based on the εCCPSO framework, i.e., εCCPSOε, εCCPSOw, and εCCPSOa, are proposed. Section V presents comprehensive and comparative results on the proposed algorithms and the εDEag algorithm for the eleven benchmark problems of large-scale constrained optimization.

II. CONSTRAINED OPTIMIZATION PROBLEMS

Many industrial applications are essentially constrained optimization problems, with the physical or environmental constraints formulated as equality or inequality relationships. For example, the balanced coordinated resource allocation design of a network system [18], the vulnerability analysis of power grids [12], and the optimization of cascading failure protection in complex networks [19] can all be formulated as constrained nonlinear optimization problems. Moreover, the above-mentioned applications are all potentially large-scale problems, since a network can have a large number of nodes.
Generally speaking, a constrained optimization problem can be expressed in the following formulation

minimize: f(x),
subject to: g_i(x) <= 0, i = 1,...,M_g,
h_j(x) = 0, j = 1,...,M_h,
x_k^min <= x_k <= x_k^max, k = 1,...,D,

where x = (x_1, x_2,..., x_D) ∈ R^D is a D-dimensional vector, and f(x) : R^D → R is called the objective function. The problem has M_g inequality constraints g_i(x) <= 0 and M_h equality constraints h_j(x) = 0. Each element x_k ∈ R of the vector x is also called a variable, and has a lower bound x_k^min and an upper bound x_k^max. Usually the equality constraints are relaxed into inequality constraints of the form

h̃_j(·) = |h_j(·)| − h_δ <= 0,   (1)

where the tolerance level h_δ specifies how much the equality constraints are relaxed.

III. RELATED STUDIES

To solve large-scale constrained optimization problems, our εCCPSO framework combines a cooperative coevolutionary PSO algorithm [6] with the ε constrained method [9]. The update law of a classical PSO algorithm is given as follows

v_{i,d}(t+1) = ω v_{i,d}(t) + c_1 r_1^{(i)}(t) (p_{i,d}(t) − x_{i,d}(t)) + c_2 r_2^{(i)}(t) (g_d(t) − x_{i,d}(t)),   (2)
x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1),

where x_{i,d}, v_{i,d}, and p_{i,d} denote the dth dimension of the position x_i, velocity v_i, and personal best p_i of particle i, respectively; and g_d denotes the dth dimension of the global best g of the swarm. The parameters ω, c_1, and c_2 are called the inertia weight, cognitive attraction, and social attraction, respectively. In addition, r_1^{(i)} and r_2^{(i)} are random variables independently and uniformly sampled from [0,1] for particle i in each iteration. In the terminology of many nature-inspired algorithms, an iteration t as in (2) is also called a generation.

A. Cooperative coevolution

The idea of cooperative coevolution (CC) is to partition the dimension/variable space into certain groups and evolve the current group's variables using the best solutions found in the other groups [14], [15].
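As a concrete reading of the classical update law (2), a minimal Python sketch follows; the function name and the default values of ω, c_1, c_2 shown here are common illustrative choices, not the paper's settings.

```python
import random

def pso_update(x, v, p, g, omega=0.7298, c1=1.496, c2=1.496):
    """One velocity/position update of law (2) for a single particle.

    x, v, p: the particle's position, velocity, and personal best;
    g: the swarm's global best. All are lists of length D.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        # r1, r2: independent uniform samples on [0, 1], fresh per dimension
        r1, r2 = random.random(), random.random()
        vd = omega * v[d] + c1 * r1 * (p[d] - x[d]) + c2 * r2 * (g[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Starting from the origin with zero velocity, the new position equals the new velocity, which pulls the particle toward the personal and global bests.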
To maximize the performance of CC algorithms, correlated variables should be partitioned into the same group, and uncorrelated variables into different groups. To this end, different grouping strategies have been proposed and investigated [3], [4], [6]-[8]. For example, in multilevel cooperative coevolution (MLCC) [8], at each generation a group size s is probabilistically chosen from a set S according to the recorded performance of each group size in S, and then the variables are divided into K = D/s groups. In CCPSO2 [6], this procedure is simplified by choosing a new group size s randomly with uniform distribution from S only if there is no improvement to the global best g in the last generation. Competitive results have shown the effectiveness of CCPSO2 for large-scale unconstrained continuous optimization. Instead of using the classical update law (2), Ref. [6] shows that a Cauchy- and Gaussian-based update law with a ring topology works better with the random grouping cooperative coevolutionary PSO. The update law is shown as follows

x_{i,d}(t+1) = { p_{i,d}(t) + C_i(t) |p_{i,d}(t) − l_{i,d}(t)|, if rand < σ;
                l_{i,d}(t) + N_i(t) |p_{i,d}(t) − l_{i,d}(t)|, otherwise,   (3)

where rand denotes a uniform random function on (0,1), and C_i and N_i are random variables following the standard Cauchy distribution and the standard Gaussian distribution
for particle i, independently sampled at each generation t. Besides, l_{i,d} is the dth dimension of the neighborhood best l_i of particle i. The probability of choosing the Cauchy update is specified by a user-defined parameter σ ∈ [0,1], and the probability of choosing the Gaussian update is 1 − σ.

B. The ε constrained method

There are many constraint handling methods for nature-inspired algorithms, as introduced in [11]. Some of the common ones include the penalty method, stochastic ranking, the ε constrained method, multi-objective approaches, ensembles of constraint-handling techniques, etc. In this paper, the ε constrained method [10] is chosen as the constraint handling component of our algorithm, mainly because of its simplicity and competitive performance, as already demonstrated in many works [9], [10], [20], [21]. Besides, in the CEC 2010 competition & special session on constrained real-parameter optimization, an ε constrained differential evolution algorithm with an archive and gradient-based mutation (εDEag) demonstrated highly competitive results [9], and was the winner of the competition. In the ε constrained method, for any ε >= 0, the ε level comparison <_ε between two pairs of particles' objective values and constraint violations (f_1, φ_1) and (f_2, φ_2) is defined as follows

(f_1, φ_1) <_ε (f_2, φ_2)  ⟺  { f_1 < f_2, if φ_1, φ_2 <= ε;
                                f_1 < f_2, if φ_1 = φ_2;
                                φ_1 < φ_2, otherwise,

where the constraint violation can be calculated in various ways. Here we use the following constraint violation function

φ(x) = { Σ_{i=1}^{M_g} (max{0, g_i(x)})^2 + Σ_{j=1}^{M_h} (max{0, h̃_j(x)})^2, if x_k ∈ [x_k^min, x_k^max], k = 1,...,D;
         +∞, otherwise,

where the inequality constraint h̃_j(·) = |h_j(·)| − h_δ <= 0 is a relaxation of the equality constraint h_j(·) = 0 with tolerance h_δ. The definition of the ε level comparison <_ε allows transforming a constrained optimization problem into an unconstrained one. In the case of ε = ∞, <_ε is equivalent to the ordinary comparison < of objective function values.
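The ε level comparison and the violation function φ(x) above can be sketched in a few lines of Python; the function names and list-based representation are illustrative, and the box bounds are assumed identical for every variable as in the benchmarks used later.

```python
def phi(x, g_list, h_list, x_min, x_max):
    """Sum-of-squares constraint violation; infinite outside the box.

    g_list: callables g_i(x) for inequality constraints g_i(x) <= 0;
    h_list: callables for the relaxed equalities |h_j(x)| - h_delta <= 0.
    """
    if any(not (x_min <= xk <= x_max) for xk in x):
        return float("inf")
    v = sum(max(0.0, gi(x)) ** 2 for gi in g_list)
    v += sum(max(0.0, hj(x)) ** 2 for hj in h_list)
    return v

def eps_less(pair1, pair2, eps):
    """(f1, phi1) <_eps (f2, phi2): compare by f when both violations are
    within eps (or are equal), otherwise compare by violation."""
    f1, p1 = pair1
    f2, p2 = pair2
    if (p1 <= eps and p2 <= eps) or p1 == p2:
        return f1 < f2
    return p1 < p2
```

With eps = 0 this reduces to the lexicographic order (violation first), and with a very large eps it reduces to plain fitness comparison, matching the two limiting cases discussed in the text.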
In the case of ε = 0, <_ε is equivalent to the lexicographic order comparison in which constraint violation precedes objective function values. Furthermore, it can be proved that for a given ε value, the order of the particles is well-defined, i.e., if (f_1, φ_1) <_ε (f_2, φ_2) and (f_2, φ_2) <_ε (f_3, φ_3), then (f_1, φ_1) <_ε (f_3, φ_3). More properties of the ε level comparison can be found in [9], [10].

IV. ε CONSTRAINED CCPSO

This section introduces the εCCPSO framework and the novel ε level control method in this framework. Furthermore, for different new group size selection methods, three different algorithms based on the εCCPSO framework are proposed.

A. εCCPSO framework

The framework of εCCPSO is shown in Algorithm 1. As in CCPSO2, the dimensions of the solutions are divided into K groups, each of size s (D = Ks). Here, the jth group part of the dimensions of a vector z is denoted as z.d_j, j ∈ [1...K], and b(j,z) returns the vector (g.d_1, g.d_2,..., g.d_{j−1}, z, g.d_{j+1},..., g.d_K). In εCCPSO, the comparisons between objective function values in CCPSO2 are replaced by the ε-comparisons between the pairs of objective function values and constraint violations. Furthermore, using ε-comparison, the notion of ε-minimization, i.e., min_ε and arg min_ε, can be introduced. For example, min_ε [f(z_i), φ(z_i)] denotes the minimal pair [f(z_i), φ(z_i)] by the criteria of ε-comparison. Let N(i) denote the neighborhood particles of particle i. Since we use a ring topology, we have N(1) = {n, 1, 2}, N(n) = {n−1, n, 1}, and N(i) = {i−1, i, i+1} for i = 2,...,n−1. Furthermore, for clarity of presentation, let us define P(x) = [f(x), φ(x)] as the pair of objective value and constraint violation corresponding to solution x. Thus,

i* = arg min_ε { [f(z_{i'}), φ(z_{i'})] : i' ∈ N(i) } = arg min_ε { P(z_{i'}) : i' ∈ N(i) }

denotes the neighborhood best of particle z_i by means of ε-comparison.

Algorithm 1 Pseudocode for εCCPSO
1: Initialize each particle's solution.
Let the initial personal best be the same as the initial solution.
2: Initialize ε = ε(0) for constraint handling;
3: Compute the initial global best g and neighborhood best l based on ε-minimization;
4: Randomly choose a group size s from S and let K = D/s (the jth group part of the dimensions of vector z is denoted as z.d_j, j ∈ [1...K]);
5: repeat
6:   if the global best has not been improved then
7:     randomly choose a group size s from S and let K = D/s;
8:   Ind ← randomly permuted indices from [1,...,D]; {index vector}
9:   Construct K groups using Ind;
10:  for each group j ∈ [1,...,K] do
11:    for each particle i ∈ [1,...,n] do
12:      Evaluate b(j, x_i.d_j) and b(j, p_i.d_j);
13:      Update ε based on the current FES;
14:      if P[b(j, x_i.d_j)] <_ε P[b(j, p_i.d_j)] then
15:        p_i.d_j ← x_i.d_j;
16:      if P[b(j, p_i.d_j)] <_ε P(g) then
17:        g.d_j ← p_i.d_j;
18:    end for
19:    for each particle i ∈ [1,...,n] do
20:      l_i.d_j ← p_{i*}.d_j, where i* = arg min_ε { P[b(j, p_{i'}.d_j)] : i' ∈ N(i) };
21:    end for
22:  end for
23:  Update positions of all particles using (3);
24: until Stop criteria met
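The context-vector evaluation b(j, z) used throughout Algorithm 1 can be sketched as follows; the list-based representation and the `groups` argument (one index list per group, as built from Ind) are illustrative assumptions.

```python
def b(j, z, g, groups):
    """Return a full-length candidate: the global best g with the dimensions
    of group j replaced by the sub-vector z (all other groups keep g's values)."""
    full = list(g)  # copy so the global best itself is not modified
    for pos, dim in enumerate(groups[j]):
        full[dim] = z[pos]
    return full
```

This is what lets a low-dimensional sub-swarm be evaluated with the full objective and constraint functions: the missing dimensions are filled in from the current global best.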
B. Controlling the ε level in εCCPSO

In an algorithm utilizing the ε constrained method, the ε level needs to be controlled such that it gradually decreases from a large value (corresponding to a large tolerance of constraint violations) to zero (corresponding to zero tolerance of constraint violations). Here, based on the ε level control method used in [9], we adopt a modified version for large-scale constrained optimization, which is shown as follows

ε(0) = (1/2) ( (1/N) Σ_{i=1}^{N} φ(x_i(0)) + min_{i=1,...,N} φ(x_i(0)) ),

ε(t) = { ε(0) (1 − t/T_c)^{c_p(t)}, 0 < t <= T_c;
         0, t > T_c,

c_p(t) = { max{ c_p^min, (log ε_λ − log ε(0)) / log(1 − T_λ/T_c) }, t <= T_λ;
           c_p^min − Δc_p (T_c − t), t > T_λ,

Δc_p = (c_p^min − c_p(T_λ)) / (T_c − T_λ),

where t is the current number of fitness evaluations (FES), and the initial ε level ε(0) is chosen as the middle of the best value and the average value of the initial constraint violations of the particles. Besides, T_c = 0.95 Max_FES is the number of fitness evaluations at which ε is decreased to zero, i.e., ε(T_c) = 0, where Max_FES is the maximum number of fitness evaluations, and T_λ is the number of fitness evaluations at which the ε level has decreased to ε_λ, i.e., ε_λ = ε(T_λ). The control parameter c_p is kept fixed for t <= T_λ, and is gradually reduced to its specified minimum value c_p^min when t > T_λ, with c_p(T_c) = c_p^min.

C. εCCPSO with adaptive improvement detection

In unconstrained CCPSO2, a new group size s is selected from S if the global best has not been improved in the last generation. Note that since in unconstrained optimization the global best can only be changed to improve the objective function, we can consider any update of the global best as an improvement. However, when combining CCPSO2 with the ε constrained method, a problem of detecting the improvements arises. In εCCPSO for constrained optimization, a new global best solution can also be chosen for the improvement of constraint violations.
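The ε level schedule of Section IV-B can be sketched in Python. This is a sketch assuming the notation above; the argument names and the example parameter values in the test are illustrative, not the paper's settings.

```python
import math

def make_eps_schedule(phi0_values, max_fes, eps_lambda, cp_min,
                      tc_frac=0.95, tl_frac=0.8):
    """Build eps(t) from the initial constraint violations phi0_values."""
    Tc = tc_frac * max_fes
    Tl = tl_frac * max_fes
    # eps(0): middle of the average and the best initial violation
    eps0 = 0.5 * (sum(phi0_values) / len(phi0_values) + min(phi0_values))
    # c_p fixed for t <= Tl so that eps(Tl) ~= eps_lambda, floored at cp_min
    cp_fixed = max(cp_min,
                   (math.log(eps_lambda) - math.log(eps0))
                   / math.log(1.0 - Tl / Tc))
    # linear decay of c_p after Tl, reaching cp_min exactly at Tc
    delta_cp = (cp_min - cp_fixed) / (Tc - Tl)

    def eps(t):
        if t > Tc:
            return 0.0
        cp = cp_fixed if t <= Tl else cp_min - delta_cp * (Tc - t)
        return eps0 * (1.0 - t / Tc) ** cp
    return eps
```

By construction eps decreases monotonically from eps(0) and is exactly zero for t > T_c, so the comparison gradually hardens into the zero-tolerance lexicographic order.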
Now we consider several different ways of determining whether the global best has been improved in the last generation. First, let εCCPSOε be the εCCPSO algorithm that detects the improvement of the global best using the ε-comparison, i.e., the global best is considered improved if P(g_new) <_ε P(g_old), where g_new denotes the new global best and g_old denotes the old global best. This is reasonable based on the new ε-criteria. Thus, from Algorithm 1, we know that any update of the global best is considered an improvement in εCCPSOε. This is directly derived from the framework and is similar to the original CCPSO2 algorithm. Secondly, let εCCPSOw be the εCCPSO algorithm that detects the improvement of the global best using the ordinal comparison of objective function values (i.e., fitness values) with an improvement detection window of a specified size. In εCCPSOw, the global best is considered to have been improved if the new global best objective value is smaller than the minimum of the global best fitness values in the last w generations, i.e., f(g_new) < min{ f(g) : g ∈ S_g(w) }, where S_g(w) denotes the set of global bests in the last w generations, and w is called the size of the improvement detection window. The rationale behind the design of εCCPSOw is that in constrained optimization, both the global best fitness value and the constraint violation should be reduced in order for an update of the global best to be called an improvement. However, an update based on ε-comparison with a new ε level does not necessarily reduce the fitness value; thus it is reasonable to compare the current global best fitness value with a certain number of previous global best fitness values to decide whether the update is an improvement. Note that different improvement detection window sizes have different effects. For some problems, εCCPSOw with w = 1 yields better performance than with w = 10, while for other problems w = 10 yields better performance.
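The fixed-window improvement test of εCCPSOw can be sketched as follows; the class name is illustrative, and a bounded deque stands in for the set S_g(w) of recent global-best fitness values.

```python
from collections import deque

class ImprovementWindow:
    """Improvement detection with a fixed window: an update counts as an
    improvement only if the new global-best fitness beats the best fitness
    recorded over the last w generations."""

    def __init__(self, w):
        self.history = deque(maxlen=w)  # global-best fitness of last w generations

    def improved(self, f_new):
        result = (not self.history) or f_new < min(self.history)
        self.history.append(f_new)
        return result
```

With w = 1 this degenerates to comparing against the previous generation only; larger w demands that the fitness beat the whole recent record, which matters because under a shifting ε level the global-best fitness is not monotone.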
To automatically select an appropriate improvement detection window size, we finally propose an εCCPSO method which uses an adaptive detection window, i.e., εCCPSOa. We adopt the adaptive weighting scheme used in [8], where a performance record list R = {r_1, r_2} is used and is updated according to

r_i = ( f(g_old) − f(g_new) ) / f(g_old).

The probabilities p_1 and p_2 of selecting window sizes 1 and 10, respectively, in each generation are computed as

p_i = e^{7 r_i} / ( e^{7 r_1} + e^{7 r_2} ),

where the constant 7 and the natural exponential constant e are empirical values.

V. COMPUTER EXPERIMENTS

In this section, we show the experimental results of the three proposed algorithms, i.e., εCCPSOε, εCCPSOw, and εCCPSOa, compared with the state-of-the-art constrained optimization evolutionary algorithm εDEag. First, the eleven benchmark problems and the parameter settings for our algorithms are described. Second, the results of the εCCPSOw algorithm with w = 1 and w = 10 are compared to those of εCCPSOa with the adaptive improvement detection technique. Then, εCCPSOε and εCCPSOa are compared to εDEag on the benchmarks of 100D. Finally, we compare the results of εCCPSOε and εCCPSOa on the benchmarks of higher dimensions, i.e., 500/1000D, and show that εCCPSOa is more favorable for a general
application. All results here are the average of 25 runs of the corresponding algorithms.

Table I
PARAMETER SETTINGS
Name | Value
N | 30
D | 100/500/1000
σ | 0.5
Max_FES | 20000*D
S | {2, 5, 10, 50, 100} for D < 250; {2, 5, 10, 50, 100, 250} otherwise
h_δ | 0.0001
ε_λ | 0.34
c_p^min | 3
T_c | 0.95 Max_FES
T_λ | 0.8 Max_FES

Table II
PROPERTIES OF THE ELEVEN CEC 2010 PROBLEMS
Problem | Search range | Objective | Equality constraints | Inequality constraints
C01 | [0, 10]^D | N | 0 | 2N
C02 | [−5.12, 5.12]^D | S | 1S | 2S
C03 | [−1000, 1000]^D | N | 1N | 0
C04 | [−50, 50]^D | S | 2N, 2S | 0
C05 | [−600, 600]^D | S | 2S | 0
C09 | [−500, 500]^D | N | 1S | 0
C12 | [−1000, 1000]^D | S | 1N | 1S
C14 | [−1000, 1000]^D | N | 0 | 3S
C16 | [−10, 10]^D | N | 2S | 1S, 1N
C17 | [−10, 10]^D | N | 1S | 2N
C18 | [−50, 50]^D | N | 1S | 1S

The parameter settings for the different εCCPSO algorithms (i.e., εCCPSOε, εCCPSOw, and εCCPSOa) are shown in Table I.

Table III
εDEag EXPERIMENTS AT DIFFERENT SCALES (average fitness and maximum constraint violation of εDEag on C04 and C12 as the dimension grows; the entries at the larger scales, e.g., 500D and 1000D, are Unsolvable)

A. Benchmarks and experimental setup

We adopt and extend eleven benchmark problems proposed for the CEC 2010 special session on constrained real-parameter optimization (i.e., CEC'10 CRPO) [17], i.e., C01, C02, C03, C04, C05, C09, C12, C14, C16, C17, and C18. The properties of these problems are shown in Table II, where S stands for separable and N for non-separable. For example, C04 has a separable objective function, 2 non-separable and 2 separable equality constraints, and no inequality constraint. Detailed problem definitions can be found in [17]. Roughly speaking, these problems are defined in the formulation given in Section II. Moreover, all the variables of a problem have the same lower bound and upper bound, i.e., x_k^min = x^min and x_k^max = x^max. Besides, a random translation vector o ∈ [o^min, o^max]^D is used, where o^min and o^max depend on the problem. We extend the original CEC'10 CRPO problems from a maximum of 30D to a maximum of 1000D.

Figure 1. Fitness values of εCCPSOw with w = 1 and w = 10 compared to εCCPSOa, solving two benchmark problems of 100/500 dimensions: (a) C2 of 100D, (b) C5 of 100D, (c) C2 of 500D, (d) C5 of 500D.
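Before the window strategies are compared, the adaptive window-size selection of Section IV-C can be sketched in Python; the class name and the zero-division guard are illustrative additions, while the constant 7 and the candidate sizes {1, 10} follow the text.

```python
import math
import random

class AdaptiveWindowChooser:
    """Keeps a relative-improvement record r_i per candidate window size and
    draws a size with probability p_i = e^(7 r_i) / sum_j e^(7 r_j)."""

    def __init__(self, sizes=(1, 10)):
        self.sizes = sizes
        self.r = {w: 0.0 for w in sizes}

    def record(self, w, f_old, f_new):
        # relative improvement achieved while window size w was in use
        # (guard against f_old == 0 is an implementation assumption)
        self.r[w] = (f_old - f_new) / abs(f_old) if f_old != 0 else 0.0

    def choose(self):
        weights = [math.exp(7.0 * self.r[w]) for w in self.sizes]
        u = random.random() * sum(weights)
        for w, wt in zip(self.sizes, weights):
            u -= wt
            if u <= 0:
                return w
        return self.sizes[-1]
```

A window size that recently produced a large relative improvement quickly dominates the draw, which is the intended self-adaptive behavior.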
For example, in C02, we have x^min = −5.12 and x^max = 5.12. From the code of the original C02 problem, min(o) = −0.4966 and max(o) = 0.4934; thus we let o^min = −0.5 and o^max = 0.5, and generate o randomly with uniform distribution over the region [o^min, o^max]^1000. After the translation vectors o for the problems are generated, they are fixed for all experiments.

B. Testing different improvement detection window strategies

There are cases in which different improvement detection window sizes make a huge difference to the performance of the εCCPSOw algorithm. As shown in Fig. 1, when solving Problem C2 with εCCPSOw, using w = 1 results in better performance than using w = 10. However, when solving Problem C5, w = 10 yields better performance (note that
in ε constrained optimization, the global best fitness values are not always decreasing, since the particles are continually compared using new ε-criteria). Because of the adaptive improvement detection window, for both C2 and C5 the performance of εCCPSOa is in between that of εCCPSOw with w = 1 and with w = 10. For the other problems, which are not shown in the figure, either using w = 1 and w = 10 in εCCPSOw does not make much difference, or the average fitness value of εCCPSOa almost coincides with one of the two cases of εCCPSOw (i.e., w = 1 or w = 10). Thus, the εCCPSOa algorithm can roughly represent the class of εCCPSO algorithms with the improvement detection window technique in terms of average fitness value performance. For the rest of the experimental studies, we thus use the εCCPSOa algorithm to compare with other types of algorithms on large-scale constrained optimization.

Table IV
100D EXPERIMENTS (average fitness, Avg. Fitness, and maximum constraint violation, Max. Const., of εCCPSOε, εCCPSOa, and εDEag on the eleven benchmark problems)

C. Comparing εCCPSOε and εCCPSOa with εDEag

Although there is no nature-inspired algorithm known to be capable of solving a broad class of large-scale real-valued constrained optimization problems, constrained optimization algorithms at smaller scales are abundant, and many of them are well studied [11]. The εDEag algorithm is an ε constrained differential evolution algorithm which makes use of an archive and gradient-based mutation. It has been shown to be quite efficient for 10/30D real-valued constrained optimization [9]. The code of the algorithm can be downloaded from [22], and is also used in this research. However, as the number of dimensions increases, its efficiency decreases, as shown in Table III for Problems C04 and C12.
Furthermore, when the number of dimensions is greater than a certain limit (i.e., 97D for C04 and 400D for C12), overflows occur during the execution of εDEag for these two problems. In Table IV, we compare the average fitness values (Avg. Fitness) and the maximum constraint violations (Max. Const.) of the final global best solutions over the 25 runs of the εCCPSOε and εCCPSOa algorithms with those of the solutions of the εDEag algorithm. As shown in the table, both εCCPSOε and εCCPSOa surpass εDEag on all problems except C01, where εCCPSOε and εCCPSOa have not found a final feasible solution. Note that the global bests of εCCPSOε and εCCPSOa have better fitness values than those of εDEag for all problems; and except for C01, the maximum constraint violations of both εCCPSOε and εCCPSOa are all smaller than those of εDEag.

D. Comparing εCCPSOε with εCCPSOa

The final average global best fitness values (Avg. Fitness) and maximum constraint violations (Max. Const.) of εCCPSOε and εCCPSOa applied to the eleven benchmark problems of 100/500/1000D are shown in Tables IV, V, and VI, respectively. Besides, the evolution of the average fitness values of εCCPSOε and εCCPSOa solving six of the benchmarks (i.e., C3, C4, C9, C14, C16, and C18) of 100/500D is shown in Fig. 2 and Fig. 3. From the results we can see that for six of the benchmarks, i.e., C1, C2, C4, C12, C16, and C17, in the cases of 100/500D, the performance of the two algorithms is too close to tell which is better. However, for the set of problems on which εCCPSOa outperforms εCCPSOε, i.e., C3, C5, C9, C14, and C18, the performance gaps are clear. We would thus conclude that although εCCPSOε is better in some situations, εCCPSOa is more favorable for more general optimization problems.

VI. CONCLUSION

Large-scale constrained optimization using nature-inspired algorithms (e.g., PSO, DE) is still a relatively new and under-explored area, and by far there is no publicly available nature-inspired algorithm developed for such a purpose.
In this paper, by combining two different techniques from two different areas (i.e., CCPSO2 and the ε constrained method), we propose the εCCPSO framework for large-scale real-valued constrained optimization. Based on the εCCPSO framework, we further develop three different algorithms with different global best improvement detection techniques. Experimental results show that the
proposed algorithms compare favorably to the state-of-the-art constrained optimization algorithm εDEag on large-scale problems. It is also shown that εCCPSOa with the adaptive improvement detection technique is more favorable than the other two algorithms for more general problems.

Figure 2. Fitness values of εCCPSOε and εCCPSOa solving six benchmark problems of 100 dimensions: (a) C3, (b) C4, (c) C9, (d) C14, (e) C16, (f) C18.

Table V
500D EXPERIMENTS (average fitness and maximum constraint violation of εCCPSOε and εCCPSOa on the eleven benchmark problems)

Table VI
1000D EXPERIMENTS (average fitness and maximum constraint violation of εCCPSOε and εCCPSOa on the eleven benchmark problems)

REFERENCES

[1] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing. Springer Science & Business Media, 2003.
[2] J. Kennedy and R. C. Eberhart, Swarm Intelligence. Morgan Kaufmann, 2001.
[3] M. N. Omidvar, X. Li, Y. Mei, and X. Yao, "Cooperative co-evolution with differential grouping for large scale optimization," IEEE Trans. Evolutionary Computation, vol. 18, no. 3, pp. 378-393, Jun. 2014.
[4] Y. Mei, X. Li, and X. Yao, "Cooperative coevolution with route distance grouping for large-scale capacitated arc routing problems," IEEE Trans. Evolutionary Computation, vol. 18, no. 3, Jun. 2014.
[5] B. Kazimipour, X. Li, and A. K. Qin, "Initialization methods for large scale global optimization," in IEEE Congress on Evolutionary Computation. IEEE, 2013.
[6] X. Li and X. Yao, "Cooperatively coevolving particle swarms for large scale optimization," IEEE Trans. Evolutionary Computation, vol. 16, no. 2, pp. 210-224, Apr. 2012.
[7] W. Chen, T. Weise, Z. Yang, and K. Tang, "Large-scale global optimization using cooperative coevolution with variable interaction learning," in Parallel Problem Solving from Nature, PPSN XI. Springer, 2010.
Figure 3. Fitness values of εCCPSOε and εCCPSOa solving six benchmark problems of 500 dimensions: (a) C3, (b) C4, (c) C9, (d) C14, (e) C16, (f) C18.

[8] Z. Yang, K. Tang, and X. Yao, "Multilevel cooperative coevolution for large scale optimization," in IEEE Congress on Evolutionary Computation. IEEE, 2008.
[9] T. Takahama and S. Sakai, "Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation," in 2010 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2010.
[10] ——, "Constrained optimization by ε constrained particle swarm optimizer with ε-level control," in Soft Computing as Transdisciplinary Science and Technology. Springer Berlin Heidelberg, 2005.
[11] E. Mezura-Montes and C. A. Coello Coello, "Constraint-handling in nature-inspired numerical optimization: Past, present and future," Swarm and Evolutionary Computation, vol. 1, no. 4, pp. 173-194, 2011.
[12] L. Zhao and B. Zeng, "Vulnerability analysis of power grids with line switching," IEEE Trans. Power Systems, vol. 28, no. 3, Aug. 2013.
[13] C. Grigg et al., "The IEEE Reliability Test System-1996: A report prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee," IEEE Trans. Power Systems, vol. 14, no. 3, pp. 1010-1020, Aug. 1999.
[14] M. A. Potter and K. A. De Jong, "A cooperative coevolutionary approach to function optimization," in Parallel Problem Solving from Nature, PPSN III. Springer, 1994, pp. 249-257.
[15] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Trans. Evolutionary Computation, vol. 8, no. 3, pp. 225-239, Jun. 2004.
[16] Competition & special session on constrained real-parameter optimization, files/cec1-const/cec1-const.htm, accessed:
[17] R. Mallipeddi and P. N. Suganthan, "Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization," Nanyang Technological University, Singapore, 2010.
[18] Q. Hui and H.
Zhang, "Optimal balanced coordinated network resource allocation using swarm optimization," IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 45, no. 5, 2015.
[19] Y.-F. Li, G. Sansavini, and E. Zio, "Non-dominated sorting binary differential evolution for the multi-objective optimization of cascading failures protection in complex networks," Reliability Engineering & System Safety, vol. 111, pp. 195-205, 2013.
[20] J. Brest, "Constrained real-parameter optimization with ε-self-adaptive differential evolution," in Constraint-Handling in Evolutionary Optimization, ser. Studies in Computational Intelligence, E. Mezura-Montes, Ed. Springer Berlin Heidelberg, 2009, vol. 198.
[21] S. Zeng et al., "A lower-dimensional-search evolutionary algorithm and its application in constrained optimization problems," in IEEE Congress on Evolutionary Computation. IEEE, 2007.
[22] Tetsuyuki Takahama's Home Page, hiroshima-cu.ac.jp/~takahama/eng/index.html, accessed:
nd Workshop on Advanced Research and Technology in Industry Applications (WARTIA ) Verification of a hypothesis about unification and simplification for position updating formulas in particle swarm optimization
More informationACTA UNIVERSITATIS APULENSIS No 11/2006
ACTA UNIVERSITATIS APULENSIS No /26 Proceedings of the International Conference on Theory and Application of Mathematics and Informatics ICTAMI 25 - Alba Iulia, Romania FAR FROM EQUILIBRIUM COMPUTATION
More informationAMULTIOBJECTIVE optimization problem (MOP) can
1 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION 1 Letters 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Decomposition-Based Multiobjective Evolutionary Algorithm with an Ensemble of Neighborhood Sizes Shi-Zheng
More informationApplication of Teaching Learning Based Optimization for Size and Location Determination of Distributed Generation in Radial Distribution System.
Application of Teaching Learning Based Optimization for Size and Location Determination of Distributed Generation in Radial Distribution System. Khyati Mistry Electrical Engineering Department. Sardar
More informationConstrained Real-Parameter Optimization with Generalized Differential Evolution
2006 IEEE Congress on Evolutionary Computation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 2006 Constrained Real-Parameter Optimization with Generalized Differential Evolution
More informationAnalyses of Guide Update Approaches for Vector Evaluated Particle Swarm Optimisation on Dynamic Multi-Objective Optimisation Problems
WCCI 22 IEEE World Congress on Computational Intelligence June, -5, 22 - Brisbane, Australia IEEE CEC Analyses of Guide Update Approaches for Vector Evaluated Particle Swarm Optimisation on Dynamic Multi-Objective
More informationUsefulness of infeasible solutions in evolutionary search: an empirical and mathematical study
Edith Cowan University Research Online ECU Publications 13 13 Usefulness of infeasible solutions in evolutionary search: an empirical and mathematical study Lyndon While Philip Hingston Edith Cowan University,
More informationFuzzy adaptive catfish particle swarm optimization
ORIGINAL RESEARCH Fuzzy adaptive catfish particle swarm optimization Li-Yeh Chuang, Sheng-Wei Tsai, Cheng-Hong Yang. Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung, Taiwan
More informationON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS
J. of Electromagn. Waves and Appl., Vol. 23, 711 721, 2009 ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS L. Zhang, F. Yang, and
More informationFuzzy Cognitive Maps Learning through Swarm Intelligence
Fuzzy Cognitive Maps Learning through Swarm Intelligence E.I. Papageorgiou,3, K.E. Parsopoulos 2,3, P.P. Groumpos,3, and M.N. Vrahatis 2,3 Department of Electrical and Computer Engineering, University
More informationFinding Robust Solutions to Dynamic Optimization Problems
Finding Robust Solutions to Dynamic Optimization Problems Haobo Fu 1, Bernhard Sendhoff, Ke Tang 3, and Xin Yao 1 1 CERCIA, School of Computer Science, University of Birmingham, UK Honda Research Institute
More informationB-Positive Particle Swarm Optimization (B.P.S.O)
Int. J. Com. Net. Tech. 1, No. 2, 95-102 (2013) 95 International Journal of Computing and Network Technology http://dx.doi.org/10.12785/ijcnt/010201 B-Positive Particle Swarm Optimization (B.P.S.O) Muhammad
More informationDifferential Evolution Based Particle Swarm Optimization
Differential Evolution Based Particle Swarm Optimization Mahamed G.H. Omran Department of Computer Science Gulf University of Science and Technology Kuwait mjomran@gmail.com Andries P. Engelbrecht Department
More informationAn Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization
> REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < 1 An Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization Yuan
More informationThree Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms
Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn
More informationDecomposition and Metaoptimization of Mutation Operator in Differential Evolution
Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Karol Opara 1 and Jaros law Arabas 2 1 Systems Research Institute, Polish Academy of Sciences 2 Institute of Electronic
More informationA Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning
009 Ninth International Conference on Intelligent Systems Design and Applications A Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning Hui Wang, Zhijian Wu, Shahryar Rahnamayan,
More informationPerformance Assessment of Generalized Differential Evolution 3 with a Given Set of Constrained Multi-Objective Test Problems
Performance Assessment of Generalized Differential Evolution 3 with a Given Set of Constrained Multi-Objective Test Problems Saku Kukkonen, Student Member, IEEE and Jouni Lampinen Abstract This paper presents
More informationBinary Particle Swarm Optimization with Crossover Operation for Discrete Optimization
Binary Particle Swarm Optimization with Crossover Operation for Discrete Optimization Deepak Singh Raipur Institute of Technology Raipur, India Vikas Singh ABV- Indian Institute of Information Technology
More informationWORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY
WORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY Kiyoharu Tagawa School of Science and Engineering, Kindai University, Japan tagawa@info.kindai.ac.jp Abstract In real-world optimization problems, a wide
More informationJournal of Engineering Science and Technology Review 7 (1) (2014)
Jestr Journal of Engineering Science and Technology Review 7 () (204) 32 36 JOURNAL OF Engineering Science and Technology Review www.jestr.org Particle Swarm Optimization-based BP Neural Network for UHV
More informationA Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions
A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions Chao Qian,2, Yang Yu 2, and Zhi-Hua Zhou 2 UBRI, School of Computer Science and Technology, University of
More informationRuntime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights
Runtime Analysis of Evolutionary Algorithms for the Knapsack Problem with Favorably Correlated Weights Frank Neumann 1 and Andrew M. Sutton 2 1 Optimisation and Logistics, School of Computer Science, The
More informationDistributed Particle Swarm Optimization
Distributed Particle Swarm Optimization Salman Kahrobaee CSCE 990 Seminar Main Reference: A Comparative Study of Four Parallel and Distributed PSO Methods Leonardo VANNESCHI, Daniele CODECASA and Giancarlo
More informationCBCC3 A Contribution-Based Cooperative Co-evolutionary Algorithm with Improved Exploration/Exploitation Balance
CBCC3 A Contribution-Based Cooperative Co-evolutionary Algorithm with Improved Exploration/Exploitation Balance Mohammad Nabi Omidvar, Borhan Kazimipour, Xiaodong Li, Xin Yao School of Computer Science,
More informationLevy Differential Evolutionary Particle Swarm Optimization (LEVY DEEPSO)
1 Levy Differential Evolutionary Particle Swarm Optimization (LEVY DEEPSO) Developers: Kartik S. Pandya, CHARUSAT-INDIA S.K. Joshi, MSU-INDIA S.N. Singh, IIT-K-INDIA 2 Particle Swarm Optimization[1] Particles:
More informationThe Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis
The Parameters Selection of Algorithm influencing On performance of Fault Diagnosis Yan HE,a, Wei Jin MA and Ji Ping ZHANG School of Mechanical Engineering and Power Engineer North University of China,
More informationA COMPARISON OF PARTICLE SWARM OPTIMIZATION AND DIFFERENTIAL EVOLUTION
A COMPARISON OF PARTICLE SWARM OPTIMIZATION AND DIFFERENTIAL EVOLUTION Vu Truong Vu Ho Chi Minh City University of Transport, Faculty of Civil Engineering No.2, D3 Street, Ward 25, Binh Thanh District,
More informationReactive Power Compensation for Reliability Improvement of Power Systems
for Reliability Improvement of Power Systems Mohammed Benidris, Member, IEEE, Samer Sulaeman, Student Member, IEEE, Yuting Tian, Student Member, IEEE and Joydeep Mitra, Senior Member, IEEE Department of
More informationThree Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms
Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn
More informationDESIGN AND OPTIMIZATION OF EQUAL SPLIT BROADBAND MICROSTRIP WILKINSON POWER DI- VIDER USING ENHANCED PARTICLE SWARM OPTI- MIZATION ALGORITHM
Progress In Electromagnetics Research, Vol. 118, 321 334, 2011 DESIGN AND OPTIMIZATION OF EQUAL SPLIT BROADBAND MICROSTRIP WILKINSON POWER DI- VIDER USING ENHANCED PARTICLE SWARM OPTI- MIZATION ALGORITHM
More informationParticle Swarm Optimization. Abhishek Roy Friday Group Meeting Date:
Particle Swarm Optimization Abhishek Roy Friday Group Meeting Date: 05.25.2016 Cooperation example Basic Idea PSO is a robust stochastic optimization technique based on the movement and intelligence of
More informationBehavior of EMO Algorithms on Many-Objective Optimization Problems with Correlated Objectives
H. Ishibuchi N. Akedo H. Ohyanagi and Y. Nojima Behavior of EMO algorithms on many-objective optimization problems with correlated objectives Proc. of 211 IEEE Congress on Evolutionary Computation pp.
More informationOn the Usefulness of Infeasible Solutions in Evolutionary Search: A Theoretical Study
On the Usefulness of Infeasible Solutions in Evolutionary Search: A Theoretical Study Yang Yu, and Zhi-Hua Zhou, Senior Member, IEEE National Key Laboratory for Novel Software Technology Nanjing University,
More informationIntuitionistic Fuzzy Estimation of the Ant Methodology
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 9, No 2 Sofia 2009 Intuitionistic Fuzzy Estimation of the Ant Methodology S Fidanova, P Marinov Institute of Parallel Processing,
More informationParticle Swarm Optimization with Velocity Adaptation
In Proceedings of the International Conference on Adaptive and Intelligent Systems (ICAIS 2009), pp. 146 151, 2009. c 2009 IEEE Particle Swarm Optimization with Velocity Adaptation Sabine Helwig, Frank
More informationA Method of HVAC Process Object Identification Based on PSO
2017 3 45 313 doi 10.3969 j.issn.1673-7237.2017.03.004 a a b a. b. 201804 PID PID 2 TU831 A 1673-7237 2017 03-0019-05 A Method of HVAC Process Object Identification Based on PSO HOU Dan - lin a PAN Yi
More informationLimiting the Velocity in the Particle Swarm Optimization Algorithm
Limiting the Velocity in the Particle Swarm Optimization Algorithm Julio Barrera 1, Osiris Álvarez-Bajo 2, Juan J. Flores 3, Carlos A. Coello Coello 4 1 Universidad Michoacana de San Nicolás de Hidalgo,
More informationTuning Parameters across Mixed Dimensional Instances: A Performance Scalability Study of Sep-G-CMA-ES
Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability
More informationParticle swarm optimization approach to portfolio optimization
Nonlinear Analysis: Real World Applications 10 (2009) 2396 2406 Contents lists available at ScienceDirect Nonlinear Analysis: Real World Applications journal homepage: www.elsevier.com/locate/nonrwa Particle
More informationConstraint Handling Methods for Portfolio Optimization using Particle Swarm Optimization
2015 IEEE Symposium Series on Computational Intelligence Constraint Handling Methods for Portfolio Optimization using Particle Swarm Optimization Stuart G. Reid Department of Computer Science University
More informationResearch Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems
Journal of Applied Mathematics Volume 2013, Article ID 757391, 18 pages http://dx.doi.org/10.1155/2013/757391 Research Article A Novel Differential Evolution Invasive Weed Optimization for Solving Nonlinear
More informationParameter Sensitivity Analysis of Social Spider Algorithm
Parameter Sensitivity Analysis of Social Spider Algorithm James J.Q. Yu, Student Member, IEEE and Victor O.K. Li, Fellow, IEEE Department of Electrical and Electronic Engineering The University of Hong
More informationGeneralization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms
Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Tadahiko Murata 1, Shiori Kaige 2, and Hisao Ishibuchi 2 1 Department of Informatics, Kansai University 2-1-1 Ryozenji-cho,
More informationSolving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.15(2013) No.3,pp.212-219 Solving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm
More informationCapacitor Placement for Economical Electrical Systems using Ant Colony Search Algorithm
Capacitor Placement for Economical Electrical Systems using Ant Colony Search Algorithm Bharat Solanki Abstract The optimal capacitor placement problem involves determination of the location, number, type
More informationAdaptive Differential Evolution and Exponential Crossover
Proceedings of the International Multiconference on Computer Science and Information Technology pp. 927 931 ISBN 978-83-60810-14-9 ISSN 1896-7094 Adaptive Differential Evolution and Exponential Crossover
More informationPROMPT PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM
PROMPT PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM K. Lenin 1 Research Scholar Jawaharlal Nehru Technological University Kukatpally,Hyderabad 500 085, India
More informationMulti-objective Quadratic Assignment Problem instances generator with a known optimum solution
Multi-objective Quadratic Assignment Problem instances generator with a known optimum solution Mădălina M. Drugan Artificial Intelligence lab, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels,
More informationComparison of Results on the 2017 CEC Competition on Constrained Real Parameter Optimization
Comparison of Results on the 07 CEC Competition on Constrained Real Parameter Optimization GuohuaWu, R. Mallipeddi, P. N. Suganthan National University of DefenseTechnology, Changsha, Hunan, P.R. China
More informationPerformance Evaluation of IIR Filter Design Using Multi-Swarm PSO
Proceedings of APSIPA Annual Summit and Conference 2 6-9 December 2 Performance Evaluation of IIR Filter Design Using Multi-Swarm PSO Haruna Aimi and Kenji Suyama Tokyo Denki University, Tokyo, Japan Abstract
More informationResearch Article A Hybrid Backtracking Search Optimization Algorithm with Differential Evolution
Mathematical Problems in Engineering Volume 2015, Article ID 769245, 16 pages http://dx.doi.org/10.1155/2015/769245 Research Article A Hybrid Backtracking Search Optimization Algorithm with Differential
More informationApplying Particle Swarm Optimization to Adaptive Controller Leandro dos Santos Coelho 1 and Fabio A. Guerra 2
Applying Particle Swarm Optimization to Adaptive Controller Leandro dos Santos Coelho 1 and Fabio A. Guerra 2 1 Production and Systems Engineering Graduate Program, PPGEPS Pontifical Catholic University
More informationDynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 2015 Learning Based Competition Problems
Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 05 Learning Based Competition Problems Chao Yu, Ling Chen Kelley,, and Ying Tan, The Key Laboratory of Machine Perception
More informationReactive Power Contribution of Multiple STATCOM using Particle Swarm Optimization
Reactive Power Contribution of Multiple STATCOM using Particle Swarm Optimization S. Uma Mageswaran 1, Dr.N.O.Guna Sehar 2 1 Assistant Professor, Velammal Institute of Technology, Anna University, Chennai,
More informationOPTIMAL LOCATION AND SIZING OF DISTRIBUTED GENERATOR IN RADIAL DISTRIBUTION SYSTEM USING OPTIMIZATION TECHNIQUE FOR MINIMIZATION OF LOSSES
780 OPTIMAL LOCATIO AD SIZIG OF DISTRIBUTED GEERATOR I RADIAL DISTRIBUTIO SYSTEM USIG OPTIMIZATIO TECHIQUE FOR MIIMIZATIO OF LOSSES A. Vishwanadh 1, G. Sasi Kumar 2, Dr. D. Ravi Kumar 3 1 (Department of
More informationImproved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates Guo WU 1, Li-guo FANG 1, Jian-jun LI 2 and Fan-shuo MENG 1
17 International Conference on Computer, Electronics and Communication Engineering (CECE 17 ISBN: 978-1-6595-476-9 Improved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates Guo WU 1, Li-guo
More informationA self-guided Particle Swarm Optimization with Independent Dynamic Inertia Weights Setting on Each Particle
Appl. Math. Inf. Sci. 7, No. 2, 545-552 (2013) 545 Applied Mathematics & Information Sciences An International Journal A self-guided Particle Swarm Optimization with Independent Dynamic Inertia Weights
More informationToward Effective Initialization for Large-Scale Search Spaces
Toward Effective Initialization for Large-Scale Search Spaces Shahryar Rahnamayan University of Ontario Institute of Technology (UOIT) Faculty of Engineering and Applied Science 000 Simcoe Street North
More informationMeta Heuristic Harmony Search Algorithm for Network Reconfiguration and Distributed Generation Allocation
Department of CSE, JayShriram Group of Institutions, Tirupur, Tamilnadu, India on 6 th & 7 th March 2014 Meta Heuristic Harmony Search Algorithm for Network Reconfiguration and Distributed Generation Allocation
More informationParticle swarm optimization (PSO): a potentially useful tool for chemometrics?
Particle swarm optimization (PSO): a potentially useful tool for chemometrics? Federico Marini 1, Beata Walczak 2 1 Sapienza University of Rome, Rome, Italy 2 Silesian University, Katowice, Poland Rome,
More informationCrossover and the Different Faces of Differential Evolution Searches
WCCI 21 IEEE World Congress on Computational Intelligence July, 18-23, 21 - CCIB, Barcelona, Spain CEC IEEE Crossover and the Different Faces of Differential Evolution Searches James Montgomery Abstract
More informationA COMPARATIVE STUDY ON OPTIMIZATION METHODS FOR THE CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS
A COMPARATIVE STUDY ON OPTIMIZATION METHODS FOR THE CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS OZGUR YENIAY Received 2 August 2004 and in revised form 12 November 2004 Constrained nonlinear programming
More informationRegular paper. Particle Swarm Optimization Applied to the Economic Dispatch Problem
Rafik Labdani Linda Slimani Tarek Bouktir Electrical Engineering Department, Oum El Bouaghi University, 04000 Algeria. rlabdani@yahoo.fr J. Electrical Systems 2-2 (2006): 95-102 Regular paper Particle
More informationAvailable online at AASRI Procedia 1 (2012 ) AASRI Conference on Computational Intelligence and Bioinformatics
Available online at www.sciencedirect.com AASRI Procedia ( ) 377 383 AASRI Procedia www.elsevier.com/locate/procedia AASRI Conference on Computational Intelligence and Bioinformatics Chaotic Time Series
More informationA New Uncertain Programming Model for Grain Supply Chain Design
INFORMATION Volume 5, Number, pp.-8 ISSN 343-4500 c 0 International Information Institute A New Uncertain Programming Model for Grain Supply Chain Design Sibo Ding School of Management, Henan University
More informationElectric Load Forecasting Using Wavelet Transform and Extreme Learning Machine
Electric Load Forecasting Using Wavelet Transform and Extreme Learning Machine Song Li 1, Peng Wang 1 and Lalit Goel 1 1 School of Electrical and Electronic Engineering Nanyang Technological University
More informationParticle Swarm Optimization of Hidden Markov Models: a comparative study
Particle Swarm Optimization of Hidden Markov Models: a comparative study D. Novák Department of Cybernetics Czech Technical University in Prague Czech Republic email:xnovakd@labe.felk.cvut.cz M. Macaš,
More informationCHEMICAL Reaction Optimization (CRO) [1] is a simple
Real-Coded Chemical Reaction Optimization with Different Perturbation s James J.Q. Yu, Student Member, IEEE Department of Electrical and Electronic Engineering The University of Hong Kong Email: jqyu@eee.hku.hk
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More informationResearch Article The Inertia Weight Updating Strategies in Particle Swarm Optimisation Based on the Beta Distribution
Mathematical Problems in Engineering Volume 2015, Article ID 790465, 9 pages http://dx.doi.org/10.1155/2015/790465 Research Article The Inertia Weight Updating Strategies in Particle Swarm Optimisation
More informationMulti-start JADE with knowledge transfer for numerical optimization
Multi-start JADE with knowledge transfer for numerical optimization Fei Peng, Ke Tang,Guoliang Chen and Xin Yao Abstract JADE is a recent variant of Differential Evolution (DE) for numerical optimization,
More informationResearch Article Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Computational Intelligence and Neuroscience Volume 2015, Article I 904713, 9 pages http://dx.doi.org/10.1155/2015/904713 Research Article Multiswarm Particle Swarm Optimization with Transfer of the Best
More informationEUSIPCO
EUSIPCO 3 569736677 FULLY ISTRIBUTE SIGNAL ETECTION: APPLICATION TO COGNITIVE RAIO Franc Iutzeler Philippe Ciblat Telecom ParisTech, 46 rue Barrault 753 Paris, France email: firstnamelastname@telecom-paristechfr
More informationEvolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution
Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution Michael G. Epitropakis, Member, IEEE, Vassilis P. Plagianakos and Michael N. Vrahatis Abstract In
More informationGaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress
Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague
More informationPower Electronic Circuits Design: A Particle Swarm Optimization Approach *
Power Electronic Circuits Design: A Particle Swarm Optimization Approach * Jun Zhang, Yuan Shi, and Zhi-hui Zhan ** Department of Computer Science, Sun Yat-sen University, China, 510275 junzhang@ieee.org
More informationDesign of Higher Order LP and HP Digital IIR Filter Using the Concept of Teaching-Learning Based Optimization
Design of Higher Order LP and HP Digital IIR Filter Using the Concept of Teaching-Learning Based Optimization DAMANPREET SINGH, J.S. DHILLON Department of Computer Science & Engineering Department of Electrical
More informationEvolutionary Multiobjective. Optimization Methods for the Shape Design of Industrial Electromagnetic Devices. P. Di Barba, University of Pavia, Italy
Evolutionary Multiobjective Optimization Methods for the Shape Design of Industrial Electromagnetic Devices P. Di Barba, University of Pavia, Italy INTRODUCTION Evolutionary Multiobjective Optimization
More informationReduction of Random Variables in Structural Reliability Analysis
Reduction of Random Variables in Structural Reliability Analysis S. Adhikari and R. S. Langley Department of Engineering University of Cambridge Trumpington Street Cambridge CB2 1PZ (U.K.) February 21,
More informationTransitional Particle Swarm Optimization
International Journal of Electrical and Computer Engineering (IJECE) Vol. 7, No. 3, June 7, pp. 6~69 ISSN: 88-878, DOI:.59/ijece.v7i3.pp6-69 6 Transitional Particle Swarm Optimization Nor Azlina Ab Aziz,
More informationA Genetic Algorithm Approach for Doing Misuse Detection in Audit Trail Files
A Genetic Algorithm Approach for Doing Misuse Detection in Audit Trail Files Pedro A. Diaz-Gomez and Dean F. Hougen Robotics, Evolution, Adaptation, and Learning Laboratory (REAL Lab) School of Computer
More informationViability Principles for Constrained Optimization Using a (1+1)-CMA-ES
Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Andrea Maesani and Dario Floreano Laboratory of Intelligent Systems, Institute of Microengineering, Ecole Polytechnique Fédérale de
More informationGlobal minimization with a new filled function approach
2nd International Conference on Electronics, Networ and Computer Engineering ICENCE 206 Global minimization with a new filled function approach Weiiang WANG,a, Youlin SHANG2,b and Mengiang LI3,c Shanghai
More informationA Particle Swarm Based Method for Composite System Reliability Analysis
A Particle Swarm Based Method for Composite System Reliability Analysis Ramesh Earla, Shashi B. Patra, Student Member, IEEE and Joydeep Mitra, Senior Member, IEEE Abstract This paper presents a new method
More informationAn Evolution Strategy for the Induction of Fuzzy Finite-state Automata
Journal of Mathematics and Statistics 2 (2): 386-390, 2006 ISSN 1549-3644 Science Publications, 2006 An Evolution Strategy for the Induction of Fuzzy Finite-state Automata 1,2 Mozhiwen and 1 Wanmin 1 College
More informationProblems of cryptography as discrete optimization tasks
Nonlinear Analysis 63 (5) e831 e837 www.elsevier.com/locate/na Problems of cryptography as discrete optimization tasks E.C. Laskari a,b, G.C. Meletiou c,, M.N. Vrahatis a,b a Computational Intelligence
More informationNonlinear Dynamical Behavior in BS Evolution Model Based on Small-World Network Added with Nonlinear Preference
Commun. Theor. Phys. (Beijing, China) 48 (2007) pp. 137 142 c International Academic Publishers Vol. 48, No. 1, July 15, 2007 Nonlinear Dynamical Behavior in BS Evolution Model Based on Small-World Network
More informationSelf-Organization by Optimizing Free-Energy
Self-Organization by Optimizing Free-Energy J.J. Verbeek, N. Vlassis, B.J.A. Kröse University of Amsterdam, Informatics Institute Kruislaan 403, 1098 SJ Amsterdam, The Netherlands Abstract. We present
More informationMultiple Particle Swarm Optimizers with Diversive Curiosity
Multiple Particle Swarm Optimizers with Diversive Curiosity Hong Zhang, Member IAENG Abstract In this paper we propose a new method, called multiple particle swarm optimizers with diversive curiosity (MPSOα/DC),
More informationEvolutionary Programming Using a Mixed Strategy Adapting to Local Fitness Landscape
Evolutionary Programming Using a Mixed Strategy Adapting to Local Fitness Landscape Liang Shen Department of Computer Science Aberystwyth University Ceredigion, SY23 3DB UK lls08@aber.ac.uk Jun He Department
More informationThe particle swarm optimization algorithm: convergence analysis and parameter selection
Information Processing Letters 85 (2003) 317 325 www.elsevier.com/locate/ipl The particle swarm optimization algorithm: convergence analysis and parameter selection Ioan Cristian Trelea INA P-G, UMR Génie
More informationInvestigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems
Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Miguel Leon Ortiz and Ning Xiong Mälardalen University, Västerås, SWEDEN Abstract. Differential evolution
More information