Stochastic Velocity Threshold Inspired by Evolutionary Programming


Stochastic Velocity Threshold Inspired by Evolutionary Programming
Zhihua Cui, Xingjuan Cai and Jianchao Zeng
Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, Shanxi, P.R. China

Abstract

Particle swarm optimization (PSO) is a robust swarm intelligence technique that has exhibited good performance on well-known numerical test problems. Although many published improvements aim to increase its computational efficiency, much work remains. Inspired by evolutionary programming theory, this paper proposes a self-adaptive particle swarm optimization in which the velocity threshold changes dynamically during the course of a simulation, and two further techniques are designed to guard against badly self-adapted values. Six benchmark functions are used to test the new algorithm, and the results show that the new adaptive PSO clearly leads to better performance, although the improvements were found to be problem dependent.

1. Introduction

Particle swarm optimization [1][2] (PSO) is a population-based, self-adaptive random search optimization technique based on the simulation of animal social behaviors such as fish schooling and bird flocking. Due to its ease of implementation and fast convergence speed, PSO has been applied successfully in many areas, including data mining [3][4][5][6][7], image compression [8][9], ad hoc network design [10], and multi-objective optimization [11]. To make PSO more effective and efficient, many improvements have been proposed; the research can be categorized into four parts: algorithm, topology, parameters, and combination with other evolutionary computation techniques. There is still much room to make the algorithm more effective by providing a principled parameter selection rule.
Many published works deal with parameter selection principles, though few are concerned with the velocity threshold v_max. A large v_max increases the search region, enhancing the global search capability, whereas a small v_max decreases the search region, adjusting the search direction of each particle frequently. Therefore, a principled selection rule for the threshold v_max can balance the exploitation and exploration capabilities of PSO by utilizing more information about search directions. Inspired by evolutionary programming theory, this paper introduces an adaptive version of PSO that modifies the threshold v_max dynamically. Section 2 gives a brief description of particle swarm optimization and evolutionary programming, and a comparison of the similarities between these two evolutionary computation techniques is given in Section 3. The details of the adaptive PSO are discussed in Section 4. In Section 5, six well-known benchmark functions are used to test the performance of the new algorithm. Finally, further research directions are proposed.

2. Particle Swarm Optimization and Evolutionary Programming

Like other evolutionary computation techniques, particle swarm optimization maintains a population to evolve. Each individual (called a particle) owns two characteristics, position and velocity, and represents a potential solution in the search space.
The velocity vector of each particle represents the forthcoming motion tendency, and the update equation of particle j in standard PSO at time t+1 is given by

v_{jk}(t+1) = w v_{jk}(t) + c_1 r_1 (p_{jk}(t) - x_{jk}(t)) + c_2 r_2 (p_{gk}(t) - x_{jk}(t))   (1)

and the corresponding position vector is updated by

x_{jk}(t+1) = x_{jk}(t) + v_{jk}(t+1)   (2)

where the k-th dimensional variable of the velocity vector V_j(t+1) = (v_{j1}(t+1), v_{j2}(t+1), ..., v_{jn}(t+1)) (n denotes the dimension of the problem space) is limited by

|v_{jk}(t+1)| < v_max   (3)

Here v_{jk}(t) and x_{jk}(t) are the k-th dimensional variables of the velocity and position vectors of particle j at time t, and p_{jk}(t) and p_{gk}(t) are the k-th dimensional variables of the best historical positions found by particle j and by the whole swarm at time t, respectively. The inertia weight w lies between 0 and 1, the accelerator coefficients c_1 and c_2 are constants, and r_1 and r_2 are two random numbers generated with uniform distribution within (0, 1).
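Update equations (1)-(3) for a single particle can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the default parameter values (w = 0.729, c_1 = c_2 = 1.494, v_max = 3.0) are our own choices.

```python
import random

def pso_update(x, v, p_best, g_best, w=0.729, c1=1.494, c2=1.494, v_max=3.0):
    """One application of update equations (1)-(3) to a single particle.

    x, v   : current position and velocity vectors (lists of floats)
    p_best : best position found so far by this particle
    g_best : best position found so far by the whole swarm
    """
    new_v, new_x = [], []
    for k in range(len(x)):
        r1, r2 = random.random(), random.random()   # uniform in [0, 1)
        vk = (w * v[k]
              + c1 * r1 * (p_best[k] - x[k])        # cognitive part
              + c2 * r2 * (g_best[k] - x[k]))       # social part
        vk = max(-v_max, min(v_max, vk))            # clamp per eq. (3)
        new_v.append(vk)
        new_x.append(x[k] + vk)                     # position update, eq. (2)
    return new_x, new_v
```

Note that the clamp is applied before the position update, so the displacement per step never exceeds v_max in any dimension.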

The velocity vector of each particle consists of three parts: the cognitive part c_1 r_1 (p_{jk}(t) - x_{jk}(t)), the social part c_2 r_2 (p_{gk}(t) - x_{jk}(t)), and the momentum part w v_{jk}(t); the performance of a PSO is determined by the balance among these parts. The momentum part provides the necessary momentum for particles to roam across the search space. The cognitive part represents the personal thinking of each particle, while the social part represents the collaborative effect of the particles. Initially, a population of particles is generated with random positions, and random velocities are assigned to each particle. The fitness of each particle is then evaluated according to a predefined objective function. At each generation, the velocity of each particle is calculated according to (1) and (3), and the position for the next function evaluation is updated according to (2). Each time a particle finds a better position than its previously found best position, the new location is stored in memory. When the stop conditions are satisfied, the best position found by the whole swarm is the final solution. The first added parameter was the inertia weight, introduced by Y. Shi and R.C. Eberhart [12] to balance global and local search. They argued that a large inertia weight facilitates a global search while a small inertia weight facilitates a local search [13][14]. Y.L. Zheng et al. [15] proposed a different opinion: a small inertia weight possesses the capability of exploring new space, while a large inertia weight gives the algorithm more chances to stabilize. To obtain an effective inertia weight strategy, Y. Shi and R.C. Eberhart introduced a technique for adapting the inertia weight dynamically using a fuzzy controller [16][17].
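The generational loop described above (initialize, update velocities and positions, track personal and global bests until the stop condition) can be sketched end-to-end on a toy sphere objective. The parameter values and population settings here are illustrative, not the paper's experimental settings.

```python
import random

def sphere(x):                       # toy objective: f(x) = sum of x_k^2, minimum 0
    return sum(xk * xk for xk in x)

def pso(f, dim=2, swarm=20, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, v_max=1.0, bound=5.0):
    """Minimal PSO generational loop as described in the text."""
    X = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    V = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(swarm)]
    P = [x[:] for x in X]                          # personal best positions
    pf = [f(x) for x in P]                         # personal best fitnesses
    g = min(range(swarm), key=lambda j: pf[j])
    G, gf = P[g][:], pf[g]                         # global best position / fitness
    for _ in range(iters):
        for j in range(swarm):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                V[j][k] = (w * V[j][k]
                           + c1 * r1 * (P[j][k] - X[j][k])
                           + c2 * r2 * (G[k] - X[j][k]))
                V[j][k] = max(-v_max, min(v_max, V[j][k]))   # eq. (3)
                X[j][k] += V[j][k]                           # eq. (2)
            fx = f(X[j])
            if fx < pf[j]:                         # store improved personal best
                P[j], pf[j] = X[j][:], fx
                if fx < gf:                        # store improved global best
                    G, gf = X[j][:], fx
    return G, gf
```

A call such as `pso(sphere)` returns the best position found by the swarm and its fitness.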
Based on information about the average velocity of a swarm, N. Iwasaki and K. Yasuda found that the success or failure of a search is related to the average of the absolute value of velocity, and on this basis provided an adaptive strategy for tuning the inertia weight of PSO [18][19]. To ensure convergence, Clerc introduced a version of PSO with a constriction factor [20]. The results show that if the constriction factor is correctly chosen, it guarantees the stability of PSO without the need to bound velocities, although it is not known whether the computational efficiency improves. There are many other improvements aimed at computational efficiency and swarm diversity, such as PSO with selection [21], PSO with Gaussian mutation [22], and diversity-guided PSO [23]; more details can be found in the corresponding references. Evolutionary programming (EP) was devised by L.J. Fogel in the context of evolving finite state machines for the prediction of time series [24]. In contrast to genetic algorithms, which emphasize crossover, mutation is the main operation in EP. EP has since been extended to more general representations, such as applications involving continuous parameters and combinatorial optimization problems, and has been applied successfully to the optimization of real-valued functions. Study results show that EP with self-adaptive mutation usually performs better on the test functions [25][26]. Self-adaptive EP for continuous parameter optimization is now a widely accepted method in evolutionary computation. It should be noted that EP for continuous parameter optimization has many similarities with evolution strategies, although their development proceeded independently. In conventional EP, each parent generates an offspring via Gaussian mutation, and the better individuals among parents and offspring are selected as the parents of the next generation.
An individual of EP is a pair of real-valued vectors (x_j, σ_j) (j = 1, 2, ..., m), where x_j is a position vector and σ_j is a standard deviation vector. The offspring (x'_j, σ'_j) is computed by

σ'_j(k) = σ_j(k) exp(τ' N(0,1) + τ N_k(0,1))   (4)

x'_j(k) = x_j(k) + σ_j(k) N_k(0,1)   (5)

where x_j(k) is the k-th variable of individual x_j (k = 1, 2, ..., n), and N(0,1) and N_k(0,1) are random numbers generated with mean zero and standard deviation one, N_k(0,1) being renewed for each dimension. The factors τ and τ' are commonly set to (√(2√n))^{-1} and (√(2n))^{-1}, respectively [27]. In [28], X. Yao introduced a fast EP with a Cauchy mutation strategy, in which the offspring is computed with

x'_j(k) = x_j(k) + σ_j(k) δ_k(t)   (6)

where δ_k(t) is a Cauchy random variable with scale parameter t for each dimension of individual j, and the update equation of σ'_j(k) is the same as formula (4). Experiments show that Gaussian mutation performs well on some unimodal functions and on multimodal functions with only a few local optima, whereas Cauchy mutation works well on multimodal functions with many local optima [28].

3. The Comparison of Particle Swarm Optimization and Evolutionary Programming

Formula (2) implies that the k-th variable x_{jk}(t+1) of the position vector of particle j at time t+1 satisfies

|x_{jk}(t+1) - x_{jk}(t)| = |v_{jk}(t+1)|   (7)

Combined with (3), the above formula means

|x_{jk}(t+1) - x_{jk}(t)| ≤ v_max   (8)

Formula (8) means that x_{jk}(t+1) only falls into the interval [x_{jk}(t) - v_max, x_{jk}(t) + v_max] with a constant length of 2 v_max, whether the selection of v_max is good or not. Since the constant v_max only provides a displacement (Fig. 1), a principled schedule of v_max can make PSO more effective. A large v_max may be better when PSO explores the search space, while a small v_max is better when PSO exploits some region.
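The self-adaptive EP mutation of formulas (4) and (5), recalled above, can be sketched as follows. This is a minimal sketch; the function name is ours, and τ and τ' use the common settings cited from [27].

```python
import math
import random

def ep_mutate(x, sigma):
    """Self-adaptive Gaussian mutation of EP, formulas (4) and (5).

    x, sigma : position vector and strategy (standard deviation) vector of one parent.
    Returns the offspring pair (x', sigma').
    """
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # tau  = (sqrt(2*sqrt(n)))^-1
    tau_p = 1.0 / math.sqrt(2.0 * n)            # tau' = (sqrt(2*n))^-1
    common = tau_p * random.gauss(0.0, 1.0)     # N(0,1), shared by all dimensions
    new_sigma, new_x = [], []
    for k in range(n):
        s = sigma[k] * math.exp(common + tau * random.gauss(0.0, 1.0))  # eq. (4)
        new_sigma.append(s)
        new_x.append(x[k] + s * random.gauss(0.0, 1.0))                 # eq. (5)
    return new_x, new_sigma
```

For the Cauchy mutation of formula (6), the final `random.gauss(0.0, 1.0)` draw in the position line can be replaced by a standard Cauchy draw such as `math.tan(math.pi * (random.random() - 0.5))`.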

Figure 1. New Position Update with PSO

Figure 2. New Position Update with EP

Hence, how to schedule v_max is an interesting problem. In the following, we first compare PSO and EP, and then solve the problem by exploiting some similarities between these two techniques. In EP, for convenience, suppose the offspring of parent x_{jk}(t) is x_{jk}(t+1). From formulas (5) and (6), the offspring x_{jk}(t+1) falls on the whole axis with some probability density, and the step length is a random variable (Fig. 2). The offspring falls into the interval [x_{jk}(t) - σ_{jk}(t), x_{jk}(t) + σ_{jk}(t)] with large probability, and into other regions with small probability. This means the offspring of EP pays more attention to exploration when σ_{jk}(t) is larger than v_max, and to exploitation in the opposite case. Comparing Fig. 1 and Fig. 2, the coefficient v_max acts similarly to the coefficient σ_{jk}: both control the region into which the new position (offspring) vector falls with large probability or probability one.

There are, however, important differences: (1) The velocity threshold v_max is always a constant, whereas σ_{jk} is a dynamic random variable that changes during the course of a simulation. A dynamic random variable obviously offers more chances to adjust the search direction and discover better solutions. (2) The new position vector of PSO is confined to the interval determined by the current position vector and the constant v_max, whereas the offspring vector of EP falls not only within the interval determined by the current individual vector and the deviation σ_{jk}, but can also extend the search to the whole axis with some probability density. This also implies that a dynamic random threshold provides more capability for both exploration and exploitation.

Inspired by the above conclusions, a dynamically self-adjusting strategy for the velocity threshold v_max is needed to improve the performance of PSO. Thanks to the similarity between v_max and σ_{jk}(t), one simple way to choose the value of v_max is the self-adaptive method for σ_{jk}(t) given by formulas (4)-(6). A new modified version of PSO, called adaptive PSO using EP (APSO for brevity), is therefore proposed by combining PSO with the self-adaptive selection principle of EP. It uses both position and velocity information, provides more exploration and exploitation capability, and adapts its search directions dynamically.

4. Adaptive Particle Swarm Optimization Using Evolutionary Programming

The new APSO introduces a self-adaptive velocity threshold v_max for each dimension of each particle of the swarm, and dynamically adjusts its value using the evolutionary programming method. Although the self-adaptive strategy is borrowed from EP, it is a stochastic strategy, and in some cases the selected v_max may fail to provide a good balance. Therefore, two further strategies that guard against a badly selected v_max are also proposed.

4.1. Self-adaptive Strategy of Velocity Threshold

Since the velocity threshold for each dimension of each particle is dynamically adjusted, let (v_max)_{jk}(t) denote the k-th dimensional velocity threshold of particle j at time t. Similar to formula (4), the update equation of (v_max)_{jk}(t) is as follows.
(v_max)_{jk}(t+1) = (v_max)_{jk}(t) exp(τ' N(0,1) + τ N_k(0,1))   (9)

where τ and τ' are the same as in formula (4).

4.2. Expansion Strategy

Since the velocity threshold (v_max)_{jk}(t) is a stochastic process, suppose the velocity threshold (v_max)_{jk}(t+1) at time t+1 is larger than (v_max)_{jk}(t), that is,

(v_max)_{jk}(t) < (v_max)_{jk}(t+1)   (10)

Since

|v_{jk}(t)| ≤ (v_max)_{jk}(t)   (11)

we have

|v_{jk}(t)| ≤ (v_max)_{jk}(t+1)   (12)

If the k-th dimensional variable of the velocity of particle j at time t+1 satisfies

|v_{jk}(t+1)| ≤ (v_max)_{jk}(t)   (13)

then

|v_{jk}(t+1)| ≤ (v_max)_{jk}(t+1)   (14)

is always true, no matter which value (v_max)_{jk}(t+1) takes. In this case, the self-adaptive strategy for (v_max)_{jk}(t+1) has no effect. Therefore, an expansion strategy is used to overcome this shortcoming by expanding the value of v_{jk}(t+1) beyond (v_max)_{jk}(t) with a probability threshold p_E. Although we do not know whether a small or a large value of v_{jk}(t+1) is better, the threshold p_E should not be too small, and this strategy should not be used in the last period of the run, so as to preserve convergence. In this paper, p_E decreases linearly from 0.3 to 0.0; this setting comes from experimental tests. The expansion strategy is defined as follows: if (random number < p_E) and ((v_max)_{jk}(t) < (v_max)_{jk}(t+1)) and (|v_{jk}(t+1)| ≤ (v_max)_{jk}(t)), then

v_{jk}(t+1) = v_{jk}(t+1) · (v_max)_{jk}(t+1) / (v_max)_{jk}(t)   (15)

4.3. Hybrid Velocity Threshold Strategy

Although the self-adaptive strategy enhances the performance greatly (see below), it is a stochastic method, and there exist extreme cases in which the velocity is badly adjusted. Therefore, a correction mechanism is used to control the velocity in addition to the self-adaptive strategy. The mechanism uses the original constant velocity threshold in parallel with the dynamic adaptive velocity threshold to determine the velocity vector, and the better of the two results is chosen as the next velocity vector. In this manner, each particle owns the following characteristics: adaptive velocity (av), original velocity (ov), velocity (v), adaptive position (ap), original position (op), position (p), and adaptive velocity threshold (avt). The APSO is implemented as follows in this study.

Step 1. Generate the initial population with m particles, and set the value of (v_max)_{jk}(0) of dimension k of particle j to v_0. The position vector of each particle is selected within the domain region, and the velocity vector of each particle is chosen uniformly within the interval [0, (v_max)_{jk}(0)].

Step 2. Update the position and velocity vectors of each particle with formulas (1) and (2). If a random number is less than p_E and the conditions of the expansion strategy are satisfied, apply formula (15); the k-th dimensional variable of the adaptive velocity vector must satisfy

|av_{jk}(t+1)| ≤ (v_max)_{jk}(t+1)   (16)

where (v_max)_{jk}(t+1) is the k-th dimensional velocity threshold of particle j at time t+1, updated with formula (9). This strategy gives a dynamic velocity threshold so that the search direction can change adaptively, although the exploration capability is not improved: x_{jk}(t+1) still falls into the interval dominated by x_{jk}(t) and (v_max)_{jk}(t+1), restricting the exploration capability. Therefore, an enhanced APSO with a larger global search capability is obtained by adding a second velocity threshold update equation after formula (9), defined as follows:

avt_{jk}(t+1) = (v_max)_{jk}(t+1) · ProbabilityDensity   (17)

where ProbabilityDensity denotes a draw from some ordinary probability density, such as a Gaussian or Cauchy distribution. Similarly, the corresponding value of the original velocity is determined by

|ov_{jk}(t+1)| ≤ v_max   (18)

Then the original and adaptive position vectors are computed as follows:

ap_{jk}(t+1) = p_{jk}(t) + av_{jk}(t+1)   (19)

op_{jk}(t+1) = p_{jk}(t) + ov_{jk}(t+1)   (20)

The velocity and position vectors of particle j are determined according to

p_{jk}(t+1) = ap_{jk}(t+1) if f(AP(t+1)) < f(OP(t+1)), otherwise op_{jk}(t+1)   (21)

v_{jk}(t+1) = av_{jk}(t+1) if f(AP(t+1)) < f(OP(t+1)), otherwise ov_{jk}(t+1)   (22)

Step 3. Update the historical best position of each particle and of the whole swarm.

Step 4. If the stop criterion is satisfied, output the final best solution of the swarm; otherwise, go to Step 2.
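Pulling formulas (9), (15), and (16) together, the adaptive-velocity side of Step 2 for one dimension of one particle can be sketched as follows. This is a sketch under our own assumptions: the helper names are ours, n = 30 stands in for the problem dimension, and the lower bound 1.0e-5 on the threshold anticipates the setting described in the experiments; the hybrid selection of formulas (21)-(22) would then pick between the adaptive and original results by fitness.

```python
import math
import random

def update_threshold(vmax_jk, tau, tau_p):
    """Self-adaptive threshold update, formula (9)."""
    return vmax_jk * math.exp(tau_p * random.gauss(0, 1) + tau * random.gauss(0, 1))

def apso_dimension_step(av_new, vmax_old, p_E):
    """Adaptive velocity for one dimension of Step 2.

    av_new   : tentative adaptive velocity av_jk(t+1) from the PSO update, eq. (1)
    vmax_old : current threshold (v_max)_jk(t)
    p_E      : expansion probability (decreases linearly from 0.3 to 0.0)
    """
    n = 30                                       # problem dimension (illustrative)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_p = 1.0 / math.sqrt(2.0 * n)
    vmax_new = update_threshold(vmax_old, tau, tau_p)
    vmax_new = max(vmax_new, 1.0e-5)             # keep the threshold above zero
    v = av_new
    # Expansion strategy, formula (15): if the threshold grew but the velocity
    # still fits under the OLD threshold, scale it up with probability p_E.
    if random.random() < p_E and vmax_old < vmax_new and abs(v) <= vmax_old:
        v = v * vmax_new / vmax_old
    # Enforce |av_jk(t+1)| <= (v_max)_jk(t+1), formula (16).
    v = max(-vmax_new, min(vmax_new, v))
    return v, vmax_new
```

The original-velocity branch of the hybrid strategy would clamp the same tentative velocity with the constant v_max of formula (18) instead, and the better of the two resulting positions would be kept.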
5. Simulation Results

The benchmark functions in this section provide a balance of unimodal functions, multimodal functions with many local minima, multimodal functions with only a few local minima, and easy and difficult functions. Sphere Model and Schwefel Problem 2.22 are unimodal functions; Schwefel Problem 2.26, Ackley, and Griewank are multimodal functions with many local minima; Shekel's Foxholes is a multimodal function with only a few local minima. More details can be found in [21]. To give a more detailed comparison, six versions of APSO are designed: APSO with v_max computed by formula (9) (APSO1); APSO1 with additional formula (17) with normal distribution (APSO2); APSO1 with additional formula (17) with uniform distribution (APSO3); and APSO1 with additional formula (17) with Cauchy distribution with scale 1.0, 0.5, and 2.0 (APSO4, APSO5, and APSO6, respectively). All six versions of APSO use the same coefficients. For each experiment the simulation records the mean (Mean Value), standard deviation (Standard Deviation Value),

best and worst solutions found (Best Solution and Worst Solution, respectively). The coefficients of standard PSO (SPSO) and of the APSO versions are set as follows. The inertia weight w decreases linearly from 0.9 to 0.4, and the two accelerator coefficients are set to 2.0. The population size is 100, except for Shekel's Foxholes, which uses only 10; v_max is set to 100% of the upper bound of the domain in SPSO, while the initial v_max in the APSO versions is set to 3.0. Each experiment is run 30 times, and each run lasts at most 1000 generations, except for Shekel's Foxholes, which lasts 50. To prevent the velocity threshold from falling to zero, a lower bound of 1.0e-5 is put on v_max. For the same reason, to avoid system overflow, the factor exp(τ' N(0,1) + τ N_k(0,1)) is bounded below by 1.0e-5 and above by 1.0e+2. Tables 1 to 6 give the comparison results for the six benchmark functions.

Table 1. Comparison Results of Sphere Model Function
SPSO: e e e-007
APSO1: e e e-008
APSO2: e e e-019
APSO3: e e e-010
APSO4: e e e-018
APSO5: e e e-018
APSO6: e e e-019

Table 2. Comparison Results of Schwefel Problem 2.22 Function
SPSO: e e e-005
APSO1: e e e-006
APSO2: e e e-011
APSO3: e e e-007
APSO4: e e e-011
APSO5: e e e-011
APSO6: e e e-011

Table 3. Comparison Results of Generalized Schwefel Problem 2.26 Function
SPSO: e e e+003
APSO1: e e e+004
APSO2: e e e+004
APSO3: e e e+004
APSO4: e e e+004
APSO5: e e e+004
APSO6: e e e+004

Table 4. Comparison Results of Ackley's Function
SPSO: e e e-004
APSO1: e e e-006
APSO2: e e e-009
APSO3: e e e-007
APSO4: e e e-009
APSO5: e e e-009
APSO6: e e e-009

Table 5. Comparison Results of Generalized Griewank Function
SPSO: e e e-007
APSO1: e e e-008
APSO2: e e e+000
APSO3: e e e-009
APSO4: e e e+000
APSO5: e e e+000
APSO6: e e e+000

Table 6. Comparison Results of Shekel's Foxholes Function
SPSO: e e e+000
APSO1: e e e-001
APSO2: e e e-001
APSO3: e e e-001
APSO4: e e e-001
APSO5: e e e-001
APSO6: e e e-001
From Tables 1 and 2, we find that the six versions of APSO are always better than SPSO, although APSO1 and APSO3 surpass SPSO only slightly. Since Sphere Model and Schwefel Problem 2.22 are unimodal functions, this implies that APSO2 and APSO4 to APSO6 are well suited to solving unimodal functions, with significantly improved performance. Tables 3 to 5 cover a set of multimodal functions with many local optima. In Tables 3, 4, and 5, four versions of APSO surpass SPSO, the exceptions being APSO1 and APSO3. This means APSO2 and APSO4 to APSO6 can be used to solve multimodal functions whether or not epistasis exists. Table 6 gives a detailed account of a multimodal function with only a few local optima; the mean values and standard deviations show that the APSO versions are always better than SPSO. All in all, APSO2 and APSO4 to APSO6 work well on unimodal functions, on multimodal functions with a few local optima, and on multimodal functions with many local optima.

6. Conclusion

Inspired by evolutionary programming, a new version of particle swarm optimization, adaptive PSO using EP, is proposed. The new PSO combines the advantages

of PSO and EP by using a dynamically changing velocity threshold: an increased threshold provides enhanced exploration capability, while a decreased threshold provides exploitation capability. Future research will include combining other techniques of EP and providing other selection principles for v_max.

Acknowledgement

This paper was supported by the Doctoral Scientific Research Starting Foundation of Taiyuan University of Science and Technology, and by the Shanxi Science Foundation for Young Scientists.

References

[1] Eberhart, R.C., and Kennedy, J., A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, pp. 39-43, 1995.
[2] Kennedy, J., and Eberhart, R.C., Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, IV, 1995.
[3] Eberhart, R.C., and Shi, Y., Evolving artificial neural networks, Proceedings of the 1998 International Conference on Neural Networks and Brain, Beijing, P.R. China, pp. 5-13, 1998.
[4] Ye, B., Sun, J., and Xu, W.B., Solving the hard Knapsack problem with a binary particle swarm optimization, Lecture Notes in Bioinformatics 4115, Kunming, P.R. China, 2006.
[5] Hassas, S., Using swarm intelligence for dynamic web content organizing, Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, USA, pp. 19-25, 2003.
[6] Srinivasan, C., Loo, W.H., and Cheu, R.L., Traffic incident detection using particle swarm optimization, Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, USA, 2003.
[7] Shi, Y., and Eberhart, R.C., A modified particle swarm optimizer, Proceedings of the IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, USA, pp. 69-73, 1998.
[8] Shi, Y., and Eberhart, R.C., Parameter selection in particle swarm optimization, Proceedings of the 7th Annual Conference on Evolutionary Programming, 1998.
[9] Shi, Y., and Eberhart, R.C., Empirical study of particle swarm optimization, Proceedings of the Congress on Evolutionary Computation, 1999.
[10] Suganthan, P.N., Particle swarm optimizer with neighbourhood operator, Proceedings of the Congress on Evolutionary Computation, 1999.
[11] Zheng, Y.L., Ma, L.H., Zhang, L.Y., and Qian, J.X., On the convergence analysis and parameter selection in particle swarm optimization, Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, China, 2003.
[12] Shi, Y., and Eberhart, R.C., Fuzzy adaptive particle swarm optimization, Proceedings of the Congress on Evolutionary Computation, 2001.
[13] Monson, C.K., and Seppi, K.D., The Kalman swarm: a new approach to particle motion in swarm optimization, Proceedings of the Genetic and Evolutionary Computation Conference, 2004.
[14] Sun, J., et al., Particle swarm optimization with particles having quantum behavior, Proceedings of the IEEE Congress on Evolutionary Computation, 2004.
[15] Cui, Z.H., Zeng, J.C., and Cai, X.J., A new stochastic particle swarm optimizer, IEEE Congress on Evolutionary Computation, 2004.
[16] Cui, Z.H., Zeng, J.C., and Sun, G.J., A fast particle swarm optimization, International Journal of Innovative Computing, Information and Control, 2006, 2(6).
[17] Cui, Z.H., Cai, X.J., Zeng, J.C., and Sun, G.J., Predicted-velocity particle swarm optimization using game-theoretic approach, Lecture Notes in Bioinformatics, vol. 4115, Kunming, P.R. China, 2006.
[18] Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, 1992.
[19] Marcus, H., Fitness uniform selection to preserve genetic diversity, Proceedings of the IEEE Congress on Evolutionary Computation, 2002.
[20] Blickle, T., and Thiele, L., A mathematical analysis of tournament selection, Proceedings of the Sixth International Conference on Genetic Algorithms, San Francisco, California, pp. 9-16, 1995.
[21] Yao, X., Liu, Y., and Lin, G.M., Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation, 1999, vol. 3, no. 2.


More information

Distributed Particle Swarm Optimization

Distributed Particle Swarm Optimization Distributed Particle Swarm Optimization Salman Kahrobaee CSCE 990 Seminar Main Reference: A Comparative Study of Four Parallel and Distributed PSO Methods Leonardo VANNESCHI, Daniele CODECASA and Giancarlo

More information

OPTIMAL DISPATCH OF REAL POWER GENERATION USING PARTICLE SWARM OPTIMIZATION: A CASE STUDY OF EGBIN THERMAL STATION

OPTIMAL DISPATCH OF REAL POWER GENERATION USING PARTICLE SWARM OPTIMIZATION: A CASE STUDY OF EGBIN THERMAL STATION OPTIMAL DISPATCH OF REAL POWER GENERATION USING PARTICLE SWARM OPTIMIZATION: A CASE STUDY OF EGBIN THERMAL STATION Onah C. O. 1, Agber J. U. 2 and Ikule F. T. 3 1, 2, 3 Department of Electrical and Electronics

More information

Applying Particle Swarm Optimization to Adaptive Controller Leandro dos Santos Coelho 1 and Fabio A. Guerra 2

Applying Particle Swarm Optimization to Adaptive Controller Leandro dos Santos Coelho 1 and Fabio A. Guerra 2 Applying Particle Swarm Optimization to Adaptive Controller Leandro dos Santos Coelho 1 and Fabio A. Guerra 2 1 Production and Systems Engineering Graduate Program, PPGEPS Pontifical Catholic University

More information

An Improved Quantum Evolutionary Algorithm with 2-Crossovers

An Improved Quantum Evolutionary Algorithm with 2-Crossovers An Improved Quantum Evolutionary Algorithm with 2-Crossovers Zhihui Xing 1, Haibin Duan 1,2, and Chunfang Xu 1 1 School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191,

More information

Hybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5].

Hybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5]. Hybrid particle swarm algorithm for solving nonlinear constraint optimization problems BINGQIN QIAO, XIAOMING CHANG Computers and Software College Taiyuan University of Technology Department of Economic

More information

Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems

Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Miguel Leon Ortiz and Ning Xiong Mälardalen University, Västerås, SWEDEN Abstract. Differential evolution

More information

A Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning

A Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning 009 Ninth International Conference on Intelligent Systems Design and Applications A Scalability Test for Accelerated DE Using Generalized Opposition-Based Learning Hui Wang, Zhijian Wu, Shahryar Rahnamayan,

More information

Research Article Multiswarm Particle Swarm Optimization with Transfer of the Best Particle

Research Article Multiswarm Particle Swarm Optimization with Transfer of the Best Particle Computational Intelligence and Neuroscience Volume 2015, Article I 904713, 9 pages http://dx.doi.org/10.1155/2015/904713 Research Article Multiswarm Particle Swarm Optimization with Transfer of the Best

More information

Particle swarm optimization (PSO): a potentially useful tool for chemometrics?

Particle swarm optimization (PSO): a potentially useful tool for chemometrics? Particle swarm optimization (PSO): a potentially useful tool for chemometrics? Federico Marini 1, Beata Walczak 2 1 Sapienza University of Rome, Rome, Italy 2 Silesian University, Katowice, Poland Rome,

More information

Toward Effective Initialization for Large-Scale Search Spaces

Toward Effective Initialization for Large-Scale Search Spaces Toward Effective Initialization for Large-Scale Search Spaces Shahryar Rahnamayan University of Ontario Institute of Technology (UOIT) Faculty of Engineering and Applied Science 000 Simcoe Street North

More information

A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network JIANG Hai University of Chinese Academy of Sciences National Astronomical Observatories, Chinese Academy of Sciences

More information

Finding Robust Solutions to Dynamic Optimization Problems

Finding Robust Solutions to Dynamic Optimization Problems Finding Robust Solutions to Dynamic Optimization Problems Haobo Fu 1, Bernhard Sendhoff, Ke Tang 3, and Xin Yao 1 1 CERCIA, School of Computer Science, University of Birmingham, UK Honda Research Institute

More information

Adaptive Differential Evolution and Exponential Crossover

Adaptive Differential Evolution and Exponential Crossover Proceedings of the International Multiconference on Computer Science and Information Technology pp. 927 931 ISBN 978-83-60810-14-9 ISSN 1896-7094 Adaptive Differential Evolution and Exponential Crossover

More information

Center-based initialization for large-scale blackbox

Center-based initialization for large-scale blackbox See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/903587 Center-based initialization for large-scale blackbox problems ARTICLE FEBRUARY 009 READS

More information

WIND SPEED ESTIMATION IN SAUDI ARABIA USING THE PARTICLE SWARM OPTIMIZATION (PSO)

WIND SPEED ESTIMATION IN SAUDI ARABIA USING THE PARTICLE SWARM OPTIMIZATION (PSO) WIND SPEED ESTIMATION IN SAUDI ARABIA USING THE PARTICLE SWARM OPTIMIZATION (PSO) Mohamed Ahmed Mohandes Shafique Rehman King Fahd University of Petroleum & Minerals Saeed Badran Electrical Engineering

More information

Situation. The XPS project. PSO publication pattern. Problem. Aims. Areas

Situation. The XPS project. PSO publication pattern. Problem. Aims. Areas Situation The XPS project we are looking at a paradigm in its youth, full of potential and fertile with new ideas and new perspectives Researchers in many countries are experimenting with particle swarms

More information

An Evolutionary Programming Based Algorithm for HMM training

An Evolutionary Programming Based Algorithm for HMM training An Evolutionary Programming Based Algorithm for HMM training Ewa Figielska,Wlodzimierz Kasprzak Institute of Control and Computation Engineering, Warsaw University of Technology ul. Nowowiejska 15/19,

More information

Transitional Particle Swarm Optimization

Transitional Particle Swarm Optimization International Journal of Electrical and Computer Engineering (IJECE) Vol. 7, No. 3, June 7, pp. 6~69 ISSN: 88-878, DOI:.59/ijece.v7i3.pp6-69 6 Transitional Particle Swarm Optimization Nor Azlina Ab Aziz,

More information

Fuzzy Cognitive Maps Learning through Swarm Intelligence

Fuzzy Cognitive Maps Learning through Swarm Intelligence Fuzzy Cognitive Maps Learning through Swarm Intelligence E.I. Papageorgiou,3, K.E. Parsopoulos 2,3, P.P. Groumpos,3, and M.N. Vrahatis 2,3 Department of Electrical and Computer Engineering, University

More information

Discrete Evaluation and the Particle Swarm Algorithm.

Discrete Evaluation and the Particle Swarm Algorithm. Abstract Discrete Evaluation and the Particle Swarm Algorithm. Tim Hendtlass and Tom Rodgers, Centre for Intelligent Systems and Complex Processes, Swinburne University of Technology, P. O. Box 218 Hawthorn

More information

Improved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates Guo WU 1, Li-guo FANG 1, Jian-jun LI 2 and Fan-shuo MENG 1

Improved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates Guo WU 1, Li-guo FANG 1, Jian-jun LI 2 and Fan-shuo MENG 1 17 International Conference on Computer, Electronics and Communication Engineering (CECE 17 ISBN: 978-1-6595-476-9 Improved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates Guo WU 1, Li-guo

More information

Discrete evaluation and the particle swarm algorithm

Discrete evaluation and the particle swarm algorithm Volume 12 Discrete evaluation and the particle swarm algorithm Tim Hendtlass and Tom Rodgers Centre for Intelligent Systems and Complex Processes Swinburne University of Technology P. O. Box 218 Hawthorn

More information

ENHANCING THE CUCKOO SEARCH WITH LEVY FLIGHT THROUGH POPULATION ESTIMATION

ENHANCING THE CUCKOO SEARCH WITH LEVY FLIGHT THROUGH POPULATION ESTIMATION ENHANCING THE CUCKOO SEARCH WITH LEVY FLIGHT THROUGH POPULATION ESTIMATION Nazri Mohd Nawi, Shah Liyana Shahuddin, Muhammad Zubair Rehman and Abdullah Khan Soft Computing and Data Mining Centre, Faculty

More information

Particle Swarm Optimization with Velocity Adaptation

Particle Swarm Optimization with Velocity Adaptation In Proceedings of the International Conference on Adaptive and Intelligent Systems (ICAIS 2009), pp. 146 151, 2009. c 2009 IEEE Particle Swarm Optimization with Velocity Adaptation Sabine Helwig, Frank

More information

ARTIFICIAL INTELLIGENCE

ARTIFICIAL INTELLIGENCE BABEŞ-BOLYAI UNIVERSITY Faculty of Computer Science and Mathematics ARTIFICIAL INTELLIGENCE Solving search problems Informed local search strategies Nature-inspired algorithms March, 2017 2 Topics A. Short

More information

A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems Jakob Vesterstrøm BiRC - Bioinformatics Research Center University

More information

Parameter Sensitivity Analysis of Social Spider Algorithm

Parameter Sensitivity Analysis of Social Spider Algorithm Parameter Sensitivity Analysis of Social Spider Algorithm James J.Q. Yu, Student Member, IEEE and Victor O.K. Li, Fellow, IEEE Department of Electrical and Electronic Engineering The University of Hong

More information

GREENHOUSE AIR TEMPERATURE CONTROL USING THE PARTICLE SWARM OPTIMISATION ALGORITHM

GREENHOUSE AIR TEMPERATURE CONTROL USING THE PARTICLE SWARM OPTIMISATION ALGORITHM Copyright 00 IFAC 5th Triennial World Congress, Barcelona, Spain GREEHOUSE AIR TEMPERATURE COTROL USIG THE PARTICLE SWARM OPTIMISATIO ALGORITHM J.P. Coelho a, P.B. de Moura Oliveira b,c, J. Boaventura

More information

DE/BBO: A Hybrid Differential Evolution with Biogeography-Based Optimization for Global Numerical Optimization

DE/BBO: A Hybrid Differential Evolution with Biogeography-Based Optimization for Global Numerical Optimization 1 : A Hybrid Differential Evolution with Biogeography-Based Optimization for Global Numerical Optimization Wenyin Gong, Zhihua Cai, and Charles X. Ling, Senior Member, IEEE Abstract Differential Evolution

More information

A PSO APPROACH FOR PREVENTIVE MAINTENANCE SCHEDULING OPTIMIZATION

A PSO APPROACH FOR PREVENTIVE MAINTENANCE SCHEDULING OPTIMIZATION 2009 International Nuclear Atlantic Conference - INAC 2009 Rio de Janeiro,RJ, Brazil, September27 to October 2, 2009 ASSOCIAÇÃO BRASILEIRA DE ENERGIA NUCLEAR - ABEN ISBN: 978-85-99141-03-8 A PSO APPROACH

More information

Multi-start JADE with knowledge transfer for numerical optimization

Multi-start JADE with knowledge transfer for numerical optimization Multi-start JADE with knowledge transfer for numerical optimization Fei Peng, Ke Tang,Guoliang Chen and Xin Yao Abstract JADE is a recent variant of Differential Evolution (DE) for numerical optimization,

More information

Int. J. Innovative Computing and Applications, Vol. 1, No. 1,

Int. J. Innovative Computing and Applications, Vol. 1, No. 1, Int. J. Innovative Computing and Applications, Vol. 1, No. 1, 2007 39 A fuzzy adaptive turbulent particle swarm optimisation Hongbo Liu School of Computer Science and Engineering, Dalian Maritime University,

More information

Available online at AASRI Procedia 1 (2012 ) AASRI Conference on Computational Intelligence and Bioinformatics

Available online at  AASRI Procedia 1 (2012 ) AASRI Conference on Computational Intelligence and Bioinformatics Available online at www.sciencedirect.com AASRI Procedia ( ) 377 383 AASRI Procedia www.elsevier.com/locate/procedia AASRI Conference on Computational Intelligence and Bioinformatics Chaotic Time Series

More information

Particle Swarm Optimization. Abhishek Roy Friday Group Meeting Date:

Particle Swarm Optimization. Abhishek Roy Friday Group Meeting Date: Particle Swarm Optimization Abhishek Roy Friday Group Meeting Date: 05.25.2016 Cooperation example Basic Idea PSO is a robust stochastic optimization technique based on the movement and intelligence of

More information

Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 2015 Learning Based Competition Problems

Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 2015 Learning Based Competition Problems Dynamic Search Fireworks Algorithm with Covariance Mutation for Solving the CEC 05 Learning Based Competition Problems Chao Yu, Ling Chen Kelley,, and Ying Tan, The Key Laboratory of Machine Perception

More information

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms Yang Yu and Zhi-Hua Zhou National Laboratory for Novel Software Technology Nanjing University, Nanjing 20093, China

More information

MODIFIED PARTICLE SWARM OPTIMIZATION WITH TIME VARYING VELOCITY VECTOR. Received June 2010; revised October 2010

MODIFIED PARTICLE SWARM OPTIMIZATION WITH TIME VARYING VELOCITY VECTOR. Received June 2010; revised October 2010 International Journal of Innovative Computing, Information and Control ICIC International c 2012 ISSN 1349-4198 Volume 8, Number 1(A), January 2012 pp. 201 218 MODIFIED PARTICLE SWARM OPTIMIZATION WITH

More information

DRAFT -- DRAFT -- DRAFT -- DRAFT -- DRAFT --

DRAFT -- DRAFT -- DRAFT -- DRAFT -- DRAFT -- Conditions for the Convergence of Evolutionary Algorithms Jun He and Xinghuo Yu 1 Abstract This paper presents a theoretical analysis of the convergence conditions for evolutionary algorithms. The necessary

More information

Crossing Genetic and Swarm Intelligence Algorithms to Generate Logic Circuits

Crossing Genetic and Swarm Intelligence Algorithms to Generate Logic Circuits Crossing Genetic and Swarm Intelligence Algorithms to Generate Logic Circuits Cecília Reis and J. A. Tenreiro Machado GECAD - Knowledge Engineering and Decision Support Group / Electrical Engineering Department

More information

Application Research of Fireworks Algorithm in Parameter Estimation for Chaotic System

Application Research of Fireworks Algorithm in Parameter Estimation for Chaotic System Application Research of Fireworks Algorithm in Parameter Estimation for Chaotic System Hao Li 1,3, Ying Tan 2, Jun-Jie Xue 1 and Jie Zhu 1 1 Air Force Engineering University, Xi an, 710051, China 2 Department

More information

Standard Particle Swarm Optimisation

Standard Particle Swarm Optimisation Standard Particle Swarm Optimisation From 2006 to 2011 Maurice.Clerc@WriteMe.com 2012-09-23 version 1 Introduction Since 2006, three successive standard PSO versions have been put on line on the Particle

More information

Engineering Structures

Engineering Structures Engineering Structures 31 (2009) 715 728 Contents lists available at ScienceDirect Engineering Structures journal homepage: www.elsevier.com/locate/engstruct Particle swarm optimization of tuned mass dampers

More information

Solving Resource-Constrained Project Scheduling Problem with Particle Swarm Optimization

Solving Resource-Constrained Project Scheduling Problem with Particle Swarm Optimization Regular Papers Solving Resource-Constrained Project Scheduling Problem with Particle Swarm Optimization Sylverin Kemmoé Tchomté, Michel Gourgand, Alain Quilliot LIMOS, Campus Scientifique des Cézeaux,

More information

STUDIES of the behaviour of social insects and animals

STUDIES of the behaviour of social insects and animals IJATS INVITED SESSION FOR EDITORS :: INTERNATIONAL JOURNAL OF AGENT TECHNOLOGIES AND SYSTEMS, 6(2), 1-31, APRIL-JUNE 2014 1 Penguins Huddling Optimisation Mohammad Majid al-rifaie Abstract In our everyday

More information

SHORT-TERM traffic forecasting is a vital component of

SHORT-TERM traffic forecasting is a vital component of , October 19-21, 2016, San Francisco, USA Short-term Traffic Forecasting Based on Grey Neural Network with Particle Swarm Optimization Yuanyuan Pan, Yongdong Shi Abstract An accurate and stable short-term

More information

A Novel Approach for Complete Identification of Dynamic Fractional Order Systems Using Stochastic Optimization Algorithms and Fractional Calculus

A Novel Approach for Complete Identification of Dynamic Fractional Order Systems Using Stochastic Optimization Algorithms and Fractional Calculus 5th International Conference on Electrical and Computer Engineering ICECE 2008, 20-22 December 2008, Dhaka, Bangladesh A Novel Approach for Complete Identification of Dynamic Fractional Order Systems Using

More information

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions Chao Qian,2, Yang Yu 2, and Zhi-Hua Zhou 2 UBRI, School of Computer Science and Technology, University of

More information

PARTICLE swarm optimization (PSO) is one powerful and. A Competitive Swarm Optimizer for Large Scale Optimization

PARTICLE swarm optimization (PSO) is one powerful and. A Competitive Swarm Optimizer for Large Scale Optimization IEEE TRANSACTIONS ON CYBERNETICS, VOL. XX, NO. X, XXXX XXXX 1 A Competitive Swarm Optimizer for Large Scale Optimization Ran Cheng and Yaochu Jin, Senior Member, IEEE Abstract In this paper, a novel competitive

More information

OPTIMIZATION refers to the study of problems in which

OPTIMIZATION refers to the study of problems in which 1482 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 21, NO. 9, SEPTEMBER 2010 Self-Organizing Potential Field Network: A New Optimization Algorithm Lu Xu and Tommy Wai Shing Chow, Senior Member, IEEE Abstract

More information

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract Fast Evolution Strategies Xin Yao and Yong Liu Computational Intelligence Group, School of Computer Science University College, The University of New South Wales Australian Defence Force Academy, Canberra,

More information

The Essential Particle Swarm. James Kennedy Washington, DC

The Essential Particle Swarm. James Kennedy Washington, DC The Essential Particle Swarm James Kennedy Washington, DC Kennedy.Jim@gmail.com The Social Template Evolutionary algorithms Other useful adaptive processes in nature Social behavior Social psychology Looks

More information

Quantum-Inspired Differential Evolution with Particle Swarm Optimization for Knapsack Problem

Quantum-Inspired Differential Evolution with Particle Swarm Optimization for Knapsack Problem JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 31, 1757-1773 (2015) Quantum-Inspired Differential Evolution with Particle Swarm Optimization for Knapsack Problem DJAAFAR ZOUACHE 1 AND ABDELOUAHAB MOUSSAOUI

More information

OPTIMAL POWER FLOW BASED ON PARTICLE SWARM OPTIMIZATION

OPTIMAL POWER FLOW BASED ON PARTICLE SWARM OPTIMIZATION U.P.B. Sci. Bull., Series C, Vol. 78, Iss. 3, 2016 ISSN 2286-3540 OPTIMAL POWER FLOW BASED ON PARTICLE SWARM OPTIMIZATION Layth AL-BAHRANI 1, Virgil DUMBRAVA 2 Optimal Power Flow (OPF) is one of the most

More information

Available online at ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95

Available online at  ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95 Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 20 (2013 ) 90 95 Complex Adaptive Systems, Publication 3 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri

More information

Particle Swarm Optimization of Hidden Markov Models: a comparative study

Particle Swarm Optimization of Hidden Markov Models: a comparative study Particle Swarm Optimization of Hidden Markov Models: a comparative study D. Novák Department of Cybernetics Czech Technical University in Prague Czech Republic email:xnovakd@labe.felk.cvut.cz M. Macaš,

More information

Self-Adaptive Ant Colony System for the Traveling Salesman Problem

Self-Adaptive Ant Colony System for the Traveling Salesman Problem Proceedings of the 29 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 29 Self-Adaptive Ant Colony System for the Traveling Salesman Problem Wei-jie Yu, Xiao-min

More information

A NETWORK TRAFFIC PREDICTION MODEL BASED ON QUANTUM INSPIRED PSO AND WAVELET NEURAL NETWORK. Kun Zhang

A NETWORK TRAFFIC PREDICTION MODEL BASED ON QUANTUM INSPIRED PSO AND WAVELET NEURAL NETWORK. Kun Zhang Mathematical and Computational Applications, Vol. 19, No. 3, pp. 218-229, 2014 A NETWORK TRAFFIC PREDICTION MODEL BASED ON QUANTUM INSPIRED PSO AND WAVELET NEURAL NETWORK Kun Zhang Department of Mathematics,

More information

Nature inspired optimization technique for the design of band stop FIR digital filter

Nature inspired optimization technique for the design of band stop FIR digital filter Nature inspired optimization technique for the design of band stop FIR digital filter Dilpreet Kaur, 2 Balraj Singh M.Tech (Scholar), 2 Associate Professor (ECE), 2 (Department of Electronics and Communication

More information

Adaptive Generalized Crowding for Genetic Algorithms

Adaptive Generalized Crowding for Genetic Algorithms Carnegie Mellon University From the SelectedWorks of Ole J Mengshoel Fall 24 Adaptive Generalized Crowding for Genetic Algorithms Ole J Mengshoel, Carnegie Mellon University Severinio Galan Antonio de

More information

Gravitational Search Algorithm with Dynamic Learning Strategy

Gravitational Search Algorithm with Dynamic Learning Strategy Journal of Information Hiding and Multimedia Signal Processing c 2018 ISSN 2073-4212 Ubiquitous International Volume 9, Number 1, January 2018 Gravitational Search Algorithm with Dynamic Learning Strategy

More information

ESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN ,

ESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN , ESANN'200 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), 25-27 April 200, D-Facto public., ISBN 2-930307-0-3, pp. 79-84 Investigating the Influence of the Neighborhood

More information

Performance Evaluation of IIR Filter Design Using Multi-Swarm PSO

Performance Evaluation of IIR Filter Design Using Multi-Swarm PSO Proceedings of APSIPA Annual Summit and Conference 2 6-9 December 2 Performance Evaluation of IIR Filter Design Using Multi-Swarm PSO Haruna Aimi and Kenji Suyama Tokyo Denki University, Tokyo, Japan Abstract

More information

PROMPT PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM

PROMPT PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM PROMPT PARTICLE SWARM OPTIMIZATION ALGORITHM FOR SOLVING OPTIMAL REACTIVE POWER DISPATCH PROBLEM K. Lenin 1 Research Scholar Jawaharlal Nehru Technological University Kukatpally,Hyderabad 500 085, India

More information

Power Electronic Circuits Design: A Particle Swarm Optimization Approach *

Power Electronic Circuits Design: A Particle Swarm Optimization Approach * Power Electronic Circuits Design: A Particle Swarm Optimization Approach * Jun Zhang, Yuan Shi, and Zhi-hui Zhan ** Department of Computer Science, Sun Yat-sen University, China, 510275 junzhang@ieee.org

More information

EPSO BEST-OF-TWO-WORLDS META-HEURISTIC APPLIED TO POWER SYSTEM PROBLEMS

EPSO BEST-OF-TWO-WORLDS META-HEURISTIC APPLIED TO POWER SYSTEM PROBLEMS EPSO BEST-OF-TWO-WORLDS META-HEURISTIC APPLIED TO POWER SYSTEM PROBLEMS Vladimiro Miranda vmiranda@inescporto.pt Nuno Fonseca nfonseca@inescporto.pt INESC Porto Instituto de Engenharia de Sistemas e Computadores

More information

Integer weight training by differential evolution algorithms

Integer weight training by differential evolution algorithms Integer weight training by differential evolution algorithms V.P. Plagianakos, D.G. Sotiropoulos, and M.N. Vrahatis University of Patras, Department of Mathematics, GR-265 00, Patras, Greece. e-mail: vpp

More information

Research Article The Inertia Weight Updating Strategies in Particle Swarm Optimisation Based on the Beta Distribution

Research Article The Inertia Weight Updating Strategies in Particle Swarm Optimisation Based on the Beta Distribution Mathematical Problems in Engineering Volume 2015, Article ID 790465, 9 pages http://dx.doi.org/10.1155/2015/790465 Research Article The Inertia Weight Updating Strategies in Particle Swarm Optimisation

More information

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES

ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC ALGORITHM FOR NONLINEAR MIMO MODEL OF MACHINING PROCESSES International Journal of Innovative Computing, Information and Control ICIC International c 2013 ISSN 1349-4198 Volume 9, Number 4, April 2013 pp. 1455 1475 ARTIFICIAL NEURAL NETWORK WITH HYBRID TAGUCHI-GENETIC

More information

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Tadahiko Murata 1, Shiori Kaige 2, and Hisao Ishibuchi 2 1 Department of Informatics, Kansai University 2-1-1 Ryozenji-cho,

More information

Lecture 9 Evolutionary Computation: Genetic algorithms

Lecture 9 Evolutionary Computation: Genetic algorithms Lecture 9 Evolutionary Computation: Genetic algorithms Introduction, or can evolution be intelligent? Simulation of natural evolution Genetic algorithms Case study: maintenance scheduling with genetic

More information

Multi-objective approaches in a single-objective optimization environment

Multi-objective approaches in a single-objective optimization environment Multi-objective approaches in a single-objective optimization environment Shinya Watanabe College of Information Science & Engineering, Ritsumeikan Univ. -- Nojihigashi, Kusatsu Shiga 55-8577, Japan sin@sys.ci.ritsumei.ac.jp

More information

A Theoretical and Empirical Analysis of Convergence Related Particle Swarm Optimization

A Theoretical and Empirical Analysis of Convergence Related Particle Swarm Optimization A Theoretical and Empirical Analysis of Convergence Related Particle Swarm Optimization MILAN R. RAPAIĆ, ŽELJKO KANOVIĆ, ZORAN D. JELIČIĆ Automation and Control Systems Department University of Novi Sad

More information

PARTICLE SWARM OPTIMISATION (PSO)

PARTICLE SWARM OPTIMISATION (PSO) PARTICLE SWARM OPTIMISATION (PSO) Perry Brown Alexander Mathews Image: http://www.cs264.org/2009/projects/web/ding_yiyang/ding-robb/pso.jpg Introduction Concept first introduced by Kennedy and Eberhart

More information

Biomimicry of Symbiotic Multi-Species Coevolution for Global Optimization

Biomimicry of Symbiotic Multi-Species Coevolution for Global Optimization Instrumentation and Measurement Technology, 4(3), p.p.90-93 8. K.L.Boyer, A.C.Kak. (987) Color-Encoded Structured Light for Rapid Active Ranging. IEEE Transactions on Pattern Analysis and Machine Intelligence,

More information

Solving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm. 1 Introduction

Solving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm. 1 Introduction ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.15(2013) No.3,pp.212-219 Solving the Constrained Nonlinear Optimization based on Imperialist Competitive Algorithm

More information

Journal of Engineering Science and Technology Review 7 (1) (2014)

Journal of Engineering Science and Technology Review 7 (1) (2014) Jestr Journal of Engineering Science and Technology Review 7 () (204) 32 36 JOURNAL OF Engineering Science and Technology Review www.jestr.org Particle Swarm Optimization-based BP Neural Network for UHV

More information

Multiple Particle Swarm Optimizers with Diversive Curiosity

Multiple Particle Swarm Optimizers with Diversive Curiosity Multiple Particle Swarm Optimizers with Diversive Curiosity Hong Zhang, Member IAENG Abstract In this paper we propose a new method, called multiple particle swarm optimizers with diversive curiosity (MPSOα/DC),

More information

Forecasting & Futurism

Forecasting & Futurism Article from: Forecasting & Futurism December 2013 Issue 8 A NEAT Approach to Neural Network Structure By Jeff Heaton Jeff Heaton Neural networks are a mainstay of artificial intelligence. These machine-learning

More information