2017 International Conference on Computer, Electronics and Communication Engineering (CECE 2017)
ISBN: 978-1-60595-476-9

Improved Shuffled Frog Leaping Algorithm Based on Quantum Rotation Gates

Guo WU 1, Li-guo FANG 1, Jian-jun LI 2 and Fan-shuo MENG 1
1 Zhengzhou Information Science and Technology Institute, Zhengzhou, Henan, China
2 Science and Technology on Information Assurance Laboratory, Beijing, China

Keywords: Shuffled frog leaping algorithm, Quantum probability amplitude, Quantum optimization.

Abstract. To address the limited search speed and accuracy of the shuffled frog leaping algorithm, the idea of variation is integrated into it, and a new improved shuffled frog leaping algorithm, called the quantum shuffled frog leaping algorithm (QSFLA), is proposed. The positions of the frogs are encoded by the probability amplitudes of quantum bits, and the movements of the frogs are performed by quantum rotation gates, which drive the search of the particles. Experiments on six standard functions show that the proposed algorithm has high search efficiency and precision; moreover, QSFLA is a promising optimization algorithm with strong convergence and high stability.

Introduction

For years scientists have turned to nature for inspiration while solving complex problems. Evolutionary algorithms (EAs) are stochastic search methods that mimic the metaphor of natural biological evolution and/or the social behavior of species. The optimized behavior of such species is guided by learning, adaptation and evolution. It is now well established that pure EAs are not well suited to fine-tuning the search in complex combinatorial spaces and that hybridization with other techniques can greatly improve the efficiency of the search [1, 2]. The combination of EAs with local search was named memetic algorithms (MAs) in [3]. The shuffled frog leaping algorithm (SFLA) combines the benefits of the genetic-based memetic algorithm and the social behavior-based particle swarm optimization algorithm [4]. In the SFLA, the population consists of a set of frogs (solutions) that is partitioned into subsets referred to as memeplexes. The different memeplexes are considered as different cultures of frogs, each performing a local search [5].

However, SFLA suffers from premature convergence and poor accuracy. To overcome these shortcomings and improve its performance, many variations of SFLA have been proposed. An attraction-repulsion mechanism was integrated into SFLA to maintain subpopulation diversity [6]. A search-acceleration factor was introduced into the formulation of the original SFLA [7]; the factor balances global and local search by widening the global search at the beginning and then searching more deeply around promising solutions. A modified SFLA obtains better search performance through the memory of former experience [8, 9]. In this paper, based on the frame structure of the shuffled frog leaping algorithm, an improved algorithm built on quantum rotation gates is proposed. It uses qubit encoding and reaches the optimal location of the search by means of quantum rotation gates. The simulation results show that its optimization capability and efficiency are better than those of the basic shuffled frog leaping algorithm.

Shuffled Frog-leaping Algorithm

The shuffled frog-leaping algorithm is a memetic meta-heuristic designed to seek a global optimal solution by performing a heuristic search. The algorithm is based on the evolution of memes carried by individuals and a global exchange of information among the population.
In essence, it combines the benefits of the local search tool of particle swarm optimization with the idea of mixing information from parallel local searches to move toward a global solution. It is a combination of deterministic and random approaches. The deterministic strategy allows the algorithm to use response surface information effectively to guide the heuristic search.

The random elements ensure the flexibility and robustness of the search pattern.

The SFLA starts with an initial population of F frogs created randomly within the feasible space. For S-dimensional problems, each frog i is represented by S variables as P_i = (p_i1, p_i2, ..., p_iS). The frogs are sorted in descending order according to their fitness. Then, the entire population is divided into m memeplexes, each containing n frogs (i.e. F = m × n). In this process, the first frog goes to the first memeplex, the second frog to the second memeplex, frog m to the m-th memeplex, frog m+1 back to the first memeplex, and so on. Within each memeplex, the frogs with the best and the worst fitness are identified as P_b and P_w, respectively. Also, the frog with the globally best fitness is identified as P_g. Then, during the evolution process, only the frog with the worst fitness in each cycle is improved. Accordingly, the frog with the worst fitness updates its position to catch up with the best frog as follows:

    Δx = rand() · (P_b − P_w),                                   (1)
    P_w^new = P_w + Δx,   −x_max ≤ Δx ≤ x_max,                   (2)

where rand() is a random number between 0 and 1, and x_max is the maximum step size by which a frog's position is allowed to be updated. If this process produces a better frog (solution), it replaces the worst frog. Otherwise, the calculations in equations (1) and (2) are repeated with respect to the global best frog (i.e. P_g replaces P_b). If no improvement becomes possible in this case, then a new solution is randomly generated to replace the worst frog. The calculations then continue for a specified number of iterations.
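The worst-frog move in equations (1) and (2) can be stated in a few lines. The following is a minimal sketch of this update (not the authors' code), assuming NumPy arrays for positions; the names sfla_worst_frog_update and max_step are illustrative.

```python
import numpy as np

def sfla_worst_frog_update(p_w, p_b, max_step, rng):
    """One application of equations (1)-(2): move the worst frog towards a better frog.

    p_w, p_b : positions of the worst and the best frog (1-D arrays)
    max_step : maximum allowed change per dimension (x_max in the paper)
    """
    step = rng.random(p_w.shape) * (p_b - p_w)    # eq. (1): dx = rand() * (Pb - Pw)
    step = np.clip(step, -max_step, max_step)     # eq. (2): |dx| bounded by x_max
    return p_w + step                             # candidate new position of the worst frog

# Example: pull a worst frog towards a better frog in a 5-dimensional space.
rng = np.random.default_rng(0)
p_new = sfla_worst_frog_update(np.full(5, 4.0), np.zeros(5), 2.0, rng)
```

In the full SFLA loop this move is tried first with the memeplex best P_b and then, if necessary, with the global best P_g, exactly as described above.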

Improved Shuffled Frog Leaping Algorithm

If the solution of the optimization problem is represented by a vector in an S-dimensional space, the optimization problem can be written as min f(x_1, x_2, ..., x_S), where a ≤ x_i ≤ b, [a, b] is the domain of the objective function, and S is the dimension of the solution space. The specific steps are as follows.

Initial Population

In order to make the population initialization random, the position of each frog is encoded by quantum probability amplitudes, which effectively avoids a binary-to-decimal decoding process. The coding scheme is:

    P_i = [ cos(θ_i1)  cos(θ_i2)  ...  cos(θ_iS) ]
          [ sin(θ_i1)  sin(θ_i2)  ...  sin(θ_iS) ],              (3)

where θ_ij = 2π · rand(), rand() is a random number between 0 and 1, 1 ≤ i ≤ F and 1 ≤ j ≤ S. From this encoding it can be seen that each frog carries the probability amplitudes of the quantum states 0 and 1 over the traversal space:

    P_ic = (cos(θ_i1), cos(θ_i2), ..., cos(θ_iS)),               (4)
    P_is = (sin(θ_i1), sin(θ_i2), ..., sin(θ_iS)).               (5)

Solution Space Transformation

With the above coding, the traversal space of a frog is [−1, 1]. For optimization in a continuous space, this has to be converted to the solution space so that the fitness value of the current location can be calculated. We map the two positions of a frog from the unit space [−1, 1]^S to the solution space of the optimization problem. If a qubit of the frog P_i is [α, β]^T, its space variables after conversion are:

    X_i1 = [b(1 + α) + a(1 − α)] / 2,                            (6)
    X_i2 = [b(1 + β) + a(1 − β)] / 2.                            (7)

Each frog therefore has two solutions: the probability amplitude of quantum state 0 corresponds to X_i1, and the probability amplitude of quantum state 1 corresponds to X_i2.
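As a concrete illustration of equations (3)-(7), the following minimal sketch (not the authors' implementation) initializes a population of qubit arguments and decodes each frog into its two candidate solutions. It assumes NumPy, and the names init_quantum_frogs and decode_positions are illustrative.

```python
import numpy as np

def init_quantum_frogs(num_frogs, dim, rng):
    """Equation (3): each frog is a vector of qubit arguments theta_ij = 2*pi*rand()."""
    return rng.random((num_frogs, dim)) * 2.0 * np.pi

def decode_positions(theta, a, b):
    """Equations (4)-(7): decode qubit arguments into two candidate solutions in [a, b].

    cos(theta) is the amplitude of state |0>, sin(theta) the amplitude of state |1>;
    both lie in [-1, 1] and are mapped linearly onto the search interval [a, b].
    """
    alpha, beta = np.cos(theta), np.sin(theta)            # eqs. (4)-(5)
    x1 = (b * (1.0 + alpha) + a * (1.0 - alpha)) / 2.0    # eq. (6): cosine position
    x2 = (b * (1.0 + beta) + a * (1.0 - beta)) / 2.0      # eq. (7): sine position
    return x1, x2

# Example: a small population decoded into the interval [-5, 5].
theta = init_quantum_frogs(num_frogs=4, dim=3, rng=np.random.default_rng(1))
x1, x2 = decode_positions(theta, a=-5.0, b=5.0)
```

Each frog thus contributes two candidate solutions, and its fitness can be taken as the better of the two, as the algorithm description below does.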

Frog Status Update

In the improved algorithm, the position of a frog is changed by a quantum rotation gate: the frog's jump in the shuffled frog leaping algorithm becomes the action of a quantum rotation gate, and the change of the frog's position becomes a change of quantum probability amplitudes. Suppose the best location of the best frog P_b in the group is its cosine position (because each frog corresponds to both a sine position and a cosine position, one of the two must be the better location). Then

    P_b = (cos(θ_b1), cos(θ_b2), ..., cos(θ_bS)),                (8)

and the worst location of the worst frog P_w in the group is

    P_w = (cos(θ_w1), cos(θ_w2), ..., cos(θ_wS)).                (9)

The update rule of the frog group is then divided into a qubit argument increment update and a qubit probability amplitude update.

1) Qubit argument increment update:

    Δθ_j = rand() · (θ_bj − θ_wj),                               (10)

where θ_bj and θ_wj are the qubit arguments of the best frog and the worst frog in dimension j.

2) Qubit probability amplitude update, performed by the quantum rotation gate:

    [ cos(θ_wj + Δθ_j) ]   [ cos(Δθ_j)  −sin(Δθ_j) ] [ cos(θ_wj) ]
    [ sin(θ_wj + Δθ_j) ] = [ sin(Δθ_j)   cos(Δθ_j) ] [ sin(θ_wj) ].      (11)

So the two new positions of the frog are

    P_i1 = (cos(θ_w1 + Δθ_1), cos(θ_w2 + Δθ_2), ..., cos(θ_wS + Δθ_S)),
    P_i2 = (sin(θ_w1 + Δθ_1), sin(θ_w2 + Δθ_2), ..., sin(θ_wS + Δθ_S)).

It can thus be seen that the qubit phases of a frog's position are changed by the quantum rotation gate, so that the frog's two positions are updated at the same time. In this way, the search scope of the frogs is enlarged without changing the population size; it extends the traversal of the search space and improves the efficiency of the optimization algorithm.

Improved Algorithm Description

1) According to formula (3), F frogs are initialized.
2) According to formulas (6) and (7), the solution space transformation is performed and the fitness value of each frog is calculated. Because each frog occupies two positions in the feasible space, each frog corresponds to two fitness values, and the frogs are sorted by the better of the two. The global best frog P_g is recorded and the population is divided into groups (memeplexes).
3) Each subgroup performs Q local searches. The local search procedure is as follows (a sketch of one such step is given below):
   (a) According to the grouping strategy, the best frog P_b and the worst frog P_w of the group are recorded.
   (b) A new frog is obtained according to formulas (10) and (11). If this process produces a better frog (solution), it replaces the worst frog. Otherwise, the calculations in equations (10) and (11) are repeated with respect to the global best frog. If no improvement becomes possible in this case, a new solution is randomly generated to replace the worst frog.
4) If G iterations have been completed or the stop condition is met, the search is stopped; otherwise continue to the next step.
5) Merge all subgroups and return to step 2.
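To make the local search concrete, here is a minimal sketch of steps 3(a)-(b) built on the rotation-gate update of equations (10)-(11). It is an illustrative reading of the procedure rather than the authors' code; the helper names rotate_towards, frog_fitness and local_search_step are assumptions, and the solution-space mapping of equations (6)-(7) is inlined.

```python
import numpy as np

def rotate_towards(theta_w, theta_b, rng):
    """Equations (10)-(11): rotate the worst frog's qubits towards the best frog's.

    Applying the rotation gate with increment delta to [cos(theta); sin(theta)]
    yields [cos(theta + delta); sin(theta + delta)], so both positions move at once.
    """
    delta = rng.random(theta_w.shape) * (theta_b - theta_w)   # eq. (10)
    return theta_w + delta                                    # effect of the gate in eq. (11)

def frog_fitness(theta, a, b, f):
    """Better of the two fitness values of one frog (minimization), using eqs. (6)-(7)."""
    alpha, beta = np.cos(theta), np.sin(theta)
    x1 = (b * (1.0 + alpha) + a * (1.0 - alpha)) / 2.0
    x2 = (b * (1.0 + beta) + a * (1.0 - beta)) / 2.0
    return min(f(x1), f(x2))

def local_search_step(memeplex, fitness, theta_g_best, a, b, f, rng):
    """One local search of step 3: try to improve the worst frog of one memeplex.

    memeplex     : array of qubit-argument vectors, one row per frog (updated in place)
    fitness      : current fitness of each frog in the memeplex (updated in place)
    theta_g_best : qubit-argument vector of the globally best frog P_g
    """
    best, worst = int(np.argmin(fitness)), int(np.argmax(fitness))
    for guide in (memeplex[best], theta_g_best):              # step 3(b): try P_b, then P_g
        trial = rotate_towards(memeplex[worst], guide, rng)
        trial_fitness = frog_fitness(trial, a, b, f)
        if trial_fitness < fitness[worst]:
            memeplex[worst], fitness[worst] = trial, trial_fitness
            return
    memeplex[worst] = rng.random(memeplex.shape[1]) * 2.0 * np.pi   # random replacement
    fitness[worst] = frog_fitness(memeplex[worst], a, b, f)

# Example: one memeplex of 5 frogs minimizing the sphere function on [-5, 5].
sphere = lambda x: float(np.sum(x * x))
rng = np.random.default_rng(3)
memeplex = rng.random((5, 10)) * 2.0 * np.pi
fit = np.array([frog_fitness(t, -5.0, 5.0, sphere) for t in memeplex])
local_search_step(memeplex, fit, memeplex[int(np.argmin(fit))].copy(), -5.0, 5.0, sphere, rng)
```

The outer loop of steps 2-5 (sorting by the better fitness, partitioning into memeplexes, running Q local searches per group and reshuffling) wraps around this step in the usual SFLA fashion.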

Table. The result of comparison experiment. Function Algorithm Best fitness Mean fitness Std Sphere SFLA 4.6131e-4 9.867584e-3 1.339615e- QSFLA 8.88579e-7 1.33153e-6 1.9556e-7 Rosenbrock SFLA.741e+1 8.5113e+1 7.36363e+1 QSFLA 3.66173e-1 1.71916e+1 1.99568e+1 Grieank SFLA 3.9194e-1 8.137137e-1.4844e-1 QSFLA 6.351753e-3 1.197139e- Ackley SFLA 3.54433e-3 3.864e- 8.38838e- QSFLA.446e-14.469136e-14.893369e-15 Himmelbau SFLA.31719e-7 1.577973e-6 QSFLA 3.368e-9 3.76975e-9 Schaffer SFLA 4.617e-1 8.54441e+ 4.6738e+ QSFLA 1.37e- 5.89571e+ 1.678e+1 As shon in Table, the QSFLA is superior to the traditional SFLA in the search for the best accuracy of the six functions. For functions Sphere, Ackley and Grieank, QSFLA s minimum achievable accuracy is far higher than the traditional SFLA. And mean and standard deviation are smaller. This shos that the QSFLA is not only able to obtain a higher accuracy of the optimal solution, but also the search is stable. For functions Rosenbrock, Ackley and Schaffer, Although the minimum value is only a little smaller than the traditional SFLA, the average value is still small, especially the standard deviation is lo. QSFLA's search ability for these three functions is eak, but it is still better than the traditional SFLA. Figure 1. Sphere. Figure. Rosenbrock. Figure 3. Grieank. Figure 4. Ackley. Figure 5. Himmelbau. Figure 6. Schaffer. 179

The algorithm parameters are set as follows: the number of groups is 3; the number of frogs in each group is ; the number of iterations within each group is 5; and the maximum number of global iterations is fixed at 5. The dimension of the Himmelblau function is 2 and that of the other functions is 3. Each function is run 5 times independently. The results are summarized in Table 2 (if a result is smaller than 1e-324, it is shown as 0 in MATLAB).

Table 2. Results of the comparison experiment.

    Function     Algorithm   Best fitness     Mean fitness     Std
    Sphere       SFLA        4.6131e-4        9.867584e-3      1.339615e-
                 QSFLA       8.88579e-7       1.33153e-6       1.9556e-7
    Rosenbrock   SFLA        .741e+1          8.5113e+1        7.36363e+1
                 QSFLA       3.66173e-1       1.71916e+1       1.99568e+1
    Griewank     SFLA        3.9194e-1        8.137137e-1      .4844e-1
                 QSFLA       6.351753e-3      1.197139e-       0
    Ackley       SFLA        3.54433e-3       3.864e-          8.38838e-
                 QSFLA       .446e-14         .469136e-14      .893369e-15
    Himmelblau   SFLA        .31719e-7        1.577973e-6      0
                 QSFLA       3.368e-9         3.76975e-9       0
    Schaffer     SFLA        4.617e-1         8.54441e+        4.6738e+
                 QSFLA       1.37e-           5.89571e+        1.678e+1

As shown in Table 2, QSFLA is superior to the traditional SFLA in the best accuracy found on all six functions. For the functions Sphere, Ackley and Griewank, the best accuracy reached by QSFLA is far better than that of the traditional SFLA, and the mean and standard deviation are also smaller. This shows that QSFLA is not only able to obtain an optimal solution of higher accuracy, but also that its search is stable. For the functions Rosenbrock, Ackley and Schaffer, although the best value is only a little smaller than that of the traditional SFLA, the mean value is still smaller and the standard deviation is low; QSFLA's search ability on these three functions is weaker, but it is still better than that of the traditional SFLA.

Figure 1. Sphere.  Figure 2. Rosenbrock.  Figure 3. Griewank.  Figure 4. Ackley.  Figure 5. Himmelblau.  Figure 6. Schaffer.

Figures 1 to 6 show the evolution curves on the six functions. It can be seen from the figures that SFLA is better than QSFLA in the initial stage of the search, but as the number of generations increases the search ability of SFLA declines markedly, while QSFLA is still able to find better positions. This shows that the search of QSFLA is more thorough: although its initial search speed is slow, it maintains a relatively stable speed and has a stronger ability to locate the optimum.

Conclusions

An improved shuffled frog leaping algorithm is proposed in this paper. It uses quantum probability amplitudes to encode the frogs, which extends their ability to traverse the solution space, and it updates the frogs with quantum rotation gates, which makes the search finer. These new strategies speed up the search and improve the accuracy of the algorithm. The simulation results show that, compared with the basic shuffled frog leaping algorithm, the proposed algorithm improves the convergence precision and speed, and it locates the optimum well on both unimodal and multimodal functions. At the same time, the improved algorithm has a simple search mechanism, strong robustness and high practicability, and it is easy to operate.

Acknowledgments

The authors thank the anonymous reviewers for their useful comments and suggestions.

References

[1] J. Culberson. On the futility of blind search: an algorithmic view of "no free lunch", Evolutionary Computation, vol. 6 (2), pp. 109-128, 1998.
[2] D. Goldberg, S. Voessner. Optimizing global-local search hybrids, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-99), San Francisco: Morgan Kaufmann Publishers, 1999, pp. 220-228.
[3] P. Moscato. On evolution, search, optimization, GAs and martial arts: toward memetic algorithms, California Inst. Technol., Pasadena, CA, Tech. Rep. Caltech Concurrent Comput. Prog. Rep. 826, 1989.
[4] J. Kennedy, R. Eberhart. Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[5] M.M. Eusuff, K.E. Lansey. Optimization of water distribution network design using the shuffled frog leaping algorithm, Journal of Water Resources Planning and Management, 2003, 129(3), pp. 210-225.
[6] Zhao Peng-jun, Liu San-yang. Shuffled frog leaping algorithm for solving complex functions, Application Research of Computers, 2009, 26(7), pp. 2435-2437.
[7] E. Elbeltagi, T. Hegazy, D. Grierson. A modified shuffled frog-leaping optimization algorithm: applications to project management, Structure and Infrastructure Engineering, 2007, 3(1), pp. 53-60.
[8] Zheng Shi-Lian, Lou Cai-Yi, Yang Xiao-Niu. Cooperative spectrum sensing for cognitive radios based on a modified shuffled frog leaping algorithm, Acta Physica Sinica, 2010, 59(5), pp. 3611-3616.
[9] Zhang Mo. Resource scheduling in cloud computing environment based on improved shuffled frog leaping algorithm, Computer Applications and Software, 2015, 32(4), pp. 330-333.