ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS


J. of Electromagn. Waves and Appl., Vol. 23, 711-721, 2009

L. Zhang, F. Yang, and A. Z. Elsherbeni
Center of Applied Electromagnetics Systems Research (CAESR)
Department of Electrical Engineering
The University of Mississippi, University, MS 38677, USA
Corresponding author: A. Z. Elsherbeni (Elsherbeni@ieee.org)

Abstract: The particle swarm optimization (PSO) algorithm, now widely used in the electromagnetics community, is a robust evolutionary method based on swarm intelligence. This paper focuses on the effect of the random variables in the PSO algorithm: two random distribution functions, the uniform distribution and the Gaussian distribution, are studied and compared in detail. Statistical analysis reveals that Gaussian distributed random variables increase the efficiency of the PSO algorithm compared with the widely used uniformly distributed random variables. This conclusion is demonstrated on both a mathematical benchmark function and an antenna array optimization.

1. INTRODUCTION

The Particle Swarm Optimization (PSO) technique was first proposed by Kennedy and Eberhart as a derivative of swarm intelligence [1] and was introduced into the antenna community by Robinson and Rahmat-Samii [2]. Recently, PSO has been applied in various electromagnetic (EM) applications such as antenna array synthesis, reflector antenna beam shaping, and the design of patch antennas and absorbers [3-6]. Basically, the PSO algorithm employs several particles (or agents) to search for the best combination of variables in an n-dimensional space to achieve a specific optimization goal. Since all particles exchange the information of their best fitness values and corresponding locations, by manipulating the moving directions and

speeds accordingly, the whole swarm will move toward a global best value after several iterations. How the particles balance their best fitness values and update their velocities changes the trajectory of the search process. Many parametric studies have been conducted on the PSO coefficients to improve the algorithm's efficiency [7, 8]. In addition, the boundary condition that deals with particles flying out of the solution space is a problem unique to PSO [2, 9, 10]. Three basic boundary conditions, namely the absorbing wall, the reflecting wall, and the invisible wall, are illustrated and compared in [2]. In this paper, the boundary condition used is a variation of the absorbing boundary condition: when a particle hits the boundary, its velocity is kept the same, but the particle stays at the boundary to ensure that it does not roam outside the solution space.

In this paper, the authors investigate the random functions that control the impacts of the particles' personal bests and the swarm's global best in the PSO algorithm. Surveying the PSO literature, most current PSO implementations are based on the hypothesis that the randomness of swarm behavior fits a uniform distribution in the range (0, 1). This paper explains the rationale for using Gaussian distributed random variables in PSO. Compared with [11-13], which used the absolute value of a normally distributed Gaussian variable, this paper proposes a new implementation scheme in which the mean and variance of the Gaussian variable can be flexibly selected to improve the PSO efficiency. Detailed comparisons among these three methods are presented through statistical studies on a mathematical benchmark function and a practical electromagnetic optimization problem.

2. RANDOM VARIABLES IN PSO ALGORITHM

An important step in the PSO algorithm is to determine the particles' velocities.
For particle i in the nth iteration, the velocity is updated using the following equation:

v^i_{n+1} = ω·v^i_n + c1·rand1()·(p^i_n − x^i_n) + c2·rand2()·(g_n − x^i_n)    (1)

Equation (1) includes the velocity v^i_n, the current location x^i_n, the personal best (pbest) p^i_n, the global best (gbest) g_n, and other balance coefficients. The parameter ω is the inertial weight, and it determines the extent to which the particle remains on its original course unaffected by the pulls of pbest and gbest. Eberhart and Shi suggested decreasing this value linearly from 0.9 to 0.4 over the course of an optimization [7]. Since each particle remembers the location of its personal best fitness value and shares this information with the other

particles, the coefficients c1 and c2 weight whether the particle returns toward the location of its personal best or explores toward the global best. Early PSO developers suggested setting both coefficients to 2.0, but a more recent analysis gives another optimal choice, c1 = 2.8 and c2 = 1.3 [8]. Following c1 and c2 are two random functions, rand1() and rand2(). The purpose of introducing these two random functions is to mimic the unpredictable behavior of natural swarms. Generally, these two functions represent two separate calls, and in most implementations the random function is uniformly distributed between 0 and 1 [2]. Thus, the pulling forces of pbest and gbest vary between 0 and 1 with uniform probability during the optimization.

If we extend our scope to wider areas such as psychology, astronomy, and physics, we find that many observed random variables and data follow the well-known Gaussian distribution [14, 15]. Even though some of them do not exactly fit the Gaussian curve, they can be approximated quite well owing to the central limit theorem [16]. This theorem states that the sum of a large number of independent and identically distributed random variables with finite variance is approximately normally distributed. Many common attributes, such as test scores and heights, roughly follow Gaussian distributions, where a small probability occurs at the high and low ends and a large probability occurs around the middle. Since the PSO algorithm is based on the stochastic property of swarm intelligence, it is natural to consider using Gaussian distributed random variables instead of uniformly distributed variables in (1). It is also noted that in the PSO algorithm the whole swarm may contain 30 or more particles and spend hundreds or even thousands of iterations searching for an optimum value.
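As a concrete illustration, the velocity and position update of (1), together with the absorbing-style boundary handling described in the Introduction, can be sketched in Python. This is a minimal sketch; the function name, data layout, and default parameter values are ours, not the authors'.

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=2.8, c2=1.3,
                    dist="gaussian", mu=0.5, sigma=0.24,
                    lo=-10.0, hi=10.0):
    """One velocity/position update per Eq. (1), dimension by dimension.

    dist="uniform" draws rand() ~ U(0, 1); dist="gaussian" draws
    rand() ~ N(mu, sigma^2), the scheme discussed in the paper.
    """
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        if dist == "gaussian":
            r1, r2 = random.gauss(mu, sigma), random.gauss(mu, sigma)
        else:
            r1, r2 = random.random(), random.random()
        vi = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        xi = xi + vi
        # Absorbing-wall variant used in the paper: the particle is held
        # at the boundary (velocity unchanged) so it cannot leave the space.
        xi = min(max(xi, lo), hi)
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v
```

Setting dist="uniform" reproduces the conventional scheme; with c1 = c2 = 0 the update reduces to x ← x + ω·v, a convenient sanity check.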
Therefore, the randomness in the particles' movement may be more suitably described by the Gaussian distribution than by the uniform distribution. The probability density function (PDF) of the Gaussian distribution used in this paper is:

φ(x) = (1 / (σ·√(2π))) · e^{−(x−μ)² / (2σ²)}

Two parameters, the mean (μ) and the standard deviation (σ), describe the Gaussian PDF [17]. Fig. 1 compares the PDFs of the Gaussian distribution and the uniform distribution. Changing the mean shifts the symmetric axis of the curve, and changing the standard deviation alters the sharpness of the curve. Recall that the traditional random function used in PSO varies between 0 and 1 with a mean value of 0.5; hence, we set the Gaussian PDF to the same mean, μ = 0.5. It is observed from Fig. 1 that most of the probability occurs around 0.5 and then gradually decreases to zero toward the two

ends. Fig. 1 also shows that a smaller standard deviation (σ) means a sharper bell curve and a higher probability within the 0-1 region compared with larger σ values. The probabilities of the Gaussian random variable falling between 0 and 1 for different σ values are listed in Table 1.

Figure 1. Gaussian probability density functions with different σ values (σ = 1/3, 1/4, 1/6), compared with the uniform distribution.

Table 1. Probability of falling between 0 and 1 for different σ values.

  σ      Probability between 0 and 1
  1/3    86.64%
  1/4    95.45%
  1/6    99.73%

Now the question is: how should a suitable σ value be selected for a typical particle swarm optimization? The next section gives a statistical analysis of the performance of different standard deviations (σ) on a mathematical benchmark function.

3. COMPARISON OF UNIFORM AND GAUSSIAN VARIABLES ON THE RASTRIGIN FUNCTION

A mathematical benchmark, the Rastrigin function,

f(x) = Σ_{i=1}^{N} (x_i² − 10·cos(2π·x_i) + 10),    x_i ∈ [−10, 10],  N = 3,    (2)

with a pre-known minimum value of zero, is used to test the performance of the PSO algorithm with the two different types of random variables. The Rastrigin function is a typical example of a non-linear multimodal function. It was first proposed by Rastrigin [18] as a 2-dimensional function and was generalized by Mühlenbein et al. in [19]. It is a fairly difficult problem due to its large search space and large number of local minima.

Both Fig. 1 and Table 1 imply that if σ is increased beyond 1/3, the random variable may fall outside the range [0, 1] with high probability. This could greatly influence the relative pulls of pbest and gbest and may also introduce instability. Based on these considerations, σ is varied from 0.16 to 0.38 in steps of 0.02, and the results are compared with those obtained with the traditional uniform distribution.

First, we test the average fitness value for a given number of iterations. Two cases are studied, one using 50 iterations and the other using 100 iterations. For each case, 100 independent optimization trials are performed for statistical analysis. The average global best fitness value over the 100 trials is then computed and compared with the average obtained with the uniform distribution. The comparison results are shown in Figs. 2(a) and (b). It is observed that not all 12 σ values are good candidates: only those whose average fitness is lower than the uniform-distribution result are considered good candidates. Thus, the optimum σ should be smaller than 0.3.

Figure 2. Average fitness comparison of Gaussian and uniform distributions on the Rastrigin function using (a) 50 iterations/trial, (b) 100 iterations/trial.

Since the minimum of this benchmark problem is known, we can also test the performance of the two random variables under a

fixed fitness criterion. In this experiment, the number of iterations needed to achieve a specified fitness value is recorded for both the Gaussian and the uniform PSO. Two criterion values, 0.1 and 0.01, are used, and the maximum number of iterations per trial is set to 5000; that is, if the fitness is still larger than the criterion after 5000 iterations, the program stops and records the final iteration count as 5000. A set of 100 independent optimization trials is performed and the average iteration counts are calculated. The comparison results are shown in Figs. 3(a) and (b). For a fitness criterion of 0.1, the average iteration count is 136 for the uniform distribution, and it is reduced to 78 for the Gaussian distribution with σ = 0.24, a 42.6% reduction. For a fitness criterion of 0.01, the iteration count is reduced by 42.4% for σ = 0.24. A suitable σ range of about 0.2-0.3 is observed in this experiment.

Figure 3. Iteration count comparison of Gaussian and uniform distributions on the Rastrigin function with (a) fitness criterion = 0.1, (b) fitness criterion = 0.01.

To further illustrate the efficiency of using Gaussian distributed random variables in PSO, three σ values, 0.24, 0.25, and 0.26, are used to optimize the Rastrigin function through 100 trials with 50 iterations per trial. The three Gaussian convergence curves are displayed in Fig. 4 together with the uniform distribution case. At each iteration point, the corresponding fitness is the average value over the 100 trials. All four curves converge gradually as the iteration count increases, but the three Gaussian curves decrease faster than the uniform curve. From this case, the above three σ values can be retained as suitable choices for future use of Gaussian random variables.
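For readers who want to reproduce the experiment, the Rastrigin benchmark of (2) and the iterations-to-criterion bookkeeping are easy to sketch. The following Python code is our own; the helper names and the callable-based harness are assumptions, not the paper's implementation.

```python
import math

def rastrigin(x):
    """Rastrigin function of Eq. (2): highly multimodal, global minimum
    f(0, ..., 0) = 0, with each coordinate restricted to [-10, 10]."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def iterations_to_criterion(optimizer_step, criterion=0.1, max_iter=5000):
    """Count iterations until the global best drops below `criterion`,
    capping at max_iter as in the paper's experiment. `optimizer_step`
    is any callable that advances the swarm one iteration and returns
    the current global best fitness."""
    for it in range(1, max_iter + 1):
        if optimizer_step() <= criterion:
            return it
    return max_iter
```

Averaging `iterations_to_criterion` over 100 independent runs for each σ reproduces the structure of the statistics reported above.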
The selection of the σ value has also been verified on two other benchmark functions, the Griewank function and the Rosenbrock function [2]. Both are tested through 100 trials, and the results are presented in Fig. 5.
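The two additional benchmarks can be written in their standard textbook forms (a sketch; the dimension and search range used in [2] may differ):

```python
import math

def griewank(x):
    """Griewank function: global minimum f(0, ..., 0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def rosenbrock(x):
    """Rosenbrock function: global minimum f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```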

Figure 4. PSO convergence curves for the Rastrigin function using random variables with the uniform and Gaussian distributions.

Figure 5. PSO convergence curves for (a) the Griewank function and (b) the Rosenbrock function using random variables with the uniform and Gaussian distributions.

4. LINEAR ANTENNA ARRAY SYNTHESIS

In this section, the Gaussian PSO studied above is used to optimize practical electromagnetic problems. The first problem is a ten-element unequally spaced antenna array with a total length of 5λ. The element locations are optimized, and the goal is to suppress the sidelobe level (SLL) to −18.96 dB in the regions 0°-78.5° and 101.5°-180° [20, 21]. This SLL is 6 dB lower than that of the equally spaced linear array. The array factor in this case is:

AF(θ) = 2 Σ_{n=1}^{5} cos(k·d(n)·cos θ),    (3)

where the wave number k = 2π/λ and d(n) are the element locations. Based on the investigation in Section 3, σ = 0.24 is used for the Gaussian distributed random variable. The final optimized array pattern, in which all specifications are satisfied, is shown in Fig. 6. The equally spaced array pattern (Original) is also shown for comparison. Fig. 7 compares the convergence curves obtained with the Gaussian and uniform variables. It shows the better performance of the Gaussian variables: the Gaussian case consumes 38 iterations, whereas the uniform case uses 130 iterations.

Figure 6. Ten-element array pattern synthesis using PSO with Gaussian random variables.

The comparison in Fig. 7 is made for only one trial. To compare the performance of the two distributions statistically in this array optimization case, the problem is run for 100 and 200 independent trials, with the maximum iteration number set to 500 in each trial. The average numbers of consumed iterations are recorded in Table 2. Note that using Gaussian distributed random variables reduces the iteration count by more than half relative to the uniform case in the 100-trial experiment. The PSO algorithm with Gaussian distributed random variables exhibits efficient performance in this antenna array case.

Table 2. Statistical analysis of the iteration counts for the two distribution functions in the 10-element array optimization.

              Gaussian         Uniform
  100 Trials  103 iterations   263 iterations
  200 Trials  152 iterations   201 iterations
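The fitness evaluation behind this array example can be sketched from (3). The following Python sketch is our own; the angular sampling grid and the sidelobe bookkeeping are illustrative choices, not the authors' code.

```python
import math

def array_factor(theta, d, k=2.0 * math.pi):
    """Array factor of Eq. (3) for a symmetric 10-element array:
    AF(theta) = 2 * sum_n cos(k * d[n] * cos(theta)).
    d holds the 5 element positions (in wavelengths when k = 2*pi)."""
    return 2.0 * sum(math.cos(k * dn * math.cos(theta)) for dn in d)

def sidelobe_db(d, deg_regions=((0.0, 78.5), (101.5, 180.0)), step=0.25):
    """Peak sidelobe level (dB) over the suppression regions, relative
    to the broadside peak AF(90 deg) = 2 * len(d)."""
    peak = 2.0 * len(d)
    worst = -math.inf
    for lo, hi in deg_regions:
        a = lo
        while a <= hi:
            af = abs(array_factor(math.radians(a), d))
            worst = max(worst, 20.0 * math.log10(max(af, 1e-12) / peak))
            a += step
    return worst
```

A PSO run would then minimize `sidelobe_db(d)` (or penalize values above −18.96 dB) over the five symmetric element positions d.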

Figure 7. Convergence curve comparison between the uniform and Gaussian variables in the 10-element array pattern synthesis.

Figure 8. Null-controlled pattern of a 20-element linear array optimized using Gaussian PSO.

It is worthwhile to point out that Refs. [11-13] also used Gaussian variables in the PSO algorithm. However, they used the absolute value of a Gaussian variable with zero mean and unit variance, i.e., abs(N(0, 1)). The PDF of abs(N(0, 1)) is given by:

φ(x) = (2 / √(2π)) · e^{−x²/2},    x ≥ 0.

To achieve the same expected value as in the uniform case, the coefficients c1 and c2 are modified accordingly. However, this scheme does not have the flexibility to change the variance of the random function. The reference method is also applied to the two optimization problems in this paper. For example, the average iteration numbers for this antenna array optimization are 219 and 207 for 100 and 200 trials using the PDF of [11-13]. Compared with the data in Table 2, it is clear that the reference method performs better than the uniform case but is less desirable than the proposed method.

To further demonstrate the validity of the Gaussian PSO, a relatively complex case with a null-controlled radiation pattern is attempted. The target is to optimize the excitation coefficients of a symmetric 20-element array [21]. The array factor in this case is:

AF(θ) = 2 Σ_{n=1}^{10} a(n)·cos(k·d(n)·cos θ)    (4)

Here the elements are equally spaced at a half-wavelength distance, and the magnitude of each element is optimized in the range [0, 1]. Two nulls are required in [50°, 60°] and [120°, 130°], with levels lower than −55 dB. Using Gaussian PSO with

σ = 0.24, the optimal pattern is obtained and presented in Fig. 8. The optimized excitation magnitudes of the elements are: [0.748, 0.737, 0.655, 0.572, 0.468, 0.360, 0.268, 0.191, 0.096, 0.058].

5. CONCLUSION

This paper compared the PSO performance using three different random functions: the uniform distribution and two different Gaussian distributions. How to choose a suitable mean value and standard deviation for the Gaussian distribution was discussed in detail. Through the example of the Rastrigin function, a suitable σ range was found between 0.2 and 0.3. Various statistical comparisons on the Rastrigin function and an antenna array example demonstrate that the PSO algorithm is more efficient with the proposed Gaussian random variables than with the conventional uniform random variables or the other Gaussian random method.

REFERENCES

1. Kennedy, J. and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann, San Francisco, CA, 2001.
2. Robinson, J. and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Trans. Antennas Propagat., Vol. 52, No. 2, 397-407, Feb. 2004.
3. Gies, D. and Y. Rahmat-Samii, "Particle swarm optimization for reconfigurable phase-differentiated array design," Microwave Opt. Technol. Lett., Vol. 38, No. 3, 172-175, Aug. 2003.
4. Boeringer, D. and D. Werner, "Particle swarm optimization versus genetic algorithms for phased array synthesis," IEEE Trans. Antennas Propagat., Vol. 52, No. 3, 771-779, Mar. 2004.
5. Jin, N. and Y. Rahmat-Samii, "Parallel particle swarm optimization and finite-difference time-domain (PSO/FDTD) algorithm for multiband and wide-band patch antenna designs," IEEE Trans. Antennas Propagat., Vol. 53, No. 11, 3459-3468, Nov. 2005.
6. Cui, S. and D. Weile, "Application of a parallel particle swarm optimization scheme to the design of electromagnetic absorbers," IEEE Trans. Antennas Propagat., Vol. 53, No. 11, 3616-3624, Nov. 2005.
7. Eberhart, R. C. and Y. Shi, "Evolving artificial neural networks," Proc. 1998 Int. Conf. Neural Networks and Brain, Beijing, China, 1998.

8. Carlisle, A. and G. Dozier, "An off-the-shelf PSO," Proc. Workshop on Particle Swarm Optimization, Indianapolis, IN, 2001.
9. Mikki, S. M. and A. A. Kishk, "Hybrid periodic boundary condition for particle swarm optimization," IEEE Trans. Antennas Propagat., Vol. 55, No. 11, 3251-3256, Nov. 2007.
10. Xu, S. and Y. Rahmat-Samii, "Boundary conditions in particle swarm optimization revisited," IEEE Trans. Antennas Propagat., Vol. 55, No. 3, 760-765, Mar. 2007.
11. Clerc, M. and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evolutionary Computation, Vol. 6, No. 1, 58-73, Feb. 2002.
12. Trelea, I. C., "The particle swarm optimization algorithm: Convergence analysis and parameter selection," Information Processing Letters, Vol. 85, No. 6, 317-325, 2003.
13. Krohling, R. A., "Gaussian swarm: A novel particle swarm optimization algorithm," Proc. IEEE Conf. CIS, 372-376, Singapore, 2004.
14. Ben-Zvi, D. and J. Garfield, The Challenge of Developing Statistical Literacy, Reasoning and Thinking, Springer, 2004.
15. [Online]: Available at http://davidmlane.com/hyperstat/normaldistribution.html.
16. Tijms, H., Understanding Probability: Chance Rules in Everyday Life, Cambridge University Press, Cambridge, 2004.
17. Bryc, W., Normal Distribution Characterizations with Applications, Springer-Verlag, New York, 1995.
18. Törn, A. and A. Zilinskas, Global Optimization, Lecture Notes in Computer Science, No. 350, Springer-Verlag, Berlin, 1989.
19. Mühlenbein, H., D. Schomisch, and J. Born, "The parallel genetic algorithm as function optimizer," Parallel Computing, Vol. 17, 619-632, 1991.
20. Jin, N. and Y. Rahmat-Samii, "A novel design methodology for aperiodic antenna arrays using particle swarm optimization," Dig. 2006 National Radio Science Meeting, 69, Jan. 2006.
21. Weng, W., F. Yang, and A. Z. Elsherbeni, "Linear antenna array synthesis using Taguchi's method: A novel optimization technique in electromagnetics," IEEE Trans. Antennas Propagat., Vol. 55, No. 3, 723-730, Mar. 2007.