Improving Search Space Exploration and Exploitation with the Cross-Entropy Method and the Evolutionary Particle Swarm Optimization


1 Improving Search Space Exploration and Exploitation with the Cross-Entropy Method and the Evolutionary Particle Swarm Optimization Leonel Carvalho, Vladimiro Miranda, Armando Leite da Silva, Carolina Marcelino, Elizabeth Wanner leonel.m.carvalho@inesctec.pt 08-08-2018

2 Methodological Approach
Combination of two optimization methods: the Cross-Entropy (CE) Method for exploration and Evolutionary Particle Swarm Optimization (EPSO) for exploitation. A new recombination operator handles multi-period constraints. The EPSO parameters, the mutation rate τ and the communication probability P, were tuned using an iterative optimization algorithm based on a 2² factorial design.
[Figure: process diagram of the metaheuristic with controllable input factors x_1, x_2, ..., x_p, uncontrollable factors z_1, z_2, ..., z_q, and output y.]

3 CE Method for Optimization
A heuristic method proposed by Reuven Rubinstein for combinatorial and continuous non-linear optimization, based on the Kullback-Leibler divergence. The Kullback-Leibler divergence (relative entropy) measures how one probability distribution diverges from a target one. The method rests on the idea that an optimization problem can be cast as a rare-event estimation problem: the probability of locating an optimal or near-optimal solution by pure random search is a rare-event probability.
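
For reference, the slide states the definition only in words; the standard Kullback-Leibler divergence of a distribution p from a target q is

D_KL(p ‖ q) = Σ_x p(x) · ln( p(x) / q(x) )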

4 CE Method for Optimization
Instead of random search, the CE Method runs an adaptive, iterative importance-sampling process. The sampling distribution (Bernoulli, Binomial, Gaussian, Multivariate Bernoulli) is modified at each iteration so that the rare event becomes more likely to occur. The optimization starts by defining a sampling distribution for the variables, followed by an iterated adjustment of the distribution parameters (e.g. mean, standard deviation) according to the performance of an elite subset of the samples.

5 CE Method for Optimization
Select μ_0 and σ_0², the number of samples per iteration N, the rarity parameter ρ, and the smoothing parameter α; set k := 0
Do
  k := k + 1
  Generate a sample X_1, ..., X_N from the sampling distribution N(μ_{k-1}, σ²_{k-1})
  Compute S(X_1), ..., S(X_N) and order the samples from the best to the worst performing, i.e. S(X_1) ≤ S(X_2) ≤ ... ≤ S(X_N) (for minimization)
  Compute γ_k as the ρ-th quantile of the performance values and set N_elite = ρN; let ψ be the subset of the ordered samples that contains the N_elite samples with S(X) ≤ γ_k
  For j = 1 to n
    μ_kj := (1/N_elite) · Σ_{i∈ψ} X_ij
    σ²_kj := (1/N_elite) · Σ_{i∈ψ} (X_ij - μ_kj)²
  End For
  Apply smoothing: μ_k := α·μ_k + (1-α)·μ_{k-1} and σ²_k := α·σ²_k + (1-α)·σ²_{k-1}
Until k = k_MAX
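
A minimal Python sketch of the Gaussian CE loop above; the function and parameter names are mine, not from the slides, and a generic fitness S to minimize is assumed:

```python
import numpy as np

def ce_minimize(S, mu0, sigma0, n_samples=100, rho=0.1, alpha=0.8, k_max=200):
    """Gaussian Cross-Entropy minimization following the update rules above.

    S      -- fitness function to minimize, S(x) -> float
    mu0    -- initial mean vector mu_0
    sigma0 -- initial standard-deviation vector (sigma_0^2 is its square)
    rho    -- rarity parameter (elite fraction); alpha -- smoothing parameter
    """
    mu = np.asarray(mu0, dtype=float)
    var = np.asarray(sigma0, dtype=float) ** 2
    n_elite = max(1, int(rho * n_samples))
    for _ in range(k_max):
        # Sample N candidates from N(mu_{k-1}, sigma^2_{k-1})
        X = np.random.normal(mu, np.sqrt(var), size=(n_samples, mu.size))
        scores = np.array([S(x) for x in X])
        elite = X[np.argsort(scores)[:n_elite]]   # the rho*N best (lowest S) samples
        # Smoothed updates: mu_k = a*mu_elite + (1-a)*mu_{k-1}, same for sigma^2
        mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
        var = alpha * elite.var(axis=0) + (1 - alpha) * var
    return mu

# Example usage: minimize the 5-dimensional sphere function
best = ce_minimize(lambda x: float(np.sum(x ** 2)),
                   mu0=10.0 * np.ones(5), sigma0=5.0 * np.ones(5))
```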

6 CE Method for Optimization
[Figure: Gaussian sampling densities over Pg1 (MW) at iterations k = 0, 8, 16, 24.] The mean and variance of the distribution are adjusted until the density concentrates at the optimum.

7 EPSO
Evolutionary Particle Swarm Optimization (EPSO) is a hybrid of Evolution Strategies and Particle Swarm Optimization, proposed by Vladimiro Miranda in 2002.
Replication: each individual is replicated r times
Mutation: the r clones have their weights w mutated
Recombination: each of the r + 1 individuals generates one offspring
Evaluation: each offspring has its fitness evaluated
Selection: the best particle out of the r + 1 survives to be part of the new generation

8 Recombination in EPSO
Movement rule:
X_new = X + V_new
V_new = w_I·V + w_M·(X_M - X) + w_C·P·(X_G - X)
Inertia: movement in the same direction
Memory: attraction towards the individual best solution
Cooperation: attraction to a region near the global best position
[Figure: vector diagram showing the inertia, memory, and cooperation components combining into X_new.]

9 EPSO Particularities
Weights are mutated and subjected to selection, with mutation rate τ:
w* = w·[1 + τ·N(0,1)]
The individuals are attracted to a region near the best solution found:
X_G* = X_G·[1 + w_G*·N(0,1)]
The diagonal matrix P acts as a communication barrier in
V_new = w_I·V + w_M·(X_M - X) + w_C·P·(X_G* - X)
with entries drawn per dimension from the communication probability p:
If i ≠ j then P_ij = 0
ElseIf U(0,1) > p then P_ii = 0
Else P_ii = 1
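
A compact sketch of one EPSO generation combining the movement rule and the particularities above; array shapes, the weight layout, and the helper names are my own assumptions:

```python
import numpy as np

def epso_generation(X, V, X_mem, X_gb, w, fitness, tau=0.4, p=0.8, r=1):
    """One EPSO generation for a swarm X (n_particles x n_dims), after the slides:
    replication, weight mutation, movement (recombination), evaluation, selection.

    w -- strategic weights per particle, columns assumed as [w_I, w_M, w_C, w_G].
    """
    n, d = X.shape
    best = None
    for k in range(r + 1):
        # Copy 0 keeps its weights; the r replicas mutate them: w* = w[1 + tau*N(0,1)]
        w_k = w if k == 0 else w * (1.0 + tau * np.random.randn(*w.shape))
        # Attraction to a region *near* the global best: X_G* = X_G[1 + w_G*N(0,1)]
        Xg = X_gb * (1.0 + w_k[:, 3:4] * np.random.randn(n, 1))
        # Diagonal communication matrix P: each dimension passes with probability p
        P = (np.random.rand(n, d) < p).astype(float)
        V_new = (w_k[:, 0:1] * V + w_k[:, 1:2] * (X_mem - X)
                 + w_k[:, 2:3] * P * (Xg - X))
        X_new = X + V_new
        f_new = np.array([fitness(x) for x in X_new])
        if best is None:
            best = [X_new, V_new, w_k, f_new]
        else:
            m = f_new < best[3]                   # selection: best of the r+1 survives
            for i, val in enumerate((X_new, V_new, w_k)):
                best[i] = np.where(m[:, None], val, best[i])
            best[3] = np.where(m, f_new, best[3])
    return best  # [X, V, w, fitness] of the surviving particles
```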

10 EPSO Recombination in the 2018 Competition
Test Bed B consists of a Dynamic OPF over 6 periods with ramp constraints for the generators (5 × 53 constraints). Feasible solutions are difficult to obtain since the production in each period can be strongly constrained by the production in the adjacent periods. Idea: compute a new production for the current period by recombining it with the production of the next period.

11 EPSO Recombination in the 2018 Competition
Additional recombination rule for the generators:
X_new[i,j] := X_new[i,j]·δ + X_new[i,j+1]·(1 - δ)
where 0 ≤ δ ≤ 1 is a random variable that follows a Beta(α, β) pdf.
[Figure: CDFs P(δ < x) of Beta(0.1,0.1), Beta(0.5,0.5), Beta(1,1), Beta(2,2), Beta(4,4), Beta(16,16).] Beta(0.5,0.5) was selected.
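
A sketch of this period-coupling recombination; the periods × generators array layout and the function name are my assumptions, not given on the slides:

```python
import numpy as np

def recombine_periods(X, alpha=0.5, beta=0.5):
    """Blend each period's generator outputs with the next period's:
    X[i, j] := X[i, j]*delta + X[i+1, j]*(1 - delta), delta ~ Beta(alpha, beta).

    X -- array of shape (n_periods, n_generators). The slide's Beta(0.5, 0.5)
         pushes delta towards 0 or 1, so most blends stay close to one parent.
    """
    X_new = X.copy()
    for i in range(X.shape[0] - 1):               # the last period has no successor
        delta = np.random.beta(alpha, beta, size=X.shape[1])
        X_new[i] = X[i] * delta + X[i + 1] * (1.0 - delta)
    return X_new
```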

12 EPSO Recombination in the 2018 Competition
In EPSO, integer variables are modeled as real variables and rounded for the fitness computation. To force diversity, after a new position is computed, each of its integers is compared with that of its predecessor; if all integers in the two positions are equal, one integer in the new solution is randomly selected and increased or decreased by 1, each with probability 0.5.
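
A short sketch of this diversity-forcing rule; the function and argument names are hypothetical, with int_idx assumed to be the index set of the integer-coded variables:

```python
import numpy as np

def force_integer_diversity(x_new, x_prev, int_idx):
    """If the rounded integer variables of x_new match those of its predecessor,
    nudge one randomly chosen integer by +1 or -1 (probability 0.5 each)."""
    ints_new = np.rint(x_new[int_idx])
    ints_prev = np.rint(x_prev[int_idx])
    if np.array_equal(ints_new, ints_prev):
        j = np.random.choice(int_idx)             # pick one integer variable
        x_new[j] += 1.0 if np.random.rand() < 0.5 else -1.0
    return x_new
```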

13 Switch from CE Method to EPSO
When to switch between the methods?
[Figure: best fitness (log scale, 10⁴ to 10¹³) versus number of fitness evaluations (0 to 5×10⁴); the CE Method drives the early descent and EPSO continues the refinement.]

14 Switch from CE Method to EPSO
Trial & error: not very clever, but a practical approach for the competition.
Track the rate of improvement of the best fitness over a number of iterations and switch when the rate falls below a given threshold; a new parameter has to be defined (see the sketch below).
Monitor the variance of the CE Method sampling distributions; the problem is that the variance can decrease very slowly for variables that can take a wide range of values without significantly affecting the fitness function.
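
A minimal sketch of the improvement-rate switch criterion; the class name, the window length, and the threshold are the extra parameters the slide alludes to, all chosen by me for illustration:

```python
from collections import deque

class SwitchMonitor:
    """Track the best fitness over a sliding window of iterations and signal
    a switch from CE to EPSO when the relative improvement rate stalls."""

    def __init__(self, window=20, threshold=1e-4):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def should_switch(self, best_fitness):
        self.history.append(best_fitness)
        if len(self.history) < self.history.maxlen:
            return False                          # not enough data yet
        oldest = self.history[0]
        # Relative improvement over the window (positive when minimizing well)
        rate = (oldest - best_fitness) / max(abs(oldest), 1e-12)
        return rate < self.threshold              # improvement has stalled
```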

15 EPSO Parameter Tuning
Iterative method based on a 2² factorial design and two-way ANOVA to guarantee the best performance for EPSO:
Define the maximum allowable interval for τ and P (e.g. [0.2, 0.8])
Run a 2² factorial design (4 experiments): perform 4 × 31 runs of EPSO, one batch for each combination of the limit values of τ and P
Do
  Run the two-way ANOVA and select the factor with the highest F-test statistic / lowest p-value
  Compute its main effect to determine which limit should be updated
  Update that limit to the central value of the current interval (e.g. if the output decreases as the factor increases, set the lower limit to the central value)
  Run a 2² factorial design with the updated limits (+2 experiments)
While there is evidence that τ and P affect the output (p-value less than 0.05) and the difference between the interval limits is greater than a given value (e.g. 0.1)
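
A self-contained sketch of the two-way ANOVA used at each tuning iteration; the 31-run batches per design corner would fill the replicate axis, and the function and variable names are my own:

```python
import numpy as np
from scipy import stats

def two_way_anova_2x2(y):
    """Two-way ANOVA for a balanced 2x2 factorial design.

    y -- array of shape (2, 2, r): axis 0 = tau level (low/high),
         axis 1 = P level (low/high), axis 2 = the r replicate runs.
    Returns {factor: (main_effect_or_None, F_statistic, p_value)}.
    """
    r = y.shape[2]
    grand = y.mean()
    m_tau = y.mean(axis=(1, 2))                   # marginal means over tau levels
    m_P = y.mean(axis=(0, 2))                     # marginal means over P levels
    m_cell = y.mean(axis=2)
    ss_tau = 2 * r * np.sum((m_tau - grand) ** 2)
    ss_P = 2 * r * np.sum((m_P - grand) ** 2)
    inter = m_cell - m_tau[:, None] - m_P[None, :] + grand
    ss_int = r * np.sum(inter ** 2)
    ss_err = np.sum((y - m_cell[:, :, None]) ** 2)
    df_err = 4 * (r - 1)
    out = {}
    for name, ss, eff in (("tau", ss_tau, m_tau[1] - m_tau[0]),
                          ("P", ss_P, m_P[1] - m_P[0]),
                          ("tau*P", ss_int, None)):
        F = ss / (ss_err / df_err)                # each factor has 1 df
        out[name] = (eff, F, stats.f.sf(F, 1, df_err))
    return out
```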

16 Parameter Tuning for Minimization
Two-way ANOVA results:

Iteration  Factor  F-test Statistic  P-value  Main Effect
1st        τ       3.474             0.064    -350.73
1st        P       6.675             0.011     486.14
1st        τ·P     0.225             0.636      89.17
2nd        τ       5.469             0.021    -548.35
2nd        P       6.022             0.015     575.42
2nd        τ·P     1.496             0.223     286.79
3rd        τ       5.508             0.020    -448.70
3rd        P       1.448             0.231     230.01
3rd        τ·P     0.958             0.329     187.14
4th        τ       2.849             0.093    -306.78
4th        P       2.497             0.116     287.21
4th        τ·P     1.807             0.181     244.34

Interval limits per iteration:

Factor    1st  2nd  3rd   4th
τ (low)   0.2  0.2  0.2   0.5
τ (high)  0.8  0.8  0.8   0.8
P (low)   0.2  0.2  0.2   0.2
P (high)  0.8  0.5  0.35  0.35

In the 4th iteration all p-values are greater than the 0.05 threshold, setting the final values for τ and P.
[Figure: box plots of the fitness (around 4.2 to 5.0 × 10⁴) for the four corners A, B, C, D of the final ranges of τ and P.]

17 Results Test Bed A Stochastic OPF: 12 independent runs

Fitness             Case 1     Case 2     Case 3     Case 4     Case 5
Best                80,732.46  67,709.06  55,245.86  84,382.21  71,044.22
Worst               81,547.19  68,923.91  56,683.60  84,880.76  71,128.74
Mean                81,077.07  68,473.43  55,935.62  84,442.94  71,065.91
Standard Deviation  240.41     306.05     418.30     138.71     22.97

18 Results Test Bed A Stochastic OPF: 12 independent runs
[Figure: Test Bed A, Case 3: log(best fitness) (10⁴ to 10¹⁴) versus number of evaluations (0 to 3×10⁴).]

19 Results Test Bed A Stochastic OPF: 12 independent runs
[Figure: Test Bed A, Case 5: log(best fitness) (10⁴ to 10¹⁴) versus number of evaluations (0 to 3×10⁴).]

20 Results Test Bed B Dynamic OPF: 10 independent runs

                    Objective    Fitness
Best                753,071.99   773,193.77
Worst               797,029.49   823,684.44
Mean                766,841.73   789,719.58
Standard Deviation  15,175.79    15,618.73

21 Final Remarks
Combining methods to address different stages of the search can greatly improve the accuracy and robustness of heuristic methods. Systematic parameter tuning is essential to reduce the information required from the user. Tailor-made recombination rules that deal with specific constraints of the optimization problem can sometimes lead to better performance.

22 Thank you for your attention! Leonel Carvalho leonel.m.carvalho@inesctec.pt