Tutorial: Evolution Strategies and Related Estimation of Distribution Algorithms
Anne Auger & Nikolaus Hansen
INRIA Saclay - Ile-de-France, project team TAO
Universite Paris-Sud, LRI, Bat. 490, 91405 Orsay Cedex, France

Copyright is held by the author/owner(s). GECCO'08, July 2008, Atlanta, Georgia, USA.

Problem Statement: Continuous Domain Search/Optimization

Task: minimize an objective function (fitness function, loss function) in continuous domain,
    f : X ⊆ R^n → R,  x ↦ f(x)

Black-box scenario (direct search scenario):
- gradients are not available or not useful
- problem-domain-specific knowledge is used only within the black box, e.g. within an appropriate encoding
- search costs: number of function evaluations

Content
1. Continuous Domain Search/Optimization: Non-Separable Problems, Ill-Conditioned Problems
2. Evolution Strategies and EDAs: A Search Template
3. Why? One-Fifth Success Rule, Self-Adaptation, Path Length Control
4. Estimation of Distribution
5. Conclusion

Goal
- fast convergence to the global optimum ... or to a robust solution x
- a solution x with small function value at least search cost
(these are two conflicting objectives)

Typical examples:
- shape optimization (e.g. using CFD): curve fitting, airfoils
- model calibration: biological, physical
- parameter calibration: controller, plants, images

Problems: exhaustive search is infeasible; naive random search takes too long; deterministic search is not successful or takes too long.
Approach: stochastic search, Evolutionary Algorithms.
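The black-box scenario can be made concrete with a tiny wrapper that counts function evaluations, since evaluations are the search cost in this setting. A minimal sketch; the helper name `make_black_box` is ours, not from the tutorial.

```python
def make_black_box(f):
    """Wrap an objective f so each evaluation is counted.

    In the black-box scenario the optimizer only ever sees x -> f(x);
    the search cost is the number of function evaluations."""
    counter = {"evals": 0}
    def wrapped(x):
        counter["evals"] += 1
        return f(x)
    return wrapped, counter

# The sphere function, wrapped as a black box
sphere, cost = make_black_box(lambda x: sum(xi * xi for xi in x))
```

Any optimizer can then be compared by the value of `cost["evals"]` it needs to reach a target function value.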
Metaphors

Evolutionary Computation ↔ Optimization:
- individual, offspring, parent ↔ candidate solution (decision variables, design variables, object variables)
- population ↔ set of candidate solutions
- fitness function ↔ objective function (loss function, cost function)
- generation ↔ iteration

What Makes a Function Difficult to Solve? (Why stochastic search?)
- ruggedness: non-smooth, discontinuous, multimodal, and/or noisy function
- dimensionality: (considerably) larger than three
- non-separability: dependencies between the objective variables
- ill-conditioning: e.g. a narrow ridge
(figure: a rugged function, cut from a higher-dimensional example, solvable with CMA-ES)

Objective Function Properties

We assume f : X ⊆ R^n → R to have at least moderate dimensionality and to be non-linear, non-convex, and non-separable. Additionally, f can be
- multimodal (there are possibly many local optima)
- non-smooth (derivatives do not exist)
- discontinuous
- ill-conditioned
- noisy
- ...
Goal: cope with any of these function properties; they are related to real-world problems.

Separable Problems

Definition (Separable Problem): A function f is separable if
    arg min_{(x_1,...,x_n)} f(x_1,...,x_n) = ( arg min_{x_1} f(x_1,...), ..., arg min_{x_n} f(...,x_n) )
It follows that f can be optimized in a sequence of n independent 1-D optimization processes.
Example: additively decomposable functions
    f(x_1,...,x_n) = Σ_{i=1}^n f_i(x_i)
e.g. the Rastrigin function.
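The definition above says a separable f can be minimized by n independent 1-D searches. A sketch using a crude 1-D grid search on an additively decomposable quadratic; the helper `minimize_separable` and the grid are illustrative choices, not part of the tutorial.

```python
def minimize_separable(f1d_list, grid):
    """Minimize f(x) = sum_i f_i(x_i) coordinate-wise: separability
    lets each coordinate be optimized in isolation over `grid`."""
    return [min(grid, key=fi) for fi in f1d_list]

# Additively decomposable example: f(x) = sum_i (x_i - i)^2
fs = [lambda x, i=i: (x - i) ** 2 for i in range(3)]
grid = [g / 10 for g in range(-50, 51)]   # candidate values -5.0 .. 5.0
x_opt = minimize_separable(fs, grid)      # one 1-D search per coordinate
```

Note the cost is n times a 1-D search, instead of a search over an n-dimensional grid; this is exactly the structure non-separable problems destroy.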
Non-Separable Problems

Building a non-separable problem from a separable one: rotating the coordinate system.
    f : x ↦ f(x)   separable
    f : x ↦ f(Rx)  non-separable, R a rotation matrix

Hansen, Ostermeier, Gawelczyk (1995). On the adaptation of arbitrary normal mutation distributions in evolution strategies: The generating set adaptation. Sixth ICGA, Morgan Kaufmann.
Salomon (1996). Reevaluating Genetic Algorithm Performance under Coordinate Rotation of Benchmark Functions; A survey of some theoretical and practical aspects of genetic algorithms. BioSystems, 39(3):263-278.

What Makes a Function Difficult to Solve? ... and what can be done

- Ruggedness → non-local policy, large sampling width (step-size), as large as possible while preserving a reasonable convergence speed; stochastic, non-elitistic, population-based method; the recombination operator serves as a repair mechanism
- Dimensionality, non-separability → exploiting the problem structure: locality, neighborhood, encoding
- Ill-conditioning → second-order approach: changes the neighborhood metric

Ill-Conditioned Problems

If f is quadratic, f : x ↦ x^T H x, ill-conditioned means a high condition number of the Hessian matrix H. Geometrically, ill-conditioned means squeezed lines of equal function value; with increasing condition number, consider the curvature of the iso-fitness lines.
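The rotation construction is easy to reproduce numerically: compose a separable ill-conditioned quadratic with an orthogonal matrix. The condition number of the Hessian is unchanged, but the variable dependencies appear. A sketch (NumPy assumed; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Separable, ill-conditioned quadratic: f(x) = sum_i a_i x_i^2
a = np.array([1.0, 1e6])
def f_sep(x):
    return float(np.sum(a * np.asarray(x) ** 2))

# Random rotation R (orthogonal, via QR of a Gaussian matrix)
R, _ = np.linalg.qr(rng.standard_normal((2, 2)))

def f_rot(x):
    return f_sep(R @ np.asarray(x))   # f(Rx): non-separable

# Hessian of f_rot is 2 R^T diag(a) R: same condition number as 2 diag(a)
H_rot = 2.0 * R.T @ np.diag(a) @ R
cond = float(np.linalg.cond(H_rot))
```

Coordinate-wise methods that solved `f_sep` in n independent searches no longer work on `f_rot`, even though the two problems are identical up to a change of basis.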
Evolution Strategies and EDAs: A Search Template

A black-box search template to minimize f : R^n → R:

    Initialize distribution parameters θ, set population size λ ∈ N
    While not terminate:
        Sample distribution P(x|θ) → x_1,...,x_λ ∈ R^n
        Evaluate x_1,...,x_λ on f
        Update parameters θ ← F_θ(θ, x_1,...,x_λ, f(x_1),...,f(x_λ))

Everything depends on the definition of P and F_θ; deterministic algorithms are covered as well. In Evolutionary Algorithms the distribution P is often implicitly defined via operators on a population, in particular selection, recombination and mutation. This is the natural template for Estimation of Distribution Algorithms.

(figures: probability density of the 1-D standard normal distribution and of a 2-D normal distribution)

Evolution Strategies

New search points are sampled normally distributed as perturbations of m:
    x_i = m + σ N_i(0, C)  for i = 1,...,λ
where x_i, m ∈ R^n, σ ∈ R_+, and C ∈ R^{n×n}:
- the mean vector m ∈ R^n represents the favorite solution
- the so-called step-size σ ∈ R_+ controls the step length
- the covariance matrix C ∈ R^{n×n} determines the shape of the distribution ellipsoid
The question remains how to update m, C, and σ.

The Multi-Variate (n-dimensional) Normal Distribution

Any multi-variate normal distribution N(m, C) is uniquely determined by its mean value m ∈ R^n and its symmetric positive definite n×n covariance matrix C. The mean value m
- determines the displacement (translation)
- is the value with the largest density (modal value)
- is the point of symmetry: the distribution is symmetric about the distribution mean
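The template is directly executable once P and F_θ are fixed. A toy instance with P(x|θ) = N(m, σ²I) and an F_θ that simply moves m to the best sample; this naive update is a placeholder for illustration, not one of the ES update rules discussed later.

```python
import numpy as np

def search_template(f, m0, sigma, lam, iters, seed=0):
    """Black-box search template: sample from P(x|theta), evaluate,
    update theta. Here theta = m and the update moves m to the best
    of the lambda samples (a deliberately naive choice of F_theta)."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    for _ in range(iters):
        X = m + sigma * rng.standard_normal((lam, m.size))  # sample
        fX = [f(x) for x in X]                              # evaluate
        m = X[int(np.argmin(fX))]                           # update theta
    return m

m = search_template(lambda x: float(np.sum(x ** 2)), [3.0, -2.0],
                    sigma=0.5, lam=20, iters=100)
final_f = float(np.sum(m ** 2))
```

With a fixed σ the method stalls at a distance of roughly σ from the optimum, which is precisely why the step-size adaptation rules below are needed.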
The covariance matrix C determines the shape of the distribution. It has a valuable geometrical interpretation: any covariance matrix can be uniquely identified with the iso-density ellipsoid {x ∈ R^n | x^T C^{-1} x = 1}.

Lines of Equal Density
- N(m, σ²I) ~ m + σ N(0, I): one degree of freedom (σ); the components of N(0, I) are independent standard normally distributed
- N(m, D²) ~ m + D N(0, I), D diagonal: n degrees of freedom; components are independent, scaled
- N(m, C) ~ m + C^{1/2} N(0, I): (n² + n)/2 degrees of freedom; components are correlated
... CMA

The (µ/µ, λ)-ES: non-elitist selection and intermediate (weighted) recombination

Given the i-th solution point
    x_i = m + σ N_i(0, C) =: m + σ y_i
let x_{i:λ} be the i-th ranked solution point, such that f(x_{1:λ}) ≤ ... ≤ f(x_{λ:λ}). The new mean reads
    m ← Σ_{i=1}^µ w_i x_{i:λ} = m + σ Σ_{i=1}^µ w_i y_{i:λ} =: m + σ y_w
where w_1 ≥ ... ≥ w_µ > 0 and Σ_{i=1}^µ w_i = 1. The best µ points are selected from the new solutions (non-elitistic) and weighted intermediate recombination is applied.

Why Evolution Strategies?

(µ +, λ) notation: µ = # parents, λ = # offspring; "+": selection in {parents} ∪ {offspring}; ",": selection in {offspring}.
(1+1)-ES: sample one offspring from the parent m,
    x = m + σ N(0, C)
and select x if it is better than m.
Example runs on the sphere function f(x) = Σ_{i=1}^n x_i² ... why?
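One generation of the (µ/µ_w, λ) mean update can be sketched in a few lines. A minimal version with C = I and a fixed step-size; the log-rank shape of the weights is a common choice, and the fixed σ is purely for illustration.

```python
import numpy as np

def es_generation(m, sigma, lam, mu, f, rng):
    """One generation of the (mu/mu_w, lambda)-ES with C = I:
    sample y_i ~ N(0, I), rank by f(m + sigma*y_i), recombine the
    best mu with positive, decreasing weights summing to one."""
    Y = rng.standard_normal((lam, m.size))
    order = np.argsort([f(m + sigma * y) for y in Y])    # best first
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))  # w_1 >= ... >= w_mu > 0
    w = w / w.sum()                                      # sum w_i = 1
    y_w = w @ Y[order[:mu]]
    return m + sigma * y_w                               # m <- m + sigma * y_w

# Usage with a fixed step-size on the sphere function
rng = np.random.default_rng(2)
sphere = lambda x: float(np.sum(x ** 2))
m = np.array([5.0, 5.0])
for _ in range(50):
    m = es_generation(m, sigma=0.3, lam=10, mu=5, f=sphere, rng=rng)
final_f = sphere(m)
```

Note that only the ranking of the f-values enters the update, a property exploited in the invariance discussion at the end of the tutorial.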
Why?

(figures: example runs of the (1+1)-ES on the sphere function f(x) = Σ_{i=1}^n x_i², and normalized progress versus normalized step-size)

Evolution window for the step-size on the sphere function: the evolution window refers to the step-size interval where reasonable performance is observed.
Methods for Step-Size Control

- 1/5-th success rule [a,b], often applied with "+"-selection: increase the step-size if more than 20% of the new solutions are successful, decrease otherwise
- σ-self-adaptation [c], applied with ","-selection: mutation is applied to the step-size and the better one, according to the objective function value, is selected (simplified global self-adaptation)
- path length control [d] (Cumulative Step-size Adaptation, CSA) [e], applied with ","-selection

[a] Rechenberg 1973, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog
[b] Schumer and Steiglitz 1968. Adaptive step size random search. IEEE TAC
[c] Schwefel 1981, Numerical Optimization of Computer Models, Wiley
[d] Hansen & Ostermeier 2001, Completely Derandomized Self-Adaptation in Evolution Strategies, Evol. Comput. 9(2)
[e] Ostermeier et al. 1994, Step-size adaptation based on non-local use of selection information, PPSN III

One-Fifth Success Rule

(figures: the probability of success p_s, illustrating a too small step-size (p_s near 1/2) versus the target regime (p_s near 1/5))

Let p_s be the fraction of successful offspring per generation. Then
    σ ← σ · exp( (p_s − p_target) / (1 − p_target) )
increases σ if p_s > p_target and decreases σ if p_s < p_target. For the (1+1)-ES, p_target = 1/5: if the offspring is better than the parent, p_s = 1, else p_s = 0.
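The (1+1)-ES with the 1/5-th success rule fits in a dozen lines. A sketch of the update above; the damping factor `d` is our implementation choice to smooth the step-size changes, not part of the slide's formula.

```python
import math
import random

def one_plus_one_es(f, x0, sigma, iters, seed=1, d=2.0):
    """(1+1)-ES with the 1/5-th success rule: after each offspring the
    step-size is increased on success and decreased otherwise, pushing
    the empirical success rate toward p_target = 1/5."""
    random.seed(seed)
    x, fx = list(x0), f(x0)
    p_target = 0.2
    for _ in range(iters):
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        p_s = 1.0 if fy < fx else 0.0        # success indicator
        if p_s:
            x, fx = y, fy                    # "+"-selection
        sigma *= math.exp((p_s - p_target) / (d * (1.0 - p_target)))
    return x, fx, sigma

sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best, sigma_end = one_plus_one_es(sphere, [3.0, 3.0], 1.0, 500)
```

On the sphere this converges log-linearly: the step-size tracks the distance to the optimum instead of stalling as with a fixed σ.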
Self-Adaptation in a (1, λ)-ES

MUTATE: for i = 1,...,λ:
    step-size  σ_i = σ · exp(τ N(0, 1))
    offspring  x_i = x + σ_i N(0, I)
EVALUATE, then SELECT the best offspring x together with its step-size σ.
Rationale: an unadapted step-size won't produce successively good individuals; the step-sizes are adjusted by the evolution itself.

Path Length Control: The Equations

Initialize m ∈ R^n, σ ∈ R_+, evolution path p_σ = 0; set c_σ ≈ 4/n, d_σ ≈ 1.
    m ← m + σ y_w,  where y_w = Σ_{i=1}^µ w_i y_{i:λ}      (update mean)
    p_σ ← (1 − c_σ) p_σ + sqrt(1 − (1 − c_σ)²) · sqrt(µ_w) · y_w
        (the factor sqrt(1 − (1 − c_σ)²) accounts for the decay (1 − c_σ); sqrt(µ_w) accounts for the weighted recombination)
    σ ← σ · exp( (c_σ/d_σ) · ( ||p_σ|| / E||N(0, I)|| − 1 ) )   (update step-size)
The argument of exp is positive, i.e. σ increases, iff ||p_σ|| is greater than its expectation. ... CMA in a nutshell

Path Length Control: The Concept

Measure the length of the evolution path, the pathway of the mean vector m in the generation sequence:
    x_i = m + σ y_i,   m ← m + σ y_w
Loosely speaking, consecutive steps are perpendicular under random selection (in expectation), and perpendicular in the desired situation (to be most efficient): when the path is shorter than expected, decrease σ; when it is longer than expected, increase σ.
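The path length control equations above can be run as written. A sketch with C = I; we use c_σ = 4/(n+4), close to the 4/n of the slides but bounded below one for small n, and the standard approximation of E||N(0, I)||, both implementation choices on our part.

```python
import math
import numpy as np

def csa_es(f, m0, sigma, lam, mu, iters, seed=0):
    """(mu/mu_w, lambda)-ES with path length control (CSA), C = I:
    sigma grows when the evolution path p_sigma is longer than the
    expected length of N(0, I), and shrinks when it is shorter."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    n = m.size
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w = w / w.sum()
    mu_w = 1.0 / float(np.sum(w ** 2))
    c_sigma, d_sigma = 4.0 / (n + 4.0), 1.0
    chi_n = math.sqrt(n) * (1 - 1/(4*n) + 1/(21*n*n))   # ~ E||N(0,I)||
    p_sigma = np.zeros(n)
    for _ in range(iters):
        Y = rng.standard_normal((lam, n))
        order = np.argsort([f(m + sigma * y) for y in Y])
        y_w = w @ Y[order[:mu]]
        m = m + sigma * y_w                              # update mean
        p_sigma = ((1 - c_sigma) * p_sigma +
                   math.sqrt(1 - (1 - c_sigma) ** 2) * math.sqrt(mu_w) * y_w)
        sigma *= math.exp((c_sigma / d_sigma) *
                          (np.linalg.norm(p_sigma) / chi_n - 1.0))
    return m, sigma

sphere = lambda x: float(np.sum(x ** 2))
m_end, sigma_end = csa_es(sphere, [3.0] * 10, 1.0, lam=10, mu=5, iters=300)
```

On the sphere the step-size shrinks geometrically together with the distance to the optimum, giving the log-linear convergence claimed in the conclusion.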
Covariance Matrix Adaptation: Rank-One Update

With m ← m + σ y_w, y_w = Σ_{i=1}^µ w_i y_{i:λ}, y_i ~ N_i(0, C), the covariance matrix adaptation
    C ← (1 − c_cov) C + c_cov µ_w y_w y_w^T
- learns all pairwise dependencies between variables: off-diagonal entries in the covariance matrix reflect the dependencies
- conducts a principal component analysis (PCA) of steps y_w, sequentially in time and space: eigenvectors of the covariance matrix C are the principal components / the principal axes of the mutation ellipsoid
- follows the ruling principle that the adaptation increases the probability of successful steps, y_w, to appear again
- learns a new, rotated problem representation and a new (Mahalanobis) metric: components are independent (only) in the new representation
- approximates the inverse Hessian on quadratic functions: overwhelming empirical evidence, proof is in progress
(figure: new distribution, e.g. C ← 0.8 · C + 0.2 · y_w y_w^T)
... equations ... cumulation, rank-µ, step-size control

Rank-One Update: The Algorithm

Initialize m ∈ R^n and C = I, set σ = 1, learning rate c_cov ≈ 2/n².
While not terminate:
    x_i = m + σ y_i,  y_i ~ N_i(0, C)
    m ← m + σ y_w,  where y_w = Σ_{i=1}^µ w_i y_{i:λ}
    C ← (1 − c_cov) C + c_cov µ_w y_w y_w^T      (rank-one)
where µ_w = 1 / Σ_{i=1}^µ w_i².
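The ruling principle, increasing the probability of successful steps to appear again, can be seen directly: feeding the same direction into the rank-one update repeatedly aligns the principal axis of C with that direction. A sketch (the µ_w factor is folded into c_cov for brevity):

```python
import numpy as np

def rank_one_update(C, y_w, c_cov):
    """Rank-one covariance update C <- (1-c_cov) C + c_cov y_w y_w^T:
    it increases the likelihood of the step y_w under N(0, C)."""
    return (1.0 - c_cov) * C + c_cov * np.outer(y_w, y_w)

# A recurring successful step direction
y = np.array([1.0, 1.0]) / np.sqrt(2.0)
C = np.eye(2)
for _ in range(200):
    C = rank_one_update(C, y, c_cov=0.1)

vals, vecs = np.linalg.eigh(C)
leading = vecs[:, int(np.argmax(vals))]   # principal axis of the ellipsoid
```

After many updates C is close to y y^T plus a vanishing remainder, so its leading eigenvector points along y: the mutation ellipsoid has been stretched along the repeatedly successful direction.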
Cumulation: The Evolution Path

Conceptually, the evolution path is the path the strategy takes over a number of generation steps. It can be expressed as a sum of consecutive steps of the mean m: an exponentially weighted sum of steps y_w is used,
    p_c ∝ Σ_{i=0}^{g} (1 − c_c)^{g−i} y_w^{(i)}      (exponentially fading weights)
The recursive construction of the evolution path (cumulation):
    p_c ← (1 − c_c) p_c + sqrt(1 − (1 − c_c)²) · sqrt(µ_w) · y_w
with decay factor (1 − c_c), normalization factor sqrt(1 − (1 − c_c)²) · sqrt(µ_w), and input y_w = (m − m_old)/σ, where µ_w = 1/Σ w_i² and c_c ≈ 4/n. History information is accumulated in the evolution path.

Cumulation: Utilizing the Evolution Path

We used y_w y_w^T for updating C. Because y_w y_w^T = (−y_w)(−y_w)^T, the sign of y_w is neglected. The sign information is (re-)introduced by using the evolution path p_c in place of y_w. ... equations

Cumulation is a widely used technique and is also known as
- exponential smoothing in time series / forecasting
- exponentially weighted moving average
- iterate averaging in stochastic approximation
- momentum in the back-propagation algorithm for ANNs

Using an evolution path for the rank-one update of the covariance matrix reduces the number of function evaluations needed to adapt to a straight ridge from O(n²) to O(n) [Hansen, Müller and Koumoutsakos. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation, 11(1), pp. 1-18]. The overall model complexity is n², but important parts of the model can be learned in time of order n. ... why? ... rank-µ update
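The normalization factors in the cumulation recursion are chosen so that p_c remains standard-normally distributed under random selection. A 1-D sanity check (µ_w = 1): iterating the recursion on i.i.d. N(0, 1) inputs keeps the stationary variance at one.

```python
import numpy as np

def cumulate(p, y_w, c_c, mu_w):
    """Evolution path recursion: exponentially weighted sum of steps,
    p <- (1 - c_c) p + sqrt(1 - (1 - c_c)^2) * sqrt(mu_w) * y_w."""
    return (1 - c_c) * p + np.sqrt(1 - (1 - c_c) ** 2) * np.sqrt(mu_w) * y_w

rng = np.random.default_rng(3)
p = 0.0
squares = []
for _ in range(20000):
    p = cumulate(p, rng.standard_normal(), c_c=0.2, mu_w=1.0)
    squares.append(p * p)
stationary_var = float(np.mean(squares[1000:]))   # should stay near 1
```

With decay a = 1 − c_c and input scale b = sqrt(1 − a²), the stationary variance is b²/(1 − a²) = 1 exactly, which is why ||p_σ|| can be compared against E||N(0, I)|| in the step-size update.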
Rank-µ Update

With x_i = m + σ y_i, y_i ~ N_i(0, C), and m ← m + σ y_w, y_w = Σ_{i=1}^µ w_i y_{i:λ}: the rank-µ update extends the update rule to large population sizes λ, using µ > 1 vectors to update C at each generation step. The matrix
    C_µ = Σ_{i=1}^µ w_i y_{i:λ} y_{i:λ}^T
computes a weighted mean of the outer products of the best µ steps and has rank min(µ, n) with probability one. The rank-µ update then reads
    C ← (1 − c_cov) C + c_cov C_µ
where c_cov ≈ µ_w/n² and c_cov ≤ 1.

The rank-µ update
- increases the possible learning rate in large populations, roughly from 2/n² to µ_w/n²
- can reduce the number of necessary generations roughly from O(n²) to O(n), given µ_w ∝ λ ∝ n
Therefore the rank-µ update is the primary mechanism whenever a large population size is used, say λ on the order of n or larger. The rank-one update uses the evolution path and reduces the number of necessary function evaluations to learn straight ridges from O(n²) to O(n). Rank-one update and rank-µ update can be combined.
Hansen, Müller, and Koumoutsakos. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation, 11(1), pp. 1-18.

Estimation of Distribution Algorithms

Estimate a distribution that (re-)samples the parental population; all parameters of the distribution θ are estimated from the given population.
(figures: sampling of λ solutions with C = I and σ = 1; calculating C from the best µ solutions with equal weights w_1 = ... = w_µ = 1/µ and c_cov = 1; the new distribution)

Example: EMNA (Estimation of Multivariate Normal Algorithm)
Initialize m ∈ R^n and C = I.
While not terminate:
    x_i = m + y_i,  y_i ~ N_i(0, C),  for i = 1,...,λ
    C ← (1/µ) Σ_{i=1}^µ (x_{i:λ} − m)(x_{i:λ} − m)^T
    m ← (1/µ) Σ_{i=1}^µ x_{i:λ}
Larrañaga and Lozano (eds.), Estimation of Distribution Algorithms.
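The rank property of C_µ is easy to verify: with µ selected steps in n dimensions, the weighted sum of outer products has rank min(µ, n). A small sketch of the update:

```python
import numpy as np

def rank_mu_update(C, Y_sel, w, c_cov):
    """Rank-mu update: C_mu is a weighted mean of outer products of
    the best mu steps; then C <- (1 - c_cov) C + c_cov C_mu."""
    C_mu = sum(wi * np.outer(y, y) for wi, y in zip(w, Y_sel))
    return (1.0 - c_cov) * C + c_cov * C_mu, C_mu

# mu = 2 selected steps in n = 3 dimensions: rank(C_mu) = min(mu, n) = 2
Y_sel = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
C_new, C_mu = rank_mu_update(np.eye(3), Y_sel, w=[0.5, 0.5], c_cov=0.2)
```

Because C_µ alone is rank-deficient whenever µ < n, the convex combination with the old C is what keeps the sampling distribution full-rank, which is why c_cov must stay below one.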
EMNA_global versus rank-µ CMA

EMNA_global:
    x_i = m_old + y_i,  y_i ~ N(0, C)
    m_new = m_old + (1/µ) Σ y_{i:λ}
    C ← (1/µ) Σ (x_{i:λ} − m_new)(x_{i:λ} − m_new)^T
conducts a PCA of points.

rank-µ CMA (with equal weights):
    x_i = m_old + y_i,  y_i ~ N(0, C)
    m_new = m_old + (1/µ) Σ y_{i:λ}
    C ← (1/µ) Σ (x_{i:λ} − m_old)(x_{i:λ} − m_old)^T
conducts a PCA of steps.

(figures: sampling of λ solutions (dots); calculating C from the best µ solutions; the new distribution)
The CMA update yields a larger variance, in particular in gradient direction, because m_new is the minimizer of the variances when calculating C.
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102.

Conclusion: What Did We Achieve?

- Covariance matrix adaptation reduces any convex quadratic function f(x) = x^T H x to the sphere model f(x) = x^T x, without use of derivatives: lines of equal density align with lines of equal fitness, C ∝ H^{-1}
- Step-size control: converge log-linearly on the sphere function
- Rank-based selection: the same holds for any g(f(x)) = g(x^T H x), g : R → R strictly monotonic (order preserving)

Experimentum Crucis (1): f convex quadratic, separable
(figures: a CMA-ES run showing abs(f), f − min(f), σ, and axis ratio; object variables; standard deviations of all variables and scaling of all main axes, over function evaluations)
    f(x) = Σ_{i=1}^n α^{(i−1)/(n−1)} x_i²,  α = 10^6
Experimentum Crucis (2): f convex quadratic, as before but non-separable (rotated)
(figures: the same CMA-ES diagnostics as before; the run is essentially unchanged under rotation)
    f(x) = g(x^T H x),  g : R → R strictly monotonic
C ∝ H^{-1} holds for all g, H. ... internal parameters

Invariance Under Strictly Monotonically Increasing Functions

Rank-based algorithms: selection is based on the rank,
    f(x_{1:λ}) ≤ f(x_{2:λ}) ≤ ... ≤ f(x_{λ:λ})
and the update of all parameters uses only the rank. Since g preserves ranks,
    g(f(x_{1:λ})) ≤ g(f(x_{2:λ})) ≤ ... ≤ g(f(x_{λ:λ}))
the algorithm behaves identically on f and g ∘ f.
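The invariance claim is easy to check numerically: applying a strictly increasing g to the f-values leaves the ranking, and therefore every rank-based update, unchanged. A sketch where g is an arbitrary increasing map on the non-negative f-values (the particular g is our choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((10, 4))             # a population of 10 candidates

def f(x):
    return float(x @ x)                      # f(x) = x^T x >= 0

def g(v):
    return np.exp(v) + v ** 3                # strictly increasing for v >= 0

ranks_f = np.argsort([f(x) for x in X])      # ranking under f
ranks_gf = np.argsort([g(f(x)) for x in X])  # ranking under g o f
same = bool((ranks_f == ranks_gf).all())     # identical orderings
```

A rank-based ES therefore performs identically on the sphere x^T x and on, say, exp(x^T x) + (x^T x)^3, while any method relying on f-values or gradients generally does not.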
More informationViability Principles for Constrained Optimization Using a (1+1)-CMA-ES
Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Andrea Maesani and Dario Floreano Laboratory of Intelligent Systems, Institute of Microengineering, Ecole Polytechnique Fédérale de
More informationChapter 10. Theory of Evolution Strategies: A New Perspective
A. Auger and N. Hansen Theory of Evolution Strategies: a New Perspective In: A. Auger and B. Doerr, eds. (2010). Theory of Randomized Search Heuristics: Foundations and Recent Developments. World Scientific
More informationA0M33EOA: EAs for Real-Parameter Optimization. Differential Evolution. CMA-ES.
A0M33EOA: EAs for Real-Parameter Optimization. Differential Evolution. CMA-ES. Petr Pošík Czech Technical University in Prague Faculty of Electrical Engineering Department of Cybernetics Many parts adapted
More informationComputational Intelligence Winter Term 2017/18
Computational Intelligence Winter Term 2017/18 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS 11) Fakultät für Informatik TU Dortmund mutation: Y = X + Z Z ~ N(0, C) multinormal distribution
More informationLocal-Meta-Model CMA-ES for Partially Separable Functions
Local-Meta-Model CMA-ES for Partially Separable Functions Zyed Bouzarkouna, Anne Auger, Didier Yu Ding To cite this version: Zyed Bouzarkouna, Anne Auger, Didier Yu Ding. Local-Meta-Model CMA-ES for Partially
More informationChallenges in High-dimensional Reinforcement Learning with Evolution Strategies
Challenges in High-dimensional Reinforcement Learning with Evolution Strategies Nils Müller and Tobias Glasmachers Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany {nils.mueller, tobias.glasmachers}@ini.rub.de
More informationEfficient Covariance Matrix Update for Variable Metric Evolution Strategies
Efficient Covariance Matrix Update for Variable Metric Evolution Strategies Thorsten Suttorp, Nikolaus Hansen, Christian Igel To cite this version: Thorsten Suttorp, Nikolaus Hansen, Christian Igel. Efficient
More informationParticle Swarm Optimization with Velocity Adaptation
In Proceedings of the International Conference on Adaptive and Intelligent Systems (ICAIS 2009), pp. 146 151, 2009. c 2009 IEEE Particle Swarm Optimization with Velocity Adaptation Sabine Helwig, Frank
More informationNoisy Optimization: A Theoretical Strategy Comparison of ES, EGS, SPSA & IF on the Noisy Sphere
Noisy Optimization: A Theoretical Strategy Comparison of ES, EGS, SPSA & IF on the Noisy Sphere S. Finck Vorarlberg University of Applied Sciences Hochschulstrasse 1 Dornbirn, Austria steffen.finck@fhv.at
More informationA CMA-ES with Multiplicative Covariance Matrix Updates
A with Multiplicative Covariance Matrix Updates Oswin Krause Department of Computer Science University of Copenhagen Copenhagen,Denmark oswin.krause@di.ku.dk Tobias Glasmachers Institut für Neuroinformatik
More informationPower Prediction in Smart Grids with Evolutionary Local Kernel Regression
Power Prediction in Smart Grids with Evolutionary Local Kernel Regression Oliver Kramer, Benjamin Satzger, and Jörg Lässig International Computer Science Institute, Berkeley CA 94704, USA, {okramer, satzger,
More informationTHIS paper considers the general nonlinear programming
IEEE TRANSACTIONS ON SYSTEM, MAN, AND CYBERNETICS: PART C, VOL. X, NO. XX, MONTH 2004 (SMCC KE-09) 1 Search Biases in Constrained Evolutionary Optimization Thomas Philip Runarsson, Member, IEEE, and Xin
More informationBenchmarking Natural Evolution Strategies with Adaptation Sampling on the Noiseless and Noisy Black-box Optimization Testbeds
Benchmarking Natural Evolution Strategies with Adaptation Sampling on the Noiseless and Noisy Black-box Optimization Testbeds Tom Schaul Courant Institute of Mathematical Sciences, New York University
More informationEvolutionary Gradient Search Revisited
!"#%$'&(! )*+,.-0/214365879/;:!=0? @B*CEDFGHC*I>JK9ML NPORQTS9UWVRX%YZYW[*V\YP] ^`_acb.d*e%f`gegih jkelm_acnbpò q"r otsvu_nxwmybz{lm }wm~lmw gih;g ~ c ƒwmÿ x }n+bp twe eˆbkea} }q k2špe oœ Ek z{lmoenx
More informationNumerical Optimization: Basic Concepts and Algorithms
May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some
More informationarxiv: v1 [cs.ne] 9 May 2016
Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES) arxiv:1605.02720v1 [cs.ne] 9 May 2016 ABSTRACT Ilya Loshchilov University of Freiburg Freiburg, Germany ilya.loshchilov@gmail.com
More informationTuning Parameters across Mixed Dimensional Instances: A Performance Scalability Study of Sep-G-CMA-ES
Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability
More informationBBOB-Benchmarking Two Variants of the Line-Search Algorithm
BBOB-Benchmarking Two Variants of the Line-Search Algorithm Petr Pošík Czech Technical University in Prague, Faculty of Electrical Engineering, Dept. of Cybernetics Technická, Prague posik@labe.felk.cvut.cz
More informationarxiv: v6 [math.oc] 11 May 2018
Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions Youhei Akimoto a,, Anne Auger b, Nikolaus Hansen b a Faculty of Engineering, Information and
More informationNatural Evolution Strategies
Natural Evolution Strategies Daan Wierstra, Tom Schaul, Jan Peters and Juergen Schmidhuber Abstract This paper presents Natural Evolution Strategies (NES), a novel algorithm for performing real-valued
More informationOptimization using derivative-free and metaheuristic methods
Charles University in Prague Faculty of Mathematics and Physics MASTER THESIS Kateřina Márová Optimization using derivative-free and metaheuristic methods Department of Numerical Mathematics Supervisor
More informationQuality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions
Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions Youhei Akimoto, Anne Auger, Nikolaus Hansen To cite this version: Youhei Akimoto, Anne Auger,
More informationESANN'2001 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), April 2001, D-Facto public., ISBN ,
ESANN'200 proceedings - European Symposium on Artificial Neural Networks Bruges (Belgium), 25-27 April 200, D-Facto public., ISBN 2-930307-0-3, pp. 79-84 Investigating the Influence of the Neighborhood
More informationGaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress
Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague
More informationOptimum Tracking with Evolution Strategies
Optimum Tracking with Evolution Strategies Dirk V. Arnold dirk@cs.dal.ca Faculty of Computer Science, Dalhousie University, Halifax, Nova Scotia, Canada B3H 1W5 Hans-Georg Beyer hans-georg.beyer@fh-vorarlberg.ac.at
More informationDecomposition and Metaoptimization of Mutation Operator in Differential Evolution
Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Karol Opara 1 and Jaros law Arabas 2 1 Systems Research Institute, Polish Academy of Sciences 2 Institute of Electronic
More informationBenchmarking Projection-Based Real Coded Genetic Algorithm on BBOB-2013 Noiseless Function Testbed
Benchmarking Projection-Based Real Coded Genetic Algorithm on BBOB-2013 Noiseless Function Testbed Babatunde Sawyerr 1 Aderemi Adewumi 2 Montaz Ali 3 1 University of Lagos, Lagos, Nigeria 2 University
More informationBenchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed
Benchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed Lukáš Bajer Institute of Computer Science Academy of Sciences of the Czech Republic and Faculty of Mathematics
More informationTECHNISCHE UNIVERSITÄT DORTMUND REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531
TECHNISCHE UNIVERSITÄT DORTMUND REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 5 Design and Management of Complex Technical Processes and Systems by means of Computational Intelligence
More informationarxiv: v4 [math.oc] 28 Apr 2017
Journal of Machine Learning Research 18 (2017) 1-65 Submitted 11/14; Revised 10/16; Published 4/17 Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles arxiv:1106.3708v4
More informationAdaptive Generation-Based Evolution Control for Gaussian Process Surrogate Models
J. Hlaváčová (Ed.): ITAT 07 Proceedings, pp. 36 43 CEUR Workshop Proceedings Vol. 885, ISSN 63-0073, c 07 J. Repický, L. Bajer, Z. Pitra, M. Holeňa Adaptive Generation-Based Evolution Control for Gaussian
More informationDiscriminative Direction for Kernel Classifiers
Discriminative Direction for Kernel Classifiers Polina Golland Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 polina@ai.mit.edu Abstract In many scientific and engineering
More informationTruncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces
Truncation Selection and Gaussian EDA: Bounds for Sustainable Progress in High-Dimensional Spaces Petr Pošík Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics
More informationA Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion
A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion Nikolaus Hansen, Andre Niederberger, Lino Guzzella, Petros Koumoutsakos To cite this
More informationGenetic Algorithm: introduction
1 Genetic Algorithm: introduction 2 The Metaphor EVOLUTION Individual Fitness Environment PROBLEM SOLVING Candidate Solution Quality Problem 3 The Ingredients t reproduction t + 1 selection mutation recombination
More informationOverview of ECNN Combinations
1 Overview of ECNN Combinations Evolutionary Computation ECNN Neural Networks by Paulo Cortez and Miguel Rocha pcortez@dsi.uminho.pt mrocha@di.uminho.pt (Presentation available at: http://www.dsi.uminho.pt/
More informationGradient-based Adaptive Stochastic Search
1 / 41 Gradient-based Adaptive Stochastic Search Enlu Zhou H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology November 5, 2014 Outline 2 / 41 1 Introduction
More informationPart 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)
Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x
More informationEvolution Strategies for Constants Optimization in Genetic Programming
Evolution Strategies for Constants Optimization in Genetic Programming César L. Alonso Centro de Inteligencia Artificial Universidad de Oviedo Campus de Viesques 33271 Gijón calonso@uniovi.es José Luis
More informationCross Entropy and Adaptive Variance Scaling in Continuous EDA
Cross Entropy and Adaptive Variance Scaling in Continuous EDA Cai Yunpeng caiyp78@gmail.com Xu Hua xuhua@tsinghua.edu.cn Sun Xiaomin sxm23@tsinghua.edu.cn Jia Peifa dcsjpf@tsinghua.edu.cn State Key Lab
More informationA Hybrid Algorithm Based on Evolution Strategies and Instance-Based Learning, Used in Two-dimensional Fitting of Brightness Profiles in Galaxy Images
A Hybrid Algorithm Based on Evolution Strategies and Instance-Based Learning, Used in Two-dimensional Fitting of Brightness Profiles in Galaxy Images Juan Carlos Gomez 1 and Olac Fuentes 2 1 INAOE, Computer
More informationBehaviour of the UMDA c algorithm with truncation selection on monotone functions
Mannheim Business School Dept. of Logistics Technical Report 01/2005 Behaviour of the UMDA c algorithm with truncation selection on monotone functions Jörn Grahl, Stefan Minner, Franz Rothlauf Technical
More informationStochastic Search using the Natural Gradient
Keywords: stochastic search, natural gradient, evolution strategies Sun Yi Daan Wierstra Tom Schaul Jürgen Schmidhuber IDSIA, Galleria 2, Manno 6928, Switzerland yi@idsiach daan@idsiach tom@idsiach juergen@idsiach
More informationNatural Gradient for the Gaussian Distribution via Least Squares Regression
Natural Gradient for the Gaussian Distribution via Least Squares Regression Luigi Malagò Department of Electrical and Electronic Engineering Shinshu University & INRIA Saclay Île-de-France 4-17-1 Wakasato,
More informationarxiv: v2 [cs.ai] 13 Jun 2011
A Linear Time Natural Evolution Strategy for Non-Separable Functions Yi Sun, Faustino Gomez, Tom Schaul, and Jürgen Schmidhuber IDSIA, University of Lugano & SUPSI, Galleria, Manno, CH-698, Switzerland
More informationLECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION
15-382 COLLECTIVE INTELLIGENCE - S19 LECTURE 22: SWARM INTELLIGENCE 3 / CLASSICAL OPTIMIZATION TEACHER: GIANNI A. DI CARO WHAT IF WE HAVE ONE SINGLE AGENT PSO leverages the presence of a swarm: the outcome
More informationMachine Learning (CS 567) Lecture 5
Machine Learning (CS 567) Lecture 5 Time: T-Th 5:00pm - 6:20pm Location: GFS 118 Instructor: Sofus A. Macskassy (macskass@usc.edu) Office: SAL 216 Office hours: by appointment Teaching assistant: Cheol
More informationInvestigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems
Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Miguel Leon Ortiz and Ning Xiong Mälardalen University, Västerås, SWEDEN Abstract. Differential evolution
More informationStochastic Local Search in Continuous Domains
Stochastic Local Search in Continuous Domains Questions to be Answered When Designing A Novel Algorithm Petr Pošík Czech Technical University in Prague Faculty of Electrical Engineering, Department of
More information1. Background: The SVD and the best basis (questions selected from Ch. 6- Can you fill in the exercises?)
Math 35 Exam Review SOLUTIONS Overview In this third of the course we focused on linear learning algorithms to model data. summarize: To. Background: The SVD and the best basis (questions selected from
More informationLecture 5: Linear models for classification. Logistic regression. Gradient Descent. Second-order methods.
Lecture 5: Linear models for classification. Logistic regression. Gradient Descent. Second-order methods. Linear models for classification Logistic regression Gradient descent and second-order methods
More informationThe Story So Far... The central problem of this course: Smartness( X ) arg max X. Possibly with some constraints on X.
Heuristic Search The Story So Far... The central problem of this course: arg max X Smartness( X ) Possibly with some constraints on X. (Alternatively: arg min Stupidness(X ) ) X Properties of Smartness(X)
More information