Cumulative Step-size Adaptation on Linear Functions


Alexandre Chotard, Anne Auger, Nikolaus Hansen. Cumulative Step-size Adaptation on Linear Functions. PPSN 2012, Sep 2012, Taormina, Italy. Springer, pp. 72-81, 2012. <hal-00759577>
HAL Id: hal-00759577
https://hal.inria.fr/hal-00759577

Cumulative Step-size Adaptation on Linear Functions

Alexandre Chotard, Anne Auger and Nikolaus Hansen
TAO team, INRIA Saclay-Ile-de-France, LRI, Paris-Sud University, France
firstname.lastname@lri.fr

Abstract. The CSA-ES is an Evolution Strategy with Cumulative Step-size Adaptation, where the step-size is adapted by measuring the length of a so-called cumulative path. The cumulative path is a combination of the previous steps realized by the algorithm, where the importance of each step decreases with time. This article studies the CSA-ES on composites of strictly increasing functions with affine linear functions through the investigation of its underlying Markov chains. Rigorous results on the change and the variation of the step-size are derived with and without cumulation. The step-size diverges geometrically fast in most cases. Furthermore, the influence of the cumulation parameter is studied.

Keywords: CSA, cumulative path, evolution path, evolution strategies, step-size adaptation

1 Introduction

Evolution strategies (ESs) are continuous stochastic optimization algorithms searching for the minimum of a real-valued function $f : \mathbb{R}^n \to \mathbb{R}$. In the $(1,\lambda)$-ES, in each iteration $\lambda$ new children are generated from a single parent point $X \in \mathbb{R}^n$ by adding a random Gaussian vector to the parent,

$$X \in \mathbb{R}^n \mapsto X + \sigma\, N(0, C).$$

Here, $\sigma \in \mathbb{R}^{+}$ is called the step-size and $C$ is a covariance matrix. The best of the $\lambda$ children, i.e. the one with the lowest $f$-value, becomes the parent of the next iteration. To achieve reasonably fast convergence, step-size and covariance matrix have to be adapted throughout the iterations of the algorithm. In this paper, $C$ is the identity and we investigate the so-called Cumulative Step-size Adaptation (CSA), which is used to adapt the step-size in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [12,10]. In CSA, a cumulative path is introduced, which is a combination of all steps the algorithm has made, where the importance of a step decreases exponentially with time. Arnold and Beyer studied the behavior of CSA on sphere, cigar and ridge functions [1,2,3,7] and on dynamical optimization problems where the optimum moves randomly [5] or linearly [6]. Arnold also studied the behaviour of a $(1,\lambda)$-ES on linear functions with a linear constraint [4].
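For concreteness, the following minimal Python sketch (illustrative only, not part of the original paper; numpy is assumed) implements the plain $(1,\lambda)$-ES sampling-and-selection loop described above, with $C = I_n$ and a fixed step-size; step-size adaptation is introduced in Section 2.

```python
import numpy as np

def one_iteration(x, sigma, f, lam, rng):
    """One (1,lambda)-ES iteration: sample lam children around the parent x
    and return the child with the lowest f-value as the new parent."""
    children = x + sigma * rng.standard_normal((lam, x.size))
    return children[np.argmin([f(y) for y in children])]

rng = np.random.default_rng(0)
x, sigma, lam = np.zeros(10), 1.0, 8
for _ in range(100):
    x = one_iteration(x, sigma, lambda y: y[0], lam, rng)
print(x[0])  # decreases steadily on the linear function f(x) = x_1
```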

In this paper, we study the behaviour of the $(1,\lambda)$-CSA-ES on composites of strictly increasing functions with affine linear functions, e.g. $f : x \mapsto \exp(x_1)$. Because the CSA-ES is invariant under translation, under change of an orthonormal basis (rotation and reflection), and under strictly increasing transformations of the $f$-value, we investigate, w.l.o.g., $f : x \mapsto x_1$. Linear functions model the situation when the current parent is far (here infinitely far) from the optimum of a smooth function. To be far from the optimum means that the distance to the optimum is large relative to the step-size $\sigma$. This situation is undesirable and threatens premature convergence. The situation should be handled well, by increasing step widths, by any search algorithm, and it is not handled well by the $(1,2)$-$\sigma$SA-ES [9]. Solving linear functions is also very useful to prove convergence independently of the initial state on more general function classes. In Section 2 we introduce the $(1,\lambda)$-CSA-ES and some of its characteristics on linear functions. In Sections 3 and 4 we study $\ln \sigma_t$ without and with cumulation, respectively. Section 5 presents an analysis of the variance of the logarithm of the step-size, and in Section 6 we summarize our results.

Notations. In this paper, we denote by $t$ the iteration or time index, $n$ the search space dimension, and $N(0,1)$ a standard normal distribution, i.e. a normal distribution with mean zero and standard deviation one. The multivariate normal distribution with mean vector zero and covariance matrix identity will be denoted $N(0, I_n)$, the $i$-th order statistic of $\lambda$ standard normal distributions $N_{i:\lambda}$, and $\Psi_{i:\lambda}$ its distribution. If $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ is a vector, then $[x]_i$ will be its value on the $i$-th dimension, that is $[x]_i = x_i$. A random variable $X$ distributed according to a law $\mathcal{L}$ will be denoted $X \sim \mathcal{L}$.

2 The $(1,\lambda)$-CSA-ES

We denote with $X_t$ the parent at the $t$-th iteration. From the parent point $X_t$, $\lambda$ children are generated: $Y_{t,i} = X_t + \sigma_t \xi_{t,i}$ with $i \in [\![1,\lambda]\!]$, $\xi_{t,i} \sim N(0, I_n)$, and $(\xi_{t,i})_{i \in [\![1,\lambda]\!]}$ i.i.d. Due to the $(1,\lambda)$ selection scheme, from these children the one minimizing the function $f$ is selected: $X_{t+1} = \operatorname{argmin}\{f(Y),\; Y \in \{Y_{t,1}, \dots, Y_{t,\lambda}\}\}$. This latter equation implicitly defines the random variable $\xi_t$ as

$$X_{t+1} = X_t + \sigma_t \xi_t.$$

In order to adapt the step-size, the cumulative path is defined as

$$p_{t+1} = (1-c)\,p_t + \sqrt{c(2-c)}\;\xi_t \tag{1}$$

with $0 < c \le 1$. The constant $1/c$ represents the life span of the information contained in $p_t$, as after $1/c$ generations $p_t$ is multiplied by a factor that approaches $1/e \approx 0.37$ for $c \to 0$ from below (indeed $(1-c)^{1/c} \to \exp(-1)$). The typical value for $c$ is between $1/\sqrt{n}$ and $1/n$. We will consider that $p_0 \sim N(0, I_n)$ as it makes the algorithm easier to analyze. The normalization constant $\sqrt{c(2-c)}$ in front of $\xi_t$ in Eq. (1) is chosen so that under random selection, if $p_t$ is distributed according to $N(0, I_n)$, then also $p_{t+1}$ follows $N(0, I_n)$. Hence the length of the path can be compared to the expected length of $\|N(0, I_n)\|$ representing the expected length under random selection.
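As a sanity check of the normalization constant in Eq. (1), the sketch below (illustrative, numpy assumed) iterates the path update under random selection, i.e. with $\xi_t \sim N(0, I_n)$; the time average of $\|p_t\|^2$ should then stay close to $n$.

```python
import numpy as np

def path_update(p, xi, c):
    """Cumulative path of Eq. (1): p_{t+1} = (1-c) p_t + sqrt(c(2-c)) xi_t."""
    return (1.0 - c) * p + np.sqrt(c * (2.0 - c)) * xi

rng = np.random.default_rng(1)
n, c = 10, 0.3
p = rng.standard_normal(n)                         # p_0 ~ N(0, I_n)
sq_norms = []
for _ in range(20000):
    p = path_update(p, rng.standard_normal(n), c)  # random selection
    sq_norms.append(p @ p)
print(np.mean(sq_norms))  # ~ n = 10, i.e. E||N(0, I_n)||^2
```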

The step-size update rule increases the step-size if the length of the path is larger than the length under random selection, and decreases it if the length is shorter than under random selection:

$$\sigma_{t+1} = \sigma_t \exp\!\left(\frac{c}{d_\sigma}\left(\frac{\|p_{t+1}\|}{E\|N(0, I_n)\|} - 1\right)\right) \tag{2}$$

where the damping parameter $d_\sigma$ determines how much the step-size can change and is set to $d_\sigma = 1$. A simplification of the update considers the squared length of the path [5]:

$$\sigma_{t+1} = \sigma_t \exp\!\left(\frac{c}{2 d_\sigma}\left(\frac{\|p_{t+1}\|^2}{n} - 1\right)\right) \tag{3}$$

This rule is easier to analyse and we will use it throughout the paper.

Preliminary results on linear functions. Selection on the linear function $f(x) = [x]_1$ is determined by $[X_t]_1 + \sigma_t [\xi_t]_1 \le [X_t]_1 + \sigma_t [\xi_{t,i}]_1$ for all $i$, which is equivalent to $[\xi_t]_1 \le [\xi_{t,i}]_1$ for all $i$, where by definition $[\xi_{t,i}]_1$ is distributed according to $N(0,1)$. Therefore the first coordinate of the selected step is distributed according to $N_{1:\lambda}$ and all other coordinates are distributed according to $N(0,1)$, i.e. selection does not bias the distribution along the coordinates $2, \dots, n$. Overall we have the following result.

Lemma 1. On the linear function $f(x) = [x]_1$, the selected steps $(\xi_t)_{t \in \mathbb{N}}$ of the $(1,\lambda)$-ES are i.i.d. and distributed according to the vector $\xi := (N_{1:\lambda}, N_2, \dots, N_n)$, where $N_i \sim N(0,1)$ for $i \ge 2$.

Because the selected steps $\xi_t$ are i.i.d., the path defined in Eq. (1) is an autonomous Markov chain, which we will denote $P = (p_t)_{t \in \mathbb{N}}$. Note that if the distribution of the selected step depended on $(X_t, \sigma_t)$, as is generally the case on non-linear functions, then the path alone would not be a Markov chain; however $(X_t, \sigma_t, p_t)$ would be an autonomous Markov chain. In order to study whether the $(1,\lambda)$-CSA-ES diverges geometrically, we investigate the logarithm of the step-size change, whose formula can be immediately deduced from Eq. (3):

$$\ln\frac{\sigma_{t+1}}{\sigma_t} = \frac{c}{2 d_\sigma}\left(\frac{\|p_{t+1}\|^2}{n} - 1\right) \tag{4}$$

By summing up this equation from $0$ to $t-1$ we obtain

$$\frac{1}{t}\ln\frac{\sigma_t}{\sigma_0} = \frac{c}{2 d_\sigma t}\sum_{k=1}^{t}\left(\frac{\|p_k\|^2}{n} - 1\right). \tag{5}$$

We are interested to know whether $\frac{1}{t}\ln(\sigma_t/\sigma_0)$ converges to a constant. In case this constant is positive, this will prove that the $(1,\lambda)$-CSA-ES diverges geometrically. We recognize thanks to (5) that this quantity is equal to a sum of $t$ terms divided by $t$, which suggests the use of the law of large numbers to prove convergence of (5). We will start by investigating the case without cumulation, $c = 1$ (Section 3), and then the case with cumulation (Section 4).
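Putting Eq. (1) and Eq. (3) together, a compact sketch of the $(1,\lambda)$-CSA-ES on $f(x) = [x]_1$ reads as follows (illustrative, not from the paper; parameter defaults and numpy are assumptions):

```python
import numpy as np

def csa_es_log_sigma(n=10, lam=8, c=None, d_sigma=1.0, T=5000, seed=2):
    """(1,lambda)-CSA-ES on f(x) = [x]_1 with the squared-length rule, Eq. (3).
    Returns the empirical rate (1/T) ln(sigma_T / sigma_0)."""
    rng = np.random.default_rng(seed)
    c = 1.0 / np.sqrt(n) if c is None else c
    p = rng.standard_normal(n)                     # p_0 ~ N(0, I_n)
    log_sigma = 0.0
    for _ in range(T):
        xi = rng.standard_normal((lam, n))
        best = xi[np.argmin(xi[:, 0])]             # selection acts on [x]_1 only
        p = (1.0 - c) * p + np.sqrt(c * (2.0 - c)) * best       # Eq. (1)
        log_sigma += (c / (2.0 * d_sigma)) * (p @ p / n - 1.0)  # Eq. (3)/(4)
    return log_sigma / T

print(csa_es_log_sigma())  # positive: the step-size grows geometrically
```

Note that $X_t$ itself need not be tracked here: by Lemma 1, on $f(x) = [x]_1$ selection depends only on the ranking of the first coordinates of the sampled steps.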

3 Divergence rate of the $(1,\lambda)$-CSA-ES without cumulation

In this section we study the $(1,\lambda)$-CSA-ES without cumulation, i.e. $c = 1$. In this case, the path always equals the selected step, i.e. for all $t$ we have $p_{t+1} = \xi_t$. We have proven in Lemma 1 that the $\xi_t$ are i.i.d. according to $\xi$. This allows us to use the standard law of large numbers to find the limit of $\frac{1}{t}\ln(\sigma_t/\sigma_0)$ as well as to compute the expected log-step-size change.

Proposition 1. Let $\Delta_\sigma := \frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big)$. On linear functions, the $(1,\lambda)$-CSA-ES without cumulation satisfies (i) almost surely $\lim_{t\to\infty} \frac{1}{t}\ln(\sigma_t/\sigma_0) = \Delta_\sigma$, and (ii) for all $t \in \mathbb{N}$, $E[\ln(\sigma_{t+1}/\sigma_t)] = \Delta_\sigma$.

Proof. We have identified in Lemma 1 that the first coordinate of $\xi_t$ is distributed according to $N_{1:\lambda}$ and the other coordinates according to $N(0,1)$, hence $E[\|\xi_t\|^2] = E\big[[\xi_t]_1^2\big] + \sum_{i=2}^{n} E\big[[\xi_t]_i^2\big] = E[N_{1:\lambda}^2] + (n-1)$. Therefore $E[\|\xi_t\|^2]/n - 1 = \big(E[N_{1:\lambda}^2] - 1\big)/n$. By applying this to Eq. (4), we deduce that $E[\ln(\sigma_{t+1}/\sigma_t)] = \frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big)$. Furthermore, as $E[N_{1:\lambda}^2] \le \lambda E[N(0,1)^2] = \lambda < \infty$, we have $E[\|\xi_t\|^2] < \infty$. The sequence $(\xi_t)_{t\in\mathbb{N}}$ being i.i.d. according to Lemma 1, and being integrable as we just showed, we can apply the strong law of large numbers to Eq. (5). We obtain

$$\frac{1}{t}\ln\frac{\sigma_t}{\sigma_0} = \frac{1}{2 d_\sigma t}\sum_{k=0}^{t-1}\left(\frac{\|\xi_k\|^2}{n} - 1\right) \xrightarrow[t\to\infty]{a.s.} \frac{1}{2 d_\sigma}\left(\frac{E[\|\xi\|^2]}{n} - 1\right) = \frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big).$$

The proposition reveals that the sign of $E[N_{1:\lambda}^2] - 1$ determines whether the step-size diverges to infinity. In the following, we show that $E[N_{1:\lambda}^2]$ increases in $\lambda$ for $\lambda \ge 2$ and that the $(1,\lambda)$-ES diverges for $\lambda \ge 3$. For $\lambda = 1$ and $\lambda = 2$, the step-size follows a random walk on the log-scale.

Lemma 2. Let $(N_i)_{i \in [\![1,\lambda]\!]}$ be independent random variables distributed according to $N(0,1)$, and $N_{i:\lambda}$ the $i$-th order statistic of $(N_i)_{i \in [\![1,\lambda]\!]}$. Then $E[N_{1:1}^2] = E[N_{1:2}^2] = 1$. In addition, for all $\lambda \ge 2$, $E[N_{1:\lambda+1}^2] > E[N_{1:\lambda}^2]$.

Proof. (See [8] for the full proof.) The idea of the proof is to use the symmetry of the normal distribution to show that, for two random variables $U \sim \Psi_{1:\lambda+1}$ and $V \sim \Psi_{1:\lambda}$, for every event $E_1$ where $U^2 < V^2$ there exists another event $E_2$ counterbalancing the effect of $E_1$, i.e. $\int_{E_1} (u^2 - v^2) f_{U,V}(u,v)\,du\,dv = \int_{E_2} (v^2 - u^2) f_{U,V}(u,v)\,du\,dv$, with $f_{U,V}$ the joint density of the couple $(U,V)$. We then have $E[N_{1:\lambda+1}^2] \ge E[N_{1:\lambda}^2]$. As there is a non-negligible set of events $E_3$, distinct from $E_1$ and $E_2$, where $U^2 > V^2$, we have $E[N_{1:\lambda+1}^2] > E[N_{1:\lambda}^2]$. For $\lambda = 1$, $N_{1:1} \sim N(0,1)$, so $E[N_{1:1}^2] = 1$. For $\lambda = 2$ we have $E[N_{1:2}^2 + N_{2:2}^2] = 2 E[N(0,1)^2] = 2$, and since the normal distribution is symmetric, $E[N_{2:2}^2] = E[N_{1:2}^2]$, hence $E[N_{1:2}^2] = 1$.
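The moments $E[N_{1:\lambda}]$ and $E[N_{1:\lambda}^2]$ that drive Proposition 1 and Lemma 2 are easy to estimate by Monte Carlo; the sketch below (illustrative, numpy assumed) reproduces the values $E[N_{1:1}^2] = E[N_{1:2}^2] = 1$ and the growth in $\lambda$.

```python
import numpy as np

def min_order_stat_moments(lam, m=10**6, seed=3):
    """Monte Carlo estimates of E[N_{1:lam}] and E[N_{1:lam}^2], the first two
    moments of the minimum of lam independent standard normals."""
    rng = np.random.default_rng(seed)
    mins = rng.standard_normal((m, lam)).min(axis=1)
    return mins.mean(), np.mean(mins**2)

for lam in (1, 2, 3, 8):
    e1, e2 = min_order_stat_moments(lam)
    print(lam, round(e1, 3), round(e2, 3))
# E[N_{1:1}^2] = E[N_{1:2}^2] = 1 (Lemma 2); E[N_{1:lam}^2] > 1 for lam >= 3
```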

We can now link Proposition 1 and Lemma 2 in the following theorem:

Theorem 1. On linear functions, for $\lambda \ge 3$, the step-size of the $(1,\lambda)$-CSA-ES without cumulation ($c = 1$) diverges geometrically almost surely and in expectation at the rate $\frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big)$, i.e.

$$\frac{1}{t}\ln\frac{\sigma_t}{\sigma_0} \xrightarrow[t\to\infty]{a.s.} E\!\left[\ln\frac{\sigma_{t+1}}{\sigma_t}\right] = \frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big). \tag{6}$$

For $\lambda = 1$ and $\lambda = 2$, without cumulation, the logarithm of the step-size does an additive unbiased random walk, i.e. $\ln \sigma_{t+1} = \ln \sigma_t + W_t$ where $E[W_t] = 0$. More precisely, $W_t \sim \frac{1}{2 d_\sigma}(\chi_n^2/n - 1)$ for $\lambda = 1$, and $W_t \sim \frac{1}{2 d_\sigma}\big((N_{1:2}^2 + \chi_{n-1}^2)/n - 1\big)$ for $\lambda = 2$, where $\chi_k^2$ stands for the chi-squared distribution with $k$ degrees of freedom.

Proof. For $\lambda > 2$, from Lemma 2 we know that $E[N_{1:\lambda}^2] > E[N_{1:2}^2] = 1$. Therefore $E[N_{1:\lambda}^2] - 1 > 0$, hence Eq. (6) is strictly positive, and with Proposition 1 we get that the step-size diverges geometrically almost surely at the rate $\frac{1}{2 d_\sigma n}\big(E[N_{1:\lambda}^2] - 1\big)$. With Eq. (4) we have $\ln \sigma_{t+1} = \ln \sigma_t + W_t$, with $W_t = \frac{1}{2 d_\sigma}(\|\xi_t\|^2/n - 1)$. For $\lambda = 1$ and $\lambda = 2$, according to Lemma 2, $E[W_t] = 0$. Hence $\ln \sigma_t$ does an additive unbiased random walk. Furthermore $\|\xi\|^2 = N_{1:\lambda}^2 + \chi_{n-1}^2$, so for $\lambda = 1$, since $N_{1:1} = N(0,1)$, $\|\xi\|^2 = \chi_n^2$.

In [8] we extend this result on the step-size to $[X_t]_1$, which diverges geometrically almost surely at the same rate.
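Theorem 1 can be observed numerically. The sketch below (illustrative, not from the paper; numpy assumed) estimates the slope of $\ln \sigma_t$ for $c = 1$ and compares it with the predicted rate $\big(E[N_{1:\lambda}^2] - 1\big)/(2 d_\sigma n)$:

```python
import numpy as np

def slope_no_cumulation(lam, n=10, d_sigma=1.0, T=20000, seed=4):
    """Empirical (1/T) ln(sigma_T/sigma_0) for c = 1, where p_{t+1} = xi_t."""
    rng = np.random.default_rng(seed)
    log_sigma = 0.0
    for _ in range(T):
        xi = rng.standard_normal((lam, n))
        best = xi[np.argmin(xi[:, 0])]                           # Lemma 1
        log_sigma += (best @ best / n - 1.0) / (2.0 * d_sigma)   # Eq. (4), c = 1
    return log_sigma / T

for lam in (2, 3, 8):
    mins = np.random.default_rng(5).standard_normal((10**6, lam)).min(axis=1)
    predicted = (np.mean(mins**2) - 1.0) / (2.0 * 1.0 * 10)
    print(lam, slope_no_cumulation(lam), predicted)  # ~0 for lam = 2
```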

4 Divergence rate of the $(1,\lambda)$-CSA-ES with cumulation

We are now investigating the $(1,\lambda)$-CSA-ES with cumulation, i.e. $0 < c < 1$. The path $P$ is then a Markov chain and, contrary to the case where $c = 1$, we cannot apply a LLN for independent variables to Eq. (5) in order to prove the almost sure geometric divergence. However, LLNs for Markov chains exist as well, provided the Markov chain satisfies some stability properties: in particular, the Markov chain $P$ must be $\varphi$-irreducible, that is, there exists a measure $\varphi$ such that every Borel set $A$ of $\mathbb{R}^n$ with $\varphi(A) > 0$ has a positive probability to be reached in a finite number of steps by $P$ starting from any $p_0 \in \mathbb{R}^n$. In addition, the chain $P$ needs to be (i) positive, that is, the chain admits an invariant probability measure $\pi$, i.e., for any Borel set $A$, $\pi(A) = \int_{\mathbb{R}^n} P(x, A)\,\pi(dx)$, with $P(x, A)$ being the probability to transition in one time step from $x$ into $A$, and (ii) Harris recurrent, which means that for any Borel set $A$ such that $\varphi(A) > 0$, the chain $P$ visits $A$ an infinite number of times with probability one. Under those conditions, $P$ satisfies a LLN, more precisely:

Lemma 3 ([11, Theorem 17.0.1]). Suppose that $P$ is a positive Harris chain with invariant probability measure $\pi$, and let $g$ be a $\pi$-integrable function such that $\pi(|g|) = \int_{\mathbb{R}^n} |g(x)|\,\pi(dx) < \infty$. Then

$$\frac{1}{t}\sum_{k=1}^{t} g(p_k) \xrightarrow[t\to\infty]{a.s.} \pi(g).$$

The path $P$ satisfies the conditions of Lemma 3 and exhibits an invariant measure [8]. By a recurrence on Eq. (1) we see that the path follows the equation

$$p_t = (1-c)^t p_0 + \sqrt{c(2-c)}\,\sum_{k=0}^{t-1}(1-c)^k\,\xi_{t-1-k}, \tag{7}$$

where the $(\xi_{t-1-k})_k$ are i.i.d. For $i \ge 2$, $[\xi_t]_i \sim N(0,1)$ and, as also $[p_0]_i \sim N(0,1)$, by recurrence $[p_t]_i \sim N(0,1)$ for all $t \in \mathbb{N}$. For $i = 1$, with cumulation ($c < 1$), the influence of $[p_0]_1$ vanishes with $(1-c)^t$. Furthermore, as from Lemma 1 the sequence $([\xi_t]_1)_{t\in\mathbb{N}}$ is independent, we get by applying Kolmogorov's three-series theorem that the series $\sum_{k=0}^{\infty}(1-c)^k [\xi_{t-1-k}]_1$ converges almost surely. Therefore, the first component of the path becomes distributed as the random variable $[p_\infty]_1 = \sqrt{c(2-c)}\sum_{k=0}^{\infty}(1-c)^k [\xi_k]_1$, by re-indexing the variables $\xi_{t-1-k}$ as $\xi_k$, the sequence $(\xi_t)_{t\in\mathbb{N}}$ being i.i.d. We now obtain geometric divergence of the step-size and get an explicit expression of the divergence rate.

Theorem 2. The step-size of the $(1,\lambda)$-CSA-ES with $\lambda \ge 2$ diverges geometrically fast if $c < 1$ or $\lambda \ge 3$. Almost surely and in expectation we have, for $0 < c \le 1$,

$$\frac{1}{t}\ln\frac{\sigma_t}{\sigma_0} \xrightarrow[t\to\infty]{} \frac{1}{2 d_\sigma n}\Big(2(1-c)\,E[N_{1:\lambda}]^2 + c\,\big(E[N_{1:\lambda}^2] - 1\big)\Big), \tag{8}$$

where the right-hand side is strictly positive for $\lambda \ge 3$, and for $\lambda = 2$ together with $c < 1$.

Proof. For proving almost sure convergence of $\frac{1}{t}\ln(\sigma_t/\sigma_0)$ we need to use the LLN for Markov chains; we refer to [8] for the proof that $P$ satisfies the required assumptions. We focus here on the convergence in expectation. From Eq. (4) we have $E[\ln(\sigma_{t+1}/\sigma_t)] = \frac{c}{2 d_\sigma}\big(E[\|p_{t+1}\|^2]/n - 1\big)$, so $E[\|p_{t+1}\|^2] = E\big[\sum_{i=1}^{n}[p_{t+1}]_i^2\big]$ is the term we have to analyse. From Eq. (7) and its consequences we get that for $j \ge 2$, $[p_t]_j \sim N(0,1)$, so $E\big[\sum_{j=1}^{n}[p_{t+1}]_j^2\big] = E\big[[p_{t+1}]_1^2\big] + (n-1)$. When $t$ goes to infinity, the influence of $[p_0]_1$ in this equation goes to 0 with $(1-c)^{t+1}$, so we can remove it when taking the limit:

$$\lim_{t\to\infty} E\big[[p_{t+1}]_1^2\big] = \lim_{t\to\infty} c(2-c)\, E\!\left[\left(\sum_{i=0}^{t}(1-c)^i\,[\xi_{t-i}]_1\right)^{\!2}\right] \tag{9}$$

We now develop the square, such that we have either a product $[\xi_{t-i}]_1 [\xi_{t-j}]_1$ with $i \ne j$, or $[\xi_{t-i}]_1^2$. This way we can separate the variables, using Lemma 1 together with the independence of the $\xi$ over time. To do so, we use the development formula $\big(\sum_{i=1}^{m} a_i\big)^2 = 2\sum_{i=1}^{m}\sum_{j=i+1}^{m} a_i a_j + \sum_{i=1}^{m} a_i^2$. We take the limit of $E\big[[p_{t+1}]_1^2\big]$ and find that it is equal to

$$\lim_{t\to\infty} c(2-c)\left(2\sum_{i=0}^{t}\sum_{j=i+1}^{t}(1-c)^{i+j}\,E\big[[\xi_{t-i}]_1\big]\,E\big[[\xi_{t-j}]_1\big] + \sum_{i=0}^{t}(1-c)^{2i}\,E\big[[\xi_{t-i}]_1^2\big]\right) \tag{10}$$

where $E[[\xi_{t-i}]_1]\,E[[\xi_{t-j}]_1] = E[N_{1:\lambda}]^2$ and $E[[\xi_{t-i}]_1^2] = E[N_{1:\lambda}^2]$. Now the expected values do not depend on $i$ or $j$, so what is left is to calculate $\sum_{i=0}^{t}\sum_{j=i+1}^{t}(1-c)^{i+j}$ and $\sum_{i=0}^{t}(1-c)^{2i}$. We have $\sum_{i=0}^{t}\sum_{j=i+1}^{t}(1-c)^{i+j} = \sum_{i=0}^{t}(1-c)^{2i+1}\big(1-(1-c)^{t-i}\big)/c$, and when we separate this sum in two, the part containing $(1-c)^{t}$ goes to 0 for $t \to \infty$. Therefore the remaining part converges to $\lim_{t\to\infty}\sum_{i=0}^{t}(1-c)^{2i+1}/c$, which is equal to $\lim_{t\to\infty}\frac{1-c}{c}\sum_{i=0}^{t}(1-c)^{2i}$. And $\sum_{i=0}^{t}(1-c)^{2i}$ is equal to $\big(1-(1-c)^{2(t+1)}\big)/\big(1-(1-c)^2\big)$, which converges to $1/(c(2-c))$. So, by inserting this in Eq. (10), we get that $E\big[[p_{t+1}]_1^2\big] \to_{t\to\infty} \frac{2(1-c)}{c} E[N_{1:\lambda}]^2 + E[N_{1:\lambda}^2]$, which gives us the right-hand side of Eq. (8). By summing $E[\ln(\sigma_{i+1}/\sigma_i)]$ for $i = 0, \dots, t-1$ and dividing by $t$ we obtain the Cesàro mean $\frac{1}{t} E[\ln(\sigma_t/\sigma_0)]$, which converges to the same value that $E[\ln(\sigma_{t+1}/\sigma_t)]$ converges to when $t$ goes to infinity. Therefore we have Eq. (8) in expectation. According to Lemma 2, for $\lambda = 2$, $E[N_{1:2}^2] = 1$, so the RHS of Eq. (8) is equal to $\frac{1-c}{d_\sigma n} E[N_{1:2}]^2$. The expected value $E[N_{1:2}]$ is strictly negative, so the previous expression is strictly positive for $c < 1$. Furthermore, according to Lemma 2, $E[N_{1:\lambda}^2]$ increases with $\lambda$, as does $|E[N_{1:\lambda}]|$. Therefore we have geometric divergence for $\lambda = 2$ with $c < 1$, and for $\lambda \ge 3$ with any $c$.
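Once the two moments of $N_{1:\lambda}$ are estimated, the right-hand side of Eq. (8) is straightforward to evaluate; the following sketch (illustrative, numpy assumed) shows that for $\lambda = 2$ the predicted rate vanishes at $c = 1$ and is positive for $c < 1$:

```python
import numpy as np

def eq8_rate(lam, c, n, d_sigma=1.0, m=10**6, seed=6):
    """RHS of Eq. (8): (2(1-c) E[N_{1:lam}]^2 + c(E[N_{1:lam}^2] - 1)) / (2 d n),
    with both order-statistic moments estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    mins = rng.standard_normal((m, lam)).min(axis=1)
    e1, e2 = mins.mean(), np.mean(mins**2)
    return (2.0 * (1.0 - c) * e1**2 + c * (e2 - 1.0)) / (2.0 * d_sigma * n)

for c in (1.0, 0.5, 0.1):
    print(c, eq8_rate(lam=2, c=c, n=10))
# ~zero (up to Monte Carlo error) at c = 1, strictly positive for c < 1
```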

The behavior of the step-size and of $(X_t)_{t\in\mathbb{N}}$ are directly related: geometric divergence of the step-size, as shown in Theorem 2, means that the movements in search space and the improvements on affine linear functions $f$ also increase geometrically fast. Therefore, as we showed in Theorem 2 geometric divergence of the step-size when $\lambda = 2$ and $c < 1$, or when $\lambda \ge 3$, we expect geometric divergence of the first dimension of $(X_t)_{t\in\mathbb{N}}$ (the first dimension being the only dimension with selection pressure). Analyzing $(X_t)_{t\in\mathbb{N}}$ with cumulation requires studying a double Markov chain, which is left to possible future research.

5 Study of the variations of $\ln(\sigma_{t+1}/\sigma_t)$

The proof of Theorem 2 shows that the expected step-size increase converges to the right-hand side of Eq. (8) for $t \to \infty$. When the dimension increases, this increment goes to zero, which also suggests that it becomes more likely that $\sigma_{t+1}$ is smaller than $\sigma_t$. To analyze this behavior, we study the variance of $\ln(\sigma_{t+1}/\sigma_t)$ as a function of $c$ and of the dimension.

Theorem 3. The variance of $\ln(\sigma_{t+1}/\sigma_t)$ equals

$$\mathrm{Var}\!\left(\ln\frac{\sigma_{t+1}}{\sigma_t}\right) = \frac{c^2}{4 d_\sigma^2 n^2}\Big(E\big[[p_{t+1}]_1^4\big] - E\big[[p_{t+1}]_1^2\big]^2 + 2(n-1)\Big). \tag{11}$$

Furthermore, $E\big[[p_{t+1}]_1^2\big] \to_{t\to\infty} E[N_{1:\lambda}^2] + \frac{2(1-c)}{c} E[N_{1:\lambda}]^2$ and, with $a = 1-c$,

$$\lim_{t\to\infty} E\big[[p_{t+1}]_1^4\big] = \frac{(1-a^2)^2}{1-a^4}\,\big(k_4 + k_3 + k_2 + k_1 + k\big), \tag{12}$$

where $k_4 = E[N_{1:\lambda}^4]$, $k_3 = 4\,\frac{a+a^2+2a^3}{1-a^3}\,E[N_{1:\lambda}^3]\,E[N_{1:\lambda}]$, $k_2 = 6\,\frac{a^2}{1-a^2}\,E[N_{1:\lambda}^2]^2$, $k_1 = 12\,\frac{a^3(1+a+a^2-3a^3)}{(1-a)(1-a^2)(1-a^3)}\,E[N_{1:\lambda}^2]\,E[N_{1:\lambda}]^2$ and $k = 24\,\frac{a^6}{(1-a)(1-a^2)(1-a^3)}\,E[N_{1:\lambda}]^4$.

Proof. We have

$$\mathrm{Var}\!\left(\ln\frac{\sigma_{t+1}}{\sigma_t}\right) = \mathrm{Var}\!\left(\frac{c}{2 d_\sigma}\left(\frac{\|p_{t+1}\|^2}{n} - 1\right)\right) = \frac{c^2}{4 d_\sigma^2 n^2}\Big(E\big[\|p_{t+1}\|^4\big] - E\big[\|p_{t+1}\|^2\big]^2\Big). \tag{13}$$

The first part, $E[\|p_{t+1}\|^4]$, is equal to $E\big[\big(\sum_{i=1}^{n}[p_{t+1}]_i^2\big)^2\big]$. We develop it along the dimensions such that we can use the independence of $[p_{t+1}]_i$ and $[p_{t+1}]_j$ for $i \ne j$, to get $E[\|p_{t+1}\|^4] = 2\sum_{i=1}^{n}\sum_{j=i+1}^{n} E\big[[p_{t+1}]_i^2 [p_{t+1}]_j^2\big] + \sum_{i=1}^{n} E\big[[p_{t+1}]_i^4\big]$. For $i \ge 2$, $[p_{t+1}]_i$ is distributed according to a standard normal distribution, so $E\big[[p_{t+1}]_i^2\big] = 1$ and $E\big[[p_{t+1}]_i^4\big] = 3$. Hence

$$E\big[\|p_{t+1}\|^4\big] = 2(n-1)\,E\big[[p_{t+1}]_1^2\big] + (n-1)(n-2) + E\big[[p_{t+1}]_1^4\big] + 3(n-1) = E\big[[p_{t+1}]_1^4\big] + 2(n-1)\,E\big[[p_{t+1}]_1^2\big] + (n-1)(n+1).$$

The other part, $E[\|p_{t+1}\|^2]^2$, develops into $\big(E[[p_{t+1}]_1^2] + (n-1)\big)^2 = E\big[[p_{t+1}]_1^2\big]^2 + 2(n-1)\,E\big[[p_{t+1}]_1^2\big] + (n-1)^2$. By subtracting both parts we get $E[\|p_{t+1}\|^4] - E[\|p_{t+1}\|^2]^2 = E\big[[p_{t+1}]_1^4\big] - E\big[[p_{t+1}]_1^2\big]^2 + 2(n-1)$, which we insert into Eq. (13) to get Eq. (11). The development of $E\big[[p_{t+1}]_1^2\big]$ is the same as the one done in the proof of Theorem 2. We refer to [8] for the development of $E\big[[p_{t+1}]_1^4\big]$, since space limitations prevent us from presenting it here.
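The first claim of Theorem 3, Eq. (11), can be cross-checked by simulation: estimate the moments of $[p_{t+1}]_1$ from a long stationary run and compare the formula with the empirical variance of the increments of Eq. (4). A sketch (illustrative, numpy assumed):

```python
import numpy as np

def variance_check(lam=8, n=10, c=0.2, d_sigma=1.0, T=200000, seed=7):
    """Empirical Var[ln(sigma_{t+1}/sigma_t)] versus the formula of Eq. (11):
    c^2/(4 d^2 n^2) (E[p1^4] - E[p1^2]^2 + 2(n-1)), with p1 = [p_{t+1}]_1."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal(n)
    incr, p1 = [], []
    for _ in range(T):
        xi = rng.standard_normal((lam, n))
        best = xi[np.argmin(xi[:, 0])]                  # selected step (Lemma 1)
        p = (1.0 - c) * p + np.sqrt(c * (2.0 - c)) * best
        incr.append((c / (2.0 * d_sigma)) * (p @ p / n - 1.0))  # Eq. (4)
        p1.append(p[0])
    p1 = np.array(p1)
    formula = c**2 / (4 * d_sigma**2 * n**2) * (
        np.mean(p1**4) - np.mean(p1**2) ** 2 + 2 * (n - 1))
    return np.var(incr), formula

print(variance_check())  # the two values should roughly agree
```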

Figure 1 shows the time evolution of $\ln(\sigma_t/\sigma_0)$ for 500 runs with $c = 1$ (left) and $c = 1/\sqrt{n}$ (right). By comparing Figure 1a and Figure 1b we observe smaller variations of $\ln(\sigma_t/\sigma_0)$ for the smaller value of $c$. Figure 2 shows the relative standard deviation of $\ln(\sigma_{t+1}/\sigma_t)$, i.e. the standard deviation divided by its expected value. Lowering $c$, as shown on the left, decreases the relative standard deviation. To get a value below one, $c$ must be smaller for larger dimension. In agreement with Theorem 3, in Figure 2, right, the relative standard deviation increases like $\sqrt{n}$ with the dimension for constant $c$ (the three increasing curves). A careful study [8] of the variance equation of Theorem 3 shows that, for the choice of $c = 1/(1 + n^\alpha)$, if $\alpha > 1/3$ the relative standard deviation converges to 0 like $\sqrt{n^\alpha + n}/n^{3\alpha/2}$. Taking $\alpha = 1/3$ is a critical value for which the relative standard deviation converges to a strictly positive constant. On the other hand, lower values of $\alpha$ make the relative standard deviation diverge like $n^{(1-3\alpha)/2}$.
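The qualitative behavior of Figure 2 (left) can also be approximated by direct simulation; the sketch below (illustrative, numpy assumed) estimates the relative standard deviation of $\ln(\sigma_{t+1}/\sigma_t)$ for a few values of $c$ at fixed $n$:

```python
import numpy as np

def relative_std(lam=8, n=20, c=0.5, d_sigma=1.0, T=100000, seed=8):
    """Empirical std of ln(sigma_{t+1}/sigma_t) divided by its mean (cf. Fig. 2)."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal(n)
    incr = np.empty(T)
    for t in range(T):
        xi = rng.standard_normal((lam, n))
        best = xi[np.argmin(xi[:, 0])]
        p = (1.0 - c) * p + np.sqrt(c * (2.0 - c)) * best
        incr[t] = (c / (2.0 * d_sigma)) * (p @ p / n - 1.0)
    return incr.std() / incr.mean()

for c in (1.0, 0.5, 0.1, 0.02):
    print(c, relative_std(c=c))  # decreases with c, as in Figure 2 (left)
```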

Fig. 1: $\ln(\sigma_t/\sigma_0)$ against $t$ (y-axis: $\ln(\sigma_t/\sigma_0)$; x-axis: number of iterations). (a) Without cumulation ($c = 1$). (b) With cumulation ($c = 1/\sqrt{20}$). The different curves represent the quantiles of a set of $5\cdot 10^3$ samples, more precisely the $10^{-i}$-quantile and the $(1 - 10^{-i})$-quantile for $i$ from 1 to 4, and the median. We have $n = 20$ and $\lambda = 8$.

6 Summary

We investigate throughout this paper the $(1,\lambda)$-CSA-ES on affine linear functions composed with strictly increasing transformations. We find, in Theorem 2, the limit distribution of $\frac{1}{t}\ln(\sigma_t/\sigma_0)$ and rigorously prove the desired behaviour of $\sigma$: with $\lambda \ge 3$ for any $c$, and with $\lambda = 2$ and cumulation ($0 < c < 1$), the step-size diverges geometrically fast. In contrast, without cumulation ($c = 1$) and with $\lambda = 2$, a random walk on $\ln\sigma$ occurs, as for the $(1,2)$-$\sigma$SA-ES [9], and for the same symmetry reason. We derive an expression for the variance of the step-size increment. On linear functions, when $c = 1/n^\alpha$ for $\alpha \ge 0$ ($\alpha = 0$ meaning $c$ constant) and for $n \to \infty$, the standard deviation is about $\sqrt{n^\alpha + n}/n^{3\alpha/2}$ times larger than the step-size increment. From this follows that keeping $c < 1/n^{1/3}$ ensures that the standard deviation of $\ln(\sigma_{t+1}/\sigma_t)$ becomes negligible compared to its expectation when the dimension goes to infinity; that means the noise-to-signal ratio of the step-size change goes to zero, giving the algorithm strong stability. The result confirms that even the largest default cumulation parameter $c = 1/\sqrt{n}$ is a stable choice.

Acknowledgments

This work was partially supported by the ANR-2010-COSI-002 grant (SIMINOLE) of the French National Research Agency and the ANR COSINUS project ANR-08-COSI-007.

Fig. 2: Standard deviation of $\ln(\sigma_{t+1}/\sigma_t)$ relative to its expectation (y-axis: $\mathrm{STD}(\ln(\sigma_{t+1}/\sigma_t))/E[\ln(\sigma_{t+1}/\sigma_t)]$). Here $\lambda = 8$. The curves were plotted using Eq. (11) and Eq. (12). Left: plotted against $c$; curves for, from right to left, $n = 2$, 20, 200 and 2000. Right: plotted against the dimension of the search space; curves for, from top to bottom, $c = 1$, $0.5$, $0.2$, $1/(1+n^{1/4})$, $1/(1+n^{1/3})$, $1/(1+n^{1/2})$ and $1/(1+n)$.

References

1. D. V. Arnold and H.-G. Beyer. Performance analysis of evolutionary optimization with cumulative step length adaptation. IEEE Transactions on Automatic Control, 49(4):617-622, 2004.
2. D. V. Arnold and H.-G. Beyer. On the behaviour of evolution strategies optimising cigar functions. Evolutionary Computation, 18(4):661-682, 2010.
3. D. V. Arnold. Cumulative step length adaptation on ridge functions. In Parallel Problem Solving from Nature (PPSN IX), pages 11-20. Springer, 2006.
4. D. V. Arnold. On the behaviour of the (1,λ)-ES for a simple constrained problem. In Foundations of Genetic Algorithms (FOGA), pages 15-24. ACM, 2011.
5. D. V. Arnold and H.-G. Beyer. Random dynamics optimum tracking with evolution strategies. In Parallel Problem Solving from Nature (PPSN VII), pages 3-12. Springer, 2002.
6. D. V. Arnold and H.-G. Beyer. Optimum tracking with evolution strategies. Evolutionary Computation, 14(3):291-308, 2006.
7. D. V. Arnold and H.-G. Beyer. Evolution strategies with cumulative step length adaptation on the noisy parabolic ridge. Natural Computing, 7(4):555-587, 2008.
8. A. Chotard, A. Auger, and N. Hansen. Cumulative step-size adaptation on linear functions: Technical report, 2012. http://hal.inria.fr/hal-00704903.
9. N. Hansen. An analysis of mutative σ-self-adaptation on linear fitness functions. Evolutionary Computation, 14(3):255-275, 2006.
10. N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In International Conference on Evolutionary Computation, pages 312-317, 1996.
11. S. P. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Cambridge University Press, second edition, 1993.
12. A. Ostermeier, A. Gawelczyk, and N. Hansen. Step-size adaptation based on non-local use of selection information. In Proceedings of Parallel Problem Solving from Nature (PPSN III), volume 866 of Lecture Notes in Computer Science, pages 189-198. Springer, 1994.