Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noiseless Testbed


Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noiseless Testbed

Nikolaus Hansen, Raymond Ros

To cite this version: Nikolaus Hansen, Raymond Ros. Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noiseless Testbed. Genetic and Evolutionary Computation Conference (GECCO), Jul 2010, Portland, United States. HAL Id: hal- , submitted Dec (v1), last revised Dec (v2).

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noiseless Testbed

Nikolaus Hansen
INRIA Saclay, TAO team, LRI, Univ. Paris-Sud, Orsay Cedex, France
nikolaus.hansen@inria.fr

ABSTRACT

We implement a weighted negative update of the covariance matrix in the CMA-ES: the weighted active CMA-ES or, in short, aCMA-ES. We benchmark the IPOP-aCMA-ES and compare its performance with the IPOP-CMA-ES on the BBOB-2010 noiseless testbed. On nine of the essentially unimodal functions, aCMA is faster than CMA, in particular in larger dimension. On at least three functions it also leads to a (slightly) better scaling with the dimension. On none of the benchmark functions does aCMA appear to be significantly worse in any dimension. On two and five functions, respectively, IPOP-CMA-ES and IPOP-aCMA-ES exceed the record observed during BBOB-2009.

Categories and Subject Descriptors: G.1.6 [Numerical Analysis]: Optimization — global optimization, unconstrained optimization; F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems

General Terms: Algorithms, performance, comparison

Keywords: CMA-ES, IPOP-CMA-ES, active CMA-ES, benchmarking, black-box optimization

1. INTRODUCTION

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [] is a stochastic search procedure that samples new candidate solutions from a multivariate normal distribution whose mean and covariance matrix are adapted after each iteration. The (µ/µ_w, λ)-CMA-ES samples λ new candidate solutions and selects the µ best among them. These contribute, in a weighted manner, to the update of the distribution parameters. The algorithm is non-elitist

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. GECCO 2010, July 2010, Portland, Oregon, USA. Copyright 2010 ACM.

Raymond Ros
TAO Team-Project, INRIA Saclay, LRI, Univ. Paris-Sud, Orsay Cedex, France
raymond.ros@inria.fr

by nature, but a practical implementation will preserve the best solution ever evaluated. Elitist variants of the CMA-ES [] are often slightly faster, but are more susceptible to getting stuck in a suboptimal local optimum. The IPOP-CMA-ES [] implements a restart procedure: before each restart, λ is doubled. In most cases, doubling λ increases the length of single runs, but it often improves the quality of the best solution found. The recently proposed BIPOP-CMA-ES [] maintains two budgets. Under the first budget, an IPOP-CMA-ES is executed. Under the second budget, a multi-start (µ/µ_w, λ)-CMA-ES with various small population sizes is run.

Previous Benchmarking of CMA-ES. The (µ/µ_w, λ)-CMA-ES has been quite successful in several elaborate benchmarking exercises. Notably, the IPOP-CMA-ES [] in the CEC special session on real-parameter optimization and the BIPOP-CMA-ES [] in the black-box optimization benchmarking BBOB-2009 have shown excellent performance [] on uni- and multi-modal benchmark functions. A comparison of two Estimation of Distribution Algorithms and three Evolution Strategies on a set of functions is presented in []. The CMA-ES performed best on functions, with a median speed-up by a factor of at least compared to any other algorithm; on the remaining functions the maximum loss factor was . A comparison on a small collection of multi-modal functions with three other algorithms is presented in []. The CMA-ES with optimal population size performs clearly best on the non-separable functions, and is clearly outperformed on additively decomposable functions by Differential Evolution (DE).
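The sampling-and-weighted-recombination step just described can be sketched in a few lines. This is a simplified illustration (function and parameter names are ours; step-size and covariance adaptation are omitted):

```python
import numpy as np

def sample_and_recombine(f, m, sigma, C, lam, rng):
    """One (mu/mu_w, lambda) sampling and recombination step
    (illustrative sketch, not the authors' reference implementation)."""
    D = len(m)
    mu = lam // 2
    # positive, log-decreasing recombination weights summing to one
    w = np.log(lam / 2 + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    # sample lambda candidates from N(m, sigma^2 C)
    A = np.linalg.cholesky(C)
    X = m + sigma * (rng.standard_normal((lam, D)) @ A.T)
    # rank candidates by objective value, recombine the mu best
    order = np.argsort([f(x) for x in X])
    m_new = m + w @ (X[order[:mu]] - m)
    return m_new, X[order]
```

The new mean is a weighted average of the µ best samples, pulled toward the better region of the search space.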
A comparison of derivative-free optimization methods and BFGS on smooth unimodal functions is presented in []. On these functions, PSO and DE are in general significantly slower than CMA-ES; only PSO performs similarly on separable problems. NEWUOA and BFGS remarkably outperform CMA-ES if the problem is convex and has moderate conditioning. On non-convex problems with at least moderate condition number and on non-separable problems with higher condition number, the CMA-ES breaks even and becomes advantageous with increasing condition number. In [], the CMA-ES is recognized as state-of-the-art in

real-valued evolutionary optimization. Partly due to the fact that the algorithm is quasi-parameter-free, the CMA-ES is widely applied and has become a quasi-standard.

Given $t \in \mathbb{N}$, $m^t \in \mathbb{R}^D$, $\sigma^t \in \mathbb{R}_+$, $C^t \in \mathbb{R}^{D \times D}$ positive definite, $p^t_\sigma, p^t_c \in \mathbb{R}^D$, and $p^{t=0}_\sigma = p^{t=0}_c = 0$, $C^{t=0} = I$:

(1) $x_i \sim m^t + \sigma^t \, \mathcal{N}(0, C^t)$ is normally distributed for $i = 1, \dots, \lambda$

(2) $m^{t+1} = m^t + c_m \sum_{i=1}^{\mu} w_i (x_{i:\lambda} - m^t)$, where $f(x_{1:\lambda}) \le \dots \le f(x_{\mu:\lambda}) \le f(x_{\mu+1:\lambda}) \le \dots$

(3) $p^{t+1}_\sigma = (1 - c_\sigma)\, p^t_\sigma + \sqrt{c_\sigma (2 - c_\sigma)\, \mu_w}\; (C^t)^{-1/2}\, \dfrac{m^{t+1} - m^t}{c_m \sigma^t}$

(4) $h_\sigma = 1$ if $\|p^{t+1}_\sigma\| < \sqrt{1 - (1 - c_\sigma)^{2(t+1)}}\left(1.4 + \tfrac{2}{D+1}\right)\mathrm{E}\|\mathcal{N}(0, I)\|$, and $h_\sigma = 0$ otherwise; $h_\sigma$ stalls the update of $p_c$ in (5) when $\sigma$ increases rapidly

(5) $p^{t+1}_c = (1 - c_c)\, p^t_c + h_\sigma \sqrt{c_c (2 - c_c)\, \mu_w}\; \dfrac{m^{t+1} - m^t}{c_m \sigma^t}$, the evolution path for the rank-one update of $C^t$

(6) $C^+_\mu = \sum_{i=1}^{\mu} w_i\, \dfrac{(x_{i:\lambda} - m^t)(x_{i:\lambda} - m^t)^T}{(\sigma^t)^2}$, a matrix of rank $\min(\mu, D)$ for the so-called rank-$\mu$ update of $C^t$

(7) $C^-_\mu = \sum_{i=1}^{\mu} w_i\, y^\lambda_{i:\lambda} (y^\lambda_{i:\lambda})^T$ with $y^\lambda_{i:\lambda} = \dfrac{\|(C^t)^{-1/2}(x_{i:\lambda} - m^t)\|}{\|(C^t)^{-1/2}(x_{\lambda-\mu+i:\lambda} - m^t)\|}\; \dfrac{x_{\lambda-\mu+i:\lambda} - m^t}{\sigma^t}$

(8) $C^{t+1} = (1 - c_1 - c_\mu + c^- \alpha_{\mathrm{old}})\, C^t + c_1 \underbrace{p^{t+1}_c (p^{t+1}_c)^T}_{\text{rank-one update}} + (c_\mu + c^- (1 - \alpha_{\mathrm{old}}))\, \underbrace{C^+_\mu}_{\text{rank-}\mu\text{ update}} - c^- \underbrace{C^-_\mu}_{\text{active}}$

(9) $\sigma^{t+1} = \sigma^t \exp\!\left(\Delta\sigma_{\max} \wedge \dfrac{c_\sigma}{d_\sigma}\left(\dfrac{\|p^{t+1}_\sigma\|}{\mathrm{E}\|\mathcal{N}(0, I)\|} - 1\right)\right)$

Here $(C^t)^{-1/2}$ is symmetric with positive eigenvalues such that $(C^t)^{-1/2}(C^t)^{-1/2}$ is the inverse of $C^t$; if $B \Lambda B^T = C$ is an eigendecomposition into an orthogonal matrix $B$ and a diagonal matrix $\Lambda$, we have $C^{-1/2} = B \Lambda^{-1/2} B^T$. For $\mathrm{E}\|\mathcal{N}(0, I)\| = \sqrt{2}\, \Gamma(\tfrac{D+1}{2}) / \Gamma(\tfrac{D}{2})$ we use the approximation $\sqrt{D}\left(1 - \tfrac{1}{4D} + \tfrac{1}{21 D^2}\right)$.

Figure 1: Update equations for the state variables in the (µ/µ_w, λ)-aCMA-ES with iteration index t = 0, 1, 2, ..., where $a \wedge b := \min(a, b)$. The chosen ordering of equations allows removing the time index from all variables but $m^t$. The symbol $x_{i:\lambda}$ denotes the $i$-th best of the solutions $x_1, \dots, x_\lambda$.

A Further Improvement. The active CMA-ES proposed in [] introduces a negative update of the covariance matrix into the (µ/µ_I, λ)-CMA-ES. The authors investigate essentially unimodal functions and observe a significant speed-up, in particular on the discus function.
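The covariance matrix update in Figure 1 combines a positive rank-µ term built from the µ best steps with a negative term built from the µ worst steps. A minimal numerical sketch of such a weighted active rank-µ update follows; the learning-rate names and the omission of the renormalization of the worst steps are our simplifications, not the paper's exact scheme:

```python
import numpy as np

def active_rank_mu_update(C, Z_best, Z_worst, w, c_mu, c_neg):
    """Sketch of a weighted active (negative) rank-mu covariance update.
    Z_best / Z_worst: the mu best / mu worst normalized steps (rows),
    w: positive recombination weights summing to one."""
    C_plus = (w[:, None] * Z_best).T @ Z_best     # positive rank-mu term
    C_minus = (w[:, None] * Z_worst).T @ Z_worst  # negative term from worst steps
    # shrink toward C, add variance along good steps, remove it along bad ones
    return (1 - c_mu + c_neg) * C + c_mu * C_plus - c_neg * C_minus
```

With the worst steps aligned along one axis, the variance in that direction is actively reduced, which is exactly the effect that speeds up adaptation on functions such as the discus.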
The speed-up reaches almost a factor of three in dimension (compared to the (µ/µ_I, λ)-CMA-ES), and it increases with increasing dimension; that is, the update leads to an improved scaling with the search space dimension. The negative update can in particular speed up the adaptation of small variances in a small number of directions. The speed-up is less pronounced with a larger population size λ.

This Paper. We combine the technique of negative updates with the (µ/µ_w, λ)-CMA-ES and implement a weighted negative update procedure for the covariance matrix. The update also improves on [] in that it remains feasible with a large population size. Consequently, the objective of this paper is threefold: (1) specify a weighted negative update scheme with a parameter setting suited also for a very large population size; (2) reproduce the effects from [], observing significant improvements on functions where a small number of directions need a small variance (here, we want to improve over the (µ/µ_w, λ)-CMA-ES); (3) survey the performance of the negative update on a larger number of diverse functions to possibly detect failures of the method. For this survey the BBOB-2010 test environment is used.

2. THE ALGORITHM

The (µ/µ_w, λ)-aCMA-ES is summarized in Figure 1. The initial values m^{t=0} ∈ R^D and σ^{t=0} > 0 are user defined and given in the next section. Modifications compared to [] are the addition of (7) and are otherwise highlighted in pink (mainly the supplement to (8)). The default parameter values are shown in Table 1. For c^- = 0, the (µ/µ_w, λ)-CMA-ES is recovered.

Restarts. We apply nine restarts, each with maximal + (D + )/λ iterations. The default termination criteria are used, except that we have set TolHistFun=e-, TolX=e-, StopOnStagnation=on, and MaxFunEvals=inf, following []. The population size λ is doubled for each restart; the first value is given in Table 1.
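The restart schedule above (λ doubled for each of the nine restarts) can be sketched as follows; the initial population size formula used here is the standard CMA-ES default and stands in for the first value from Table 1:

```python
import math

def ipop_schedule(D, n_restarts=9, lam0=None):
    """Sketch of the IPOP restart schedule: the population size lambda
    doubles on every restart. lam0 defaults to the usual CMA-ES
    population size 4 + floor(3 ln D)."""
    lam = lam0 if lam0 is not None else 4 + int(3 * math.log(D))
    schedule = []
    for _ in range(n_restarts + 1):  # first run plus n_restarts restarts
        schedule.append(lam)
        lam *= 2  # doubled before each restart
    return schedule
```

The budget of a single run grows with λ, so later restarts are longer but more robust on multi-modal functions.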
3. PARAMETER TUNING AND SETUP

The new parameter(s) for aCMA have been identified in experiments on the sphere function f1 with various initial covariance matrices. In particular, c^- is chosen such that (a) decreasing c^- from the default value does not improve the

performance, (b) increasing c^- by a factor of two never leads to a failure, and (c) the condition number of C^t remains bounded in the stationary limit.

Table 1: Default parameter values of the (µ/µ_w, λ)-aCMA-ES, where by definition $\sum_{i=1}^{\mu} w_i = 1$ and $\mu_w = 1 / \sum_{i=1}^{\mu} w_i^2$, and $a \wedge b := \min(a, b)$. Only the population size λ is possibly left to the user's choice (see also []).

- $\lambda = 4 + \lfloor 3 \ln D \rfloor$ — population size, number of newly sampled candidate solutions in each iteration
- $\mu = \lfloor \lambda/2 \rfloor$ — parent number, number of candidate solutions used to update the distribution parameters
- $w_i = \dfrac{\ln\frac{\lambda+1}{2} - \ln i}{\sum_{j=1}^{\mu}\left(\ln\frac{\lambda+1}{2} - \ln j\right)}$ — recombination weights for $i = 1, \dots, \mu$
- $c_m = 1$ — learning rate for the mean, sometimes interpreted as rescaled mutation with $\kappa = 1/c_m$
- $c_\sigma = \dfrac{\mu_w + 2}{D + \mu_w + 3}$ — cumulation constant for step-size
- $d_\sigma = 1 + c_\sigma + 2 \max\left(0, \sqrt{\frac{\mu_w - 1}{D + 1}} - 1\right)$ — step-size damping, usually close to one; this formula might be replaced by $\mu_w/\lambda + c_\sigma$ plus a constant in the near future
- $c_c = \dfrac{4 + \mu_w/D}{D + 4 + 2\mu_w/D}$ — cumulation constant for $p_c$; this might be simplified in the near future
- $c_1 = \alpha_{\mathrm{cov}}\, \dfrac{\min(1, \lambda/6)}{(D + 1.3)^2 + \mu_w}$ — covariance matrix learning rate for the rank-one update using $p_c$
- $c_\mu = (1 - c_1) \wedge \alpha_{\mathrm{cov}}\, \dfrac{\mu_w - 2 + 1/\mu_w}{(D + 2)^2 + \alpha_{\mathrm{cov}}\, \mu_w / 2}$ — covariance matrix learning rate for the rank-µ update
- $c^- = \alpha^-_{\mathrm{cov}}\, \min(1 - c_\mu, \alpha_{\min})\, \dfrac{\mu_w}{(D + 2)^{1.5} + 2\mu_w}$ — learning rate for the negative update; remark that $c^-$ becomes small if $c_\mu$ gets close to one
- $\alpha_{\mathrm{cov}} = 2$ — could be chosen smaller, e.g. $\alpha_{\mathrm{cov}} = 0.5$ for noisy problems
- $\alpha_{\min}$ — involves the factor $(1 - c_1 - c_\mu)(1 - \lambda_{\mathrm{mintarget}})$ and $\lambda_{\max}\!\left((C^t)^{-1/2} C^-_\mu (C^t)^{-1/2}\right)$, where $\lambda_{\max}(\cdot)$ is the largest eigenvalue; the coefficient depends in general on the iteration index, but here, due to the setting of $\alpha_{\min}$, no upper bound on $c^-$ is enforced
- $\alpha_{\mathrm{old}} = 0.5$ — lies in $[0, 1]$ and is chosen in the middle of the domain without deep motives, because the choice seems quite irrelevant
- $\Delta\sigma_{\max}$ — might become 1 in the near future, which has only a negligible effect under neutral selection, because the term to the right of the $\wedge$ in (9) is approximately $\frac{c_\sigma}{d_\sigma}\, \mathcal{N}(0, 1/D)$ under neutral selection
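For illustration, the defaults in Table 1 can be computed for a given dimension D. This sketch uses the standard CMA-ES constants and omits the negative-update constants, whose exact values follow Table 1:

```python
import numpy as np

def default_params(D):
    """Default strategy parameters as a function of the dimension D
    (a sketch using the usual CMA-ES defaults; the constants of the
    negative update are intentionally left out)."""
    lam = 4 + int(3 * np.log(D))                    # population size
    mu = lam // 2                                   # parent number
    w = np.log(lam / 2 + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                    # weights sum to one
    mu_w = 1.0 / np.sum(w ** 2)                     # variance-effective mu
    c_sigma = (mu_w + 2) / (D + mu_w + 3)
    d_sigma = 1 + c_sigma + 2 * max(0.0, np.sqrt((mu_w - 1) / (D + 1)) - 1)
    c_c = (4 + mu_w / D) / (D + 4 + 2 * mu_w / D)
    alpha_cov = 2.0
    c_1 = alpha_cov * min(1.0, lam / 6) / ((D + 1.3) ** 2 + mu_w)
    c_mu = min(1 - c_1,
               alpha_cov * (mu_w - 2 + 1 / mu_w)
               / ((D + 2) ** 2 + alpha_cov * mu_w / 2))
    return dict(lam=lam, mu=mu, w=w, mu_w=mu_w, c_sigma=c_sigma,
                d_sigma=d_sigma, c_c=c_c, c_1=c_1, c_mu=c_mu)
```

All learning rates shrink roughly like 1/D² with the dimension, which is what makes covariance matrix adaptation stable in high dimension.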
The remaining parameters are used in their default settings as found in the source code on the WWW or in []. No special attempt has been made to further tune the parameters to the BBOB testbed. The crafting effort [] of both algorithms is equal, CrE = 0.

Experiments were conducted according to BBOB-2010 [] on the benchmark functions given in []. On each function, trials are executed (on the first instances) with m uniformly random in [ , ]^D and a fixed initial σ. Source code to reproduce the experiment is provided at .

CPU Timing Experiment. The complete algorithm was run on f for at least seconds on an Intel Core processor with Linux and Matlab. The measured times per function evaluation for the IPOP-aCMA-ES increase moderately with the dimension.

4. RESULTS

We show results for two algorithms: the IPOP-aCMA-ES as presented above, denoted as aCMA, and the IPOP-CMA-ES, the same algorithm where c^- is set to zero, denoted as CMA. In exploratory experiments comparing both the (µ/µ_w, λ)-CMA-ES and the (µ/µ_w, λ)-aCMA-ES with the results of the (µ/µ_I, λ)-variants from [], the (µ/µ_w, λ)-variants perform at least as well and usually better: as a rule, the (µ/µ_w, λ)-aCMA-ES outperforms the (µ/µ_I, λ)-aCMA-ES, which outperforms the (µ/µ_w, λ)-CMA-ES, which outperforms the (µ/µ_I, λ)-CMA-ES. Runtime results comparing aCMA with CMA and with the respective best algorithm from BBOB-2009 are presented in the following figures and in Table 2.

Figure 2: Box-Whisker plots of the ERT loss ratio relative to the respective best algorithm from BBOB-2009 for all functions in dimension , plotted against log10 of FEvals / dimension, for IPOP-CMA-ES (CrE = 0) and IPOP-aCMA-ES (CrE = 0).
The expected running time (ERT), used in the figures and tables, depends on a given target function value, f_t = f_opt + Δf, and is computed over all relevant trials as the number of function evaluations executed during each trial while the best function value did not reach f_t, summed over all trials and divided by the number of trials that actually reached f_t []. Statistical significance is tested with the rank-sum test for a given target f_t, up to the smallest number of function evaluations executed in any unsuccessful trial under consideration. The datum from each trial is either the best achieved Δf-value or, if f_t was reached within this budget, the number of function evaluations needed to reach f_t (inverted and multiplied by −1). In the following, when a performance difference is highlighted on an individual function, the difference is statistically significant.

Figure 2. The figure shows Box-Whisker plots of the ERT ratio compared to the respective best algorithm from BBOB-2009 for all functions. Both algorithms show a very similar characteristic. Except for very small budgets, the median (horizontal line) and average (connected line) ERT ratios are somewhat below ten. In the final stage, aCMA improves over the best algorithm from 2009 (ratio smaller than one) on about % of the functions.

Figure 3. The figure shows empirical cumulative distributions (a) of the runtime in number of function evaluations and (b) of the runtime ratio between the two algorithms, aCMA/CMA. Both algorithms perform very similarly, while aCMA appears to be slightly faster. Clearly, the strongest effect in Fig. 3 is observed for the ill-conditioned functions in dimension : the runtime is consistently shorter for aCMA, almost uniformly by a factor close to two, and the improvement is statistically significant on all of these functions (see Table 2). On the weak-structure functions, the observed differences are probably caused by stochastic deviations and are not significant. The remaining subgroups show almost identical behavior.

Figure 4. The scatter plots in Fig. 4 visualize the ratio of expected runtimes aCMA/CMA for each measurement on each function and each dimension. A slightly improved scaling with aCMA can be observed on f , f and f . On the Discus function f11 the effect is most pronounced, and the speed-up comes close to a factor of three in dimension . The advantage of CMA on f in dimension lies far beyond the maximum number of function evaluations, is based on two successes (see Fig. 4), and is not statistically significant. Additionally, an advantage of aCMA, in particular for larger dimension, could be conjectured on functions f , f –f , and f ; on f an advantage of CMA could be conjectured. According to Table 2, statistical significance in dimension is established for f and f –f .

Figure 5. The question of a significant performance difference can be pursued in Fig. 5, which plots the ERT ratio versus the target function value.
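The ERT defined above reduces to a short computation over per-trial data; a minimal sketch:

```python
def expected_running_time(evals, successes):
    """ERT as defined in the text: total number of function evaluations
    spent over all trials (successful or not), divided by the number of
    trials that reached the target. evals[i] counts the evaluations of
    trial i until the target was reached or the trial stopped."""
    n_succ = sum(successes)
    if n_succ == 0:
        return float("inf")  # target never reached: ERT is undefined/infinite
    return sum(evals) / n_succ
```

For example, two successful trials of 100 and 250 evaluations plus one failed trial of 400 evaluations give an ERT of 750/2 = 375 evaluations.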
The figure reveals more statistically significant (but sometimes small) differences in performance. A non-negligible and statistically significant performance advantage of aCMA in some dimensions can be found on f and f –f . No such advantage is detected for CMA.

Table 2. Finally, Table 2 presents the ERT values in comparison with the respective best algorithm of BBOB-2009. Again, aCMA is significantly better than CMA on f and f –f in dimension as well as in dimension , apart from f and f . Compared to the best algorithm from BBOB-2009, chosen for each function respectively, aCMA improves the record in dimension on f , f , f , f and f . On four further functions aCMA is visibly better, but the results appear not to be statistically significant. CMA improves the record on f and f .

5. SUMMARY AND CONCLUSION

We have introduced a weighted negative update of the covariance matrix into the CMA-ES with weighted recombination, denoted as (µ/µ_w, λ)-aCMA-ES. The negative update is based on the idea of []. It provides an effect for the (µ/µ_w, λ)-CMA-ES similar to that of [] for the (µ/µ_I, λ)-CMA-ES. The benchmarking of the new IPOP-aCMA-ES on BBOB-2010 reveals no deficiencies or failures compared to the IPOP-CMA-ES on the testbed of noiseless functions in any tested dimension. The most prominent effect of aCMA is observed on ill-conditioned functions, where the average runtime advantage in dimension amounts to CMA being noticeably slower than aCMA. On three ill-conditioned functions the scaling behavior improves notably compared to CMA. In no case did CMA outperform aCMA with statistical significance. Overall, aCMA significantly improves the performance over CMA on nine of the essentially unimodal functions, at least in dimension , and the advantages are more pronounced in larger dimension.
Finally, IPOP-aCMA-ES improves the record on five functions, where it is faster than the respective best algorithm from BBOB-2009 with sufficient statistical significance. In two of these cases IPOP-CMA-ES is still slightly faster, so the record cannot be attributed to the negative covariance matrix update.

Acknowledgments. NH would like to thank M. Schoenauer for his support of BBOB.

REFERENCES
[1] A. Auger and N. Hansen. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2005), pages –. IEEE Press, 2005.
[2] A. Auger, N. Hansen, J. Perez Zerpa, R. Ros, and M. Schoenauer. Experimental comparisons of derivative free optimization algorithms. In J. Vahrenhold, editor, International Symposium on Experimental Algorithms, LNCS, pages –. Springer, 2009.
[3] H.-G. Beyer. Evolution strategies. Scholarpedia.
[4] S. Finck, N. Hansen, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Presentation of the noiseless functions. Technical report, Research Center PPE, 2009. Updated February 2010.
[5] S. García, D. Molina, M. Lozano, and F. Herrera. A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC 2005 special session on real parameter optimization. Journal of Heuristics, 2009.
[6] N. Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Rothlauf [], pages –.
[7] N. Hansen, A. Auger, S. Finck, and R. Ros. Real-parameter black-box optimization benchmarking 2010: Experimental setup. Technical report, INRIA, 2010.
[8] N. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of algorithms from the black-box optimization benchmarking BBOB-2009. In J. Branke, editor, GECCO (Companion). ACM, 2010.
[9] N. Hansen, S. Finck, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. Technical report, INRIA, 2009. Updated February 2010.
[10] N. Hansen and S. Kern. Evaluating the CMA evolution strategy on multimodal test functions. In X. Yao et al., editors, Parallel Problem Solving from Nature PPSN VIII, LNCS, pages –. Springer.
[11] N. Hansen, S. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation.
[12] N. Hansen and A. Ostermeier. Completely derandomized

Figure 3: Empirical cumulative distributions (ECDF) of run lengths and speed-up ratios in -D (left) and -D (right), grouped into all functions, separable, moderate, ill-conditioned, multi-modal, and weak-structure functions. Left sub-columns: ECDF of the number of function evaluations divided by dimension D (FEvals/D) to reach a target value f_opt + Δf with Δf = 10^k, where k is given by the first value in the legend, for IPOP-aCMA (solid) and IPOP-CMA (dashed). Light beige lines show the ECDF of FEvals for the final target value of the algorithms benchmarked during BBOB-2009. Right sub-columns: ECDF of FEval ratios of IPOP-aCMA divided by IPOP-CMA, all trial pairs for each function. Pairs where both trials failed are disregarded; pairs where one trial failed are visible in the limits. The legends indicate the number of functions that were solved in at least one trial (IPOP-aCMA first).
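The ECDF curves described in the caption are built from the per-trial run lengths; a minimal sketch of the underlying computation:

```python
import numpy as np

def ecdf(values, xs):
    """Empirical cumulative distribution: fraction of values <= x for
    each evaluation point in xs. values are e.g. per-trial numbers of
    function evaluations needed to reach a target."""
    v = np.sort(np.asarray(values, dtype=float))
    # position of each x in the sorted sample gives the count of values <= x
    return np.searchsorted(v, xs, side="right") / len(v)
```

Plotting `ecdf(run_lengths, budget_grid)` against the budget grid yields one of the curves in the left sub-columns.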

Figure 4: Expected running time (ERT in log10 of number of function evaluations) of IPOP-aCMA versus IPOP-CMA for several target values Δf in each dimension, one panel per noiseless function: Sphere, Ellipsoid separable, Rastrigin separable, Skew Rastrigin-Bueche, Linear slope, Attractive sector, Step-ellipsoid, Rosenbrock original, Rosenbrock rotated, Ellipsoid, Discus, Bent cigar, Sharp ridge, Schaffer F7 (two condition numbers), Sum of different powers, Rastrigin, Weierstrass, Griewank-Rosenbrock, Schwefel x·sin(x), Gallagher peaks (two variants), Katsuuras, Lunacek bi-Rastrigin. Markers on the upper or right edge indicate that the target value was never reached by IPOP-aCMA or IPOP-CMA, respectively. Markers represent the dimension; lines indicate the maximum number of function evaluations.

Figure 5: ERT ratio of IPOP-aCMA divided by IPOP-CMA versus log10(Δf) for all functions in each tested dimension. Ratios < 1 indicate an advantage of IPOP-aCMA; smaller values are always better. The line becomes dashed when for either algorithm the ERT exceeds thrice the median of the trial-wise overall number of f-evaluations for the same algorithm on this function. Symbols indicate the best achieved Δf-value of one algorithm (the ERT becomes undefined to the right). The dashed line continues as the fraction of successful trials of the other algorithm, where means % and the y-axis limits mean %, with values below zero for IPOP-aCMA. The line ends when no algorithm reaches Δf anymore. The number of successful trials is given only if it was in {1, ..., 9} for IPOP-aCMA (first number) and non-zero for IPOP-CMA (second number). Results are significant with p = 0.05 for one star and p = 10^−k otherwise, where k is the number following the symbol, with Bonferroni correction within each figure.

Table 2: Expected running time (ERT in number of function evaluations) of IPOP-CMA and IPOP-aCMA divided by the best ERT measured during BBOB-2009 (given in the respective first row) for different Δf values for all functions, in two dimensions. The median number of conducted function evaluations is additionally given in italics if ERT( ) = ∞. #succ is the number of trials that reached the final target f_opt + . Bold entries are statistically significantly better compared to the other algorithm, with p = 0.05 or p = 10^−k where k > 1 is the number following the symbol, with Bonferroni correction.

[12, continued] self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
[13] C. Igel, T. Suttorp, and N. Hansen. A computationally efficient covariance matrix update and a (1+1)-CMA for evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pages –. ACM, 2006.
[14] G. Jastrebski and D. Arnold. Improving evolution strategies through active covariance matrix adaptation. In IEEE Congress on Evolutionary Computation (CEC 2006), Proceedings, pages –. IEEE, 2006.
[15] S. Kern, S. Müller, N. Hansen, D. Büche, J. Ocenasek, and P. Koumoutsakos. Learning probability distributions in continuous evolutionary algorithms — a comparative review. Natural Computing, 2004.
[16] K. Price. Differential evolution vs. the functions of the second ICEO. In Proceedings of the IEEE International Congress on Evolutionary Computation, pages –, 1997.
[17] F. Rothlauf, editor. Genetic and Evolutionary Computation Conference, GECCO 2009, Proceedings, Montreal, Québec, Canada, July 2009, Companion Material. ACM, 2009.


More information

Benchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed

Benchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed Benchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed Lukáš Bajer Institute of Computer Science Academy of Sciences of the Czech Republic and Faculty of Mathematics

More information

A (1+1)-CMA-ES for Constrained Optimisation

A (1+1)-CMA-ES for Constrained Optimisation A (1+1)-CMA-ES for Constrained Optimisation Dirk Arnold, Nikolaus Hansen To cite this version: Dirk Arnold, Nikolaus Hansen. A (1+1)-CMA-ES for Constrained Optimisation. Terence Soule and Jason H. Moore.

More information

Investigating the Impact of Adaptation Sampling in Natural Evolution Strategies on Black-box Optimization Testbeds

Investigating the Impact of Adaptation Sampling in Natural Evolution Strategies on Black-box Optimization Testbeds Investigating the Impact o Adaptation Sampling in Natural Evolution Strategies on Black-box Optimization Testbeds Tom Schaul Courant Institute o Mathematical Sciences, New York University Broadway, New

More information

arxiv: v1 [cs.ne] 9 May 2016

arxiv: v1 [cs.ne] 9 May 2016 Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES) arxiv:1605.02720v1 [cs.ne] 9 May 2016 ABSTRACT Ilya Loshchilov University of Freiburg Freiburg, Germany ilya.loshchilov@gmail.com

More information

Problem Statement Continuous Domain Search/Optimization. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms.

Problem Statement Continuous Domain Search/Optimization. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms. Tutorial Evolution Strategies and Related Estimation of Distribution Algorithms Anne Auger & Nikolaus Hansen INRIA Saclay - Ile-de-France, project team TAO Universite Paris-Sud, LRI, Bat. 49 945 ORSAY

More information

Covariance Matrix Adaptation in Multiobjective Optimization

Covariance Matrix Adaptation in Multiobjective Optimization Covariance Matrix Adaptation in Multiobjective Optimization Dimo Brockhoff INRIA Lille Nord Europe October 30, 2014 PGMO-COPI 2014, Ecole Polytechnique, France Mastertitelformat Scenario: Multiobjective

More information

Cumulative Step-size Adaptation on Linear Functions

Cumulative Step-size Adaptation on Linear Functions Cumulative Step-size Adaptation on Linear Functions Alexandre Chotard, Anne Auger, Nikolaus Hansen To cite this version: Alexandre Chotard, Anne Auger, Nikolaus Hansen. Cumulative Step-size Adaptation

More information

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Anne Auger & Nikolaus Hansen INRIA Research Centre Saclay Île-de-France Project team TAO University Paris-Sud, LRI (UMR 8623), Bat.

More information

BI-population CMA-ES Algorithms with Surrogate Models and Line Searches

BI-population CMA-ES Algorithms with Surrogate Models and Line Searches BI-population CMA-ES Algorithms with Surrogate Models and Line Searches Ilya Loshchilov 1, Marc Schoenauer 2 and Michèle Sebag 2 1 LIS, École Polytechnique Fédérale de Lausanne 2 TAO, INRIA CNRS Université

More information

Introduction to Black-Box Optimization in Continuous Search Spaces. Definitions, Examples, Difficulties

Introduction to Black-Box Optimization in Continuous Search Spaces. Definitions, Examples, Difficulties 1 Introduction to Black-Box Optimization in Continuous Search Spaces Definitions, Examples, Difficulties Tutorial: Evolution Strategies and CMA-ES (Covariance Matrix Adaptation) Anne Auger & Nikolaus Hansen

More information

Investigating the Local-Meta-Model CMA-ES for Large Population Sizes

Investigating the Local-Meta-Model CMA-ES for Large Population Sizes Investigating the Local-Meta-Model CMA-ES for Large Population Sizes Zyed Bouzarkouna, Anne Auger, Didier Yu Ding To cite this version: Zyed Bouzarkouna, Anne Auger, Didier Yu Ding. Investigating the Local-Meta-Model

More information

Real-Parameter Black-Box Optimization Benchmarking 2010: Presentation of the Noiseless Functions

Real-Parameter Black-Box Optimization Benchmarking 2010: Presentation of the Noiseless Functions Real-Parameter Black-Box Optimization Benchmarking 2010: Presentation of the Noiseless Functions Steffen Finck, Nikolaus Hansen, Raymond Ros and Anne Auger Report 2009/20, Research Center PPE, re-compiled

More information

Benchmarking Gaussian Processes and Random Forests on the BBOB Noiseless Testbed

Benchmarking Gaussian Processes and Random Forests on the BBOB Noiseless Testbed Benchmarking Gaussian Processes and Random Forests on the BBOB Noiseless Testbed Lukáš Bajer,, Zbyněk Pitra,, Martin Holeňa Faculty of Mathematics and Physics, Charles University, Institute of Computer

More information

A CMA-ES for Mixed-Integer Nonlinear Optimization

A CMA-ES for Mixed-Integer Nonlinear Optimization A CMA-ES for Mixed-Integer Nonlinear Optimization Nikolaus Hansen To cite this version: Nikolaus Hansen. A CMA-ES for Mixed-Integer Nonlinear Optimization. [Research Report] RR-, INRIA.. HAL Id: inria-

More information

Experimental Comparisons of Derivative Free Optimization Algorithms

Experimental Comparisons of Derivative Free Optimization Algorithms Experimental Comparisons of Derivative Free Optimization Algorithms Anne Auger Nikolaus Hansen J. M. Perez Zerpa Raymond Ros Marc Schoenauer TAO Project-Team, INRIA Saclay Île-de-France, and Microsoft-INRIA

More information

Evolution Strategies and Covariance Matrix Adaptation

Evolution Strategies and Covariance Matrix Adaptation Evolution Strategies and Covariance Matrix Adaptation Cours Contrôle Avancé - Ecole Centrale Paris Anne Auger January 2014 INRIA Research Centre Saclay Île-de-France University Paris-Sud, LRI (UMR 8623),

More information

Empirical comparisons of several derivative free optimization algorithms

Empirical comparisons of several derivative free optimization algorithms Empirical comparisons of several derivative free optimization algorithms A. Auger,, N. Hansen,, J. M. Perez Zerpa, R. Ros, M. Schoenauer, TAO Project-Team, INRIA Saclay Ile-de-France LRI, Bat 90 Univ.

More information

Optimisation numérique par algorithmes stochastiques adaptatifs

Optimisation numérique par algorithmes stochastiques adaptatifs Optimisation numérique par algorithmes stochastiques adaptatifs Anne Auger M2R: Apprentissage et Optimisation avancés et Applications anne.auger@inria.fr INRIA Saclay - Ile-de-France, project team TAO

More information

Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability Study of Sep-G-CMA-ES

Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability Study of Sep-G-CMA-ES Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Tuning Parameters across Mixed Dimensional Instances: A Performance Scalability

More information

arxiv: v1 [cs.ne] 29 Jul 2014

arxiv: v1 [cs.ne] 29 Jul 2014 A CUDA-Based Real Parameter Optimization Benchmark Ke Ding and Ying Tan School of Electronics Engineering and Computer Science, Peking University arxiv:1407.7737v1 [cs.ne] 29 Jul 2014 Abstract. Benchmarking

More information

Local-Meta-Model CMA-ES for Partially Separable Functions

Local-Meta-Model CMA-ES for Partially Separable Functions Local-Meta-Model CMA-ES for Partially Separable Functions Zyed Bouzarkouna, Anne Auger, Didier Yu Ding To cite this version: Zyed Bouzarkouna, Anne Auger, Didier Yu Ding. Local-Meta-Model CMA-ES for Partially

More information

Advanced Optimization

Advanced Optimization Advanced Optimization Lecture 3: 1: Randomized Algorithms for for Continuous Discrete Problems Problems November 22, 2016 Master AIC Université Paris-Saclay, Orsay, France Anne Auger INRIA Saclay Ile-de-France

More information

Bio-inspired Continuous Optimization: The Coming of Age

Bio-inspired Continuous Optimization: The Coming of Age Bio-inspired Continuous Optimization: The Coming of Age Anne Auger Nikolaus Hansen Nikolas Mauny Raymond Ros Marc Schoenauer TAO Team, INRIA Futurs, FRANCE http://tao.lri.fr First.Last@inria.fr CEC 27,

More information

Benchmarking Projection-Based Real Coded Genetic Algorithm on BBOB-2013 Noiseless Function Testbed

Benchmarking Projection-Based Real Coded Genetic Algorithm on BBOB-2013 Noiseless Function Testbed Benchmarking Projection-Based Real Coded Genetic Algorithm on BBOB-2013 Noiseless Function Testbed Babatunde Sawyerr 1 Aderemi Adewumi 2 Montaz Ali 3 1 University of Lagos, Lagos, Nigeria 2 University

More information

Stochastic Methods for Continuous Optimization

Stochastic Methods for Continuous Optimization Stochastic Methods for Continuous Optimization Anne Auger et Dimo Brockhoff Paris-Saclay Master - Master 2 Informatique - Parcours Apprentissage, Information et Contenu (AIC) 2016 Slides taken from Auger,

More information

Introduction to Randomized Black-Box Numerical Optimization and CMA-ES

Introduction to Randomized Black-Box Numerical Optimization and CMA-ES Introduction to Randomized Black-Box Numerical Optimization and CMA-ES July 3, 2017 CEA/EDF/Inria summer school "Numerical Analysis" Université Pierre-et-Marie-Curie, Paris, France Anne Auger, Asma Atamna,

More information

Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions

Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions Nikolaus Hansen, Steffen Finck, Raymond Ros, Anne Auger To cite this version: Nikolaus Hansen, Steffen Finck, Raymond

More information

Addressing Numerical Black-Box Optimization: CMA-ES (Tutorial)

Addressing Numerical Black-Box Optimization: CMA-ES (Tutorial) Addressing Numerical Black-Box Optimization: CMA-ES (Tutorial) Anne Auger & Nikolaus Hansen INRIA Research Centre Saclay Île-de-France Project team TAO University Paris-Sud, LRI (UMR 8623), Bat. 490 91405

More information

Stochastic optimization and a variable metric approach

Stochastic optimization and a variable metric approach The challenges for stochastic optimization and a variable metric approach Microsoft Research INRIA Joint Centre, INRIA Saclay April 6, 2009 Content 1 Introduction 2 The Challenges 3 Stochastic Search 4

More information

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation

Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Tutorial CMA-ES Evolution Strategies and Covariance Matrix Adaptation Anne Auger & Nikolaus Hansen INRIA Research Centre Saclay Île-de-France Project team TAO University Paris-Sud, LRI (UMR 8623), Bat.

More information

A new simple recursive algorithm for finding prime numbers using Rosser s theorem

A new simple recursive algorithm for finding prime numbers using Rosser s theorem A new simple recursive algorithm for finding prime numbers using Rosser s theorem Rédoane Daoudi To cite this version: Rédoane Daoudi. A new simple recursive algorithm for finding prime numbers using Rosser

More information

Comparison-Based Optimizers Need Comparison-Based Surrogates

Comparison-Based Optimizers Need Comparison-Based Surrogates Comparison-Based Optimizers Need Comparison-Based Surrogates Ilya Loshchilov 1,2, Marc Schoenauer 1,2, and Michèle Sebag 2,1 1 TAO Project-team, INRIA Saclay - Île-de-France 2 Laboratoire de Recherche

More information

Fast Computation of Moore-Penrose Inverse Matrices

Fast Computation of Moore-Penrose Inverse Matrices Fast Computation of Moore-Penrose Inverse Matrices Pierre Courrieu To cite this version: Pierre Courrieu. Fast Computation of Moore-Penrose Inverse Matrices. Neural Information Processing - Letters and

More information

Easter bracelets for years

Easter bracelets for years Easter bracelets for 5700000 years Denis Roegel To cite this version: Denis Roegel. Easter bracelets for 5700000 years. [Research Report] 2014. HAL Id: hal-01009457 https://hal.inria.fr/hal-01009457

More information

On Newton-Raphson iteration for multiplicative inverses modulo prime powers

On Newton-Raphson iteration for multiplicative inverses modulo prime powers On Newton-Raphson iteration for multiplicative inverses modulo prime powers Jean-Guillaume Dumas To cite this version: Jean-Guillaume Dumas. On Newton-Raphson iteration for multiplicative inverses modulo

More information

A CMA-ES with Multiplicative Covariance Matrix Updates

A CMA-ES with Multiplicative Covariance Matrix Updates A with Multiplicative Covariance Matrix Updates Oswin Krause Department of Computer Science University of Copenhagen Copenhagen,Denmark oswin.krause@di.ku.dk Tobias Glasmachers Institut für Neuroinformatik

More information

Methylation-associated PHOX2B gene silencing is a rare event in human neuroblastoma.

Methylation-associated PHOX2B gene silencing is a rare event in human neuroblastoma. Methylation-associated PHOX2B gene silencing is a rare event in human neuroblastoma. Loïc De Pontual, Delphine Trochet, Franck Bourdeaut, Sophie Thomas, Heather Etchevers, Agnes Chompret, Véronique Minard,

More information

Efficient Covariance Matrix Update for Variable Metric Evolution Strategies

Efficient Covariance Matrix Update for Variable Metric Evolution Strategies Efficient Covariance Matrix Update for Variable Metric Evolution Strategies Thorsten Suttorp, Nikolaus Hansen, Christian Igel To cite this version: Thorsten Suttorp, Nikolaus Hansen, Christian Igel. Efficient

More information

Trench IGBT failure mechanisms evolution with temperature and gate resistance under various short-circuit conditions

Trench IGBT failure mechanisms evolution with temperature and gate resistance under various short-circuit conditions Trench IGBT failure mechanisms evolution with temperature and gate resistance under various short-circuit conditions Adel Benmansour, Stephane Azzopardi, Jean-Christophe Martin, Eric Woirgard To cite this

More information

The CMA Evolution Strategy: A Tutorial

The CMA Evolution Strategy: A Tutorial The CMA Evolution Strategy: A Tutorial Nikolaus Hansen November 6, 205 Contents Nomenclature 3 0 Preliminaries 4 0. Eigendecomposition of a Positive Definite Matrix... 5 0.2 The Multivariate Normal Distribution...

More information

Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions

Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions Youhei Akimoto, Anne Auger, Nikolaus Hansen To cite this version: Youhei Akimoto, Anne Auger,

More information

Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating Support Vector Machines and Covariance-Matrix Adaptation-ES

Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating Support Vector Machines and Covariance-Matrix Adaptation-ES Covariance Matrix Adaptation-Evolution Strategy Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating and Covariance-Matrix Adaptation-ES Ilya Loshchilov, Marc Schoenauer,

More information

Passerelle entre les arts : la sculpture sonore

Passerelle entre les arts : la sculpture sonore Passerelle entre les arts : la sculpture sonore Anaïs Rolez To cite this version: Anaïs Rolez. Passerelle entre les arts : la sculpture sonore. Article destiné à l origine à la Revue de l Institut National

More information

Adaptive Generation-Based Evolution Control for Gaussian Process Surrogate Models

Adaptive Generation-Based Evolution Control for Gaussian Process Surrogate Models J. Hlaváčová (Ed.): ITAT 07 Proceedings, pp. 36 43 CEUR Workshop Proceedings Vol. 885, ISSN 63-0073, c 07 J. Repický, L. Bajer, Z. Pitra, M. Holeňa Adaptive Generation-Based Evolution Control for Gaussian

More information

Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES

Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Andrea Maesani and Dario Floreano Laboratory of Intelligent Systems, Institute of Microengineering, Ecole Polytechnique Fédérale de

More information

Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122,

Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122, Case report on the article Water nanoelectrolysis: A simple model, Journal of Applied Physics (2017) 122, 244902 Juan Olives, Zoubida Hammadi, Roger Morin, Laurent Lapena To cite this version: Juan Olives,

More information

Smart Bolometer: Toward Monolithic Bolometer with Smart Functions

Smart Bolometer: Toward Monolithic Bolometer with Smart Functions Smart Bolometer: Toward Monolithic Bolometer with Smart Functions Matthieu Denoual, Gilles Allègre, Patrick Attia, Olivier De Sagazan To cite this version: Matthieu Denoual, Gilles Allègre, Patrick Attia,

More information

CMA-ES a Stochastic Second-Order Method for Function-Value Free Numerical Optimization

CMA-ES a Stochastic Second-Order Method for Function-Value Free Numerical Optimization CMA-ES a Stochastic Second-Order Method for Function-Value Free Numerical Optimization Nikolaus Hansen INRIA, Research Centre Saclay Machine Learning and Optimization Team, TAO Univ. Paris-Sud, LRI MSRC

More information

L institution sportive : rêve et illusion

L institution sportive : rêve et illusion L institution sportive : rêve et illusion Hafsi Bedhioufi, Sida Ayachi, Imen Ben Amar To cite this version: Hafsi Bedhioufi, Sida Ayachi, Imen Ben Amar. L institution sportive : rêve et illusion. Revue

More information

A non-commutative algorithm for multiplying (7 7) matrices using 250 multiplications

A non-commutative algorithm for multiplying (7 7) matrices using 250 multiplications A non-commutative algorithm for multiplying (7 7) matrices using 250 multiplications Alexandre Sedoglavic To cite this version: Alexandre Sedoglavic. A non-commutative algorithm for multiplying (7 7) matrices

More information

Nodal and divergence-conforming boundary-element methods applied to electromagnetic scattering problems

Nodal and divergence-conforming boundary-element methods applied to electromagnetic scattering problems Nodal and divergence-conforming boundary-element methods applied to electromagnetic scattering problems M. Afonso, Joao Vasconcelos, Renato Mesquita, Christian Vollaire, Laurent Nicolas To cite this version:

More information

Factorisation of RSA-704 with CADO-NFS

Factorisation of RSA-704 with CADO-NFS Factorisation of RSA-704 with CADO-NFS Shi Bai, Emmanuel Thomé, Paul Zimmermann To cite this version: Shi Bai, Emmanuel Thomé, Paul Zimmermann. Factorisation of RSA-704 with CADO-NFS. 2012. HAL Id: hal-00760322

More information

Classification of high dimensional data: High Dimensional Discriminant Analysis

Classification of high dimensional data: High Dimensional Discriminant Analysis Classification of high dimensional data: High Dimensional Discriminant Analysis Charles Bouveyron, Stephane Girard, Cordelia Schmid To cite this version: Charles Bouveyron, Stephane Girard, Cordelia Schmid.

More information

Multiple sensor fault detection in heat exchanger system

Multiple sensor fault detection in heat exchanger system Multiple sensor fault detection in heat exchanger system Abdel Aïtouche, Didier Maquin, Frédéric Busson To cite this version: Abdel Aïtouche, Didier Maquin, Frédéric Busson. Multiple sensor fault detection

More information

Gaia astrometric accuracy in the past

Gaia astrometric accuracy in the past Gaia astrometric accuracy in the past François Mignard To cite this version: François Mignard. Gaia astrometric accuracy in the past. IMCCE. International Workshop NAROO-GAIA A new reduction of old observations

More information

SOLAR RADIATION ESTIMATION AND PREDICTION USING MEASURED AND PREDICTED AEROSOL OPTICAL DEPTH

SOLAR RADIATION ESTIMATION AND PREDICTION USING MEASURED AND PREDICTED AEROSOL OPTICAL DEPTH SOLAR RADIATION ESTIMATION AND PREDICTION USING MEASURED AND PREDICTED AEROSOL OPTICAL DEPTH Carlos M. Fernández-Peruchena, Martín Gastón, Maria V Guisado, Ana Bernardos, Íñigo Pagola, Lourdes Ramírez

More information

Exact Comparison of Quadratic Irrationals

Exact Comparison of Quadratic Irrationals Exact Comparison of Quadratic Irrationals Phuc Ngo To cite this version: Phuc Ngo. Exact Comparison of Quadratic Irrationals. [Research Report] LIGM. 20. HAL Id: hal-0069762 https://hal.archives-ouvertes.fr/hal-0069762

More information

Hardware Operator for Simultaneous Sine and Cosine Evaluation

Hardware Operator for Simultaneous Sine and Cosine Evaluation Hardware Operator for Simultaneous Sine and Cosine Evaluation Arnaud Tisserand To cite this version: Arnaud Tisserand. Hardware Operator for Simultaneous Sine and Cosine Evaluation. ICASSP 6: International

More information

On Symmetric Norm Inequalities And Hermitian Block-Matrices

On Symmetric Norm Inequalities And Hermitian Block-Matrices On Symmetric Norm Inequalities And Hermitian lock-matrices Antoine Mhanna To cite this version: Antoine Mhanna On Symmetric Norm Inequalities And Hermitian lock-matrices 015 HAL Id: hal-0131860

More information

The sound power output of a monopole source in a cylindrical pipe containing area discontinuities

The sound power output of a monopole source in a cylindrical pipe containing area discontinuities The sound power output of a monopole source in a cylindrical pipe containing area discontinuities Wenbo Duan, Ray Kirby To cite this version: Wenbo Duan, Ray Kirby. The sound power output of a monopole

More information

Solution to Sylvester equation associated to linear descriptor systems

Solution to Sylvester equation associated to linear descriptor systems Solution to Sylvester equation associated to linear descriptor systems Mohamed Darouach To cite this version: Mohamed Darouach. Solution to Sylvester equation associated to linear descriptor systems. Systems

More information

Towards an active anechoic room

Towards an active anechoic room Towards an active anechoic room Dominique Habault, Philippe Herzog, Emmanuel Friot, Cédric Pinhède To cite this version: Dominique Habault, Philippe Herzog, Emmanuel Friot, Cédric Pinhède. Towards an active

More information

On size, radius and minimum degree

On size, radius and minimum degree On size, radius and minimum degree Simon Mukwembi To cite this version: Simon Mukwembi. On size, radius and minimum degree. Discrete Mathematics and Theoretical Computer Science, DMTCS, 2014, Vol. 16 no.

More information

On the longest path in a recursively partitionable graph

On the longest path in a recursively partitionable graph On the longest path in a recursively partitionable graph Julien Bensmail To cite this version: Julien Bensmail. On the longest path in a recursively partitionable graph. 2012. HAL Id:

More information

Completeness of the Tree System for Propositional Classical Logic

Completeness of the Tree System for Propositional Classical Logic Completeness of the Tree System for Propositional Classical Logic Shahid Rahman To cite this version: Shahid Rahman. Completeness of the Tree System for Propositional Classical Logic. Licence. France.

More information

MODal ENergy Analysis

MODal ENergy Analysis MODal ENergy Analysis Nicolas Totaro, Jean-Louis Guyader To cite this version: Nicolas Totaro, Jean-Louis Guyader. MODal ENergy Analysis. RASD, Jul 2013, Pise, Italy. 2013. HAL Id: hal-00841467

More information

Full-order observers for linear systems with unknown inputs

Full-order observers for linear systems with unknown inputs Full-order observers for linear systems with unknown inputs Mohamed Darouach, Michel Zasadzinski, Shi Jie Xu To cite this version: Mohamed Darouach, Michel Zasadzinski, Shi Jie Xu. Full-order observers

More information

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress

Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Gaussian EDA and Truncation Selection: Setting Limits for Sustainable Progress Petr Pošík Czech Technical University, Faculty of Electrical Engineering, Department of Cybernetics Technická, 66 7 Prague

More information

Quantum efficiency and metastable lifetime measurements in ruby ( Cr 3+ : Al2O3) via lock-in rate-window photothermal radiometry

Quantum efficiency and metastable lifetime measurements in ruby ( Cr 3+ : Al2O3) via lock-in rate-window photothermal radiometry Quantum efficiency and metastable lifetime measurements in ruby ( Cr 3+ : Al2O3) via lock-in rate-window photothermal radiometry A. Mandelis, Z. Chen, R. Bleiss To cite this version: A. Mandelis, Z. Chen,

More information

The influence of the global atmospheric properties on the detection of UHECR by EUSO on board of the ISS

The influence of the global atmospheric properties on the detection of UHECR by EUSO on board of the ISS The influence of the global atmospheric properties on the detection of UHECR by EUSO on board of the ISS C. Berat, D. Lebrun, A. Stutz, E. Plagnol To cite this version: C. Berat, D. Lebrun, A. Stutz, E.

More information

From Unstructured 3D Point Clouds to Structured Knowledge - A Semantics Approach

From Unstructured 3D Point Clouds to Structured Knowledge - A Semantics Approach From Unstructured 3D Point Clouds to Structured Knowledge - A Semantics Approach Christophe Cruz, Helmi Ben Hmida, Frank Boochs, Christophe Nicolle To cite this version: Christophe Cruz, Helmi Ben Hmida,

More information

AC Transport Losses Calculation in a Bi-2223 Current Lead Using Thermal Coupling With an Analytical Formula

AC Transport Losses Calculation in a Bi-2223 Current Lead Using Thermal Coupling With an Analytical Formula AC Transport Losses Calculation in a Bi-2223 Current Lead Using Thermal Coupling With an Analytical Formula Kévin Berger, Jean Lévêque, Denis Netter, Bruno Douine, Abderrezak Rezzoug To cite this version:

More information

Climbing discrepancy search for flowshop and jobshop scheduling with time-lags

Climbing discrepancy search for flowshop and jobshop scheduling with time-lags Climbing discrepancy search for flowshop and jobshop scheduling with time-lags Wafa Karoui, Marie-José Huguet, Pierre Lopez, Mohamed Haouari To cite this version: Wafa Karoui, Marie-José Huguet, Pierre

More information

Evolution of the cooperation and consequences of a decrease in plant diversity on the root symbiont diversity

Evolution of the cooperation and consequences of a decrease in plant diversity on the root symbiont diversity Evolution of the cooperation and consequences of a decrease in plant diversity on the root symbiont diversity Marie Duhamel To cite this version: Marie Duhamel. Evolution of the cooperation and consequences

More information

Particle-in-cell simulations of high energy electron production by intense laser pulses in underdense plasmas

Particle-in-cell simulations of high energy electron production by intense laser pulses in underdense plasmas Particle-in-cell simulations of high energy electron production by intense laser pulses in underdense plasmas Susumu Kato, Eisuke Miura, Mitsumori Tanimoto, Masahiro Adachi, Kazuyoshi Koyama To cite this

More information

Thomas Lugand. To cite this version: HAL Id: tel
