Benchmarking a Hybrid Multi Level Single Linkage Algorithm on the BBOB Noiseless Testbed


László Pál
Sapientia - Hungarian University of Transylvania, Miercurea-Ciuc, Piata Libertatii, Romania
pallaszlo@sapientia.siculorum.ro

ABSTRACT

Multi Level Single Linkage (MLSL) is a well-known stochastic global optimization method. In this paper, a new hybrid variant (HMLSL) of the MLSL algorithm is presented. The most important improvements are related to the sampling phase: the sample is generated from a Sobol quasi-random sequence, and a few percent of the population is further improved by crossover and mutation operators like those of a traditional differential evolution (DE) method. The aim of this study is to evaluate the performance of the new HMLSL algorithm on the testbed of noiseless functions. The new algorithm is also compared against a simple MLSL and a traditional DE in order to identify the benefits of the applied improvements. The results confirm that HMLSL outperforms both MLSL and DE. The new method has a larger probability of success and is usually faster than the other two algorithms, especially in the final stage of the optimization.

Categories and Subject Descriptors

G.1.6 [Numerical Analysis]: Optimization - global optimization, unconstrained optimization; F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems

General Terms

Algorithms

Keywords

Benchmarking, Black-box optimization, Multi level methods, Differential evolution

1. INTRODUCTION

The Multi Level Single Linkage (MLSL) [12] method has been derived from clustering methods [1], which enable the exploration of the whole feasible region through random sampling followed by local search methods. It is considered one of the best known and most efficient stochastic algorithms for global optimization problems of moderate dimension. A similar clustering-type algorithm [2] achieved good results on the BBOB-2009 functions with a moderate number of local minima, using a small budget of function evaluations.

In this paper, we introduce a new hybrid variant of the MLSL method, denoted by HMLSL. The most important improvements are related to the sampling phase: the sample is generated from a Sobol quasi-random sequence [7], and a few percent of the sample is further improved using crossover and mutation operators like those of a traditional differential evolution (DE) [13] method. The purpose of this paper is to evaluate the performance of the HMLSL algorithm using the COCO framework [4] and to assess the benefits of the introduced improvements. We also compare the HMLSL method against a simple MLSL and a traditional DE method.

The rest of this article is organized as follows. Section 2 reviews the MLSL algorithm and presents the new hybrid version of the MLSL and DE methods. In Section 3, we describe the experiment design together with the algorithms' parameter settings. The results are presented in Section 4 and discussed in Section 5. Section 6 concludes the paper and points out some directions for future work.

2. ALGORITHM PRESENTATION

Similarly to the clustering methods, MLSL has two phases: a global and a local one.
The global phase consists of sampling, while the local phase is based on local searches. The local minimizer points are found by means of a local search procedure (LS), started from appropriately chosen points of the sample drawn uniformly within the feasible set. The local search procedure is applied to every point of the reduced sample, except if there is another sample point within some critical distance r_k which has a lower function value (see Algorithm 1). The reduced sample consists of the γkN best points (0 < γ ≤ 1) from the cumulated sample x_1, ..., x_{kN}. The critical distance is chosen to depend on kN only, so as to minimize the probabilities of two possible failures of the method: the probability that a local search is started although the resulting minimum is already known, and the probability that no local search is started in a level set which contains reduced sample points. The critical distance is given by the following formula:

$$r_k = \pi^{-1/2} \left( \Gamma\!\left(1 + \frac{n}{2}\right) m(X)\, \frac{\zeta \ln(kN)}{kN} \right)^{1/n},$$

where Γ is the gamma function, n is the number of variables of the problem, m(X) is the Lebesgue measure of the domain X, kN is the total number of sampled points, k is the iteration counter, and ζ is some positive constant.
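To make these definitions concrete, here is a small sketch in Python (illustrative only; the names are ours, and the default value of ζ is a placeholder, since the constant used in the calculations is not restated here) of the critical distance and of the start condition used in Algorithm 1 below:

```python
import math

def critical_distance(k, N, n, m_X, zeta=4.0):
    """r_k = pi^(-1/2) * (Gamma(1 + n/2) * m(X) * zeta*ln(kN) / (kN))^(1/n).

    k: iteration counter; N: points sampled per iteration;
    n: number of variables; m_X: Lebesgue measure of the domain X;
    zeta: positive constant (the default here is only illustrative).
    """
    kN = k * N
    return math.pi ** -0.5 * (
        math.gamma(1.0 + n / 2.0) * m_X * zeta * math.log(kN) / kN
    ) ** (1.0 / n)

def start_local_search(i, sample, fvals, r_k):
    """MLSL start condition: begin a local search from sample[i] only if
    no other sample point with a lower f-value lies within distance r_k."""
    return not any(
        fvals[j] < fvals[i] and math.dist(sample[j], sample[i]) < r_k
        for j in range(len(sample))
    )
```

Because r_k shrinks as kN grows, at a rate governed by ζ, fewer and fewer redundant local searches are started as the sample accumulates.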

Algorithm 1: The MLSL algorithm

 1: X* ← ∅; k ← 0
 2: repeat
 3:   k ← k + 1
 4:   Generate N points x_{(k-1)N+1}, ..., x_{kN} with uniform distribution on X.
 5:   Determine the reduced sample X_r, consisting of the γkN best points from the cumulated sample x_1, ..., x_{kN}.
 6:   for i ← 1 to length(X_r) do
 7:     if NOT (there is a j such that f(x_j) < f(x_i) and ||x_j - x_i|| < r_k) then
 8:       Start a local search method (LS) from x_i:
 9:       x* ← LS(x_i)
10:       X* ← X* ∪ {x*}
11: until some global stopping rule is satisfied
12: return the smallest local minimum value found

The algorithm keeps repeating the global and local phases until some stopping rule is satisfied. It has been proved that the algorithm has good asymptotic properties (depending on the value of ζ): asymptotic probabilistic correctness and a probabilistic guarantee of finding all local minimizers. In our calculations, ζ was fixed to a constant value.

Based on the presented MLSL method, we introduced some improvements which are mainly related to the global step of the algorithm. Low-discrepancy sequences are used instead of purely random samples: we take sample points from Sobol quasi-random sequences [7], which fill the space more uniformly. Sobol low-discrepancy sequences are superior to pseudorandom sampling, especially for low- and moderate-dimensional problems [8]. Furthermore, a few percent of the sample points are improved by crossover and mutation operators similar to those used in the DE method; in other words, a few DE iterations are applied to the best points of the current sample. This step is executed in each iteration before the local phase of the optimization. The aim of these improvements is to help the method overcome the difficulties arising in problems with a large number of local optima, or in cases when the local search method cannot make further progress.

3. EXPERIMENTAL PROCEDURE

The main purpose of the experiment is to assess the benefits of the improvements applied to the MLSL method. Thus we compare the three algorithms on the testbed of noiseless functions. Each algorithm was run on all instances of the functions in dimensions 2, 3, 5, 10, 20, and 40, with an evaluation budget proportional to the dimension D for each run. The applied budget is enough to capture all relevant features of the three algorithms.

MLSL has four parameters to set: the number of sample points per iteration, the size of the reduced sample, the maximum number of function evaluations for a local search, and the local search procedure itself. The per-iteration sample size was set to a few tens of D points and the reduced sample to a few times D points; the latter setting is also motivated by the matching population size of the DE method (see below). On the whole testbed we use MATLAB's fmincon local search method in all dimensions. fmincon is an interior-point algorithm for constrained nonlinear problems which approximates the gradient by finite differences; based on a recent study [9], it performed well on most of the test functions. The maximum number of function evaluations per local search was set to a fixed percentage of the total budget, and the termination tolerance to a small constant.

The HMLSL method uses the same parameter settings as the MLSL algorithm. Additionally, we apply a number of DE iterations proportional to D to the reduced sample; this number was selected after a small systematic study and provides a good balance between the two methods. The DE population size was set to the same multiple of D, with the crossover and mutation rates set to a common constant; a similar population size was also applied in [10]. The crossover strategy is the exponential one, and the mutation operator combines the best member with two other randomly chosen individuals. A sketch of this hybrid global phase is given below.
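As an illustration of the hybrid global phase, the following sketch (not the author's code: it assumes SciPy's scipy.stats.qmc module, and the F and CR defaults are placeholders standing in for the settings described above) draws a scrambled Sobol sample and then applies a few DE/best/1 steps with exponential crossover and greedy selection:

```python
import numpy as np
from scipy.stats import qmc

def sobol_sample(n_points, lower, upper, seed=0):
    """Global-phase sample from a scrambled Sobol sequence,
    scaled to the box [lower, upper]."""
    sampler = qmc.Sobol(d=len(lower), scramble=True, seed=seed)
    return qmc.scale(sampler.random(n_points), lower, upper)

def de_refine(pop, fvals, f, n_iters, F=0.5, CR=0.5, seed=0):
    """A few DE iterations on the best points of the current sample:
    DE/best/1 mutation, exponential crossover, greedy replacement."""
    rng = np.random.default_rng(seed)
    NP, n = pop.shape
    for _ in range(n_iters):
        best = pop[np.argmin(fvals)]
        for i in range(NP):
            others = [j for j in range(NP) if j != i]
            r1, r2 = rng.choice(others, size=2, replace=False)
            # mutation: combine the best member with two random individuals
            mutant = best + F * (pop[r1] - pop[r2])
            # exponential crossover: copy a contiguous run of coordinates
            trial = pop[i].copy()
            j, copied = rng.integers(n), 0
            while True:
                trial[j] = mutant[j]
                j, copied = (j + 1) % n, copied + 1
                if copied >= n or rng.random() >= CR:
                    break
            f_trial = f(trial)
            if f_trial < fvals[i]:  # keep the trial only if it improves
                pop[i], fvals[i] = trial, f_trial
    return pop, fvals
```

In HMLSL these DE steps run on the reduced sample in every iteration, just before the local phase, so the local searches are started from already sharpened points.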
4. RESULTS

Results from experiments according to [4] on the benchmark functions given in [3, 6] are presented in Figures 1, 2 and 3 and in Tables 1 and 2. The expected running time (ERT), used in the figures and tables, depends on a given target function value, f_t = f_opt + Δf. It is computed over all relevant trials as the number of function evaluations executed during each trial while the best function value did not reach f_t, summed over all trials and divided by the number of trials that actually reached f_t [4, 5]; a restatement of this computation is sketched at the end of this section. Statistical significance is tested with the rank-sum test for a given target Δf_t (10^{-8} as in Figure 1) using, for each trial, either the number of function evaluations needed to reach Δf_t (inverted and multiplied by -1) or, if the target was not reached, the best Δf-value achieved, measured only up to the smallest number of overall function evaluations for any unsuccessful trial under consideration.

4.1 CPU Timing Experiments

The three algorithms were run on the test function f8 and restarted until at least 30 seconds had passed. These experiments were carried out on a machine with an Intel Dual-Core processor running 64-bit Windows 7 and 64-bit MATLAB. The average time per function evaluation in dimensions 2, 3, 5, 10, 20 and 40 was on the order of 10^{-4} seconds for each of the three algorithms.
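The ERT definition above reduces to a short computation. The helper below is an illustrative sketch (not part of the BBOB/COCO code): for each trial it takes the evaluations spent until the target was reached or, for unsuccessful trials, until termination.

```python
def expected_running_time(trials):
    """trials: (n_evals, reached) pairs per trial, where n_evals counts the
    function evaluations executed while the best f-value had not yet
    reached f_t.  Returns ERT, or infinity if no trial reached the target."""
    total = sum(n for n, _ in trials)
    successes = sum(1 for _, reached in trials if reached)
    return total / successes if successes else float("inf")

# Two successful trials and one unsuccessful one:
# expected_running_time([(1200, True), (5000, False), (900, True)])  # 3550.0
```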

5. DISCUSSION

As a result of the hybridization, the HMLSL method is usually better than the MLSL algorithm in terms of the ERT needed to reach Δf = 10^{-8}. Moreover, HMLSL is significantly faster than MLSL on ten of the functions, and compared to the DE method it is significantly faster on five functions (see Figure 1). Considering the proportion of solved instances, we can state that the new HMLSL method inherits the speed of the simple MLSL algorithm in the initial phase of the optimization, while the use of the DE method inside MLSL provides better performance in the final stage.

In 5-D (see Figure 2), the general picture is that the HMLSL method is as fast as the MLSL algorithm in the initial stage of the optimization, while in the middle and final phases it is usually faster than both MLSL and DE. These properties can be followed nicely in the panel with all functions aggregated and in the multi-modal functions subgroup. On the weakly structured multi-modal functions, DE is slightly faster than HMLSL in the early and middle stages of the optimization; after about 700D evaluations, however, HMLSL takes the lead and, up to the final budget, solves around 78% of the problems. As a result of the hybridization, the HMLSL method is significantly better than MLSL on the separable, moderate and multi-modal function subgroups. This increase comes from solving six functions on which MLSL was able to reach only the loose target levels.

In 20-D, similar behavior can be observed as in 5-D (see Figure 3). Considering all functions aggregated, for the larger budgets HMLSL is the best algorithm, solving almost 70% of the problems, followed by MLSL and DE. Significant improvements can be observed on the moderate functions subgroup, where HMLSL solved 100% of the problems, followed by MLSL and DE (roughly 70% each); this is due to the instances of the f7 function solved by HMLSL. The lowest percentage of problems solved by HMLSL is observed on the multi-modal functions subgroup, which reflects the difficulties of both the MLSL and DE components on these functions. On the ill-conditioned and weakly structured functions, the DE method is slightly faster in the middle stage of the optimization, while on the moderate and ill-conditioned subgroups the HMLSL method (together with MLSL) is even faster than the best algorithm from BBOB-2009 in the initial phase of the optimization.

6. CONCLUSIONS

We benchmarked the HMLSL algorithm, a hybrid of the classic MLSL and DE methods. The new hybrid algorithm differs from MLSL in that it applies a few DE iterations in the global phase. The new algorithm was extensively compared with the MLSL and DE methods on the testbed of noiseless functions in order to reveal the benefits of the new improvements. The results show that HMLSL outperforms both MLSL and DE: it has a larger success probability, is as fast as MLSL in the initial and middle phases, and in the final stage of the optimization is usually faster than the other two algorithms. Further improvement by using an adaptive DE remains to be investigated in future work.
7. REFERENCES

[1] C. G. E. Boender, A. H. G. Rinnooy Kan, G. T. Timmer, and L. Stougie. A stochastic method for global optimization. Mathematical Programming, 22:125-140, 1982.
[2] T. Csendes, L. Pál, J.-O. H. Sendín, and J. R. Banga. The GLOBAL optimization method revisited. Optimization Letters, 2(4):445-454, 2008.
[3] S. Finck, N. Hansen, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Presentation of the noiseless functions. Technical Report 2009/20, Research Center PPE, 2009. Updated February 2010.
[4] N. Hansen, A. Auger, S. Finck, and R. Ros. Real-parameter black-box optimization benchmarking 2012: Experimental setup. Technical report, INRIA, 2012.
[5] N. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In GECCO '10: Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pages 1689-1696, New York, NY, USA, 2010. ACM.
[6] N. Hansen, S. Finck, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. Technical Report RR-6829, INRIA, 2009. Updated February 2010.
[7] H. S. Hong and F. J. Hickernell. Algorithm 823: Implementing scrambled digital sequences. ACM Transactions on Mathematical Software, 29:95-109, 2003.
[8] S. Kucherenko and Y. Sytsko. Application of deterministic low-discrepancy sequences in global optimization. Computational Optimization and Applications, 30:297-318, 2005.
[9] L. Pál, T. Csendes, M. C. Markót, and A. Neumaier. Black-box optimization benchmarking of the GLOBAL method. Evolutionary Computation, 20:609-639, 2012.
[10] P. Pošík and V. Klemš. JADE, an adaptive differential evolution algorithm, benchmarked on the BBOB noiseless testbed. In GECCO '12: Genetic and Evolutionary Computation Conference Companion, pages 197-204, New York, NY, USA, 2012. ACM.
[11] K. Price. Differential evolution vs. the functions of the second ICEO. In Proceedings of the IEEE International Congress on Evolutionary Computation, pages 153-157, 1997.
[12] A. H. G. Rinnooy Kan and G. T. Timmer. Stochastic global optimization methods part II: Multi level methods. Mathematical Programming, 39:57-78, 1987.
[13] R. Storn and K. Price. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11:341-359, 1997.

Acknowledgements

This work was supported by the Sapientia Foundation - Institute for Scientific Research with the grant No. 0/9/0.

[Figure 1 shows 24 panels, one per noiseless function: 1 Sphere, 2 Ellipsoid separable, 3 Rastrigin separable, 4 Skew Rastrigin-Bueche separable, 5 Linear slope, 6 Attractive sector, 7 Step-ellipsoid, 8 Rosenbrock original, 9 Rosenbrock rotated, 10 Ellipsoid, 11 Discus, 12 Bent cigar, 13 Sharp ridge, 14 Sum of different powers, 15 Rastrigin, 16 Weierstrass, 17 Schaffer F7 condition 10, 18 Schaffer F7 condition 1000, 19 Griewank-Rosenbrock F8F2, 20 Schwefel x*sin(x), 21 Gallagher 101 peaks, 22 Gallagher 21 peaks, 23 Katsuuras, 24 Lunacek bi-Rastrigin.]

Figure 1: Expected running time (ERT in number of f-evaluations) divided by dimension for target function value 10^{-8}, as log10 values versus dimension. Different symbols correspond to the different algorithms given in the legend of f1 and f24. Light symbols give the maximum number of function evaluations from the longest trial divided by dimension. Horizontal lines give linear scaling, slanted dotted lines give quadratic scaling. Black stars indicate a statistically better result compared to all other algorithms, with p < 0.01 and Bonferroni correction by the number of dimensions (six). Legend: MLSL, DE, HMLSL.

[Figure 2 shows six panels of runtime distributions in 5-D: separable functions f1-f5, moderate functions f6-f9, ill-conditioned functions f10-f14, multi-modal functions f15-f19, weakly structured multi-modal functions f20-f24, and all functions f1-f24; each panel plots MLSL, DE, HMLSL and the best-2009 reference against log10 of #f-evals/dimension.]

Figure 2: Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension (FEvals/D) for 50 targets in 10^{[-8..2]} for all functions and subgroups in 5-D. The best-2009 line corresponds to the best ERT observed during BBOB-2009 for each single target.

[Figure 3 shows the same six panels of runtime distributions as Figure 2, in 20-D: separable f1-f5, moderate f6-f9, ill-conditioned f10-f14, multi-modal f15-f19, weakly structured multi-modal f20-f24, and all functions f1-f24; each panel plots MLSL, DE, HMLSL and the best-2009 reference.]

Figure 3: Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension (FEvals/D) for 50 targets in 10^{[-8..2]} for all functions and subgroups in 20-D. The best-2009 line corresponds to the best ERT observed during BBOB-2009 for each single target.

[Table 1 data: for each function f1-f24, ERT ratios of MLSL, DE and HMLSL at the targets Δf = 1e+1, 1e+0, 1e-1, 1e-3, 1e-5, 1e-7, together with #succ.]

Table 1: Expected running time (ERT in number of function evaluations) divided by the respective best ERT measured during BBOB-2009 (given in the respective first row) for different Δf values in dimension 5. The central 80% range divided by two is given in braces. The median number of conducted function evaluations is additionally given in italics if ERT(10^{-7}) = ∞. #succ is the number of trials that reached the final target f_opt + 10^{-8}. Best results are printed in bold.

[Table 2 data: for each function f1-f24, ERT ratios of MLSL, DE and HMLSL at the targets Δf = 1e+1, 1e+0, 1e-1, 1e-3, 1e-5, 1e-7, together with #succ.]

Table 2: Expected running time (ERT in number of function evaluations) divided by the respective best ERT measured during BBOB-2009 (given in the respective first row) for different Δf values in dimension 20. The central 80% range divided by two is given in braces. The median number of conducted function evaluations is additionally given in italics if ERT(10^{-7}) = ∞. #succ is the number of trials that reached the final target f_opt + 10^{-8}. Best results are printed in bold.
