A Merit Function Approach for Evolution Strategies


Youssef Diouane
Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO), Université de Toulouse, Toulouse Cedex 4, France (youssef.diouane@isae.fr)

April 28, 2018

Abstract

In this paper, we extend a class of evolution strategies to handle general constrained optimization problems. The proposed algorithm is based on a merit function approach, which combines the objective function and the constraint violation function. While a feasible approach is used to handle unrelaxable constraints, for relaxable constraints our algorithm takes the violation of those constraints into consideration and tries to decrease it progressively as the minimization process advances. The introduced extension guarantees, for the regarded class of evolution strategies, global convergence properties to first-order stationary points. Preliminary numerical experiments are carried out on a set of known test problems and on a multidisciplinary design optimization problem.

Keywords: Nonlinear optimization, constrained optimization, evolution strategies, merit function, global convergence.

1 Introduction

In this paper, we are interested in the following constrained optimization problem:

    min f(x)   s.t.   x ∈ Ω = Ω_r ∩ Ω_nr.    (1)

The feasible region Ω ⊆ R^n of this problem is defined by relaxable and/or unrelaxable constraints. In our notation, Ω_r is the set defined by the relaxable constraints (also known as soft constraints), which is assumed to be of the form Ω_r = {x ∈ R^n : c_i(x) ≤ 0, i ∈ I}. Generally, relaxable constraints allow violation and need only be satisfied approximately or asymptotically. For the set Ω_nr ⊆ R^n of unrelaxable constraints (also known as hard constraints), no violation is allowed and they need to be satisfied at all the algorithm iterations. The constraints defining Ω_nr can typically be seen as bound or linear constraints. Many practical optimization problems contain both relaxable and unrelaxable constraints. For instance, in multidisciplinary design optimization problems [32], one may have different coupled disciplines (e.g., structure, aerodynamics, propulsion) that represent the aircraft model.

In this case, the appropriate design can be chosen by maximizing the aircraft range under bound constraints and subject to many relaxable constraints (i.e., those coupling the involved disciplines).

Evolution strategies (ESs) [33] are among the most successful stochastic optimization algorithms, seen as a class of evolutionary algorithms that are naturally parallelizable, appropriate for continuous optimization, and that lead to interesting results [6, 34, 7]. In [14, 15], the authors dealt with a large class of ESs where, at each iteration, a certain number λ of points (called offspring) are randomly generated, among which µ ≤ λ of them (called parents) are selected. ESs have been growing rapidly in popularity and are starting to be used for solving challenging optimization problems [20, 5]. In [15], the authors proposed a general globally convergent framework for unrelaxable constraints using two different approaches. The first one relied on techniques inspired from directional direct-search methods [12, 25], where one uses an extreme barrier function to prevent infeasible displacements, together with the possible use of directions that conform to the local geometry of the feasible region. The second approach was based on enforcing all the generated sample points to be feasible, by using a projection mapping. Both proposed strategies were compared to some of the best available solvers for minimizing a function without derivatives (including some designed for global optimization). The obtained numerical results confirmed the competitiveness of the two approaches in terms of efficiency as well as robustness. Motivated by the recent availability of massively parallel computing platforms, the authors in [13] proposed a highly parallel globally convergent ES (inspired by [15]) adapted to the full-waveform inversion setting. By combining model reduction and ESs in a parallel environment, the authors helped solve realistic instances of the full-waveform inversion problem.

For ESs, a large number of constraint-handling algorithms have been proposed [9]. Coello [10] and Kramer [27] give comprehensive surveys of the most popular constraint-handling methods currently used with ESs. In the framework of deterministic derivative-free optimization (DFO), only a few works have been interested in handling both kinds of constraints (relaxable and unrelaxable) separately. For instance, Audet and Dennis [4] outline a globally convergent general approach based on a progressive barrier; it combines an extreme barrier approach for unrelaxable constraints with non-dominance filters [16] to handle relaxable constraints. In [29], a derivative-free two-phase class of algorithms deals with cases where finding a feasible point is easier than minimizing the objective function. In the framework of directional direct-search methods, Vicente and Gratton [18] proposed an alternative where one handles relaxable constraints by means of a merit function. The latter approach ensures global convergence by imposing a sufficient decrease condition on a merit function combining information from both the objective function and the constraint violation. Inspired by the merit function approach for direct-search methods [18], we propose in this paper to adapt the class of ES algorithms proposed in [15] to handle relaxable constraints as well.
In our proposed analysis, the objective function and the relaxable constraint functions c_i will be assumed to be locally Lipschitz continuous near an accumulation point produced by the algorithm. The obtained ES algorithm will rely on a merit function to decide and control the distribution of the points. The selection will also be based on the value of such a function at given points. The proposed convergence theory generalizes the work in [15] by including relaxable constraints, all in the spirit of what was introduced for direct-search methods [18]. The contributions of this paper are the following. We propose an adaptation of the merit function approach to the ES setting so as to handle relaxable constraints, and a convergence theory of the proposed approach is given.

We also provide a detailed practical implementation on some known global optimization problems, as well as tests on a multidisciplinary design optimization (MDO) problem. The performance of our proposed solver is compared with the progressive barrier approach implemented in the mesh adaptive direct search (MADS) solver [4].

The paper is organized as follows. Section 2 recalls the approach in which a class of ES algorithms is adapted to handle unrelaxable constraints. The proposed merit function approach is then given in Section 3, with a full description of the changes introduced to our ES to handle general constraints. The convergence results of the adapted approach are detailed in Section 4. In Section 5, we test the proposed algorithm on well-known constrained optimization test problems and on an MDO problem. Finally, Section 6 closes the paper with some conclusions and prospects of future work.

2 A globally convergent ES for unrelaxable constraints

This paper focuses on a class of ESs, denoted by (µ/µ_W, λ)-ES, which evolves a single candidate solution. In fact, at the k-th iteration, a new population y_{k+1}^1, ..., y_{k+1}^λ (called offspring) is generated around a weighted mean x_k of the previous parents (the candidate solution). The symbol /µ_W in (µ/µ_W, λ)-ES specifies that µ parents are recombined into a weighted mean. The parents are selected as the µ best offspring of the previous iteration in terms of the objective function value. The mutation of the new offspring points is done by y_{k+1}^i = x_k + σ_k^ES d_k^i, i = 1, ..., λ, where d_k^i is drawn from a certain distribution C_k and σ_k^ES is a chosen step size. The weights used to compute the means belong to the simplex set S = {(ω^1, ..., ω^µ) ∈ R^µ : Σ_{i=1}^{µ} ω^i = 1, ω^i ≥ 0, i = 1, ..., µ}. The (µ/µ_W, λ)-ES adapts the sampling distribution to the landscape of the objective function. An adaptation mechanism for the step size parameter is also possible; the step size then increases or decreases depending on the landscape of the objective function. One relevant instance of such an ES is the covariance matrix adaptation ES (CMA-ES) [21].

In [14, 15], the authors proposed a framework for making a class of ESs enjoy some global convergence properties while solving unconstrained and unrelaxably constrained optimization problems. In fact, in [14], by imposing some form of sufficient decrease on the objective function value, the proposed algorithm monitors the step size σ_k to ensure its convergence to zero (leading to the existence of a stationary point). The imposed sufficient decrease condition applies directly to the weighted mean x_{k+1}^trial of the new parents. By sufficient decrease condition we mean f(x_{k+1}^trial) ≤ f(x_k) − ρ(σ_k), where ρ(·) is a forcing function [25], i.e., a positive, nondecreasing function satisfying ρ(σ)/σ → 0 when σ → 0. To handle unrelaxable constraints [15], one starts with a feasible iterate (known as the parent) and then prevents stepping outside the feasible region by means of a barrier approach. In this case, the sufficient decrease condition is applied not to f but to the extreme barrier function [3] (also known as the death penalty function in the terminology of evolutionary algorithms) f_Ωnr defined by

    f_Ωnr(x) = f(x) if x ∈ Ω_nr, and +∞ otherwise.    (2)

We consider that ties of +∞ are broken arbitrarily in the ordering of the offspring samples. The obtained globally convergent ES is described in detail below, in Algorithm 1.
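Before stating Algorithm 1, the following minimal Python sketch illustrates the extreme barrier function (2) and the sufficient decrease test of its Step 3, for the case where Ω_nr is a simple box; all names are illustrative, and the quadratic forcing function is the choice used later in the numerical experiments, not a prescription.

    import numpy as np

    def f_barrier(f, x, lb, ub):
        """Extreme barrier (2) for box-type unrelaxable constraints:
        returns f(x) if lb <= x <= ub componentwise, and +inf otherwise."""
        x = np.asarray(x, dtype=float)
        if np.all(x >= lb) and np.all(x <= ub):
            return f(x)
        return np.inf

    def rho(sigma, c=1e-4):
        """Forcing function rho(sigma) = c*sigma^2: positive, nondecreasing,
        with rho(sigma)/sigma -> 0 as sigma -> 0."""
        return c * sigma ** 2

    def sufficient_decrease(f_trial, f_parent, sigma):
        """Acceptance test of Step 3: f_barrier(x_trial) <= f(x_k) - rho(sigma_k)."""
        return f_trial <= f_parent - rho(sigma)

For example, with f(x) = ||x||^2 and the box [-1, 1]^n, any trial mean falling outside the box receives the value +∞ and can never satisfy the sufficient decrease test.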
We note that the directions d̃_k^i used to compute the offspring are not necessarily the randomly generated ES directions d_k^i; they may be modified so as to adapt to the local geometry of the unrelaxable constraints.

Algorithm 1: A globally convergent ES for unrelaxable constraints (Ω = Ω_nr)

Data: choose positive integers λ and µ such that λ ≥ µ. Select an initial x_0 ∈ Ω_nr and evaluate f(x_0). Choose initial step lengths σ_0, σ_0^ES > 0 and initial weights (ω_0^1, ..., ω_0^µ) ∈ S. Choose constants β_1, β_2, d_min, d_max such that 0 < β_1 ≤ β_2 < 1 and 0 < d_min < d_max. Select a forcing function ρ(·).

for k = 0, 1, ... do
  Step 1: compute new sample points Y_{k+1} = {y_{k+1}^1, ..., y_{k+1}^λ} such that y_{k+1}^i = x_k + σ_k d̃_k^i, i = 1, ..., λ, where the directions d̃_k^i are computed from the original ES directions d_k^i (which in turn are drawn from a chosen ES distribution C_k and scaled if necessary to satisfy d_min ≤ ||d̃_k^i|| ≤ d_max).
  Step 2: evaluate f_Ωnr(y_{k+1}^i), i = 1, ..., λ, and reorder the offspring points in Y_{k+1} = {ỹ_{k+1}^1, ..., ỹ_{k+1}^λ} by increasing order: f_Ωnr(ỹ_{k+1}^1) ≤ ... ≤ f_Ωnr(ỹ_{k+1}^λ). Select the new parents as the µ best offspring sample points {ỹ_{k+1}^1, ..., ỹ_{k+1}^µ}, and compute their weighted mean x_{k+1}^trial = Σ_{i=1}^{µ} ω_k^i ỹ_{k+1}^i. Evaluate f_Ωnr(x_{k+1}^trial).
  Step 3: if f_Ωnr(x_{k+1}^trial) ≤ f(x_k) − ρ(σ_k), then consider the iteration successful, set x_{k+1} = x_{k+1}^trial, and σ_{k+1} ≥ σ_k (for example σ_{k+1} = max{σ_k, σ_k^ES}). Otherwise, consider the iteration unsuccessful, set x_{k+1} = x_k and σ_{k+1} = β_k σ_k, with β_k ∈ (β_1, β_2).
  Step 4: update the ES step length σ_{k+1}^ES, the distribution C_{k+1}, and the weights (ω_{k+1}^1, ..., ω_{k+1}^µ) ∈ S.

For this modification of the directions, two approaches were proposed in [15]. The first one is based on the extreme barrier and the inclusion of positive generators. In this case, we form the set of directions {d̃_k^i} by adding positive generators of an appropriate tangent cone to the randomly generated ES directions d_k^i whenever the current iterate is close to the boundary of the feasible region. The second approach generates the directions {d̃_k^i} by projecting onto the feasible domain all the generated sample points x_k + σ_k d_k^i, and then taking instead Φ_Ωnr(x_k + σ_k d_k^i), where Φ_Ωnr is a given projection mapping. This procedure is, however, equivalent to considering d̃_k^i = (Φ_Ωnr(x_k + σ_k d_k^i) − x_k)/σ_k in the framework of Algorithm 1. For typical choices of the projection Φ_Ωnr: in the case of bound constraints, one can use the ℓ2-projection (as it is trivial to evaluate), and in the case of linear constraints, one may use the ℓ1-projection (as it reduces to the solution of an LP problem).
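For instance, with bound constraints and the ℓ2-projection (componentwise clipping onto [lb, ub]), the directions of the projection-based approach can be formed as in the following sketch; the helper names and the safeguard thresholds are placeholders (the concrete values of d_min and d_max are not fixed here).

    import numpy as np

    def project_l2_box(y, lb, ub):
        """l2-projection onto the box {x : lb <= x <= ub} (componentwise clipping)."""
        return np.minimum(np.maximum(y, lb), ub)

    def feasible_directions(x, sigma, D, lb, ub, d_min=1e-10, d_max=1e10):
        """Build d_tilde^i = (Phi(x + sigma*d^i) - x)/sigma from raw ES directions
        D = [d^1, ..., d^lambda], then rescale nonzero directions so that their
        norms lie in [d_min, d_max] (the safeguard of Step 1)."""
        x = np.asarray(x, dtype=float)
        D_tilde = []
        for d in D:
            d_tilde = (project_l2_box(x + sigma * np.asarray(d), lb, ub) - x) / sigma
            norm = np.linalg.norm(d_tilde)
            if norm > 0.0:
                d_tilde = d_tilde * (np.clip(norm, d_min, d_max) / norm)
            D_tilde.append(d_tilde)
        return D_tilde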

For both approaches, we note that Steps 2 and 3 of Algorithm 1 make use of the extreme barrier function (2). Due to the sufficient decrease condition, one can guarantee that a subsequence of step sizes converges to zero. From this property and the fact that the step size is significantly reduced (at least by β_2) in unsuccessful iterations, one proves that there exists a subsequence K of unsuccessful iterates driving the step size to zero [15, Lemma 2.1]. The global convergence is then achieved by establishing that some type of directional derivatives are nonnegative at limit points of refining subsequences along certain limit directions (see [15, Theorem 2.1]).

3 A globally convergent ES for general constraints

The main issue addressed in this paper is how to change Algorithm 1, in a minimal way, so that it takes into account relaxable constraints while still enjoying some form of convergence properties. To handle a mixture of relaxable and unrelaxable constraints, we define the merit function as

    M(x) = f(x) + δ g(x) if x ∈ Ω_nr, and +∞ otherwise,    (3)

where δ > 0 is a positive constant and g is the constraint violation function (over the relaxable constraints). The ℓ1-norm is commonly used to define the constraint violation function, i.e., g(x) = Σ_{i∈I} max(c_i(x), 0). Other choices for g exist, for instance using the ℓ2-norm, i.e., g(x) = (Σ_{i∈I} max(c_i(x), 0)^2)^{1/2}. The merit function and the corresponding penalty parameter are only used in the evaluation of an already computed trial step, to decide whether it will be accepted or not.

The extension of the globally convergent ES to a general constrained setting follows a hybridization of a feasible approach, where one uses the extreme barrier or the projection approach for unrelaxable constraints, and of a merit function approach to handle relaxable constraints. For a given iteration k, the trial mean parent x_{k+1}^trial is computed as the weighted mean of the µ best points, not in terms of the objective function but with regard to the merit function values of the offspring population. The current iteration is considered successful if one of the two following situations occurs. The first one is when a sufficient decrease in the constraint violation function g is observed, i.e., g_Ωnr(x_{k+1}^trial) < g(x_k) − ρ(σ_k), while one is sufficiently away from the feasible region, i.e., g(x_k) > Cρ(σ_k) for some constant C > 1, where, similarly to f_Ωnr, the function g_Ωnr denotes g if x_{k+1}^trial ∈ Ω_nr and +∞ otherwise. The second successful situation is when the merit function is sufficiently decreased, i.e., M(x_{k+1}^trial) < M(x_k) − ρ(σ_k). To sum up, the trial mean point x_{k+1}^trial is declared a successful point if the following condition is fulfilled:

Begin (Successful point). Given a parent x_k and a step size σ_k, the trial parent x_{k+1}^trial is successful if

    g_Ωnr(x_{k+1}^trial) < g(x_k) − ρ(σ_k)   and   g(x_k) > Cρ(σ_k),

or

    M(x_{k+1}^trial) < M(x_k) − ρ(σ_k).

End (Successful point).
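As an illustration, the violation function g, the merit function (3), and the successful-point test can be sketched as follows, assuming Ω_nr is a box [lb, ub] and c(x) returns the vector of relaxable constraint values c_i(x); the names and default values are illustrative only, and the forcing function rho is the one from the earlier sketch.

    import numpy as np

    def violation(c, x):
        """l1 constraint violation: g(x) = sum_i max(c_i(x), 0)."""
        return float(np.sum(np.maximum(c(np.asarray(x, dtype=float)), 0.0)))

    def merit(f, c, x, lb, ub, delta):
        """Merit function (3): f(x) + delta*g(x) on Omega_nr, +inf outside."""
        x = np.asarray(x, dtype=float)
        if np.any(x < lb) or np.any(x > ub):
            return np.inf
        return f(x) + delta * violation(c, x)

    def is_successful(x_trial, x_k, sigma, f, c, lb, ub, delta, C=100.0):
        """Successful-point test: sufficient decrease either in g (while the
        parent is sufficiently infeasible) or in the merit function M."""
        x_trial = np.asarray(x_trial, dtype=float)
        inside = np.all(x_trial >= lb) and np.all(x_trial <= ub)
        g_trial = violation(c, x_trial) if inside else np.inf
        g_k = violation(c, x_k)
        decrease_in_g = g_trial < g_k - rho(sigma) and g_k > C * rho(sigma)
        decrease_in_M = merit(f, c, x_trial, lb, ub, delta) \
                        < merit(f, c, x_k, lb, ub, delta) - rho(sigma)
        return decrease_in_g or decrease_in_M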

Before checking whether the trial point is successful or not, the algorithm first tries to restore feasibility, or at least to decrease the constraint violation, using an ES algorithm. The restoration process is entered when the constraint violation g is sufficiently decreasing at a current point x_k which is sufficiently away from the feasible region and for which the objective function has increased. More specifically, the restoration is entered if one has g_Ωnr(x_{k+1}^trial) < g(x_k) − ρ(σ_k), g(x_k) > Cρ(σ_k), and M(x_{k+1}^trial) ≥ M(x_k).

Begin (Restoration identifier). Given a parent x_k and a step size σ_k, the trial parent x_{k+1}^trial is a Restoration identifier if

    g_Ωnr(x_{k+1}^trial) < g(x_k) − ρ(σ_k)   and   g(x_k) > Cρ(σ_k)

and

    M(x_{k+1}^trial) ≥ M(x_k).

End (Restoration identifier).

The restoration algorithm is left as soon as progress in the reduction of the constraint violation can no longer be achieved without any considerable increase in f. The complete description of the restoration procedure is given in Algorithm 3. As a result, a main iteration of the proposed merit function approach is split into two steps: restoration and minimization. In the restoration step, the aim is to decrease infeasibility without evaluating the objective function (by minimizing the function g_Ωnr). In the minimization step, the objective function f is minimized over a relaxed feasible set by means of the merit function M. The final obtained approach is described in Algorithm 2.

For both algorithms (main and restoration), our global convergence analysis is carried out independently of the choice of the distribution C_k, the weights (ω_k^1, ..., ω_k^µ) ∈ S, and the step size σ_k^ES. Therefore, and similarly to Algorithm 1, the update of the ES parameters is left unspecified. Note that one also imposes bounds on all directions d̃_k^i used by the algorithm. This modification is, however, very mild since the lower bound d_min can be chosen very close to zero and the upper bound d_max can be set to a very large number. The construction of the set of directions {d̃_k^i} is done with respect to the local geometry of the unrelaxable constraints, as in [15].
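Putting the two definitions together, a trial parent can be classified as in the following sketch, which mirrors Step 3 of Algorithms 2 and 3 below; it reuses the hypothetical helpers violation, merit, and rho from the earlier sketches and is not the authors' implementation.

    import numpy as np

    def classify_trial(x_trial, x_k, sigma, f, c, lb, ub, delta, C=100.0):
        """Return 'unsuccessful', 'restoration', or 'successful' for a trial
        parent, following the Successful point and Restoration identifier
        definitions (checked in the order used by Algorithm 2, Step 3)."""
        x_trial = np.asarray(x_trial, dtype=float)
        if np.any(x_trial < lb) or np.any(x_trial > ub):   # x_trial not in Omega_nr
            return "unsuccessful"
        g_trial, g_k = violation(c, x_trial), violation(c, x_k)
        M_trial = merit(f, c, x_trial, lb, ub, delta)
        M_k = merit(f, c, x_k, lb, ub, delta)
        g_progress = g_trial < g_k - rho(sigma) and g_k > C * rho(sigma)
        if g_progress and M_trial >= M_k:                  # Restoration identifier
            return "restoration"
        if g_progress or M_trial < M_k - rho(sigma):       # Successful point
            return "successful"
        return "unsuccessful"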

Algorithm 2: A globally convergent ES for general constraints (Main)

Data: choose positive integers λ and µ such that λ ≥ µ. Select an initial x_0 ∈ Ω_nr and evaluate f(x_0). Choose initial step lengths σ_0, σ_0^ES > 0 and initial weights (ω_0^1, ..., ω_0^µ) ∈ S. Choose constants β_1, β_2, d_min, d_max such that 0 < β_1 ≤ β_2 < 1 and 0 < d_min < d_max. Select a forcing function ρ(·).

for k = 0, 1, ... do
  Step 1: compute new sample points Y_{k+1} = {y_{k+1}^1, ..., y_{k+1}^λ} such that y_{k+1}^i = x_k + σ_k d̃_k^i, i = 1, ..., λ, where the directions d̃_k^i are computed from the original ES directions d_k^i (which in turn are drawn from a chosen ES distribution C_k and scaled if necessary to satisfy d_min ≤ ||d̃_k^i|| ≤ d_max).
  Step 2: evaluate M(y_{k+1}^i), i = 1, ..., λ, and reorder the offspring points in Y_{k+1} = {ỹ_{k+1}^1, ..., ỹ_{k+1}^λ} by increasing order: M(ỹ_{k+1}^1) ≤ ... ≤ M(ỹ_{k+1}^λ). Select the new parents as the µ best offspring sample points {ỹ_{k+1}^1, ..., ỹ_{k+1}^µ}, and compute their weighted mean x_{k+1}^trial = Σ_{i=1}^{µ} ω_k^i ỹ_{k+1}^i.
  Step 3: if x_{k+1}^trial ∉ Ω_nr, then the iteration is declared unsuccessful;
    else if x_{k+1}^trial is a Restoration identifier, then enter Restoration (with k_r = k);
    else if x_{k+1}^trial is a Successful point, then declare the iteration successful, set x_{k+1} = x_{k+1}^trial, and σ_{k+1} ≥ σ_k (for example σ_{k+1} = max{σ_k, σ_k^ES});
    else the iteration is declared unsuccessful.
    If the iteration is declared unsuccessful, then set x_{k+1} = x_k and σ_{k+1} = β_k σ_k, with β_k ∈ (β_1, β_2).
  Step 4: update the ES step length σ_{k+1}^ES, the distribution C_{k+1}, and the weights (ω_{k+1}^1, ..., ω_{k+1}^µ) ∈ S.
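A compact sketch of one main iteration (Steps 1-3; Step 4 is left to the underlying ES, e.g. CMA-ES) is given below; sample_es_directions is a hypothetical callable returning λ raw ES directions, and feasible_directions, merit, and classify_trial are the hypothetical helpers from the earlier sketches.

    import numpy as np

    def main_iteration(x_k, sigma_k, sigma_es, weights, lam, f, c, lb, ub, delta,
                       sample_es_directions, beta=0.9, C=100.0):
        """One iteration of the Main algorithm. Returns the new parent mean, the
        new step size, and the iteration status ('successful', 'restoration',
        or 'unsuccessful')."""
        x_k = np.asarray(x_k, dtype=float)
        mu = len(weights)
        # Step 1: offspring around the current parent mean.
        D = sample_es_directions(lam)                      # raw ES directions d_k^i
        D_tilde = feasible_directions(x_k, sigma_k, D, lb, ub)
        Y = [x_k + sigma_k * d for d in D_tilde]
        # Step 2: rank the offspring by the merit function and recombine the mu best.
        Y.sort(key=lambda y: merit(f, c, y, lb, ub, delta))
        x_trial = sum(w * y for w, y in zip(weights, Y[:mu]))
        # Step 3: accept, enter restoration, or reject (and shrink the step size).
        status = classify_trial(x_trial, x_k, sigma_k, f, c, lb, ub, delta, C)
        if status == "successful":
            return x_trial, max(sigma_k, sigma_es), status
        if status == "restoration":
            return x_k, sigma_k, status                    # caller switches to Algorithm 3
        return x_k, beta * sigma_k, status

The restoration loop of Algorithm 3 below has the same structure, with g_Ωnr replacing M in Step 2 and the g-based acceptance test in Step 3.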

Algorithm 3: A globally convergent ES for general constraints (Restoration)

Data: start from x_{k_r} ∈ Ω_nr given by the Main algorithm and consider the same parameters as there.

for k = k_r, k_r + 1, k_r + 2, ... do
  Step 1: compute new sample points Y_{k+1} = {y_{k+1}^1, ..., y_{k+1}^λ} such that y_{k+1}^i = x_k + σ_k d̃_k^i, i = 1, ..., λ, where the directions d̃_k^i are computed from the original ES directions d_k^i (which in turn are drawn from a chosen ES distribution C_k and scaled if necessary to satisfy d_min ≤ ||d̃_k^i|| ≤ d_max).
  Step 2: evaluate g_Ωnr(y_{k+1}^i), i = 1, ..., λ, and reorder the offspring points in Y_{k+1} = {ỹ_{k+1}^1, ..., ỹ_{k+1}^λ} by increasing order: g_Ωnr(ỹ_{k+1}^1) ≤ ... ≤ g_Ωnr(ỹ_{k+1}^λ). Select the new parents as the µ best offspring sample points {ỹ_{k+1}^1, ..., ỹ_{k+1}^µ}, and compute their weighted mean x_{k+1}^trial = Σ_{i=1}^{µ} ω_k^i ỹ_{k+1}^i.
  Step 3: if x_{k+1}^trial ∉ Ω_nr, then the iteration is declared unsuccessful;
    else if g(x_{k+1}^trial) < g(x_k) − ρ(σ_k) and g(x_k) > Cρ(σ_k), then the iteration is declared successful, set x_{k+1} = x_{k+1}^trial, and σ_{k+1} ≥ σ_k (for example σ_{k+1} = max{σ_k, σ_k^ES});
    else the iteration is declared unsuccessful.
    If the iteration is declared unsuccessful, then: if M(x_{k+1}^trial) < M(x_k), leave Restoration and return to the Main algorithm (starting at a new (k+1)-th iteration using x_{k+1} and σ_{k+1}); else set x_{k+1} = x_k and σ_{k+1} = β_k σ_k, with β_k ∈ (β_1, β_2).
  Step 4: update the ES step length σ_{k+1}^ES, the distribution C_{k+1}, and the weights (ω_{k+1}^1, ..., ω_{k+1}^µ) ∈ S.
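The acceptance and exit tests of the Restoration loop (Step 3 of Algorithm 3) can be sketched as follows, again with the hypothetical helpers violation, merit, and rho from the earlier sketches.

    import numpy as np

    def restoration_step3(x_trial, x_k, sigma, f, c, lb, ub, delta, C=100.0):
        """Return 'successful' on sufficient decrease of g while far from
        feasibility, 'leave_restoration' when the iteration fails but the merit
        function has decreased, and 'unsuccessful' otherwise."""
        x_trial = np.asarray(x_trial, dtype=float)
        inside = np.all(x_trial >= lb) and np.all(x_trial <= ub)
        g_trial = violation(c, x_trial) if inside else np.inf
        g_k = violation(c, x_k)
        if inside and g_trial < g_k - rho(sigma) and g_k > C * rho(sigma):
            return "successful"
        if merit(f, c, x_trial, lb, ub, delta) < merit(f, c, x_k, lb, ub, delta):
            return "leave_restoration"
        return "unsuccessful"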

4 Global convergence

The following convergence results are in the vein of those first established for the merit function approach for direct-search methods [18]. For the convergence analysis, we consider a sequence of iterations generated by Algorithm 2 without any stopping criterion. The analysis is organized depending on the number of times Restoration is entered. To keep the presentation of the paper simpler, only the case where Restoration is entered finitely many times is treated here. The analysis of the two other cases (namely, (a) an infinite run of consecutive steps inside Restoration, or (b) entering the restoration an infinite number of times) is given in Appendix A. The analysis of both cases shows that such behaviors would lead to feasibility and optimality results similar to the case where Restoration is entered finitely many times.

Due to the sufficient decrease in the merit function or in the constraint violation function (when the iterates are sufficiently away from the feasible region), one can guarantee that a subsequence of step sizes converges to zero. From this property and the fact that the step size is significantly reduced (at least by β_2) in unsuccessful iterations, one proves that there exists a subsequence K of unsuccessful iterates driving the step size to zero (what is referred to as a refining subsequence in [3]). Consequently, assuming boundedness of the sequence of iterates, it is possible to assure the existence of a convergent refining subsequence.

Lemma 4.1 Let f be bounded below and assume that the restoration is not entered after a certain order. Then

    lim inf_{k → +∞} σ_k = 0.

Proof. Suppose that there exist k̄ > 0 and σ̄ > 0 such that σ_k > σ̄ for every iteration k ≥ k̄ of Algorithm 2. If there is an infinite sequence J_1 of successful iterations after k̄, this leads to a contradiction with the fact that g and f are bounded below. In fact, since ρ is a nondecreasing positive function, one has ρ(σ_k) ≥ ρ(σ̄) > 0. Hence, if g(x_{k+1}) < g(x_k) − ρ(σ_k) and g(x_k) > Cρ(σ_k) held for an infinite subsequence of J_1, one would have g(x_{k+1}) < g(x_k) − ρ(σ̄) along it, which contradicts the fact that g is bounded below by 0. Thus there must exist an infinite subsequence J_2 ⊆ J_1 of iterates for which M(x_{k+1}) < M(x_k) − ρ(σ_k), and hence M(x_{k+1}) < M(x_k) − ρ(σ̄) for all k ∈ J_2. Thus M(x_k) tends to −∞, which is a contradiction, since both f and g are bounded below. The proof is thus completed if there is an infinite number of successful iterations. However, if no more successful iterations occur after a certain order, then the step size is reduced by a factor of at most β_2 < 1 at every subsequent iteration, so that σ_k tends to zero, which again contradicts σ_k > σ̄. The conclusion is that one must have a subsequence of iterations driving σ_k to zero.

Theorem 4.1 Let f be bounded below and assume that the restoration is not entered after a certain order. There exists a subsequence K of unsuccessful iterates for which lim_{k∈K} σ_k = 0. Moreover, if the sequence {x_k} is bounded, there exist an x* and a refining subsequence K such that lim_{k∈K} x_k = x*.

Proof. From Lemma 4.1, there must exist an infinite subsequence K of unsuccessful iterates for which σ_{k+1} goes to zero. In such a case we have σ_k = (1/β_k) σ_{k+1}, with β_k ∈ (β_1, β_2) and β_1 > 0, and thus σ_k → 0 for k ∈ K too. The second part of the theorem is proved by extracting a convergent subsequence K′ ⊆ K for which x_k converges to x*.

The global convergence will be achieved by establishing that some type of directional derivatives are nonnegative at limit points of refining subsequences along certain limit directions (known as refining directions, which will be defined later). By refining subsequence [3], we mean a subsequence of unsuccessful iterates in the Main algorithm (see Algorithm 2) for which the step-size parameter converges to zero. When h is Lipschitz continuous near x* ∈ Ω_nr, one can make use of the Clarke-Jahn generalized derivative along a direction d,

    h°(x*; d) = lim sup_{x → x*, x ∈ Ω_nr, t ↓ 0, x + td ∈ Ω_nr} [h(x + td) − h(x)] / t.

(Such a derivative is essentially the Clarke generalized directional derivative [8], adapted by Jahn [24] to the presence of constraints.) However, for the proper definition of h°(x*; d), one needs to guarantee that x + td ∈ Ω_nr for x ∈ Ω_nr arbitrarily close to x*, which is assured if d is hypertangent to Ω_nr at x*. In the following, B(x; ε) is the closed ball formed by all points whose distance to x is no more than ε.

Definition 4.1 A vector d ∈ R^n is said to be a hypertangent vector to the set Ω_nr ⊆ R^n at the point x in Ω_nr if there exists a scalar ε > 0 such that y + tw ∈ Ω_nr for all y ∈ Ω_nr ∩ B(x; ε), w ∈ B(d; ε), and 0 < t < ε.

The hypertangent cone to Ω_nr at x, denoted by T^H_Ωnr(x), is the set of all hypertangent vectors to Ω_nr at x. Then, the Clarke tangent cone to Ω_nr at x (denoted by T^Cl_Ωnr(x)) can be defined as the closure of the hypertangent cone T^H_Ωnr(x). The Clarke tangent cone generalizes the notion of tangent cone in nonlinear programming [31], and the original definition of d ∈ T^Cl_Ωnr(x) is given below.

Definition 4.2 A vector d ∈ R^n is said to be a Clarke tangent vector to the set Ω_nr ⊆ R^n at the point x in the closure of Ω_nr if for every sequence {y_k} of elements of Ω_nr that converges to x and for every sequence of positive real numbers {t_k} converging to zero, there exists a sequence of vectors {w_k} converging to d such that y_k + t_k w_k ∈ Ω_nr.

Given a direction v in the tangent cone, possibly not in the hypertangent one, one can consider the Clarke-Jahn generalized derivative with respect to Ω_nr at x* as the limit

    h°(x*; v) = lim_{d ∈ T^H_Ωnr(x*), d → v} h°(x*; d)

(see [3]). A point x* ∈ Ω_nr is considered Clarke stationary if h°(x*; d) ≥ 0 for all d ∈ T^Cl_Ωnr(x*).

It remains now to define the notion of refining direction [3], associated with a convergent refining subsequence K, as a limit point of {a_k/||a_k||} for all k ∈ K sufficiently large such that x_k + σ_k a_k ∈ Ω_nr, where, in the particular case of taking the weighted mean as the object of evaluation, one has a_k = Σ_{i=1}^{µ} ω_k^i d̃_k^i.

The following convergence result concerns the determination of feasibility.

Theorem 4.2 Let a_k = Σ_{i=1}^{µ} ω_k^i d̃_k^i and assume that f is bounded below. Suppose that the restoration is not entered after a certain order. Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g is Lipschitz continuous near x* with constant ν_g > 0. If d ∈ T^H_Ωnr(x*) is a refining direction associated with {a_k/||a_k||}_K, then either g(x*) = 0 or g°(x*; d) ≥ 0.

Proof. Let d be a limit point of {a_k/||a_k||}_K. Then there must exist a subsequence K′ of K such that a_k/||a_k|| → d on K′. On the other hand, we have for all k that

    x_{k+1}^trial = Σ_{i=1}^{µ} ω_k^i ỹ_{k+1}^i = x_k + σ_k Σ_{i=1}^{µ} ω_k^i d̃_k^i = x_k + σ_k a_k.

Since the iteration k ∈ K′ is unsuccessful, g(x_{k+1}^trial) ≥ g(x_k) − ρ(σ_k) or g(x_k) ≤ Cρ(σ_k), and at least one of the two inequalities must hold for an infinite number of iterations, as follows:

1. If there exists a subsequence K_1 ⊆ K′ such that g(x_k) ≤ Cρ(σ_k), it is trivial to obtain g(x*) = 0 using both the continuity of g and the fact that σ_k tends to zero in K_1.

2. Consider now the case where there exists a subsequence K_2 ⊆ K′ such that g(x_k + σ_k a_k) ≥ g(x_k) − ρ(σ_k) for all k ∈ K_2. The sequence {a_k/||a_k||}_{K_2} converges to d ∈ T^H_Ωnr(x*) in K_2 and the sequence {σ_k ||a_k||}_{K_2} goes to zero in K_2 (||a_k|| is bounded above for all k, and so σ_k ||a_k|| tends to zero when σ_k does). Thus one must necessarily have, for k sufficiently large in K_2, x_k + σ_k a_k ∈ Ω_nr with g(x_k + σ_k a_k) ≥ g(x_k) − ρ(σ_k). From the definition of the Clarke-Jahn generalized derivative along directions d ∈ T^H_Ωnr(x*),

    g°(x*; d) = lim sup_{x → x*, t ↓ 0, x + td ∈ Ω_nr} [g(x + td) − g(x)] / t
              ≥ lim sup_{k ∈ K_2} [g(x_k + σ_k ||a_k|| d) − g(x_k)] / (σ_k ||a_k||)
              ≥ lim sup_{k ∈ K_2} { [g(x_k + σ_k ||a_k|| (a_k/||a_k||)) − g(x_k)] / (σ_k ||a_k||) − g_k },

where

    g_k = [g(x_k + σ_k a_k) − g(x_k + σ_k ||a_k|| d)] / (σ_k ||a_k||).

From the Lipschitz continuity of g near x*,

    |g_k| = |g(x_k + σ_k a_k) − g(x_k + σ_k ||a_k|| d)| / (σ_k ||a_k||) ≤ ν_g ||a_k/||a_k|| − d||,

which tends to zero on K_2. Finally,

    g°(x*; d) ≥ lim sup_{k ∈ K_2} { [g(x_k + σ_k a_k) − g(x_k) + ρ(σ_k)] / (σ_k ||a_k||) − ρ(σ_k)/(σ_k ||a_k||) − g_k }
              = lim sup_{k ∈ K_2} [g(x_k + σ_k a_k) − g(x_k) + ρ(σ_k)] / (σ_k ||a_k||).

One then obtains g°(x*; d) ≥ 0.

Moreover, assuming that the set of refining directions d ∈ T^H_Ωnr(x*) associated with {a_k/||a_k||}_K is dense in the unit sphere, one can show that the limit point x* is Clarke stationary for the following optimization problem, known as the constraint violation problem:

    min g(x)   s.t.   x ∈ Ω_nr.    (4)

Theorem 4.3 Let a_k = Σ_{i=1}^{µ} ω_k^i d̃_k^i and assume that f is bounded below. Suppose that the restoration is not entered after a certain order. Assume that the directions d̃_k^i and the weights ω_k^i are such that (i) σ_k ||a_k|| tends to zero when σ_k does, and (ii) ρ(σ_k)/(σ_k ||a_k||) also tends to zero. Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0 and for which T^Cl_Ωnr(x*) is nonempty. Assume that g is Lipschitz continuous near x* with constant ν > 0. Then either (a) g(x*) = 0 (implying x* ∈ Ω_r and thus x* ∈ Ω), or (b) if the set of refining directions associated with {a_k/||a_k||}_{K′} (where K′ is a subsequence of K for which g(x_k + σ_k a_k) ≥ g(x_k) − ρ(σ_k)) is dense in T^Cl_Ωnr(x*) ∩ {d ∈ R^n : ||d|| = 1}, then g°(x*; v) ≥ 0 for all v ∈ T^Cl_Ωnr(x*) and x* is a Clarke stationary point of the constraint violation problem (4).

Proof. See the proof of [18, Theorem 4.2].

We now move to an intermediate optimality result. As in [18], we will not use x* ∈ Ω_r explicitly in the proof but only g°(x*; d) ≤ 0. The latter inequality describes the cone of first-order linearized directions under the feasibility assumption x* ∈ Ω_r.

Theorem 4.4 Let a_k = Σ_{i=1}^{µ} ω_k^i d̃_k^i and assume that f is bounded below. Suppose that the restoration is not entered after a certain order. Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g and f are Lipschitz continuous near x*. If d ∈ T^H_Ωnr(x*) is a refining direction associated with {a_k/||a_k||}_K such that g°(x*; d) ≤ 0, then f°(x*; d) ≥ 0.

Proof. By assumption there exists a subsequence K′ ⊆ K such that the sequence {a_k/||a_k||}_{K′} converges to d ∈ T^H_Ωnr(x*) in K′ and the sequence {σ_k ||a_k||}_{K′} goes to zero in K′. Thus one must necessarily have, for k sufficiently large in K′, x_{k+1}^trial = x_k + σ_k a_k ∈ Ω_nr. Since the iteration k ∈ K′ is unsuccessful, one has M(x_{k+1}^trial) ≥ M(x_k) − ρ(σ_k), and thus

    [f(x_k + σ_k a_k) − f(x_k)] / (σ_k ||a_k||) ≥ −δ [g(x_k + σ_k a_k) − g(x_k)] / (σ_k ||a_k||) − ρ(σ_k)/(σ_k ||a_k||).    (5)

On the other hand,

    f°(x*; d) = lim sup_{x → x*, t ↓ 0, x + td ∈ Ω_nr} [f(x + td) − f(x)] / t
              ≥ lim sup_{k ∈ K′} [f(x_k + σ_k ||a_k|| d) − f(x_k)] / (σ_k ||a_k||)
              ≥ lim sup_{k ∈ K′} { [f(x_k + σ_k ||a_k|| (a_k/||a_k||)) − f(x_k)] / (σ_k ||a_k||) − f_k },

where f_k = [f(x_k + σ_k a_k) − f(x_k + σ_k ||a_k|| d)] / (σ_k ||a_k||).

From (5), this implies

    f°(x*; d) ≥ lim sup_{k ∈ K′} { [f(x_k + σ_k a_k) − f(x_k)] / (σ_k ||a_k||) − f_k }
              ≥ lim sup_{k ∈ K′} { −δ [g(x_k + σ_k a_k) − g(x_k)] / (σ_k ||a_k||) − ρ(σ_k)/(σ_k ||a_k||) − f_k }
              ≥ lim sup_{k ∈ K′} { −δ [g(x_k + σ_k ||a_k|| d) − g(x_k)] / (σ_k ||a_k||) − δ g_k − ρ(σ_k)/(σ_k ||a_k||) − f_k },

where g_k = [g(x_k + σ_k a_k) − g(x_k + σ_k ||a_k|| d)] / (σ_k ||a_k||). From the assumption g°(x*; d) ≤ 0, one has

    lim sup_{k ∈ K′} [g(x_k + σ_k ||a_k|| d) − g(x_k)] / (σ_k ||a_k||) ≤ lim sup_{x → x*, t ↓ 0, x + td ∈ Ω_nr} [g(x + td) − g(x)] / t ≤ 0,

and one then obtains

    f°(x*; d) ≥ lim sup_{k ∈ K′} { −δ g_k − ρ(σ_k)/(σ_k ||a_k||) − f_k }.    (6)

The Lipschitz continuity of both g and f near x* guarantees that the quantities f_k and g_k tend to zero in K′. Thus, the proof is completed since the right-hand side of (6) tends to zero in K′.

Theorem 4.5 Assume that f is bounded below and that Restoration is not entered after a certain order. Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_{k∈K} for which lim_{k∈K} σ_k = 0. Assume that g and f are Lipschitz continuous near x*. Assume that the set

    T(x*) = T^H_Ωnr(x*) ∩ {d ∈ R^n : ||d|| = 1, g°(x*; d) ≤ 0}    (7)

has a non-empty interior. Let the set of refining directions be dense in T(x*). Then f°(x*; v) ≥ 0 for all v ∈ T^Cl_Ωnr(x*) such that g°(x*; v) ≤ 0, and x* is a Clarke stationary point of problem (1).

Proof. See the proof of [18, Theorem 4.4].

5 Numerical experiments

To quantify the efficiency of the proposed merit approach, we compare our solver with the direct-search method MADS, in which the progressive barrier approach has been implemented [4] to handle both relaxable and unrelaxable constraints. We used the implementation given in the NOMAD package [1, 2, 28] (C++ version linked to Matlab via a mex interface), where we enabled the option DISABLE MODELS, meaning that no modeling is used in MADS. The models are disabled since our solver does not use any modeling to speed up the convergence.

The parameter choices of Algorithm 2 and Algorithm 3 followed those in [15]. The values of λ and µ and of the initial weights are those of CMA-ES for unconstrained optimization (see [19]): λ = 4 + floor(3 log(n)) and µ = floor(λ/2), where floor(·) rounds down to the nearest integer, and ω_0^i = a_i/(a_1 + ... + a_µ), with a_i = log(λ/2 + 1/2) − log(i), i = 1, ..., µ. The choices of the distribution C_k and of the update of σ_k^ES also followed CMA-ES for unconstrained optimization (see [19]). The forcing function selected was ρ(σ) = 10^{-4} σ^2. To reduce the step length in unsuccessful iterations we used σ_{k+1} = 0.9 σ_k, which corresponds to setting β_1 = β_2 = 0.9. In successful iterations we set σ_{k+1} = max{σ_k, σ_k^CMA-ES} (with σ_k^CMA-ES the CMA-ES step size). The directions d̃_k^i, i = 1, ..., λ, were scaled if necessary to obey the safeguards d_min ≤ ||d̃_k^i|| ≤ d_max. For the penalty parameter and the constant C we picked δ = max{10, g(x_0)} and C = 100. The constraint violation was measured using the ℓ1-norm. The initial step size is estimated using only the bound constraints: if there is a pair of finite lower and upper bounds for a variable, then σ_0 is set to half of the minimum of such distances; otherwise σ_0 = 1. The starting point is the same for all solvers and is set to (LB + UB)/2 when the bounds LB and UB are given; otherwise it is chosen randomly in the search space. In our tests, we consider that all the constraints are relaxable except the bounds. In this case, the merit function (MF) and the progressive barrier (PB) approaches are respectively enabled, and the related solvers are called ES-MF and MADS-PB, respectively.
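For concreteness, these default settings can be collected as in the following sketch, a minimal illustration of the choices just listed; the CMA-ES internals and the values of d_min and d_max are not reproduced, and the helper name is arbitrary.

    import numpy as np

    def default_parameters(n, lb, ub, g0):
        """Default settings of Section 5: CMA-ES population sizes and weights [19],
        forcing function, step-size contraction factor, penalty parameter delta,
        constant C, and the initial step size sigma_0 estimated from the bounds."""
        lam = 4 + int(np.floor(3 * np.log(n)))
        mu = int(np.floor(lam / 2))
        a = np.log(lam / 2 + 0.5) - np.log(np.arange(1, mu + 1))  # a_i = log(lam/2+1/2) - log(i)
        weights = a / np.sum(a)                                   # omega_0^i = a_i / (a_1+...+a_mu)
        rho = lambda sigma: 1e-4 * sigma ** 2                     # forcing function
        beta = 0.9                                                # sigma_{k+1} = 0.9*sigma_k on failure
        delta, C = max(10.0, g0), 100.0                           # penalty parameter and constant C
        ranges = np.asarray(ub, dtype=float) - np.asarray(lb, dtype=float)
        finite = np.isfinite(ranges)
        sigma0 = 0.5 * np.min(ranges[finite]) if np.any(finite) else 1.0
        return lam, mu, weights, rho, beta, delta, C, sigma0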

5.1 Results on known test problems

Our test set is the one used in [22, 23, 26, 30] and comprises 13 well-known test problems G1–G13 (see Table 1). These test problems, coded in Matlab, exhibit a diversity of features and the kind of difficulties that appear in constrained global optimization. In addition to such problems, we added three other engineering optimization problems [22, 11]: PVD, the pressure vessel design problem; TCS, the tension-compression string problem; and WBD, the welded beam design problem. Problems G2, G3, and G8 are maximization problems and were converted to minimization. Problems G3, G5, G11, and WBD contain equality constraints. When a constraint is of the form c_i^e(x) = 0, it is replaced by a relaxed inequality constraint c_i(x) ≤ 0 defined from c_i^e(x).

Table 1: Some of the features of the nonlinear constrained optimization problems G1–G13, PVD, TCS, and WBD: the dimension n and the number of constraints m (in addition to the bounds).

Tables 2 and 3 report results (averages over 10 runs) for both ES-MF and MADS-PB using a maximal budget of 1000 and 20000 function evaluations, respectively. For each problem, we display the optimal objective value found by the solver, f(x*), the associated constraint violation g(x*), and the number of objective function evaluations #f needed to reach x*, for both a feasible and an infeasible starting point. When a solver returns a flag error or encounters an internal problem, no values of f(x*) and g(x*) are reported. At the solution x*, a tolerance of at least 10^{-5} on the constraint violation is required (i.e., g(x*) < 10^{-5}).

Table 2: Comparison results for the merit approach using a maximal budget of 1000 function evaluations (average of 10 runs).

Table 2 gives the obtained results for a maximal budget of 1000 function evaluations. For both starting strategies (feasible or not), and except for a few problems, the tested solvers were not able to converge within the regarded budget.

ES-MF is shown to have comparable performance with MADS-PB on the tested problems. In fact, with a feasible starting point, ES-MF performs well on problems G1, G2, G3, G8, G11, G12, and G13, while MADS-PB is the best on problems G4, G5, G6, G7, G9, G10, PVD, TCS, and WBD. Using an infeasible starting point, ES-MF performs better on problems G1, G2, G3, G7, G8, G11, G12, TCS, and WBD.

For the larger maximal budget of 20000 function evaluations (Table 3), ES-MF and MADS-PB achieve convergence to a stationary point on more problems. We note that MADS-PB requires more function evaluations for four problems: G2, G3, G10, and G13. When the starting point is feasible, enlarging the budget allows reaching exact feasibility at the solution point. In this case, regarding the value of the objective function, we note that increasing the number of function evaluations does not affect the results compared to the ones obtained using 1000 function evaluations. When using an infeasible starting point, the advantage of ES-MF over MADS-PB is more evident compared to the small-budget case. In fact, one can observe that ES-MF is better than MADS-PB on nine of the sixteen tested problems (i.e., G1, G2, G5, G6, G7, G8, G9, G11, G12, G13, and WBD). MADS-PB is shown to be better on the following four problems: G3, G4, G6, and PVD. Neither solver succeeded in finding a feasible solution for problem G10; for the TCS problem, MADS-PB returns a flag error while ES-MF converges to an infeasible solution.

5.2 Application to a multidisciplinary design optimization problem

MDO problems are typical real applications where one has to minimize a given objective function subject to a set of relaxable and unrelaxable constraints. In this section, we test our proposed algorithm on an MDO problem taken from [17, 35], where a simplified wing design (built around a tube) is considered. In this test case, one tries to find the best wing design considering the interdisciplinary trade-off between aerodynamic (minimum drag) and structural (minimum weight) performances. Typically, the two disciplines are evaluated sequentially by means of a fixed-point iterative method until the coupling is solved with the appropriate accuracy. More details on the problem are given in [17]. The optimization problem has 7 design variables, see Table 4. In addition to the bounds, the test case has three nonlinear constraints which are treated as relaxable. The bound constraints x_lb ≤ x ≤ x_ub are regarded as unrelaxable and are treated using the ℓ2-projection approach. We run our code using the proposed starting guess x_0 = (37.5, 9.0, 0.39, 1.1, 1.0, 3.3, 0.545), as in [35]. The provided starting point is infeasible with respect to the nonlinear constraints. A large maximal number of function evaluations is used to quantify the asymptotic efficiency and the robustness of the tested methods. From the obtained results, one can see that ES-MF converges to an asymptotically feasible solution x* = (43.043, 6.738, 0.28, 3.000, 0.749, 3.942, 0.300). MADS-PB, using 3848 function evaluations, converges to the feasible point x* = (44.170, 6.746, 0.28, 3.000, 0.721, 4.028, 0.300). We note that, while MADS-PB seems to converge to a local minimum within a reasonable budget, the solution obtained by the ES-MF solver seems to be better than even the best known optimum, at the price of a very small constraint violation.
To confirm the obtained performance of ES-MF on this MDO problem, we also test 10 random starting points generated inside the hyper-cube defined by x_lb and x_ub as x_0 = α x_lb + (1 − α) x_ub, for 10 values of α uniformly generated in (0, 1).
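This test protocol can be reproduced with the following small sketch (the seed and helper name are arbitrary).

    import numpy as np

    def random_starting_points(lb, ub, n_points=10, seed=0):
        """Starting points x_0 = alpha*lb + (1 - alpha)*ub, for n_points values
        of alpha drawn uniformly in (0, 1)."""
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
        return [alpha * lb + (1.0 - alpha) * ub
                for alpha in rng.uniform(0.0, 1.0, n_points)]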

Table 3: Comparison results for the merit approach using a maximal budget of 20000 function evaluations (average of 10 runs).

The obtained results using MADS-PB and ES-MF are given in Table 5. One can see that, for all the chosen starting points, ES-MF converges to the global minimum of the MDO problem while MADS-PB gets trapped by local minima. We note that, in general, ES-MF requires more function evaluations than MADS-PB for all the tested instances.

Table 4: Description of the MDO problem variables (wing span, root cord, taper ratio, angle of attack at root, angle of attack at tip and at rest, tube external diameter, and tube thickness), with their lower and upper bounds x_lb and x_ub, the starting guess, and the best known solution (objective function value and constraint violation). The coordinates and the value of the best known solution have been rounded.

Table 5: Comparison results obtained on the tested MDO problem using 10 different starting points (average of 10 runs for each starting point): for each instance, the value of f at the starting point x_0, and f(x*), g(x*), and #f for ES-MF and MADS-PB.

6 Conclusion

In this paper, a new ES algorithm is proposed, using a merit function to decide and control the distribution of the generated points. The obtained algorithm enjoys global convergence properties and generalizes the work in [15] by including relaxable constraints, in the spirit of what is done in [18]. The proposed convergence analysis was organized depending on the number of times Restoration is entered. We provided a practical implementation on well-known global optimization problems as well as on a multidisciplinary design optimization problem. In terms of global exploration, the obtained results were promising compared to the progressive barrier approach implemented in the MADS algorithm.

A Appendix

A.1 Case where the restoration algorithm is never left

Theorem A.1 Assume that f is bounded below and that the restoration is entered and never left. (i) Then there exists a refining subsequence. (ii) Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g is Lipschitz continuous near x*, and let d ∈ T^H_Ωnr(x*) be a corresponding refining direction. Then either g(x*) = 0 or g°(x*; d) ≥ 0. (iii) Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g and f are Lipschitz continuous near x*, and let d ∈ T^H_Ωnr(x*) be a corresponding refining direction such that g°(x*; d) ≤ 0. Then f°(x*; d) ≥ 0. (iv) Assume that the interior of the set T(x*) given in (7) is non-empty. Let the set of refining directions be dense in T(x*). Then f°(x*; v) ≥ 0 for all v ∈ T^Cl_Ωnr(x*) such that g°(x*; v) ≤ 0, and x* is a Clarke stationary point of problem (1).

Proof. (i) There must exist a refining subsequence K within this call of the restoration, by applying the same argument as in the case where one has g(x_{k+1}) < g(x_k) − ρ(σ_k) and g(x_k) > Cρ(σ_k) for an infinite subsequence of successful iterations (see the proof of Theorem 4.1). By assumption there exists a subsequence K′ ⊆ K such that the sequence {a_k/||a_k||}_{k∈K′} converges to d ∈ T^H_Ωnr(x*) in K′ and the sequence {σ_k ||a_k||}_{k∈K′} goes to zero in K′. Thus one must necessarily have, for k sufficiently large in K′, x_{k+1}^trial = x_k + σ_k a_k ∈ Ω_nr. (ii) Since the iteration k ∈ K′ is unsuccessful in the Restoration, g(x_k + σ_k a_k) ≥ g(x_k) − ρ(σ_k) or g(x_k) ≤ Cρ(σ_k), and the proof follows an argument already seen (see the proof of Theorem 4.2). (iii) Since, at the unsuccessful iteration k ∈ K′, Restoration is never left, one has M(x_k + σ_k a_k) ≥ M(x_k), and the proof follows an argument already seen (see the proof of Theorem 4.4). (iv) The same proof as [18, Theorem 4.4].

A.2 Case where the restoration algorithm is entered and left an infinite number of times

Theorem A.2 Consider Algorithm 2 and assume that f is bounded below. Assume that Restoration is entered and left an infinite number of times. (i) Then there exists a refining subsequence. (ii) Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g is Lipschitz continuous near x*, and let d ∈ T^H_Ωnr(x*) be a corresponding refining direction. Then either g(x*) = 0 (implying x* ∈ Ω_r and thus x* ∈ Ω) or g°(x*; d) ≥ 0. (iii) Let x* ∈ Ω_nr be the limit point of a convergent subsequence of unsuccessful iterates {x_k}_K for which lim_{k∈K} σ_k = 0. Assume that g and f are Lipschitz continuous near x*, and let d ∈ T^H_Ωnr(x*) be a corresponding refining direction such that g°(x*; d) ≤ 0. Then f°(x*; d) ≥ 0. (iv) Assume that the interior of the set T(x*) given in (7) is non-empty. Let the set of refining directions be dense in T(x*). Then f°(x*; v) ≥ 0 for all v ∈ T^Cl_Ωnr(x*) such that g°(x*; v) ≤ 0, and x* is a Clarke stationary point.

Proof. (i) Let K_1 ⊆ K and K_2 ⊆ K be two subsequences where Restoration is entered and left, respectively. Since each iteration k ∈ K_2 is unsuccessful in the Restoration, one knows that the step size σ_k is reduced and never increased there, and one then obtains that σ_k tends to zero. By assumption there exists a subsequence K′ ⊆ K_2 such that the sequence {a_k/||a_k||}_{k∈K′} converges to d ∈ T^H_Ωnr(x*) in K′ and the sequence {σ_k ||a_k||}_{k∈K′} goes to zero in K′. (ii) For all k ∈ K′, one has g(x_k + σ_k a_k) ≥ g(x_k) − ρ(σ_k) or g(x_k) ≤ Cρ(σ_k), and one concludes that either g(x*) = 0 or g°(x*; d) ≥ 0. (iii) For all k ∈ K′, one has M(x_k + σ_k a_k) ≥ M(x_k), and from this we conclude that f°(x*; d) ≥ 0 if g°(x*; d) ≤ 0. (iv) The same proof as [18, Theorem 4.4].

References

[1] M. A. Abramson, C. Audet, G. Couture, J. E. Dennis Jr., S. Le Digabel, and C. Tribes. The NOMAD project. Software available at https://www.gerad.ca/nomad/.
[2] C. Audet, S. Le Digabel, and C. Tribes. NOMAD user guide. Technical report, Les cahiers du GERAD.
[3] C. Audet and J. E. Dennis Jr. Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim., 17(1):188–217, 2006.
[4] C. Audet and J. E. Dennis Jr. A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim., 20(1):445–472, 2009.
[5] A. Auger, D. Brockhoff, and N. Hansen. Benchmarking the local metamodel CMA-ES on the noiseless BBOB 2013 test bed. In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO '13 Companion, New York, NY, USA, 2013. ACM.
[6] A. Auger, N. Hansen, Z. J. Perez, R. Ros, and M. Schoenauer. Experimental comparisons of derivative free optimization algorithms. In 8th International Symposium on Experimental Algorithms. Springer Verlag, 2009.
[7] Z. Bouzarkouna. Well placement optimization. PhD thesis, University Paris-Sud - Laboratoire de Recherche en Informatique.
[8] F. H. Clarke. Optimization and Nonsmooth Analysis. John Wiley & Sons, New York, 1983. Reissued by SIAM, Philadelphia, 1990.
[9] C. A. C. Coello. List of references on constraint-handling techniques used with evolutionary algorithms.
[10] C. A. C. Coello. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191:1245–1287, 2002.
[11] C. A. Coello Coello and E. M. Montes. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16:193–203, 2002.
[12] A. R. Conn, K. Scheinberg, and L. N. Vicente. Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia, 2009.
[13] Y. Diouane, S. Gratton, X. Vasseur, L. N. Vicente, and H. Calandra. A parallel evolution strategy for an earth imaging problem in geophysics. Optim. Eng., 17:3–26, 2016.


More information

Global convergence of trust-region algorithms for constrained minimization without derivatives

Global convergence of trust-region algorithms for constrained minimization without derivatives Global convergence of trust-region algorithms for constrained minimization without derivatives P.D. Conejo E.W. Karas A.A. Ribeiro L.G. Pedroso M. Sachine September 27, 2012 Abstract In this work we propose

More information

Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems Mark A. Abramson Air Force Institute of Technology Department of Mathematics and Statistics 2950 Hobson Way, Building

More information

USING SIMPLEX GRADIENTS OF NONSMOOTH FUNCTIONS IN DIRECT SEARCH METHODS

USING SIMPLEX GRADIENTS OF NONSMOOTH FUNCTIONS IN DIRECT SEARCH METHODS USING SIMPLEX GRADIENTS OF NONSMOOTH FUNCTIONS IN DIRECT SEARCH METHODS A. L. CUSTÓDIO, J. E. DENNIS JR., AND L. N. VICENTE Abstract. It has been shown recently that the efficiency of direct search methods

More information

Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES

Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Viability Principles for Constrained Optimization Using a (1+1)-CMA-ES Andrea Maesani and Dario Floreano Laboratory of Intelligent Systems, Institute of Microengineering, Ecole Polytechnique Fédérale de

More information

On constraint qualifications with generalized convexity and optimality conditions

On constraint qualifications with generalized convexity and optimality conditions On constraint qualifications with generalized convexity and optimality conditions Manh-Hung Nguyen, Do Van Luu To cite this version: Manh-Hung Nguyen, Do Van Luu. On constraint qualifications with generalized

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Interpolation-Based Trust-Region Methods for DFO

Interpolation-Based Trust-Region Methods for DFO Interpolation-Based Trust-Region Methods for DFO Luis Nunes Vicente University of Coimbra (joint work with A. Bandeira, A. R. Conn, S. Gratton, and K. Scheinberg) July 27, 2010 ICCOPT, Santiago http//www.mat.uc.pt/~lnv

More information

On the complexity of an Inexact Restoration method for constrained optimization

On the complexity of an Inexact Restoration method for constrained optimization On the complexity of an Inexact Restoration method for constrained optimization L. F. Bueno J. M. Martínez September 18, 2018 Abstract Recent papers indicate that some algorithms for constrained optimization

More information

arxiv: v1 [cs.ne] 9 May 2016

arxiv: v1 [cs.ne] 9 May 2016 Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES) arxiv:1605.02720v1 [cs.ne] 9 May 2016 ABSTRACT Ilya Loshchilov University of Freiburg Freiburg, Germany ilya.loshchilov@gmail.com

More information

Lecture 13: Constrained optimization

Lecture 13: Constrained optimization 2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization Proceedings of the 5th WSEAS Int. Conf. on System Science and Simulation in Engineering, Tenerife, Canary Islands, Spain, December 16-18, 2006 391 Implementation of an Interior Point Multidimensional Filter

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

8 Barrier Methods for Constrained Optimization

8 Barrier Methods for Constrained Optimization IOE 519: NL, Winter 2012 c Marina A. Epelman 55 8 Barrier Methods for Constrained Optimization In this subsection, we will restrict our attention to instances of constrained problem () that have inequality

More information

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained

More information

Key words. Global optimization, multistart strategies, direct-search methods, pattern search methods, nonsmooth calculus.

Key words. Global optimization, multistart strategies, direct-search methods, pattern search methods, nonsmooth calculus. GLODS: GLOBAL AND LOCAL OPTIMIZATION USING DIRECT SEARCH A. L. CUSTÓDIO AND J. F. A. MADEIRA Abstract. Locating and identifying points as global minimizers is, in general, a hard and timeconsuming task.

More information

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse Yongjia Song, James Luedtke Virginia Commonwealth University, Richmond, VA, ysong3@vcu.edu University

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Week 2: Sequences and Series

Week 2: Sequences and Series QF0: Quantitative Finance August 29, 207 Week 2: Sequences and Series Facilitator: Christopher Ting AY 207/208 Mathematicians have tried in vain to this day to discover some order in the sequence of prime

More information

Worst case complexity of direct search

Worst case complexity of direct search EURO J Comput Optim (2013) 1:143 153 DOI 10.1007/s13675-012-0003-7 ORIGINAL PAPER Worst case complexity of direct search L. N. Vicente Received: 7 May 2012 / Accepted: 2 November 2012 / Published online:

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Direct search based on probabilistic feasible descent for bound and linearly constrained problems

Direct search based on probabilistic feasible descent for bound and linearly constrained problems Direct search based on probabilistic feasible descent for bound and linearly constrained problems S. Gratton C. W. Royer L. N. Vicente Z. Zhang March 8, 2018 Abstract Direct search is a methodology for

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Multidisciplinary System Design Optimization (MSDO)

Multidisciplinary System Design Optimization (MSDO) Multidisciplinary System Design Optimization (MSDO) Numerical Optimization II Lecture 8 Karen Willcox 1 Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox Today s Topics Sequential

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Introduction to Black-Box Optimization in Continuous Search Spaces. Definitions, Examples, Difficulties

Introduction to Black-Box Optimization in Continuous Search Spaces. Definitions, Examples, Difficulties 1 Introduction to Black-Box Optimization in Continuous Search Spaces Definitions, Examples, Difficulties Tutorial: Evolution Strategies and CMA-ES (Covariance Matrix Adaptation) Anne Auger & Nikolaus Hansen

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations

More information

Online Supplement for. Engineering Optimization

Online Supplement for. Engineering Optimization Online Supplement for Constrained Optimization by Radial Basis Function Interpolation for High-Dimensional Expensive Black-Box Problems with Infeasible Initial Points Engineering Optimization Rommel G.

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Advanced Optimization

Advanced Optimization Advanced Optimization Lecture 3: 1: Randomized Algorithms for for Continuous Discrete Problems Problems November 22, 2016 Master AIC Université Paris-Saclay, Orsay, France Anne Auger INRIA Saclay Ile-de-France

More information

OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS

OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS Xiaofei Fan-Orzechowski Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony

More information

Derivative-Free Trust-Region methods

Derivative-Free Trust-Region methods Derivative-Free Trust-Region methods MTH6418 S. Le Digabel, École Polytechnique de Montréal Fall 2015 (v4) MTH6418: DFTR 1/32 Plan Quadratic models Model Quality Derivative-Free Trust-Region Framework

More information

WORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY

WORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY WORST CASE OPTIMIZATION USING CHEBYSHEV INEQUALITY Kiyoharu Tagawa School of Science and Engineering, Kindai University, Japan tagawa@info.kindai.ac.jp Abstract In real-world optimization problems, a wide

More information

A line-search algorithm inspired by the adaptive cubic regularization framework with a worst-case complexity O(ɛ 3/2 )

A line-search algorithm inspired by the adaptive cubic regularization framework with a worst-case complexity O(ɛ 3/2 ) A line-search algorithm inspired by the adaptive cubic regularization framewor with a worst-case complexity Oɛ 3/ E. Bergou Y. Diouane S. Gratton December 4, 017 Abstract Adaptive regularized framewor

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-19-3 March

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Empirical comparisons of several derivative free optimization algorithms

Empirical comparisons of several derivative free optimization algorithms Empirical comparisons of several derivative free optimization algorithms A. Auger,, N. Hansen,, J. M. Perez Zerpa, R. Ros, M. Schoenauer, TAO Project-Team, INRIA Saclay Ile-de-France LRI, Bat 90 Univ.

More information

A multistart multisplit direct search methodology for global optimization

A multistart multisplit direct search methodology for global optimization 1/69 A multistart multisplit direct search methodology for global optimization Ismael Vaz (Univ. Minho) Luis Nunes Vicente (Univ. Coimbra) IPAM, Optimization and Optimal Control for Complex Energy and

More information

Module 04 Optimization Problems KKT Conditions & Solvers

Module 04 Optimization Problems KKT Conditions & Solvers Module 04 Optimization Problems KKT Conditions & Solvers Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September

More information

On the optimal order of worst case complexity of direct search

On the optimal order of worst case complexity of direct search On the optimal order of worst case complexity of direct search M. Dodangeh L. N. Vicente Z. Zhang May 14, 2015 Abstract The worst case complexity of direct-search methods has been recently analyzed when

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems

Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems Mark A. Abramson Air Force Institute of Technology Department of Mathematics and Statistics 2950 Hobson Way, Building

More information

A Comparison of GAs Penalizing Infeasible Solutions and Repairing Infeasible Solutions on the 0-1 Knapsack Problem

A Comparison of GAs Penalizing Infeasible Solutions and Repairing Infeasible Solutions on the 0-1 Knapsack Problem A Comparison of GAs Penalizing Infeasible Solutions and Repairing Infeasible Solutions on the 0-1 Knapsack Problem Jun He 1, Yuren Zhou 2, and Xin Yao 3 1 J. He is with the Department of Computer Science,

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions Economics 24 Fall 211 Problem Set 2 Suggested Solutions 1. Determine whether the following sets are open, closed, both or neither under the topology induced by the usual metric. (Hint: think about limit

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

Key words. Pattern search, linearly constrained optimization, derivative-free optimization, degeneracy, redundancy, constraint classification

Key words. Pattern search, linearly constrained optimization, derivative-free optimization, degeneracy, redundancy, constraint classification PATTERN SEARCH METHODS FOR LINEARLY CONSTRAINED MINIMIZATION IN THE PRESENCE OF DEGENERACY OLGA A. BREZHNEVA AND J. E. DENNIS JR. Abstract. This paper deals with generalized pattern search (GPS) algorithms

More information

Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating Support Vector Machines and Covariance-Matrix Adaptation-ES

Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating Support Vector Machines and Covariance-Matrix Adaptation-ES Covariance Matrix Adaptation-Evolution Strategy Surrogate models for Single and Multi-Objective Stochastic Optimization: Integrating and Covariance-Matrix Adaptation-ES Ilya Loshchilov, Marc Schoenauer,

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS

AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

An asymptotic ratio characterization of input-to-state stability

An asymptotic ratio characterization of input-to-state stability 1 An asymptotic ratio characterization of input-to-state stability Daniel Liberzon and Hyungbo Shim Abstract For continuous-time nonlinear systems with inputs, we introduce the notion of an asymptotic

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Karol Opara 1 and Jaros law Arabas 2 1 Systems Research Institute, Polish Academy of Sciences 2 Institute of Electronic

More information

Derivative-free methods for nonlinear programming with general lower-level constraints*

Derivative-free methods for nonlinear programming with general lower-level constraints* Volume 30, N. 1, pp. 19 52, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam Derivative-free methods for nonlinear programming with general lower-level constraints* M.A. DINIZ-EHRHARDT 1, J.M.

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Meta-heuristics for combinatorial optimisation

Meta-heuristics for combinatorial optimisation Meta-heuristics for combinatorial optimisation João Pedro Pedroso Centro de Investigação Operacional Faculdade de Ciências da Universidade de Lisboa and Departamento de Ciência de Computadores Faculdade

More information

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018

More information