How to Solve a Semi-infinite Optimization Problem


How to Solve a Semi-infinite Optimization Problem

Oliver Stein

March 27, 2012

Abstract. After an introduction to main ideas of semi-infinite optimization, this article surveys recent developments in theory and numerical methods for standard and generalized semi-infinite optimization problems. Particular attention is paid to connections with mathematical programs with complementarity constraints, lower level Wolfe duality, semi-smooth approaches, as well as branch and bound techniques in adaptive convexification procedures. A section on recent genericity results includes a discussion of the symmetry effect in generalized semi-infinite optimization.

Keywords: Semi-infinite optimization, design centering, robust optimization, mathematical program with complementarity constraints, Wolfe duality, semi-smooth equation, adaptive convexification, genericity, symmetry.

AMS subject classifications: 90C34, 90C30, 49M37, 65K10.

Institute of Operations Research, Karlsruhe Institute of Technology (KIT), Germany, stein@kit.edu

1 Introduction

This article reviews recent developments in theory, applications and numerical methods of so-called semi-infinite optimization problems, where finitely many variables are subject to infinitely many inequality constraints. In a general form, these problems can be stated as

$GSIP: \quad \text{minimize } f(x) \quad \text{subject to} \quad x \in M$

with the set of feasible points

$M = \{x \in \mathbb{R}^n \mid g(x,y) \le 0 \text{ for all } y \in Y(x)\} \qquad (1)$

and a set-valued mapping $Y: \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ describing the index set of inequality constraints. The defining functions $f$ and $g$ are assumed to be real-valued and at least continuous on their respective domains. Standard assumptions on the set-valued mapping $Y$ are somewhat technical and will be stated in Section 3 below. Clearly, the main numerical challenge in semi-infinite optimization is to find ways to treat the infinitely many constraints.

While in the case of an $x$-dependent index set $Y(x)$ one speaks of a generalized semi-infinite problem (thus the acronym GSIP), the important subclass formed by problems with a constant index set $Y \subseteq \mathbb{R}^m$ is termed standard semi-infinite optimization (SIP).

As basic references we mention [34] for an introduction to semi-infinite optimization, [36, 74] for numerical methods in SIP, and the monographs [23] for linear semi-infinite optimization as well as [70] for algorithmic aspects. The monograph [85] contains a detailed study of generalized semi-infinite optimization. The most recent comprehensive reviews on theory, applications and numerical methods of standard and generalized semi-infinite optimization are [58] and [31], respectively. The reader is referred to these articles for descriptions of the state of the art in semi-infinite optimization around the year 2007. The present paper, on the other hand, will focus on important developments of this very active field during the past five years. As internet resources we recommend the semi-infinite programming bibliography [59] with several hundreds of references, and the NEOS semi-infinite programming directory [22]. Furthermore, since the most recent results on stability theory for linear semi-infinite problems are excellently surveyed in [57], we will not discuss them in the present paper.

This article is structured as follows. In Section 2 we will briefly discuss some classical as well as recent applications of semi-infinite optimization. Section 3 introduces basic theoretical concepts on which both the derivation of optimality conditions and the design of numerical methods rely. Section 4 explains how the bilevel structure of semi-infinite optimization can be employed numerically, where besides a well-known lifting approach leading to mathematical programs with complementarity constraints, we also present recent results on a lifting idea resulting in nondegenerate smooth problems. Section 5 describes a feasible point algorithm for standard SIPs which is based on recent developments in global optimization. Important genericity results in generalized semi-infinite optimization, which were obtained only in the past few years, are reviewed in Section 6, before we close the article with some final remarks in Section 7.

2 Applications

Among the numerous applications of semi-infinite optimization, in this section we focus on design centering and on robust optimization. We emphasize, however, that semi-infinite optimization historically emerged as a smooth reformulation of the intrinsically nonsmooth Chebyshev approximation problem (cf., e.g., [31, 36, 85]). Further applications include the optimal layout of an assembly line ([47, 102]), time minimal control ([51, 54, 102]), disjunctive optimization ([85]) and, more recently, robust portfolio optimization ([13, 105, 106]), the identification of regression models as well as the dynamics of networks in the presence of uncertainty ([55, 103]). We also remark that semi-definite optimization ([100, 109]) can be interpreted as a special case of SIP. This approach is elaborated in [18, 101].

2.1 Design centering

A design centering problem consists in maximizing some measure $f(x)$, for example the volume, of a parametrized body $Y(x)$ while it is inscribed into a container set $G(x)$:

$DC: \quad \max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad Y(x) \subseteq G(x).$

Design centering problems have been studied extensively, see for example [25, 38, 68, 69, 87] and the references therein. They are also related to the so-called set containment problem from [61].
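To make the inclusion constraint of $DC$ concrete, the following minimal sketch inscribes a disc $Y(x)$, parametrized by its center and radius, into a fixed container $G = \{y \in \mathbb{R}^2 \mid g(y) \le 0\}$ and checks the inclusion approximately by sampling. The container function $g$ and all numerical values are illustrative assumptions, not data from the text; this anticipates the functional description of $G$ given below.

```python
import numpy as np

# Illustrative design centering data: inscribe the disc
# Y(x) = {c + r*u : ||u|| <= 1}, x = (c, r), into the container
# G = {y in R^2 : g(y) <= 0}. The inclusion Y(x) ⊆ G is the
# semi-infinite constraint g(c + r*u) <= 0 for all ||u|| <= 1,
# checked here by crude sampling of the index set.

def g(y):                                  # assumed container: an ellipse
    return y[0]**2 + 2.0 * y[1]**2 - 1.0

def is_feasible(c, r, n_samples=1000):
    """Approximate check of the semi-infinite inclusion constraint."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples)
    # for this convex g the maximum over the disc is attained on the
    # boundary ||u|| = 1, so it suffices to sample the circle
    boundary = c[:, None] + r * np.vstack((np.cos(angles), np.sin(angles)))
    return np.all(g(boundary) <= 0.0)

print(is_feasible(np.array([0.0, 0.0]), 0.5))   # True: the disc fits
print(is_feasible(np.array([0.5, 0.0]), 0.6))   # False: the disc sticks out
```

Maximizing the radius $r$ over all centers and radii passing this check would be the design centering problem itself.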

In applications the set $G(x)$ often is independent of $x$ and has a complicated structure, while $Y(x)$ possesses a simpler geometry. For example, in the so-called maneuverability problem of a robot from [24] the authors determine lower bounds for the volume of a complicated container set $G$ by inscribing an ellipsoid $Y(x)$ into $G$ whose volume can be calculated. This approach actually gave rise to one of the first formulations of a generalized semi-infinite optimization problem in [35].

An obvious application of design centering is the cutting stock problem. The problem of cutting a gem of maximal volume with prescribed shape features from a raw gem is treated in [68] and, with the bilevel algorithm for semi-infinite optimization from [93] (cf. also Sec. 4.1), in [108]. For a gentle description of this industrial application see [56].

The connection with semi-infinite optimization becomes apparent when we assume that the container set $G(x)$ is described by a functional constraint,

$G(x) = \{y \in \mathbb{R}^m \mid g(x,y) \le 0\}. \qquad (2)$

Then the inclusion constraint $Y(x) \subseteq G(x)$ of $DC$ is equivalent to the generalized semi-infinite constraint $g(x,y) \le 0$ for all $y \in Y(x)$, so that the design centering problem becomes a GSIP. Note that an (easier to treat) standard SIP appears if the body $Y$ is independent of $x$, while the container $G(x)$ is allowed to vary with $x$. Many design centering problems can actually be reformulated as standard SIPs if the $x$-dependent transformations of $Y(x)$ are, for example, translations, rotations, or scalings, whose inverse transformations can as well be imposed on the container set. We point out that any GSIP with a given constraint function $g$ can be interpreted as a design centering problem by defining $G(x)$ as in (2). Thus, generalized semi-infinite optimization and design centering are equivalent problem classes.

2.2 Robust optimization

Robustness questions arise when an optimization problem is subject to uncertain data. If, for example, an inequality constraint function $g(x,y)$ depends on some uncertain parameter vector $y$ from a so-called uncertainty set $Y \subseteq \mathbb{R}^m$, then the robust or pessimistic way to deal with this constraint is to use its worst case reformulation

$g(x,y) \le 0 \quad \text{for all } y \in Y,$

which is clearly of semi-infinite type. A point $x$ which is feasible for this semi-infinite constraint satisfies $g(x,y) \le 0$, no matter what the actual parameter $y \in Y$ turns out to be. This approach is also known as the principle of guaranteed results (cf. [21]). When the uncertainty set $Y$ also depends on the decision variable $x$, we arrive at a generalized semi-infinite constraint. For example, uncertainties concerning small displacements of an aircraft may be modeled as being dependent on its speed. For an example from portfolio analysis see [93].

In [7] it is shown that under special structural assumptions the semi-infinite problem appearing in robust optimization can be reformulated as a semi-definite problem and then be solved with polynomial time algorithms ([5]). Under similarly special assumptions a saddle point approach for robust programs is given in [99]. We will discuss these assumptions in some more detail in the subsequent Section 3. The current state of the art in robust optimization is surveyed in [8].

Again, any GSIP can be interpreted as a robust optimization problem, so that also these two problem classes are equivalent. The reason for the rather separate bodies of literature on semi-infinite optimization on the one hand, and robust optimization on the other hand, lies in the fact that solution methods for robust optimization are mainly motivated from a complexity point of view, whereas methods in semi-infinite optimization try to deal with the numerical difficulties which arise if the stronger assumptions of robust optimization are relaxed.

3 The lower level problem

The key to the theoretical as well as algorithmic treatment of semi-infinite optimization problems lies in their bilevel structure. In fact, it is not hard to see that an alternative description of the feasible set in (1) is given by

$M = \{x \in \mathbb{R}^n \mid \varphi(x) \le 0\} \qquad (3)$

with the function

$\varphi(x) := \sup_{y \in Y(x)} g(x,y).$

The common convention $\sup \emptyset = -\infty$ is consistent here, as an empty index set $Y(x)$ corresponds, loosely speaking, to the absence of restrictions at $x$ and, hence, to the feasibility of $x$.

In applications (cf. Sec. 2) often finitely many semi-infinite constraints $g_i(x,y) \le 0$, $y \in Y_i(x)$, $i \in I$, describe the feasible set $M$ of GSIP, along with finitely many equality constraints, so that with $\varphi_i(x) = \sup_{y \in Y_i(x)} g_i(x,y)$ one obtains feasible sets of the form

$M = \{x \in \mathbb{R}^n \mid \varphi_i(x) \le 0,\ i \in I,\ h_j(x) = 0,\ j \in J\}.$

In order to avoid technicalities, however, in this article we focus on the basic case of a single semi-infinite constraint and refer the interested reader to [85] for more general formulations.

3.1 Topological properties of the feasible set

At this point we introduce the usual assumptions on the set-valued mapping $Y: \mathbb{R}^n \rightrightarrows \mathbb{R}^m$, as they can be viewed as sufficient conditions for desirable properties of the function $\varphi$. In fact, $Y$ is assumed to be

nontrivial, that is, its graph $\mathrm{gph}\,Y = \{(x,y) \in \mathbb{R}^n \times \mathbb{R}^m \mid y \in Y(x)\}$ is nonempty,

outer semi-continuous, that is, $\mathrm{gph}\,Y$ is a closed set, and

locally bounded, that is, for each $\bar x \in \mathbb{R}^n$ there exists a neighborhood $U$ of $\bar x$ such that $\bigcup_{x \in U} Y(x)$ is bounded in $\mathbb{R}^m$.

In standard semi-infinite optimization, that is, for a constant set-valued mapping $Y(x) \equiv Y$, these assumptions are obviously equivalent to $Y$ being a nonempty and compact subset of $\mathbb{R}^m$.

Together with the continuity of $g$, standard results from parametric optimization (cf., e.g., [3]) imply that $\varphi$ is at least upper semi-continuous on $\mathbb{R}^n$ if $Y$ is outer semi-continuous and locally bounded. Since for each $x \in \mathbb{R}^n$ the set $Y(x)$ is compact, we also have $\varphi(x) < +\infty$ on $\mathbb{R}^n$. Moreover, the nontriviality of $Y$ ensures that $\varphi(x) > -\infty$ holds for at least one $x \in \mathbb{R}^n$ so that, altogether, $\varphi$ is upper semi-continuous and proper on $\mathbb{R}^n$.

In the standard semi-infinite case, one can even conclude that $\varphi$ is a continuous and real-valued function for a nonempty and compact index set $Y$. In view of (3), the feasible set $M$ is, thus, closed. On the other hand, in the generalized semi-infinite case upper semi-continuity of $\varphi$ is not sufficient to guarantee closedness of $M$ (see [85] for examples of nonclosed feasible sets). A natural assumption which ensures lower semi-continuity of $\varphi$ and, thus, closedness of $M$ for GSIP, is inner semi-continuity of $Y$ (cf. [85]). This assumption often turns out to be satisfied in practical applications.

3.2 Computation of the lower level optimal value

The function $\varphi$ actually is the optimal value function of some subordinate optimization problem, the so-called lower level problem

$Q(x): \quad \max_{y \in \mathbb{R}^m} g(x,y) \quad \text{s.t.} \quad y \in Y(x). \qquad (4)$

In contrast to the upper level problem, which consists in minimizing $f$ over $M$, in the lower level problem $x$ plays the role of an $n$-dimensional parameter, and $y$ is the decision variable. The main computational problem in semi-infinite optimization is that the lower level problem has to be solved to global optimality, even if only a stationary point of the upper level problem is sought. In fact, by (3) a point $x \in \mathbb{R}^n$ is feasible for GSIP if and only if the globally maximal value $\varphi(x)$ of $Q(x)$ is nonpositive. Clearly, the calculation of some locally maximal point $y_{loc}(x)$ of $Q(x)$ with $g(x, y_{loc}(x)) \le 0$ is not sufficient to ensure feasibility of $x$, since then still indices $y \in Y(x)$ with $g(x,y) > 0$ might exist, making $x$ infeasible.

The need to numerically calculate a globally maximal value and the need to check infinitely many inequality constraints to ensure feasibility of a given point are equivalent problems. However, most algorithms for (standard) semi-infinite optimization, in particular discretization and exchange methods (cf., e.g., [74]), contain steps which check feasibility of an iterate, at least up to some tolerance. In implementations, such steps are usually performed by evaluating $g$ on a very fine discretization of the index set which, from the global optimization point of view, corresponds to the brute force method of evaluating the objective function at a huge number of feasible points. While this approach is at least disputable from a conceptual point of view, it may certainly not be expected to work efficiently for index sets of already moderate dimensions.
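The brute force feasibility check just described can be made explicit in a few lines. In the following sketch the function $g$ and the box index set are illustrative assumptions; $\varphi(x)$ is approximated by enumerating a uniform grid on $Y = [0,1]^m$, and the cost of the enumeration grows exponentially in $m$.

```python
import itertools
import numpy as np

# A minimal sketch of the brute-force feasibility check: phi(x) is
# approximated by the maximum of g(x, .) over a uniform grid on the
# box index set Y = [0, 1]^m. The function g is a stand-in chosen
# only for illustration.

def g(x, y):
    return np.sin(5.0 * y.sum()) - x      # x in R, y in R^m (assumed data)

def phi_approx(x, m, points_per_dim=100):
    """Approximate phi(x) = max_{y in Y} g(x, y) on a uniform grid."""
    grid_1d = np.linspace(0.0, 1.0, points_per_dim)
    best = -np.inf
    for y in itertools.product(grid_1d, repeat=m):   # points_per_dim**m points
        best = max(best, g(x, np.array(y)))
    return best

# x is (approximately) feasible iff phi_approx(x, m) <= 0; already for
# m = 4 the loop visits 10^8 grid points, illustrating the scaling issue.
print(phi_approx(1.5, m=2) <= 0.0)
```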

An obvious remedy for this situation is to make assumptions under which it is easy to solve the lower level problem to global optimality, for example convexity. More precisely, if for any $x \in \mathbb{R}^n$ the set $Y(x)$ is convex and the function $g(x,\cdot)$ is concave on $Y(x)$, then the globally maximal value $\varphi(x)$ of the convex problem $Q(x)$ can be calculated efficiently (at least in theory). This seemingly obvious assumption appeared only recently in the literature on semi-infinite optimization (cf., e.g., [93]), since it is violated in many classical applications, like Chebyshev approximation. In the subsequent Sections 4 and 5 we will see, however, that not only interesting applications exist, but that also nonconvex lower level problems can be treated in this way by an approximation procedure.

From a computational point of view, it is of course desirable to have a functional description of the index set at hand, for example

$Y(x) = \{y \in \mathbb{R}^m \mid v(x,y) \le 0\}$

with some at least continuous function $v$ with vector values in $\mathbb{R}^s$. Again, we omit additional equality constraints for the ease of presentation. Note that then outer semi-continuity of $Y$ is automatic, and certain constraint qualifications imply the inner semi-continuity of $Y$ (cf. [85]). Moreover, $Y(x)$ is convex if each component of $v(x,\cdot)$ is convex on $\mathbb{R}^m$.

In this framework, the main idea of robust optimization is to strengthen the convexity assumption on $Q(x)$ further, so that not only $\varphi(x)$ is efficiently computable, but also the resulting inequality constraint $\varphi(x) \le 0$ in the upper level problem may be treated efficiently. For a stylized illustration of this idea, assume that we have $n = m$, that $g(x,y) = x^\top(y - a)$ with $a \in \mathbb{R}^n$ is a bilinear function, and that $Y = \{y \in \mathbb{R}^m \mid \|y\|_2 \le 1\}$ is the unit ball. Then a closed form of the lower level optimal value function,

$\varphi(x) = \|x\|_2 - a^\top x,$

is available, and $\varphi(x) \le 0$ becomes the second-order cone constraint $\|x\|_2 \le a^\top x$. Eventually, via reformulation as a semi-definite constraint and assuming a linear objective function $f$, polynomial time algorithms are available to solve the upper level problem ([7, 8]).
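The closed form of $\varphi$ in this example is easy to verify numerically. The following small check, an illustration rather than part of the original text, compares $\|x\|_2 - a^\top x$ with a sampled maximum of $g(x,\cdot)$ over the unit sphere, where the linear function $y \mapsto x^\top y$ attains its maximum over the ball.

```python
import numpy as np

# Numerical check of the closed form above: for g(x, y) = x^T (y - a) and
# Y the Euclidean unit ball, the lower level maximizer is y* = x / ||x||_2,
# so phi(x) = ||x||_2 - a^T x.

rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=n)                    # illustrative data
x = rng.normal(size=n)

phi_closed_form = np.linalg.norm(x) - a @ x

# Sample the maximum of g(x, .) over the unit sphere, where the linear
# function y -> x^T y attains its maximum over the ball.
u = rng.normal(size=(200_000, n))
sphere = u / np.linalg.norm(u, axis=1, keepdims=True)
phi_sampled = np.max((sphere - a) @ x)

print(phi_closed_form, phi_sampled)       # nearly equal for many samples
```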

3.3 Optimality conditions

While smoothness of $\varphi$ may not be expected even for smooth functions $g$ and $v$, numerous results about properties of optimal value functions from parametric optimization ([3, 11, 76]) can be employed to prove first and second order optimality conditions (cf., e.g., [41, 77, 78, 84, 91, 85, 110]). A recent review on optimality conditions and duality in semi-infinite optimization is presented in [81].

Optimality conditions for standard and generalized semi-infinite optimization problems with nonsmooth data functions $f$, $g$, and $v$ have recently been derived in several articles. In particular, using Clarke calculus, first order necessary and sufficient conditions along with different constraint qualifications are studied in [48, 49, 50], while [63] develops first order necessary and sufficient conditions along with duality results by Mordukhovich's limiting subdifferentials. The articles [66, 111] show that Mordukhovich calculus still yields first order conditions under very weak assumptions.

4 Bilevel reformulations

While discretization methods can be formulated at least conceptually even for GSIPs ([97, 98]), implementation issues (like the need to compute lower level globally optimal points and, at least in GSIP, the need to control $x$-dependent discretization points of $Y(x)$) motivate the design of alternative methods.

The key idea behind bilevel approaches in semi-infinite optimization is the reformulation of GSIP as a so-called Stackelberg game. In fact, in [92] the equivalence of GSIP with the problem

$SG: \quad \min_{x,y} f(x) \quad \text{s.t.} \quad g(x,y) \le 0, \quad y \text{ is a solution of } Q(x)$

is shown, as long as $Y(x)$ is nonempty for all $x \in \mathbb{R}^n$. Note that the former index variable $y$ is treated as an additional decision variable in $SG$, which makes this reformulation a lifting approach. Since a part of the decision variables is constrained to solve an optimization problem depending on the other decision variables, this problem has the structure of a Stackelberg game ([4, 14]).

One may wonder why it should be convenient to formulate the problem $SG$, in which even an optimal point of $Q(x)$ has to be determined, while in GSIP only the optimal value is needed. The reason is that, under additional assumptions, $y$ solves $Q(x)$ if and only if more tractable conditions hold.

4.1 The MPCC reformulation

The main such assumption is convexity of the optimization problem $Q(x)$ for each $x \in \mathbb{R}^n$, as mentioned before in Section 3.2.

If, in addition, a constraint qualification like Slater's condition holds in the feasible set $Y(x)$ of $Q(x)$ for each $x$, and the involved functions are at least continuously differentiable, then the optimal points $y$ of $Q(x)$ can be characterized by solutions of the corresponding Karush-Kuhn-Tucker (KKT) systems. To formulate the KKT systems, let

$L(x,y,\gamma) = g(x,y) - \gamma^\top v(x,y)$

denote the Lagrangian of $Q(x)$ with multiplier vector $\gamma \in \mathbb{R}^s$, and let $\nabla_y$ stand for the partial gradient with respect to the variable vector $y$. Then $SG$ is equivalent ([93, 85]) to the mathematical program with complementarity constraints

$MPCC: \quad \min_{x,y,\gamma} f(x) \quad \text{s.t.} \quad g(x,y) \le 0,$
$\nabla_y L(x,y,\gamma) = 0,$
$0 \le -v(x,y) \perp \gamma \ge 0.$

Note that in MPCC also the multiplier vector $\gamma$ serves as a decision variable, so that GSIP is lifted twice. For introductions to MPCC and the larger class of mathematical programs with equilibrium constraints (MPEC) see [52, 60]. They turn out to be numerically challenging, since the so-called Mangasarian-Fromovitz constraint qualification (MFCQ) is violated everywhere in their feasible set ([80]).

A first numerical approach to the MPCC reformulation of GSIP was given in [93, 85] by applying the smoothing procedure for MPCC from [17]. In fact, each scalar complementarity constraint $0 \le -v_l(x,y) \perp \gamma_l \ge 0$, $l = 1, \ldots, s$, is first equivalently replaced by the equation $\psi(-v_l(x,y), \gamma_l) = 0$ with a complementarity function $\psi$, like the natural residual function

$\psi^{NR}(a,b) = \min(a,b)$

or the Fischer-Burmeister function

$\psi^{FB}(a,b) = a + b - \sqrt{a^2 + b^2}.$

The nonsmooth function $\psi$ is then equipped with a smoothing parameter $\tau > 0$, for example

$\psi^{NR}_\tau(a,b) = \tfrac{1}{2}\left(a + b - \sqrt{(a-b)^2 + 4\tau^2}\right)$

or

$\psi^{FB}_\tau(a,b) = a + b - \sqrt{a^2 + b^2 + 2\tau^2},$

so that $\psi_\tau$ is smooth and $\psi_0$ coincides with $\psi$.
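These complementarity functions and their smoothings translate directly into code. The following sketch implements $\psi^{NR}$, $\psi^{FB}$ and their smoothed variants as stated above, and illustrates the convergence $\psi_\tau \to \psi$ for $\tau \to 0$ on a complementary pair $(a,b)$; the test values are arbitrary.

```python
import numpy as np

# The complementarity functions from the text; psi(a, b) = 0 characterizes
# the conditions 0 <= a, 0 <= b, a*b = 0.

def psi_nr(a, b):                       # natural residual
    return np.minimum(a, b)

def psi_fb(a, b):                       # Fischer-Burmeister
    return a + b - np.sqrt(a**2 + b**2)

def psi_nr_tau(a, b, tau):              # smoothed natural residual
    return 0.5 * (a + b - np.sqrt((a - b)**2 + 4.0 * tau**2))

def psi_fb_tau(a, b, tau):              # smoothed Fischer-Burmeister
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * tau**2)

# For tau -> 0 the smoothings converge to the nonsmooth functions:
a, b = 0.3, 0.0                         # complementary pair: psi(a, b) = 0
for tau in (1e-1, 1e-2, 1e-3):
    print(tau, psi_nr_tau(a, b, tau), psi_fb_tau(a, b, tau))
# the printed values tend to psi_nr(a, b) = psi_fb(a, b) = 0
```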

This gives rise to the family of smoothed problems

$MPCC_\tau: \quad \min_{x,y,\gamma} f(x) \quad \text{s.t.} \quad g(x,y) \le 0,$
$\nabla_y L(x,y,\gamma) = 0,$
$\psi_\tau(-v(x,y), \gamma) = 0$

with $\tau > 0$, where $\psi_\tau$ is extended to vector arguments componentwise. Under mild assumptions, in [93, 85] it is shown that $MPCC_\tau$ is numerically tractable, and that stationary points of $MPCC_\tau$ tend to a stationary point of GSIP for $\tau \to 0$. This approach was successfully applied to the industrial problem of gemstone cutting in [108] (cf. also [56]), which can be formulated as a design centering problem with convex lower level problems.

While MPCC is still an equivalent formulation of GSIP, the smoothed problem $MPCC_\tau$ is only an approximation. This leads to the question how the $x$-part of the feasible set of $MPCC_\tau$, that is, the orthogonal projection $M_\tau$ of this set to $\mathbb{R}^n$, is related to the original feasible set $M$ of GSIP. In [85] it is shown that $M_\tau$ is an outer approximation of $M$ for $\tau > 0$, so that optimal points of $MPCC_\tau$ must be expected to be infeasible for GSIP. This is unfortunate, since infeasibility of iterates is also a major drawback of more classical numerical approaches in semi-infinite optimization, like discretization and exchange methods. However, in [96] it could recently be shown that a simple modification of $MPCC_\tau$ leads to inner approximations of $M$ and, thus, to feasible iterates. In fact, an error analysis for the approximation of the lower level optimal value proves that the orthogonal projection $\overline{M}_\tau$ of the feasible set of

$\overline{MPCC}_\tau: \quad \min_{x,y,\gamma} f(x) \quad \text{s.t.} \quad g(x,y) + s\tau^2 \le 0,$
$\nabla_y L(x,y,\gamma) = 0,$
$\psi_\tau(-v(x,y), \gamma) = 0$

to $\mathbb{R}^n$ is contained in $M$ (where $s$ denotes the number of lower level inequality constraints). The numerical tractability and the computational cost are identical to those of the formulation $MPCC_\tau$, so that it is strongly recommended to use the reformulation $\overline{MPCC}_\tau$ instead of $MPCC_\tau$. Moreover, a combination of both approaches leads to sandwiching procedures for the feasible set of GSIP (cf. [96]).

Numerical approaches to the MPCC reformulation of GSIP other than smoothing are currently under investigation, among them the lifting approach for MPCCs from [88] (which amounts to lifting GSIP a third time).

4.2 The reformulation by lower level Wolfe duality

An alternative approach to treat the Stackelberg game reformulation $SG$ of GSIP, inspired by approaches from robust optimization (e.g., [6]), is presented

in the recent article [15]. This reformulation can do without the numerically demanding complementarity conditions of MPCC, which comes at the price of slightly stronger assumptions, namely convexity of the functions $-g(x,\cdot)$, $v_l(x,\cdot)$, $l = 1, \ldots, s$, on all of $\mathbb{R}^m$ for each $x \in \mathbb{R}^n$. Under this assumption, for the Wolfe dual problem of $Q(x)$,

$D(x): \quad \min_{y,\gamma} L(x,y,\gamma) \quad \text{s.t.} \quad \nabla_y L(x,y,\gamma) = 0, \quad \gamma \ge 0,$

its feasible set $Y^D(x)$, and its optimal value function

$\varphi^D(x) = \inf_{(y,\gamma) \in Y^D(x)} L(x,y,\gamma),$

it is well-known that the existence of a KKT point of $Q(x)$ implies solvability of both $Q(x)$ and $D(x)$ as well as strong duality. Thus, if $Q(x)$ possesses a KKT point for each $x$, then we have

$M = \{x \in \mathbb{R}^n \mid \min_{(y,\gamma) \in Y^D(x)} L(x,y,\gamma) \le 0\}.$

This motivates to introduce the lifted Wolfe problem

$LWP: \quad \min_{x,y,\gamma} f(x) \quad \text{s.t.} \quad L(x,y,\gamma) \le 0, \quad \nabla_y L(x,y,\gamma) = 0, \quad \gamma \ge 0.$

As $f$ does not depend on the variables $y$ and $\gamma$, the minimizers of GSIP coincide with the $x$-components of the minimizers of LWP, whenever further assumptions guarantee that $Q(x)$ possesses a Karush-Kuhn-Tucker point for each $x \in \mathbb{R}^n$ (see [15] for details). The article [15] mainly deals with the fact that under mild assumptions the problem LWP is not only smooth, but also numerically tractable, that is, not intrinsically degenerate.

A drawback of the LWP reformulation against the MPCC reformulation of GSIP is the fact that the lower level constraint $v(x,y) \le 0$ is not stated explicitly but follows implicitly by duality arguments. For lower level problems in which $-g(x,\cdot)$ is only convex on $Y(x)$, but not on all of $\mathbb{R}^m$, this can actually lead to lower level infeasibility, as examples in [15] show. In particular, this problem arises routinely for the lower level problems constructed in the adaptive convexification procedure which we will discuss below in Section 5. As a remedy, [15] suggests to explicitly add the constraint $v(x,y) \le 0$ to the constraints of LWP. It is proven and illustrated that the resulting lifted problem still is numerically tractable.
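As an illustration of the lifted Wolfe problem, the following hedged sketch sets up LWP for a toy standard SIP and hands it to a general-purpose NLP solver. The problem data, the starting point, and the use of SciPy's SLSQP method are all assumptions made for this example and are not taken from [15].

```python
import numpy as np
from scipy.optimize import minimize

# Toy standard SIP (illustrative assumptions): minimize
# f(x) = (x1 - 1)^2 + x2^2 subject to g(x, y) = x1*y + x2 - y^2 <= 0 for all
# y in Y = [0, 1], with lower level constraints v1(y) = -y <= 0 and
# v2(y) = y - 1 <= 0. Here g(x, .) is concave and v1, v2 are convex on R,
# so the Wolfe dual construction applies. Decision vector:
# z = (x1, x2, y, gamma1, gamma2).

def L(z):                         # lower level Lagrangian L(x, y, gamma)
    x1, x2, y, g1, g2 = z
    return x1 * y + x2 - y**2 - g1 * (-y) - g2 * (y - 1.0)

def dL_dy(z):                     # stationarity of the Lagrangian in y
    x1, _, y, g1, g2 = z
    return x1 - 2.0 * y + g1 - g2

f = lambda z: (z[0] - 1.0)**2 + z[1]**2

constraints = [
    {"type": "ineq", "fun": lambda z: -L(z)},     # L(x, y, gamma) <= 0
    {"type": "eq",   "fun": dL_dy},               # grad_y L = 0
    {"type": "ineq", "fun": lambda z: z[3]},      # gamma1 >= 0
    {"type": "ineq", "fun": lambda z: z[4]},      # gamma2 >= 0
]

z0 = np.array([0.0, 0.0, 0.5, 1.0, 1.0])          # arbitrary starting point
res = minimize(f, z0, constraints=constraints, method="SLSQP")
print(res.success, res.x[:2])   # x-part of a minimizer of LWP, SIP-feasible
```

Since SLSQP is a local method, convergence from the chosen starting point is plausible but not guaranteed; the point of the sketch is only the smooth, complementarity-free structure of the lifted constraints.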

4.3 The formulation as a semi-smooth equation

A related approach, which is not based on the Stackelberg game reformulation $SG$ of GSIP, was introduced in [73] and further developed in [94, 95]. Here, the focus is on calculating a stationary point of GSIP by reformulating an appropriate first order necessary optimality condition as a semi-smooth system of equations. We sketch the main ideas for the standard semi-infinite case, while [94, 95] also cover GSIP.

To state the first order optimality condition, first we introduce the set of active indices

$Y_0(\bar x) = \{y \in Y \mid g(\bar x, y) = 0\}$

of a feasible point $\bar x \in M$. Note that $Y_0(\bar x)$ coincides with the set of globally maximal points of $Q(\bar x)$ in the case $\varphi(\bar x) = 0$, and that only the latter case is interesting for local optimality conditions, as $\varphi(\bar x) < 0$ forces $\bar x$ to lie in the topological interior of $M$. The Extended Mangasarian-Fromovitz Constraint Qualification (EMFCQ) ([36, 45, 86]) holds at $\bar x$ if there is a vector $d \in \mathbb{R}^n$ with

$\langle \nabla_x g(\bar x, y), d \rangle < 0 \quad \text{for all } y \in Y_0(\bar x),$

where $\langle \cdot, \cdot \rangle$ denotes the standard inner product.

A combination of Fritz John's optimality condition for SIP from the seminal paper [39] with EMFCQ immediately yields the KKT type result that at any local minimizer $\bar x$ of SIP at which EMFCQ is satisfied, there exist some $p \in \{0, \ldots, n\}$, multipliers $\lambda_i \ge 0$ and active indices $\bar y^i \in Y_0(\bar x)$, $i = 1, \ldots, p$, such that

$\nabla f(\bar x) + \sum_{i=1}^{p} \lambda_i \nabla_x g(\bar x, \bar y^i) = 0$

holds. If, again, the lower level problem $Q(x)$ is assumed to be convex with Slater's condition holding in its feasible set for each $x \in \mathbb{R}^n$, then the condition $\bar y^i \in Y_0(\bar x)$ may equivalently be replaced by the lower level KKT conditions. Altogether, the first order optimality condition for SIP can then be replaced by the combination of upper and lower level KKT systems, resulting in the system of equations

$T(x, \lambda, y^1, \ldots, y^p, \gamma^1, \ldots, \gamma^p) = \begin{pmatrix} \nabla f(x) + \sum_{i=1}^{p} \lambda_i \nabla_x g(x, y^i) \\ \psi(-g(x, y^1), \lambda_1) \\ \vdots \\ \psi(-g(x, y^p), \lambda_p) \\ \nabla_y L(x, y^1, \gamma^1) \\ \psi(-v(y^1), \gamma^1) \\ \vdots \\ \nabla_y L(x, y^p, \gamma^p) \\ \psi(-v(y^p), \gamma^p) \end{pmatrix} = 0, \qquad (5)$

where $\psi$ is again a complementarity function (cf. Sec. 4.1).

To deal with the intrinsic nonsmoothness of complementarity functions like $\psi^{NR}$ and $\psi^{FB}$, one may apply the so-called semi-smooth Newton method from [72] to (5). In fact, for a locally Lipschitzian function $F: \mathbb{R}^n \to \mathbb{R}^m$ let $\partial F(x)$ denote Clarke's generalized Jacobian of $F$ at $x$ ([12]). $F$ is called strongly semi-smooth at $x$ if $F$ is directionally differentiable at $x$ and if for all $V \in \partial F(x+d)$ and $d \to 0$ we have

$Vd - F'(x; d) = O(\|d\|^2).$

If, in the definition of $T$, we use the special NCP functions $\psi^{NR}$ or $\psi^{FB}$, and if the data functions are at least twice continuously differentiable, then a result from [71] guarantees that $T$ is strongly semi-smooth.

In [72] it is shown that the semi-smooth Newton method for solving the equation $F(x) = 0$ with $F: \mathbb{R}^n \to \mathbb{R}^n$, defined by

$x^{k+1} = x^k - (W^k)^{-1} F(x^k) \quad \text{with} \quad W^k \in \partial F(x^k),$

is well-defined and q-quadratically convergent in a neighborhood of a solution point $\bar x$ for strongly semi-smooth $F$, if $F$ is additionally CD-regular at $\bar x$, that is, all matrices $W \in \partial F(\bar x)$ are nonsingular. Thus, the semi-smooth Newton method can be expected to work efficiently for the solution of (5) if CD-regularity of $T$ is guaranteed in a solution point. While [73] derives somewhat technical conditions for CD-regularity involving strict complementarity in the upper and lower level problems (implying that the semi-smooth Newton method collapses to the usual Newton method), in [94] it is shown that natural conditions imply CD-regularity if the so-called Reduction Ansatz (cf. Sec. 6) holds in the lower level, and in [95] also the case of violated strict complementarity in the lower level problems is treated successfully.
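The iteration above is easy to demonstrate on a miniature instance. In the following sketch, which is an illustration and not the implementation from [72, 73], a semi-smooth Newton method with the natural residual function solves the KKT system of a single one-dimensional lower level problem; the generalized Jacobian element is obtained by picking the active branch of the min, and all problem data are assumptions.

```python
import numpy as np

# Miniature instance (assumed data): lower level problem
# max -(y - 2)^2 s.t. v(y) = y - 1 <= 0, with KKT system
#   F(y, gamma) = ( -2*(y - 2) - gamma, psi_nr(-v(y), gamma) ) = 0,
# whose solution is y = 1, gamma = 2.

def F(z):
    y, gam = z
    return np.array([-2.0 * (y - 2.0) - gam, min(1.0 - y, gam)])

def W(z):
    """An element of Clarke's generalized Jacobian of F at z."""
    y, gam = z
    row0 = [-2.0, -1.0]
    # generalized derivative of min(1 - y, gamma): pick the active branch
    row1 = [-1.0, 0.0] if 1.0 - y <= gam else [0.0, 1.0]
    return np.array([row0, row1])

z = np.array([0.0, 0.0])                  # starting point
for k in range(5):
    z = z - np.linalg.solve(W(z), F(z))   # semi-smooth Newton step
    print(k, z, np.linalg.norm(F(z)))
# converges (here even in finitely many steps) to (y, gamma) = (1, 2)
```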

5 Adaptive convexification

The adaptive convexification algorithm is a method to solve standard semi-infinite optimization problems via a sequence of feasible iterates, even if the lower level problems are nonconvex. Its main idea ([20]) is to tessellate the index set into finitely many smaller index sets (as opposed to the approach of discretization methods, which choose finitely many elements of the index set), and to convexify the resulting subproblems. Ideas of spatial branching [62] are then used to efficiently refine the tessellation.

Alternative feasible point methods from [9, 10, 65] apply spatial branching even simultaneously to the decision and index variables, so that a branch-and-bound approach for the global solution of SIP generates convergent sequences of lower and upper bounds for its globally optimal value. While these approaches may be time consuming, in the following we explain the main ideas of the approach from [20], which only takes care of global optimality in the lower level, but not necessarily in the upper level problem. It is, thus, a local numerical method for the upper level problem which guarantees semi-infinite feasibility by global optimization ideas for the lower level problem.

As in [20], let us first focus on a standard SIP with a one-dimensional index set of the form $Y = [\underline y, \overline y]$, and with twice continuously differentiable data functions $f$ and $g$. Furthermore, the feasible set $M$ is assumed to be contained in the $n$-dimensional box $X = [\underline x, \overline x] \subseteq \mathbb{R}^n$ with $\underline x < \overline x \in \mathbb{R}^n$. For any $x \in X$, the lower level problem $Q(x)$ then consists in maximizing $g(x,\cdot)$ under the constraints $v_1(y) = \underline y - y \le 0$ and $v_2(y) = y - \overline y \le 0$, which gives rise to the lower level Lagrangian

$L(x, y, \underline\gamma, \overline\gamma) = g(x,y) - \underline\gamma\,(\underline y - y) - \overline\gamma\,(y - \overline y).$

If we assume for a moment that $Q(x)$ is a convex problem for all $x \in X$, then the MPCC reformulation from Section 4.1 yields that SIP is equivalent to the problem

$MPCC: \quad \min_{x,y,\underline\gamma,\overline\gamma} f(x) \quad \text{s.t.} \quad x \in X, \quad g(x,y) \le 0,$
$\nabla_y g(x,y) + \underline\gamma - \overline\gamma = 0,$
$0 \le y - \underline y \perp \underline\gamma \ge 0,$
$0 \le \overline y - y \perp \overline\gamma \ge 0.$

Next, to treat nonconvex lower level problems, [20] uses ideas of the αBB method from [1, 2], which is a spatial branching method for nonconvex global optimization.

In fact, since the lower level feasible set $Y = [\underline y, \overline y]$ is certainly convex, nonconvexity of $Q(x)$ for some $x \in X$ can only be introduced by nonconcavity of $g(x,\cdot)$. The main idea is to replace $g(x,\cdot)$ by a concave overestimator. If no additional information is available (cf. [19]), αBB constructs concave overestimators by adding a quadratic relaxation function

$\psi(y; \alpha, \underline y, \overline y) = \frac{\alpha}{2}\,(\underline y - y)(y - \overline y) \qquad (6)$

to the original function $g(x,\cdot)$, which results in

$\tilde g(x, y; \alpha, \underline y, \overline y) = g(x,y) + \psi(y; \alpha, \underline y, \overline y).$

In the sequel we will temporarily suppress the dependence of $\tilde g$ on $\alpha, \underline y, \overline y$. For $\alpha \ge 0$ the function $\tilde g(x,\cdot)$ clearly is an overestimator of $g(x,\cdot)$ on $[\underline y, \overline y]$, and it coincides with $g(x,\cdot)$ at the boundary points $\underline y, \overline y$ of the index set. Moreover, $\tilde g$ is twice continuously differentiable with second derivative

$D_y^2\, \tilde g(x,y) = D_y^2\, g(x,y) - \alpha$

on $[\underline y, \overline y]$. Consequently $\tilde g(x,\cdot)$ is concave on $[\underline y, \overline y]$ for

$\alpha \ge \max_{y \in [\underline y, \overline y]} D_y^2\, g(x,y)$

(cf. also [1, 2]), and $\tilde g(x,\cdot)$ even is concave for any choice $x \in X$ if $\alpha$ satisfies

$\alpha \ge \max_{(x,y) \in X \times [\underline y, \overline y]} D_y^2\, g(x,y). \qquad (7)$

Unfortunately, the computation of such an $\alpha$ thus involves a global optimization problem itself. Note, however, that $\alpha$ may be any upper bound for the right-hand side in (7). Such upper bounds can be provided by interval methods (see, e.g., [19, 32, 67]) under natural assumptions on the function $g$. Combining these facts shows that for

$\alpha \ge \max\left(0,\ \max_{(x,y) \in X \times [\underline y, \overline y]} D_y^2\, g(x,y)\right)$

and arbitrary $x \in X$, the function $\tilde g(x,\cdot)$ is a concave overestimator of $g(x,\cdot)$ on $[\underline y, \overline y]$.
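The construction of the overestimator can be illustrated numerically. In the following sketch the dependence on $x$ is suppressed (one may think of a fixed decision variable), the function $g$ is an arbitrary illustrative choice, and the rigorous interval bound for the right-hand side of (7) is replaced by a crude sampled bound, so the resulting $\alpha$ is not certified in the sense of [19, 32, 67].

```python
import numpy as np

# Illustrative alphaBB overestimator on the index set [y_lo, y_hi]; the
# sampled alpha below is a stand-in for a rigorous interval bound.

y_lo, y_hi = 0.0, 1.0

def g(y):                          # assumed nonconcave lower level objective
    return np.sin(6.0 * y) + 0.5 * y

def d2g(y):                        # second derivative of g with respect to y
    return -36.0 * np.sin(6.0 * y)

ys = np.linspace(y_lo, y_hi, 2001)
alpha = max(0.0, d2g(ys).max())    # sampled stand-in for the bound in (7)

def g_tilde(y):                    # concave overestimator, cf. (6)
    return g(y) + 0.5 * alpha * (y_lo - y) * (y - y_hi)

assert np.all(g_tilde(ys) >= g(ys) - 1e-12)     # overestimation on [y_lo, y_hi]
assert np.all(d2g(ys) - alpha <= 1e-12)         # concavity of g_tilde
print(alpha, g_tilde(ys).max(), g(ys).max())    # conservative value of phi
```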

By the overestimation property, any feasible point (if it exists) of the approximating semi-infinite problem

$\min_{x \in X} f(x) \quad \text{s.t.} \quad \tilde g(x, y; \alpha, \underline y, \overline y) \le 0 \quad \text{for all } y \in [\underline y, \overline y]$

is also feasible for the original problem. On the other hand, the concavity of the overestimator entails that the lower level problem of the approximating problem is convex and can be solved, for example, by the MPCC reformulation.

A straightforward generalization of this idea relies on the obvious fact that the single semi-infinite constraint $g(x,y) \le 0$ for all $y \in Y$ is equivalent to the finitely many semi-infinite constraints $g(x,y) \le 0$ for all $y \in Y_k$, $k \in K$, if the sets $Y_k$, $k \in K$, form a tessellation of $Y$, that is, for $N \in \mathbb{N}$ we choose

$\underline y = \eta_0 < \eta_1 < \ldots < \eta_{N-1} < \eta_N = \overline y$

and put $K = \{1, \ldots, N\}$ as well as $Y_k = [\eta_{k-1}, \eta_k]$, $k \in K$. Given such a tessellation, one can construct concave overestimators

$\tilde g_k(x, y; \alpha, \eta_{k-1}, \eta_k) = g(x,y) + \psi(y; \alpha, \eta_{k-1}, \eta_k)$

for each of these finitely many semi-infinite constraints, and solve the corresponding semi-infinite problem with finitely many convex lower level problems by the MPCC formulation. Again, any element (if it exists) of the approximating feasible set

$M_{\alpha BB}(E, \alpha) = \{x \in \mathbb{R}^n \mid \tilde g_k(x,y) \le 0 \text{ for all } y \in Y_k,\ k \in K\}$

is also feasible for the original problem, where $E = \{\eta_k \mid k \in K\}$ denotes the set of subdivision points defining the tessellation of $Y$. This means that any solution concept for

$SIP_{\alpha BB}(E, \alpha): \quad \min_{x \in X} f(x) \quad \text{s.t.} \quad x \in M_{\alpha BB}(E, \alpha),$

be it global solutions, local solutions or stationary points, will at least generate a feasible point of SIP.
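A corresponding sketch of the tessellation step, reusing the illustrative $g$ from the previous example: one overestimator is built per cell $Y_k = [\eta_{k-1}, \eta_k]$ with a cell-wise sampled (hence uncertified) $\alpha$, and the overestimation gap visibly shrinks as the tessellation is refined.

```python
import numpy as np

# The index set Y = [0, 1] is split at subdivision points E, and one concave
# overestimator is built per cell; g and d2g are the illustrative functions
# from the previous sketch, and alpha is again a sampled stand-in bound.

def g(y):
    return np.sin(6.0 * y) + 0.5 * y

def d2g(y):
    return -36.0 * np.sin(6.0 * y)

def cell_overestimators(E):
    """Return the cells [eta_{k-1}, eta_k] and one overestimator per cell."""
    cells = list(zip(E[:-1], E[1:]))
    overestimators = []
    for lo, hi in cells:
        ys = np.linspace(lo, hi, 501)
        alpha = max(0.0, d2g(ys).max())
        overestimators.append(
            lambda y, a=alpha, lo=lo, hi=hi: g(y) + 0.5 * a * (lo - y) * (y - hi)
        )
    return cells, overestimators

E = np.linspace(0.0, 1.0, 5)          # eta_0 < ... < eta_N with N = 4 cells
cells, tilde_g = cell_overestimators(E)
for (lo, hi), gk in zip(cells, tilde_g):
    ys = np.linspace(lo, hi, 501)
    print(f"cell [{lo:.2f}, {hi:.2f}]: max g~ = {gk(ys).max():.3f}, "
          f"max g = {g(ys).max():.3f}")   # the gap shrinks on finer cells
```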

Given a consistent approximating problem, the adaptive convexification algorithm computes a stationary point $\bar x$ of $SIP_{\alpha BB}(E, \alpha)$ by the MPCC reformulation, and terminates if $\bar x$ is also stationary for SIP within given tolerances (where a suitable stationarity condition from Sec. 3.3 is employed). If $\bar x$ is not stationary, the algorithm refines the tessellation in the spirit of exchange methods ([34, 74]) by adding the active indices of the solution point $\bar x$ to the set of subdivision points $E$, constructs a refined approximating problem, and iterates this procedure. Error bounds for the concave overestimators ([20]) indicate that finer tessellations of $Y$ should lead to better approximations of the original feasible set $M$. In fact, the resulting algorithm in [20] is well-defined, convergent and finitely terminating. Numerical examples for the performance of the method from Chebyshev approximation and design centering, as well as an approach to calculate a consistent initial approximation of the feasible set, are given in [20].

A generalization of the adaptive convexification algorithm for higher dimensional index sets was recently presented in [89]. Here, the index set $Y$ is not necessarily assumed to be box-shaped, which gives rise to further approximation issues. Again, the resulting algorithm is well-defined, convergent and finitely terminating. An implementation is freely available at [90].

6 Genericity results

Throughout optimization theory and numerics (and beyond), assumptions are made to derive optimality conditions, ensure convergence of algorithms, etc. Usually it is cumbersome or even impossible to check these assumptions, for example if they are requested to hold in a solution point which has yet to be determined. Then a fundamental question is whether such assumptions are mild, so that an urgent need to check them a priori becomes obsolete but, instead, they may be expected to hold typically.

One way to translate "mild" and "typically" into mathematical terms is the concept of genericity. For optimization problems, it is formulated by topologizing the linear space of their data functions by a strong (or Whitney) $C^k$ topology ([37, 40]) with $k \in \mathbb{N}_0$, denoted by $C^k_s$. The latter is generated by allowing perturbations of functions and their derivatives up to $k$-th order, where the perturbations are controlled by continuous positive functions. The space of $C^k$ functions endowed with the $C^k_s$ topology turns out to be a Baire space, that is, every countable intersection of open and dense sets is dense. A set is called $C^k_s$-generic if it contains such a countable intersection of $C^k_s$-open and dense sets. Clearly, generic sets in a Baire space are dense as well. A property is called generic if it holds on a generic set.

To give an example from [40], in smooth finite optimization the linear

independence constraint qualification (LICQ) generically holds everywhere in the feasible set. More explicitly, this means that the data functions defining finite optimization problems in which the gradients of active constraints are linearly independent at each point of the feasible set form a generic set in the space of all data functions. In particular, the frequent assumption that some unknown optimal point satisfies LICQ is mild in this sense. Moreover, since under LICQ a local $C^1$ change of coordinates shows that the feasible set looks like the Cartesian product of a linear space and finitely many one-dimensional halfspaces, the generic local structure of the feasible set is clear.

In standard semi-infinite optimization, however, from [46] an example is known where the feasible set contains the upper part of the so-called swallowtail singularity. This example is stable under perturbations so that, generically, it cannot be possible to describe the whole feasible set of a semi-infinite problem locally by finitely many smooth inequality constraints satisfying LICQ. On the other hand, one may ask if such a nice local description of $M$ is at least possible in optimal points of SIP, since this would be sufficiently helpful for the formulation of optimality conditions and for convergence results of algorithms.

For standard semi-infinite problems it was established already in [112, 82] that generically the so-called Reduction Ansatz holds at all locally minimal points, and even on the larger set of all generalized critical points (which contain the Fritz John and Karush-Kuhn-Tucker points of SIP). The Reduction Ansatz was first introduced in [33, 107] and constitutes natural regularity conditions at each active index (i.e., at each optimal point of the lower level problem), namely LICQ, strict complementary slackness (SCS), and a standard second order condition (SOC). Under these assumptions, $M$ can locally be described by finitely many smooth functions, and then LICQ, SCS as well as SOC even hold generically in this local description of the upper level problem.

A long-standing open question was whether such a result also holds for generalized SIPs. Partial positive answers were given already in [83] for the case of sufficiently many active indices, and in [92] for the case of affine data functions. Only in the recent series of articles [27, 28, 29] we were able to show that a certain modification of the Reduction Ansatz, the Symmetric Reduction Ansatz, generically holds for GSIPs at each locally minimal point, and that under this set of regularity assumptions the closure of $M$ can be described by finitely many smooth functions. While this genericity result only holds at locally minimal points, the generic structure of the closure of $M$ at Karush-Kuhn-Tucker points was shown in [42] to be that of a disjunctive optimization problem. Based on these results, a Morse theory for GSIP was developed in [43].

While we will not review the Symmetric Reduction Ansatz here, at least some explanation of the symmetry observation for GSIP is appropriate, as it constitutes a fundamental and, at the same time, fruitful difference between standard and generalized semi-infinite optimization. We will illustrate the symmetry effect by a description of the closure of the (not necessarily closed, cf. Sec. 3.1) feasible set $M$ of GSIP. In fact, consider the index set without boundary,

$Y_<(x) = \{y \in \mathbb{R}^m \mid v_l(x,y) < 0,\ 1 \le l \le s\},$

and define the set

$\widetilde M = \{x \in \mathbb{R}^n \mid g(x,y) \le 0 \text{ for all } y \in Y_<(x)\}.$

In view of $Y_<(x) \subseteq Y(x)$ we clearly have $M \subseteq \widetilde M$. As (under mild assumptions) $Y_<(x)$ is only slightly smaller than $Y(x)$, the set $\widetilde M$ may be expected to be only slightly larger than $M$. In [27] it was first shown that generically the set $\widetilde M$ actually coincides with the topological closure $\overline M$ of $M$. While the corresponding proof is rather technical, recently this result was significantly improved in [30] by formulating the symmetric Mangasarian-Fromovitz constraint qualification (Sym-MFCQ) for GSIP. It was shown that, generically, Sym-MFCQ holds everywhere in $\widetilde M$ and that, under this natural generic condition, $\widetilde M$ coincides with $\overline M$.

To see the symmetry aspect, note that an alternative description of $\widetilde M$ is

$\widetilde M = \{x \in \mathbb{R}^n \mid \sigma(x,y) \le 0 \text{ for all } y \in \mathbb{R}^m\} \qquad (8)$

with the continuous function

$\sigma(x,y) = \min\{g(x,y),\, -v_1(x,y),\, \ldots,\, -v_s(x,y)\}.$

Symmetry refers to the fact that, via $\sigma$, all data functions $g, v_1, \ldots, v_s$ contribute in the same way to the definition of $\widetilde M$, as opposed to the lower level objective function $g$ playing a different role in $Q(x)$ than the lower level constraint functions $v_l$, $l = 1, \ldots, s$. While this effect was implicitly used already in [84, 87], its full consequences were only understood recently in the above mentioned articles. In fact, coarsely speaking, the Symmetric Reduction Ansatz is a set of nondegeneracy assumptions for active indices of $\sigma$ in the description (8) of $\widetilde M$, and Sym-MFCQ is a Mangasarian-Fromovitz type condition for this description (jointly in the variables $x$ and $y$).
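The symmetric description (8) lends itself to a direct numerical membership test. In the following sketch all problem data (the index set mapping $Y(x) = [0, x]$ and the constraint $g(x,y) = y - 1$) are illustrative assumptions, and the condition over all $y \in \mathbb{R}^m$ is checked approximately by sampling a bounded interval.

```python
import numpy as np

# Membership test for M~ via (8): sigma(x, y) is the pointwise minimum of
# g(x, y) and the negative lower level constraints. Assumed data:
# v1(x, y) = -y and v2(x, y) = y - x (so Y(x) = [0, x]), g(x, y) = y - 1.

def sigma(x, y):
    g = y - 1.0
    neg_v1 = y                 # -v1(x, y)
    neg_v2 = x - y             # -v2(x, y)
    return np.minimum(g, np.minimum(neg_v1, neg_v2))

def in_M_tilde(x, y_samples):
    return np.all(sigma(x, y_samples) <= 0.0)

ys = np.linspace(-2.0, 3.0, 10_001)
print(in_M_tilde(0.5, ys))     # True:  Y(0.5) = [0, 0.5] and g <= 0 there
print(in_M_tilde(2.0, ys))     # False: e.g. y = 1.5 lies in Y_<(2) with g > 0
```

Note that exchanging the roles of $g$ and one of the $-v_l$ inside the min leaves $\sigma$, and hence the tested set, unchanged, which is exactly the symmetry discussed below.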

Note, in particular, that symmetry entails the following surprising fact: the set $\widetilde M$ stays invariant when the lower level objective function $g$ is exchanged with one of the negative lower level constraint functions $-v_l$, which, in general, leads to a different lower level problem. The difference to the standard SIP case is that this exchange operation would replace SIP by a GSIP, while in the GSIP case the considered optimization problem stays in the same class.

7 Final remarks

In this survey we tried to explain some major developments in theory and numerics of semi-infinite optimization during the past couple of years. Many other interesting topics could not be treated explicitly, including stability of the feasible set ([26]), the convexity structure of critical value functions ([16]), multiplier rules via augmented Lagrangians ([79]), smoothing of the lower level optimal value function by mollifiers ([44]), optimality conditions in degenerate cases ([53]) and, with respect to numerics, combinations of discretization with interval methods ([64]), and special purpose exchange methods ([104]). All these different contributions clearly indicate, however, that semi-infinite optimization will stay a broad and highly active field of research also in the years to come.

References

[1] C.S. Adjiman, I.P. Androulakis, C.A. Floudas, A global optimization method, αBB, for general twice-differentiable constrained NLPs - I: Theoretical advances, Computers and Chemical Engineering, Vol. 22 (1998).

[2] C.S. Adjiman, I.P. Androulakis, C.A. Floudas, A global optimization method, αBB, for general twice-differentiable constrained NLPs - II: Implementation and computational results, Computers and Chemical Engineering, Vol. 22 (1998).

[3] B. Bank, J. Guddat, D. Klatte, B. Kummer, K. Tammer, Nonlinear Parametric Optimization, Birkhäuser, Basel.

[4] J.F. Bard, Practical Bilevel Optimization, Kluwer, Dordrecht.

[5] A. Ben-Tal, L. El Ghaoui, A. Nemirovski, Robustness, in: H. Wolkowicz et al. (eds): Handbook of Semidefinite Programming, Kluwer, 2000.

[6] A. Ben-Tal, A. Nemirovski, Robust convex optimization, Mathematics of Operations Research, Vol. 23 (1998).

[7] A. Ben-Tal, A. Nemirovski, Robust solutions of uncertain linear programs, Operations Research Letters, Vol. 25 (1999).

[8] D. Bertsimas, D.B. Brown, C. Caramanis, Theory and applications of robust optimization, SIAM Review, Vol. 53 (2011).

[9] B. Bhattacharjee, W.H. Green Jr., P.I. Barton, Interval methods for semi-infinite programs, Computational Optimization and Applications, Vol. 30 (2005).

[10] B. Bhattacharjee, P. Lemonidis, W.H. Green, P.I. Barton, Global solution of semi-infinite programs, Mathematical Programming, Vol. 103 (2005).

[11] J.F. Bonnans, A. Shapiro, Perturbation Analysis of Optimization Problems, Springer, New York.

[12] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York.

[13] S. Daum, R. Werner, A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing, Optimization, Vol. 60 (2011).

[14] S. Dempe, Foundations of Bilevel Programming, Kluwer.

[15] M. Diehl, B. Houska, O. Stein, P. Steuermann, A lifting method for generalized semi-infinite programs based on lower level Wolfe duality, Optimization Online.

[16] D. Dorsch, F. Guerra Vázquez, H. Günzel, H.Th. Jongen, J.-J. Rückmann, SIP: critical value functions have finite modulus of non-convexity, Mathematical Programming, to appear.

[17] F. Facchinei, H. Jiang, L. Qi, A smoothing method for mathematical programs with equilibrium constraints, Mathematical Programming, Vol. 85 (1999).

[18] U. Faigle, W. Kern, G. Still, Algorithmic Principles of Mathematical Programming, Kluwer.

[19] C.A. Floudas, Deterministic Global Optimization: Theory, Methods and Applications, Kluwer.

[20] C.A. Floudas, O. Stein, The adaptive convexification algorithm: a feasible point method for semi-infinite programming, SIAM Journal on Optimization, Vol. 18 (2007).

[21] Y.B. Germeyer, Einführung in die Theorie des Operations Research, Akademie-Verlag, Berlin.

[22] M.A. Goberna, NEOS Semi-infinite Programming Directory.

[23] M.A. Goberna, M.A. López, Linear Semi-infinite Optimization, Wiley, Chichester.

[24] T.J. Graettinger, B.H. Krogh, The acceleration radius: a global performance measure for robotic manipulators, IEEE Journal of Robotics and Automation, Vol. 4 (1988).

[25] P. Gritzmann, V. Klee, On the complexity of some basic problems in computational convexity. I. Containment problems, Discrete Mathematics, Vol. 136 (1994).

[26] H. Günzel, H.Th. Jongen, J.-J. Rückmann, On stable feasible sets in generalized semi-infinite programming, SIAM Journal on Optimization, Vol. 19 (2008).

[27] H. Günzel, H.Th. Jongen, O. Stein, On the closure of the feasible set in generalized semi-infinite programming, Central European Journal of Operations Research, Vol. 15 (2007).

[28] H. Günzel, H.Th. Jongen, O. Stein, Generalized semi-infinite programming: the Symmetric Reduction Ansatz, Optimization Letters, Vol. 2 (2008).

[29] H. Günzel, H.Th. Jongen, O. Stein, Generalized semi-infinite programming: on generic local minimizers, Journal of Global Optimization, Vol. 42 (2008).

[30] F. Guerra Vázquez, H.Th. Jongen, V. Shikhman, General semi-infinite programming: symmetric Mangasarian-Fromovitz constraint qualification and the closure of the feasible set, SIAM Journal on Optimization, Vol. 20 (2010).

[31] F. Guerra Vázquez, J.-J. Rückmann, O. Stein, G. Still, Generalized semi-infinite programming: a tutorial, Journal of Computational and Applied Mathematics, Vol. 217 (2008).

[32] E. Hansen, Global Optimization Using Interval Analysis, M. Dekker, New York.

[33] R. Hettich, H.Th. Jongen, Semi-infinite programming: conditions of optimality and applications, in: J. Stoer (ed): Optimization Techniques, Part 2, Lecture Notes in Control and Information Sciences, Vol. 7, Springer, Berlin, 1978.

[34] R. Hettich, K.O. Kortanek, Semi-infinite programming: theory, methods, and applications, SIAM Review, Vol. 35 (1993).

[35] R. Hettich, G. Still, Semi-infinite programming models in robotics, in: J. Guddat, H.Th. Jongen, B. Kummer, F. Nožička (eds): Parametric Optimization and Related Topics II, Akademie-Verlag, Berlin, 1991.

[36] R. Hettich, P. Zencke, Numerische Methoden der Approximation und semi-infiniten Optimierung, Teubner, Stuttgart.

[37] M.W. Hirsch, Differential Topology, Springer, New York.

[38] R. Horst, H. Tuy, Global Optimization, Springer, Berlin.

[39] F. John, Extremum problems with inequalities as subsidiary conditions, in: Studies and Essays, R. Courant Anniversary Volume, Interscience, New York, 1948.

[40] H.Th. Jongen, P. Jonker, F. Twilt, Nonlinear Optimization in Finite Dimensions, Kluwer, Dordrecht.

[41] H.Th. Jongen, J.-J. Rückmann, O. Stein, Generalized semi-infinite optimization: a first order optimality condition and examples, Mathematical Programming, Vol. 83 (1998).

[42] H.Th. Jongen, V. Shikhman, On generic one-parametric semi-infinite optimization, SIAM Journal on Optimization, Vol. 21 (2011).

[43] H.Th. Jongen, V. Shikhman, General semi-infinite programming: critical point theory, Optimization, Vol. 60 (2011).

[44] H.Th. Jongen, O. Stein, Smoothing by mollifiers. Part I: Semi-infinite optimization, Journal of Global Optimization, Vol. 41 (2008).

[45] H.Th. Jongen, F. Twilt, G.-W. Weber, Semi-infinite optimization: structure and stability of the feasible set, Journal of Optimization Theory and Applications, Vol. 72 (1992).

[46] H.Th. Jongen, G. Zwier, On the local structure of the feasible set in semi-infinite optimization, in: Brosowski, Deutsch (eds): International Series of Numerical Mathematics, Vol. 72, Birkhäuser, Basel, 1984.

[47] C. Kaiser, W. Krabs, Ein Problem der semi-infiniten Optimierung im Maschinenbau und seine Verallgemeinerung, Working paper, Darmstadt University of Technology, Department of Mathematics.

[48] N. Kanzi, S. Nobakhtian, Nonsmooth semi-infinite programming problems with mixed constraints, Journal of Mathematical Analysis and Applications, Vol. 351 (2009).

[49] N. Kanzi, S. Nobakhtian, Optimality conditions for non-smooth semi-infinite programming, Optimization, Vol. 59 (2010).

[50] N. Kanzi, S. Nobakhtian, Necessary optimality conditions for nonsmooth generalized semi-infinite programming problems, European Journal of Operational Research, Vol. 205 (2010).

[51] A. Kaplan, R. Tichatschke, On a class of terminal variational problems, in: J. Guddat, H.Th. Jongen, F. Nožička, G. Still, F. Twilt (eds): Parametric Optimization and Related Topics IV, Peter Lang, Frankfurt a.M., 1997.

[52] M. Kočvara, J. Outrata, J. Zowe, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results, Kluwer, Dordrecht.


More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints

First-order optimality conditions for mathematical programs with second-order cone complementarity constraints First-order optimality conditions for mathematical programs with second-order cone complementarity constraints Jane J. Ye Jinchuan Zhou Abstract In this paper we consider a mathematical program with second-order

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Sharpening the Karush-John optimality conditions

Sharpening the Karush-John optimality conditions Sharpening the Karush-John optimality conditions Arnold Neumaier and Hermann Schichl Institut für Mathematik, Universität Wien Strudlhofgasse 4, A-1090 Wien, Austria email: Arnold.Neumaier@univie.ac.at,

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING Nf SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING f(x R m g HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 5, DR RAPHAEL

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

SEMI-INFINITE PROGRAMMING

SEMI-INFINITE PROGRAMMING SEMI-INFINITE PROGRAMMING MARCO LÓPEZ, GEORG STILL ABSTRACT. A semi-infinite programming problem is an optimization problem in which finitely many variables appear in infinitely many constraints. This

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION CHRISTIAN GÜNTHER AND CHRISTIANE TAMMER Abstract. In this paper, we consider multi-objective optimization problems involving not necessarily

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING

CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Stationarity and Regularity of Infinite Collections of Sets. Applications to

Stationarity and Regularity of Infinite Collections of Sets. Applications to J Optim Theory Appl manuscript No. (will be inserted by the editor) Stationarity and Regularity of Infinite Collections of Sets. Applications to Infinitely Constrained Optimization Alexander Y. Kruger

More information

WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS

WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS WEAK LOWER SEMI-CONTINUITY OF THE OPTIMAL VALUE FUNCTION AND APPLICATIONS TO WORST-CASE ROBUST OPTIMAL CONTROL PROBLEMS ROLAND HERZOG AND FRANK SCHMIDT Abstract. Sufficient conditions ensuring weak lower

More information

Nonlinear Programming and the Kuhn-Tucker Conditions

Nonlinear Programming and the Kuhn-Tucker Conditions Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

Advanced Continuous Optimization

Advanced Continuous Optimization University Paris-Saclay Master Program in Optimization Advanced Continuous Optimization J. Ch. Gilbert (INRIA Paris-Rocquencourt) September 26, 2017 Lectures: September 18, 2017 November 6, 2017 Examination:

More information

Lecture Notes on Support Vector Machine

Lecture Notes on Support Vector Machine Lecture Notes on Support Vector Machine Feng Li fli@sdu.edu.cn Shandong University, China 1 Hyperplane and Margin In a n-dimensional space, a hyper plane is defined by ω T x + b = 0 (1) where ω R n is

More information

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems Math. Program., Ser. A (2013) 142:591 604 DOI 10.1007/s10107-012-0586-z SHORT COMMUNICATION A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

Mathematical programs with complementarity constraints in Banach spaces

Mathematical programs with complementarity constraints in Banach spaces Mathematical programs with complementarity constraints in Banach spaces Gerd Wachsmuth July 21, 2014 We consider optimization problems in Banach spaces involving a complementarity constraint defined by

More information

CONVEX OPTIMIZATION VIA LINEARIZATION. Miguel A. Goberna. Universidad de Alicante. Iberian Conference on Optimization Coimbra, November, 2006

CONVEX OPTIMIZATION VIA LINEARIZATION. Miguel A. Goberna. Universidad de Alicante. Iberian Conference on Optimization Coimbra, November, 2006 CONVEX OPTIMIZATION VIA LINEARIZATION Miguel A. Goberna Universidad de Alicante Iberian Conference on Optimization Coimbra, 16-18 November, 2006 Notation X denotes a l.c. Hausdorff t.v.s and X its topological

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Lectures on Parametric Optimization: An Introduction

Lectures on Parametric Optimization: An Introduction -2 Lectures on Parametric Optimization: An Introduction Georg Still University of Twente, The Netherlands version: March 29, 2018 Contents Chapter 1. Introduction and notation 3 1.1. Introduction 3 1.2.

More information

Fakultät für Mathematik und Informatik

Fakultät für Mathematik und Informatik Fakultät für Mathematik und Informatik Preprint 2017-03 S. Dempe, F. Mefo Kue Discrete bilevel and semidefinite programg problems ISSN 1433-9307 S. Dempe, F. Mefo Kue Discrete bilevel and semidefinite

More information

1. Introduction. We consider the mathematical programming problem

1. Introduction. We consider the mathematical programming problem SIAM J. OPTIM. Vol. 15, No. 1, pp. 210 228 c 2004 Society for Industrial and Applied Mathematics NEWTON-TYPE METHODS FOR OPTIMIZATION PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS A. F. IZMAILOV AND M. V.

More information

Priority Programme 1962

Priority Programme 1962 Priority Programme 1962 An Example Comparing the Standard and Modified Augmented Lagrangian Methods Christian Kanzow, Daniel Steck Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Active sets, steepest descent, and smooth approximation of functions

Active sets, steepest descent, and smooth approximation of functions Active sets, steepest descent, and smooth approximation of functions Dmitriy Drusvyatskiy School of ORIE, Cornell University Joint work with Alex D. Ioffe (Technion), Martin Larsson (EPFL), and Adrian

More information

A smoothing augmented Lagrangian method for solving simple bilevel programs

A smoothing augmented Lagrangian method for solving simple bilevel programs A smoothing augmented Lagrangian method for solving simple bilevel programs Mengwei Xu and Jane J. Ye Dedicated to Masao Fukushima in honor of his 65th birthday Abstract. In this paper, we design a numerical

More information

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES

A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES A SIMPLY CONSTRAINED OPTIMIZATION REFORMULATION OF KKT SYSTEMS ARISING FROM VARIATIONAL INEQUALITIES Francisco Facchinei 1, Andreas Fischer 2, Christian Kanzow 3, and Ji-Ming Peng 4 1 Università di Roma

More information

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS GERD WACHSMUTH Abstract. Kyparisis proved in 1985 that a strict version of the Mangasarian- Fromovitz constraint qualification (MFCQ) is equivalent to

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Twice Differentiable Characterizations of Convexity Notions for Functions on Full Dimensional Convex Sets

Twice Differentiable Characterizations of Convexity Notions for Functions on Full Dimensional Convex Sets Schedae Informaticae Vol. 21 (2012): 55 63 doi: 10.4467/20838476SI.12.004.0814 Twice Differentiable Characterizations of Convexity Notions for Functions on Full Dimensional Convex Sets Oliver Stein Institute

More information

Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L.

Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L. Tilburg University Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L. Publication date: 2004 Link to publication

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

Optimisation in Higher Dimensions

Optimisation in Higher Dimensions CHAPTER 6 Optimisation in Higher Dimensions Beyond optimisation in 1D, we will study two directions. First, the equivalent in nth dimension, x R n such that f(x ) f(x) for all x R n. Second, constrained

More information

Solving a Signalized Traffic Intersection Problem with NLP Solvers

Solving a Signalized Traffic Intersection Problem with NLP Solvers Solving a Signalized Traffic Intersection Problem with NLP Solvers Teófilo Miguel M. Melo, João Luís H. Matias, M. Teresa T. Monteiro CIICESI, School of Technology and Management of Felgueiras, Polytechnic

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

Solving Multi-Leader-Common-Follower Games

Solving Multi-Leader-Common-Follower Games ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 Solving Multi-Leader-Common-Follower Games Sven Leyffer and Todd Munson Mathematics and Computer Science Division Preprint ANL/MCS-P1243-0405

More information

AN EXACT PENALTY APPROACH FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. L. Abdallah 1 and M. Haddou 2

AN EXACT PENALTY APPROACH FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. L. Abdallah 1 and M. Haddou 2 AN EXACT PENALTY APPROACH FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS. L. Abdallah 1 and M. Haddou 2 Abstract. We propose an exact penalty approach to solve the mathematical problems with equilibrium

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information