SOME PROPERTIES OF REGULARIZATION AND PENALIZATION SCHEMES FOR MPECS

DANIEL RALPH AND STEPHEN J. WRIGHT

Abstract. Some properties of regularized and penalized nonlinear programming formulations of mathematical programs with equilibrium constraints (MPECs) are described. The focus is on the properties of these formulations near a local solution of the MPEC at which strong stationarity and a second-order sufficient condition are satisfied. In the regularized formulations, the complementarity condition is replaced by a constraint involving a positive parameter that can be decreased to zero. In the penalized formulation, the complementarity constraint appears as a penalty term in the objective. Existence and uniqueness of solutions for these formulations are investigated, and estimates are obtained for the distance of these solutions to the MPEC solution under various assumptions.

Key words. Nonlinear Programming, Equilibrium Constraints, Complementarity Constraints

1. Introduction. We consider mathematical programs with equilibrium constraints in the form of complementarity constraints:

(1.1)    min_x f(x) subject to g(x) ≥ 0, h(x) = 0, 0 ≤ G(x) ⊥ H(x) ≥ 0,

where f: ℝ^n → ℝ, g: ℝ^n → ℝ^p, h: ℝ^n → ℝ^q, G: ℝ^n → ℝ^m, and H: ℝ^n → ℝ^m are all twice continuously differentiable functions, and the notation G(x) ⊥ H(x) signifies that G(x)^T H(x) = 0. These problems have been the subject of much recent investigation because of both their importance in applications and their theoretical interest, which arises from the fact that their most natural nonlinear programming formulations (for example, replacing G(x) ⊥ H(x) by G(x)^T H(x) = 0) do not satisfy constraint qualifications [4, 29] at any feasible point.
In this paper, we study a regularization scheme analyzed by Scholtes [28] in which (1.1) is approximated by the following nonlinear program, parameterized by the nonnegative scalar t:

(1.2)    Reg(t):  min_x f(x) subject to g(x) ≥ 0, h(x) = 0, G(x) ≥ 0, H(x) ≥ 0,
                  G_i(x)H_i(x) ≤ t, i = 1, 2, ..., m.

We denote the solution of this problem by x(t). Since Reg(0) is equivalent to (1.1), the regularization scheme can be put to use by applying a nonlinear programming algorithm to Reg(t) for a sequence of problems in which t is positive and decreasing to 0, deriving a starting point for each minimization from approximate minimizers of previous problems in the sequence. Scholtes [28, Theorem 4.1], restated later as Theorem 3.1, shows that in the neighborhood of a solution x* of (1.1) satisfying certain conditions, there is a unique stationary point x(t) of Reg(t) for all positive t sufficiently small. Moreover, this local solution mapping is piecewise smooth in t, and thus satisfies ‖x(t) − x*‖ = O(t). One of our main results (Theorem 3.7 in Section 3.3) shows that the same conclusion holds in the absence of one of the less natural assumptions (a strict complementarity condition) made in [28, Theorem 4.1]. Both results rely on a strong second-order condition, termed RNLP-SSOSC and defined below.

(Footnote: The final version of this paper appeared in Optimization Methods and Software, 19 (2004), pp.)
(Author affiliations: Daniel Ralph, Judge Institute of Management, University of Cambridge, Trumpington St, Cambridge, CB2 1AG, U.K., danny.ralph@jims.cam.ac.uk; Stephen J. Wright, Computer Sciences Department, 1210 W. Dayton Street, University of Wisconsin, Madison, WI 53706, U.S.A., swright@cs.wisc.edu.)

In Section 3.1, we investigate existence of solutions to Reg(t) near x* under weaker second-order and strict complementarity conditions. Theorem 3.2 replaces RNLP-SSOSC with a weaker second-order sufficient condition (MPEC-SOSC, also defined below) and drops the strict complementarity assumptions. This result shows that Reg(t) has a (possibly nonunique) local solution within a distance O(t^{1/2}) of x*. Under RNLP-SOSC, a condition that is intermediate between MPEC-SOSC and RNLP-SSOSC, Theorem 3.3 gives an improved O(t) bound, still without requiring the strict complementarity assumptions. Corollary 3.4 shows that a partial strict complementarity condition, in conjunction with MPEC-SOSC, leads to the O(t) estimate again. In Section 3.2, we show that Lagrange multipliers for solutions of Reg(t) satisfying the O(t) estimate are bounded. Section 3.3 contains Theorem 3.7 mentioned above, which gives sufficient conditions for x(t) to be piecewise smooth and locally unique for small t > 0.

Section 4 studies properties of solutions of some alternative regularized formulations. Scholtes [28, Section 5.1] also considers the following regularization scheme, in which the approximate complementarity condition is gathered into a single constraint:

(1.3)    RegComp(t):  min_x f(x) subject to g(x) ≥ 0, h(x) = 0, G(x) ≥ 0, H(x) ≥ 0, G(x)^T H(x) ≤ t.

Section 4.1 points out that analogs of Theorems 3.2 and 3.3 hold for RegComp(t), but that local uniqueness results like those of Section 3.3 do not hold.
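The scheme above can be illustrated on a tiny hypothetical instance (not an example from the paper): the two-variable MPEC min (x₁−1)² + (x₂−1)² subject to 0 ≤ x₁ ⊥ x₂ ≥ 0, whose solutions are (1, 0) and (0, 1). In the sketch below, a crude brute-force grid search stands in for the nonlinear programming solver that would be applied to Reg(t) in practice; everything in the code (objective, grid resolution, t-sequence) is an illustrative assumption.

```python
import math

def f(x):
    # Objective of the hypothetical MPEC: min (x1-1)^2 + (x2-1)^2
    # subject to 0 <= x1 ⊥ x2 >= 0.
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def solve_reg(t, n=400):
    # Global solution of Reg(t) by brute-force grid search over [0, 2]^2.
    # Crude, but self-contained and robust for this 2-variable sketch.
    best, best_f = None, float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            x = (2.0 * i / n, 2.0 * j / n)
            # Feasible set of Reg(t): x >= 0 (by construction), x1*x2 <= t.
            if x[0] * x[1] <= t and f(x) < best_f:
                best, best_f = x, f(x)
    return best

# Drive t toward 0 and record the distance from x(t) to the nearest
# MPEC solution, (1, 0) or (0, 1).
results = {}
for t in (0.1, 0.05, 0.02):
    x = solve_reg(t)
    results[t] = min(math.hypot(x[0] - 1.0, x[1]),
                     math.hypot(x[0], x[1] - 1.0))
```

In this instance the biactive set at the solution is empty (lower-level strict complementarity holds), and the measured distance shrinks roughly linearly with t, consistent with the O(t) estimates discussed in Section 3.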
In another plausible regularization, the inequalities of the regularization terms in Reg(t) are replaced by equalities:

(1.4)    RegEq(t):  min_x f(x) subject to g(x) ≥ 0, h(x) = 0, G(x) > 0, H(x) > 0,
                    G_i(x)H_i(x) = t, i = 1, 2, ..., m.

Section 4.2 shows that an existence result similar to Theorem 3.2 holds for this formulation, but with the O(t^{1/2}) estimate replaced by O(t^{1/4}). (The proof technique is quite different; unlike the proofs in Section 3.1, it does not rely on the results of Bonnans and Shapiro [3].)

Finally, in Section 5, we discuss a nonlinear programming reformulation based on the exact ℓ1 penalty function. For a given nonnegative parameter ρ, this reformulation is as follows:

(1.5)    PF(ρ):  min_x f(x) + ρ G(x)^T H(x) subject to g(x) ≥ 0, h(x) = 0, G(x) ≥ 0, H(x) ≥ 0.

We show that this formulation has the appealing property that, under standard assumptions, the MPEC solution x* is a local solution of PF(ρ) for all ρ sufficiently large, and that regularity conditions for the MPEC imply regularity of PF(ρ). While this paper focuses on certain regularization and penalization schemes, there are several other nonlinear programming approaches to (1.1) with similar motivations,

starting with Fukushima and Pang's analysis [8] of the smoothing scheme of Facchinei et al. [6], and including the penalty approaches analyzed by Hu and Ralph [12] and Huang, Yang, and Zhu [13]. Lin and Fukushima [18] have studied the issue of identifying active constraints in smoothing, regularization, and penalty methods. More recently, Anitescu [1] has studied the elastic mode for nonlinear programming, in conjunction with a sequential quadratic programming (SQP) algorithm, focusing particularly on MPECs. Anitescu's formulation is similar to (1.5), but it introduces an extra variable into the formulation to represent the maximum of G(x)^T H(x) and the violation of the other constraints. On a slightly different tack, decomposition methods that recognize the disjunctive nature of MPEC constraints are well studied. We mention the globally convergent methods for MPECs with linear constraint functions proposed or analyzed by Jiang and Ralph [15] (see [20, Chapter 6] and [16] for local convergence analysis); Tseng and Fukushima [9], who use an ε-active set method; and Zhang and Liu [30], who use an extreme-ray descent method. SQP-based methods for MPECs can be found in Liu et al. [19] and Fletcher et al. [7]. Interior-point methods have been proposed by de Miguel, Friedlander, Nogales, and Scholtes [5] and Raghunathan and Biegler [23], while Benson, Shanno, and Vanderbei [2] have performed a computational study involving the LOQO interior-point code and the MacMPEC test set (Leyffer [17]). An anonymous referee has alerted us to a forthcoming paper by Izmailov [14]. We do not have access to an English translation of this paper, but believe that it includes analysis similar to some of that which appears in our proofs below (in particular, the proof of Theorem 3.2). See the acknowledgments at the end of this paper for further details.

In the remainder of the paper we use ‖·‖ to denote the Euclidean norm ‖·‖_2, unless otherwise specified.
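The exact-penalty reformulation PF(ρ) of (1.5) can be previewed numerically on the same kind of tiny hypothetical instance: for ρ large enough, the MPEC solution remains a local minimizer of the penalized problem, while for ρ too small it does not. The instance, the neighborhood, and the grid check below are illustrative assumptions, not taken from the paper.

```python
def pf(x, rho):
    # PF(rho) objective for the toy MPEC min (x1-1)^2 + (x2-1)^2
    # s.t. 0 <= x1 ⊥ x2 >= 0, where G(x)^T H(x) = x1*x2.
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2 + rho * x[0] * x[1]

def local_min_value(rho, n=80):
    # Minimum of PF(rho) over a grid neighborhood [0.8, 1.2] x [0, 0.4]
    # of the MPEC solution x* = (1, 0); every grid point satisfies x >= 0.
    best = float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            x = (0.8 + 0.4 * i / n, 0.4 * j / n)
            best = min(best, pf(x, rho))
    return best

fstar = pf((1.0, 0.0), 5.0)   # penalized objective at the MPEC solution (= 1)
small = local_min_value(1.0)  # rho too small: x* is not a local minimizer
large = local_min_value(5.0)  # rho large enough: x* is a local minimizer
```

For this instance any ρ > 2 suffices; the grid minimum for ρ = 5 equals the value at x* = (1, 0), while for ρ = 1 the penalized objective dips below that value nearby.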
We write b = O(a) for nonnegative scalars a and b if there is a constant C such that b ≤ Ca for all a sufficiently small, or all a sufficiently large, depending on the context. We write b = o(a) if for some sequence of nonnegative values a_k and corresponding b_k with either a_k → ∞ or a_k → 0, we have that b_k/a_k → 0.

2. Assumptions and Background. We now summarize some known results concerning constraint qualifications and optimality conditions, for use in subsequent sections. We discuss first-order conditions and constraint qualifications in Section 2.1 and second-order conditions in Section 2.2, concluding with a result concerning local quadratic increase of the objective in a feasible neighborhood of x* in Section 2.3.

2.1. First-Order Conditions and Constraint Qualifications. We start by defining the following active sets at the point x*, feasible for (1.1):

(2.1a)   I_g := {i = 1, 2, ..., p : g_i(x*) = 0},
(2.1b)   I_G := {i = 1, 2, ..., m : G_i(x*) = 0},
(2.1c)   I_H := {i = 1, 2, ..., m : H_i(x*) = 0}.

Because x* is feasible, we have I_G ∪ I_H = {1, 2, ..., m}. The set I_G ∩ I_H is called the biactive set. Our first definition of stationarity is as follows.

Definition 2.1. A point x* that is feasible for (1.1) is Bouligand- or B-stationary

if d = 0 solves the following linear program with equilibrium constraints (LPEC):

(2.2)    min_d ∇f(x*)^T d subject to
         g(x*) + ∇g(x*)^T d ≥ 0,
         h(x*) + ∇h(x*)^T d = 0,
         0 ≤ G(x*) + ∇G(x*)^T d ⊥ H(x*) + ∇H(x*)^T d ≥ 0.

Checking B-stationarity is difficult in general, as it may require the solution of 2^m̄ linear programs, where m̄ is the cardinality of the biactive set I_G ∩ I_H. However, B-stationarity is implied by the following condition, which is more restrictive but much easier to check.

Definition 2.2. A point x* that is feasible for (1.1) is strongly stationary if d = 0 solves the following linear program:

(2.3)    min_d ∇f(x*)^T d subject to
         g(x*) + ∇g(x*)^T d ≥ 0,
         h(x*) + ∇h(x*)^T d = 0,
         ∇G_i(x*)^T d = 0, i ∈ I_G \ I_H,
         ∇H_i(x*)^T d = 0, i ∈ I_H \ I_G,
         ∇G_i(x*)^T d ≥ 0, ∇H_i(x*)^T d ≥ 0, i ∈ I_G ∩ I_H.

Note that (2.3) is the linearized approximation to the following nonlinear program, which is referred to as the relaxed nonlinear program (RNLP) for (1.1):

(2.4)    min_x f(x) subject to
         g(x) ≥ 0, h(x) = 0,
         G_i(x) = 0, i ∈ I_G \ I_H,
         H_i(x) = 0, i ∈ I_H \ I_G,
         G_i(x) ≥ 0, H_i(x) ≥ 0, i ∈ I_G ∩ I_H.

We also mention an interesting and useful observation of Anitescu [1, Theorem 2.2]: x* is strongly stationary if and only if it is stationary for Reg(0), that is, there are Lagrange multipliers such that the KKT conditions are satisfied for this problem. A similar result by Fletcher et al. [7, Proposition 4.1] gives equivalence between strongly stationary points and stationary points of RegComp(0). By introducing Lagrange multipliers, we can combine the optimality conditions for (2.3) with the feasibility conditions for x* as follows:

(2.5a)   0 = ∇f(x*) − Σ_{i∈I_g} λ*_i ∇g_i(x*) − Σ_{i=1}^q μ*_i ∇h_i(x*) − Σ_{i∈I_G} τ*_i ∇G_i(x*) − Σ_{i∈I_H} ν*_i ∇H_i(x*),
(2.5b)   0 = h_i(x*), i = 1, 2, ..., q,
(2.5c)   0 = g_i(x*), i ∈ I_g,
(2.5d)   0 < g_i(x*), i ∈ {1, 2, ..., p} \ I_g,
(2.5e)   0 ≤ λ*_i, i ∈ I_g,
(2.5f)   0 = G_i(x*), i ∈ I_G,
(2.5g)   0 < G_i(x*), i ∈ {1, 2, ..., m} \ I_G,
(2.5h)   0 = H_i(x*), i ∈ I_H,
(2.5i)   0 < H_i(x*), i ∈ {1, 2, ..., m} \ I_H,

(2.5j)   0 ≤ τ*_i, i ∈ I_G ∩ I_H,
(2.5k)   0 ≤ ν*_i, i ∈ I_G ∩ I_H.

Clearly, the Lagrange multipliers in (2.5) suffice for all 2^m̄ of the linear programs associated with the LPEC (2.2). For a strongly stationary point x*, we can now define the following sets:

(2.6a)   I_g^+ := {i ∈ I_g : λ*_i > 0 for some (λ*, μ*, τ*, ν*) satisfying (2.5)},
(2.6b)   I_g^0 := I_g \ I_g^+,
(2.6c)   J_G^+ := {i ∈ I_G ∩ I_H : τ*_i > 0 for some (λ*, μ*, τ*, ν*) satisfying (2.5)},
(2.6d)   J_G^0 := (I_G ∩ I_H) \ J_G^+,
(2.6e)   J_H^+ := {i ∈ I_G ∩ I_H : ν*_i > 0 for some (λ*, μ*, τ*, ν*) satisfying (2.5)},
(2.6f)   J_H^0 := (I_G ∩ I_H) \ J_H^+.

It is easy to show that there exists a multiplier (λ*, μ*, τ*, ν*) satisfying (2.5) such that

(2.7a)   λ*_i > 0 for i ∈ I_g^+,  λ*_i = 0 for i ∈ I_g^0,
(2.7b)   τ*_i > 0 for i ∈ J_G^+,  τ*_i = 0 for i ∈ J_G^0,
(2.7c)   ν*_i > 0 for i ∈ J_H^+,  ν*_i = 0 for i ∈ J_H^0.

(The set of optimal multipliers is convex, so we can simply take an average of the multipliers (λ*, μ*, τ*, ν*) that satisfy (2.6a), (2.6c), and (2.6e) individually.) If the MPEC-LICQ (defined next) is satisfied, then the Lagrange multipliers for (2.3) are in fact unique, and in this case strong stationarity and B-stationarity are equivalent.

Definition 2.3. The MPEC-LICQ is satisfied at the point x* if the following set of vectors is linearly independent:

(2.8)    {∇g_i(x*) : i ∈ I_g} ∪ {∇h_i(x*) : i = 1, 2, ..., q} ∪ {∇G_i(x*) : i ∈ I_G} ∪ {∇H_i(x*) : i ∈ I_H}.

In other words, the linear independence constraint qualification (LICQ) is satisfied for the RNLP (2.4). We have the following result concerning first-order necessary conditions, dating back to Luo, Pang, and Ralph [21] but stated in the form of Scheel and Scholtes [27, Theorem 2].

Theorem 2.4. Suppose that x* is a local minimizer of (1.1). If the MPEC-LICQ condition holds at x*, then x* is strongly stationary, and the multiplier vector (λ*, μ*, τ*, ν*) that satisfies the conditions (2.5) is unique.

A number of our results use the following weaker Mangasarian-Fromovitz constraint qualification (MFCQ).

Definition 2.5. The MPEC-MFCQ is satisfied at x* if the MFCQ is satisfied for the RNLP (2.4); that is, if there is a nonzero vector d ∈ ℝ^n such that

         ∇G_i(x*)^T d = 0, i ∈ I_G \ I_H,
         ∇H_i(x*)^T d = 0, i ∈ I_H \ I_G,
         ∇h_i(x*)^T d = 0, i = 1, 2, ..., q,
         ∇g_i(x*)^T d > 0, i ∈ I_g,
         ∇G_i(x*)^T d > 0 and ∇H_i(x*)^T d > 0, i ∈ I_G ∩ I_H,

and the vectors ∇G_i(x*), i ∈ I_G \ I_H, ∇H_i(x*), i ∈ I_H \ I_G, and ∇h_i(x*), i = 1, 2, ..., q, are all linearly independent.
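The linear-independence part of these constraint qualifications is a rank condition and can be checked mechanically. A minimal sketch (pure-Python Gaussian elimination on a hypothetical toy instance; the gradients below are illustrative assumptions, not from the paper):

```python
def rank(rows, tol=1e-10):
    # Rank of a small dense matrix via Gaussian elimination with a tolerance.
    m = [list(r) for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > tol:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Toy instance: at x* = (0, 0) with G(x) = x1 and H(x) = x2, the active
# gradients are ∇G = (1, 0) and ∇H = (0, 1); no g or h constraints are active.
grads = [(1.0, 0.0), (0.0, 1.0)]
licq = rank(grads) == len(grads)   # MPEC-LICQ: active gradients independent
```

Replacing one gradient with a multiple of another (e.g. (2, 0) in place of (0, 1)) drops the rank and the check fails, as expected.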

(It is easy to show, by using an argument like that of Gauvin [10] for nonlinear programming, that MPEC-MFCQ holds if and only if the set of multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) is bounded.)

We now define three varieties of strict complementarity at a strongly stationary point. To our knowledge, the second of these has only appeared before in the conditions for superlinear convergence of the elastic-mode penalty approach to MPCCs analyzed in [1, Section 4].

Definition 2.6. Let x* be a strongly stationary point at which MPEC-LICQ is satisfied.
(a) The upper-level strict complementarity (USC) condition holds if J_G^+ = J_H^+ = I_G ∩ I_H.
(b) The partial strict complementarity (PSC) condition holds if J_G^+ ∪ J_H^+ = I_G ∩ I_H.
(c) Lower-level strict complementarity (LSC) holds if I_G ∩ I_H = ∅.

It is obvious that LSC ⇒ USC ⇒ PSC. Strong stationarity and B-stationarity are equivalent when lower-level strict complementarity holds, since in this case the LPEC (2.2) reduces to the LP (2.3).

2.2. Second-Order Conditions. The set S of normalized critical directions for the RNLP (2.4) is defined as follows:

(2.9)    S := {s : ‖s‖_2 = 1}
             ∩ {s : ∇h(x*)^T s = 0}
             ∩ {s : ∇g_i(x*)^T s = 0 for all i ∈ I_g^+}
             ∩ {s : ∇g_i(x*)^T s ≥ 0 for all i ∈ I_g^0}
             ∩ {s : ∇G_i(x*)^T s = 0 for all i ∈ I_G \ I_H}
             ∩ {s : ∇G_i(x*)^T s ≥ 0 for all i ∈ J_G^0}
             ∩ {s : ∇G_i(x*)^T s = 0 for all i ∈ J_G^+}
             ∩ {s : ∇H_i(x*)^T s = 0 for all i ∈ I_H \ I_G}
             ∩ {s : ∇H_i(x*)^T s ≥ 0 for all i ∈ J_H^0}
             ∩ {s : ∇H_i(x*)^T s = 0 for all i ∈ J_H^+}.

By enforcing the additional condition that either ∇H_i(x*)^T s = 0 or ∇G_i(x*)^T s = 0 for all i ∈ J_G^0 ∩ J_H^0, we obtain the set of normalized critical directions S* for the MPEC (1.1) (see Scheel and Scholtes [27, eq. (6) and Section 3]); that is,

(2.10)   S* := S ∩ {s : min(∇H_i(x*)^T s, ∇G_i(x*)^T s) = 0 for all i ∈ J_G^0 ∩ J_H^0}.

The difference between S and S* vanishes if J_G^0 ∩ J_H^0 = ∅, that is, if USC, LSC, or PSC is satisfied. We also define the MPEC Lagrangian as in Scholtes [28, Sec. 4]:

(2.11)   L(x, λ, μ, τ, ν) = f(x) − λ^T g(x) − μ^T h(x) − τ^T G(x) − ν^T H(x).

(Note that the expression in (2.5a) is the partial derivative of L with respect to x at the point (x*, λ*, μ*, τ*, ν*), omitting the terms corresponding to inactive constraints.) We are now ready to define second-order sufficient conditions.

Definition 2.7. Let x* be a strongly stationary point. The MPEC-SOSC holds at x* if there is σ > 0 such that for every s ∈ S*, there are multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) such that

(2.12)   s^T ∇²_xx L(x*, λ*, μ*, τ*, ν*) s ≥ σ.

The RNLP-SOSC holds at x* if for every s ∈ S, there are multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) such that (2.12) holds.

Likewise, we define strong second-order sufficient conditions for the MPEC and RNLP. For the latter, the normalized critical direction set at x* is as follows:

         T := {s : ‖s‖_2 = 1}
             ∩ {s : ∇h(x*)^T s = 0}
             ∩ {s : ∇g_i(x*)^T s = 0 for all i ∈ I_g^+}
             ∩ {s : ∇G_i(x*)^T s = 0 for all i with τ*_i ≠ 0}
             ∩ {s : ∇H_i(x*)^T s = 0 for all i with ν*_i ≠ 0}.

For the MPEC, the critical directions may be different for every branch of the feasible set containing x*; see [20] for various piecewise optimality conditions using this motivation. For any partition (I, J) of J_G^0 ∩ J_H^0, let

         T(I, J) := T ∩ {s : ∇G_i(x*)^T s = 0 for all i ∈ I} ∩ {s : ∇H_i(x*)^T s = 0 for all i ∈ J}.

Definition 2.8. Let x* be a strongly stationary point. The MPEC-SSOSC holds at x* if there is σ > 0 such that for every partition (I, J) of J_G^0 ∩ J_H^0 and each s ∈ T(I, J), there are multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) such that (2.12) holds. The RNLP-SSOSC holds at x* if for every s ∈ T, there are multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) such that (2.12) holds.

When J_G^0 ∩ J_H^0 is empty (that is, PSC holds), the index sets I and J in the definition of MPEC-SSOSC are also empty, so that T(I, J) = T and the strong second-order sufficient conditions of Definition 2.8 coincide. In general, we have T ⊇ S ⊇ S*, so that RNLP-SSOSC ⇒ RNLP-SOSC ⇒ MPEC-SOSC. Similarly, we have MPEC-SSOSC ⇒ MPEC-SOSC.

The following example, which will be referred to again later, shows how the direction sets above are defined and demonstrates that MPEC-SOSC is strictly weaker than RNLP-SOSC, and that MPEC-SSOSC is strictly weaker than RNLP-SSOSC. (A similar example appears in Scheel and Scholtes [27, p. 12].)

Example 1. Let x = (x_1, x_2) ∈ ℝ² and

         Q = [  1  −1 ]
             [ −1   1 ].

The MPEC min x^T Q x subject to 0 ≤ x_1 ⊥ x_2 ≥ 0 has the origin x* = (0, 0) as a global minimizer, and no other local minimizers or stationary points. The MPEC-LICQ holds at x*, and, taking G(x) = x_1 and H(x) = x_2, the corresponding multipliers are τ* = 0 and ν* = 0. Hence, we have

         I_G = I_H = I_G ∩ I_H = J_G^0 = J_H^0 = {1},   J_G^+ = J_H^+ = ∅.
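These index sets can be reproduced mechanically from the definitions (2.1) and (2.6). A minimal sketch (0-based indices; the tolerance-based activity test is an implementation convenience, not from the paper; with MPEC-LICQ the multiplier is unique, so the "for some multiplier" quantifier in (2.6) reduces to a single check):

```python
def active_sets(Gvals, Hvals, tol=1e-12):
    # Active sets I_G and I_H of (2.1) from constraint values at a point.
    IG = {i for i, v in enumerate(Gvals) if abs(v) <= tol}
    IH = {i for i, v in enumerate(Hvals) if abs(v) <= tol}
    return IG, IH

# Example 1 at x* = (0, 0) with G(x) = x1, H(x) = x2 (one constraint, m = 1):
IG, IH = active_sets([0.0], [0.0])
biactive = IG & IH                        # the biactive set I_G ∩ I_H
assert IG | IH == {0}                     # feasibility: I_G ∪ I_H = {1,...,m}

# With the unique multipliers tau* = nu* = 0, definitions (2.6) give
# J_G^+ = J_H^+ = empty and J_G^0 = J_H^0 = the biactive set.
tau, nu = [0.0], [0.0]
JGp = {i for i in biactive if tau[i] > 0}
JHp = {i for i in biactive if nu[i] > 0}
JG0, JH0 = biactive - JGp, biactive - JHp
```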

The Hessian of the MPEC-Lagrangian (2.11) is Q, and we have

         S* = {(1, 0), (0, 1)},
         S = {(s_1, s_2) : s_1 ≥ 0, s_2 ≥ 0, s_1² + s_2² = 1},
         T({1}, ∅) = {(0, 1), (0, −1)},   T(∅, {1}) = {(1, 0), (−1, 0)},
         T = {(s_1, s_2) : s_1² + s_2² = 1}.

It is easy to check that MPEC-SSOSC, hence MPEC-SOSC, holds. However, RNLP-SOSC does not hold, and neither does RNLP-SSOSC, as there exists a direction of zero curvature in S, namely s = (1/√2, 1/√2). We mention for later reference that the solution set of Reg(t) can easily be seen to be a continuum {(x_1, x_2) : 0 ≤ x_1 = x_2 ≤ √t} for t > 0.

2.3. Local Quadratic Increase. We have the following result concerning quadratic growth of the objective function in a feasible neighborhood of a strongly stationary x* at which MPEC-SOSC is satisfied.

Theorem 2.9. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-SOSC is satisfied. Then x* is a strict local minimizer of (1.1), and in fact for any σ̂ ∈ (0, σ) (where σ is from (2.12)), there is r_0 > 0 such that

(2.13)   f(x) − f(x*) ≥ σ̂ ‖x − x*‖_2²

for all x feasible in (1.1) with ‖x − x*‖ ≤ r_0.

Proof. This result follows from Scheel and Scholtes [27, Theorem 7(2)] and basic theory concerning quadratic growth for standard nonlinear programming; see for example Maurer and Zowe [22] and Robinson [26, Theorem 2.2].

We can still prove quadratic increase if we drop the strong stationarity assumption and assume instead B-stationarity of x* along with an SOSC for all nonlinear programs of the form

         min_x f(x) subject to
         g(x) ≥ 0, h(x) = 0,
         G_i(x) = 0 for all i ∈ I'_G,   G_i(x) ≥ 0 for all i ∉ I'_G,
         H_i(x) = 0 for all i ∈ I'_H,   H_i(x) ≥ 0 for all i ∉ I'_H,

where I'_G and I'_H form a partition of {1, 2, ..., m} such that I'_G ⊆ I_G and I'_H ⊆ I_H. (We do not give a formal statement or proof of this result, since it is not needed for subsequent sections of this paper.)
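The curvature claims of Example 1 can be checked numerically. The sketch below assumes the reconstructed matrix Q = [[1, −1], [−1, 1]] (an assumption about the extraction-damaged display, chosen to be consistent with the stated zero-curvature direction and the Reg(t) solution continuum), so that x^T Q x = (x₁ − x₂)².

```python
import math

# Assumed reconstruction of Example 1's matrix: f(x) = x^T Q x = (x1 - x2)^2.
Q = [[1.0, -1.0], [-1.0, 1.0]]

def quad(s):
    # s^T Q s
    return sum(s[i] * Q[i][j] * s[j] for i in range(2) for j in range(2))

# On the MPEC critical directions S* = {(1,0), (0,1)} the curvature is
# positive, consistent with MPEC-SOSC holding ...
assert quad((1.0, 0.0)) > 0.0 and quad((0.0, 1.0)) > 0.0

# ... but the RNLP critical direction s = (1/sqrt(2), 1/sqrt(2)) in S has
# zero curvature, so RNLP-SOSC fails.
r = 1.0 / math.sqrt(2.0)
zero_curv = quad((r, r))

# The solution set of Reg(t) is the continuum {0 <= x1 = x2 <= sqrt(t)}:
# each such point is feasible for Reg(t) and attains the minimum value 0,
# and the endpoint (sqrt(t), sqrt(t)) lies at distance sqrt(2t) = O(t^{1/2})
# from x* = 0.
t = 0.0625
endpoint = (math.sqrt(t), math.sqrt(t))
assert endpoint[0] * endpoint[1] <= t              # feasible for Reg(t)
assert (endpoint[0] - endpoint[1]) ** 2 == 0.0     # attains the minimum
dist = math.hypot(*endpoint)                       # = sqrt(2) * sqrt(t)
```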
Note that if we assume RNLP-SOSC rather than the less stringent MPEC-SOSC, the quadratic increase result becomes a trivial consequence of standard nonlinear programming theory; see again Robinson [26].

3. Properties of Solutions of Reg(t). In this section, we investigate the minimizers of Reg(t) for small values of t. Our starting point is a result of Scholtes [28, Theorem 4.1], which we state in a slightly modified form below. This result requires the RNLP-SSOSC as well as an additional (and somewhat artificial) complementarity assumption involving the multipliers τ*_i, i ∈ I_G \ I_H, and ν*_i, i ∈ I_H \ I_G.

Theorem 3.1. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-LICQ, RNLP-SSOSC, and USC are satisfied. Assume in addition that τ*_i ≠ 0

for all i ∈ I_G and ν*_i ≠ 0 for all i ∈ I_H. Then for all t > 0 sufficiently small, the problem (1.2) has a unique stationary point x(t) in a neighborhood of x* that satisfies second-order sufficient conditions for (1.2) and hence is a strict local solution. Moreover, we have that ‖x(t) − x*‖ = O(t).

The original result also notes that x(t) is a piecewise smooth function of t for small nonnegative t. Our results in this section are of two main types: existence results and uniqueness results for solutions of Reg(t). We prove the existence results in Section 3.1. In Theorem 3.2, we weaken the assumptions in the theorem above by replacing RNLP-SSOSC by MPEC-SOSC and dropping the complementarity condition. The result is correspondingly weaker; we do not prove uniqueness of the solution of Reg(t) in the neighborhood of x*, and show only that the distance from x(t) to x* satisfies an O(t^{1/2}) estimate. In Theorem 3.3, we recover the O(t) estimate at the expense of using the RNLP-SOSC instead of MPEC-SOSC. Section 3.2 demonstrates boundedness of the Lagrange multipliers for Reg(t) at solutions x(t) for which ‖x(t) − x*‖ = O(t). In Section 3.3, we discuss local uniqueness of these solutions, and piecewise smoothness of the solution mapping x(t), making use of the SSOSC of Definition 2.8.

3.1. Estimating Distance Between Solutions of Reg(t) and the MPEC Optimum. We now prove our first result concerning existence of a solution to Reg(t) near the solution x* of (1.1) and its distance to x*. This result is obtained by applying Bonnans and Shapiro [3, Theorem 5.57] to the problem Reg(0), which is

(3.1)    Reg(0):  min_x f(x) subject to g(x) ≥ 0, h(x) = 0, G(x) ≥ 0, H(x) ≥ 0,
                  G_i(x)H_i(x) ≤ 0, i = 1, 2, ..., m.

Theorem 3.2. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-MFCQ and MPEC-SOSC are satisfied. Then there are positive constants r̂_0, t_2, and M_2 such that for all t ∈ (0, t_2], the global solution x(t) of the localized problem Reg(t) with the additional ball constraint ‖x − x*‖ ≤ r̂_0 that lies closest to x* satisfies ‖x(t) − x*‖ ≤ M_2 t^{1/2}.

Proof. We prove the result by verifying that the conditions of [3, Theorem 5.57] are satisfied. First, because x* is a strict local solution of (1.1) (and hence of (3.1)), we can choose r̂_0 and impose the additional condition ‖x − x*‖_2 ≤ r̂_0 in (3.1). With this additional constraint, x* is the unique global solution of the problem, so the first condition of [3, Theorem 5.57] holds. Moreover, since the feasible set for Reg(t) contains the feasible set for Reg(0), we have by applying the additional condition ‖x − x*‖_2 ≤ r̂_0 to (1.2) that the feasible set for the latter problem is nonempty and uniformly bounded, thereby ensuring that the fifth condition of [3, Theorem 5.57] is also satisfied.

The second condition in [3, Theorem 5.57] is Gollan's condition [3, (5.111)]. This condition reduces for our problem to the existence of a nonzero vector d ∈ ℝ^n such that

(3.2)    ∇h_i(x*), i = 1, 2, ..., q, are linearly independent, with ∇h_i(x*)^T d = 0, i = 1, 2, ..., q;
         ∇g_i(x*)^T d > 0, for all i ∈ I_g;
         ∇G_i(x*)^T d > 0, for all i ∈ I_G;
         ∇H_i(x*)^T d > 0, for all i ∈ I_H;
         G_i(x*) ∇H_i(x*)^T d + H_i(x*) ∇G_i(x*)^T d < 1, i = 1, 2, ..., m.

The linear independence condition in Definition 2.5 implies that we can choose s ∈ ℝ^n such that

         ∇h_i(x*)^T s = 0, i = 1, 2, ..., q,
         ∇G_i(x*)^T s = 1, i ∈ I_G \ I_H,
         ∇H_i(x*)^T s = 1, i ∈ I_H \ I_G.

By setting d = d̄ + αs, where d̄ is from Definition 2.5 and α > 0 is sufficiently small, we can ensure that all conditions but the final one in (3.2) are satisfied. By scaling d by an appropriate factor we can ensure that this condition is satisfied too.

The third condition of [3, Theorem 5.57] (existence of Lagrange multipliers for (3.1) at x*) follows from (2.5) in a similar fashion to the proof of [7, Proposition 4.1]; see also the recent result of Anitescu [1, Theorem 2.2]. We seek Lagrange multipliers λ̄, μ̄, τ̄, ν̄, and ρ ∈ ℝ^m (the last one for the constraints G_i(x)H_i(x) ≤ 0) such that

(3.3a)   0 = ∇f(x*) − Σ_{i∈I_g} λ̄_i ∇g_i(x*) − Σ_{i=1}^q μ̄_i ∇h_i(x*) − Σ_{i∈I_G} (τ̄_i − ρ_i H_i(x*)) ∇G_i(x*) − Σ_{i∈I_H} (ν̄_i − ρ_i G_i(x*)) ∇H_i(x*),
(3.3b)   0 = h_i(x*), i = 1, 2, ..., q,
(3.3c)   0 = g_i(x*), i ∈ I_g,
(3.3d)   0 < g_i(x*), i ∈ {1, 2, ..., p} \ I_g,
(3.3e)   0 ≤ λ̄_i, i ∈ I_g,
(3.3f)   0 = G_i(x*), i ∈ I_G,
(3.3g)   0 < G_i(x*), i ∈ {1, 2, ..., m} \ I_G,
(3.3h)   0 = H_i(x*), i ∈ I_H,
(3.3i)   0 < H_i(x*), i ∈ {1, 2, ..., m} \ I_H,
(3.3j)   0 ≤ τ̄_i, i ∈ I_G,
(3.3k)   0 ≤ ν̄_i, i ∈ I_H,
(3.3l)   0 ≤ ρ_i, i = 1, 2, ..., m.

Note that, in contrast to (2.5j) and (2.5k), nonnegativity is required of all τ̄_i, i ∈ I_G, and all ν̄_i, i ∈ I_H, not just the components in the biactive set I_G ∩ I_H. Given any set of multipliers (λ*, μ*, τ*, ν*) satisfying (2.5) and (2.7), we can set

(3.4a)   λ̄_i = λ*_i, i ∈ I_g,
(3.4b)   μ̄_i = μ*_i, i = 1, 2, ..., q,
(3.4c)   τ̄_i = τ*_i + ρ_i H_i(x*), i ∈ I_G,
(3.4d)   ν̄_i = ν*_i + ρ_i G_i(x*), i ∈ I_H,

where the multipliers ρ_i, i = 1, 2, ..., m, satisfy

(3.5a)   ρ_i ≥ ρ̄_i := max(0, −τ*_i / H_i(x*)), i ∈ I_G \ I_H;
(3.5b)   ρ_i ≥ ρ̄_i := max(0, −ν*_i / G_i(x*)), i ∈ I_H \ I_G;
(3.5c)   ρ_i ≥ ρ̄_i := 0, i ∈ I_G ∩ I_H.

It is easy to check that the resulting multipliers satisfy (3.3). Note in particular that

(3.6)    τ̄_i = τ*_i, ν̄_i = ν*_i, i ∈ I_G ∩ I_H.

The fourth condition in [3, Theorem 5.57] requires second-order sufficient conditions for (3.1) to hold. Because of (3.6), the critical direction set for this problem is S, the same as for the RNLP (2.4). Defining L̄ to be the Lagrangian for (3.1), it is easy to see from the relations (3.4) that

(3.7)    ∇²_xx L̄(x*, λ̄, μ̄, τ̄, ν̄, ρ) = ∇²_xx L(x*, λ*, μ*, τ*, ν*) + Σ_{i=1}^m ρ_i ( ∇G_i(x*) ∇H_i(x*)^T + ∇H_i(x*) ∇G_i(x*)^T ).

By using Definition 2.7 and the definition (2.10) of S*, we can find an ε > 0 such that for each

(3.8)    s ∈ S ∩ {s : min(∇H_i(x*)^T s, ∇G_i(x*)^T s) ≤ ε for all i ∈ J_G^0 ∩ J_H^0},

there exists a tuple of MPEC multipliers (λ*, μ*, τ*, ν*) (satisfying (2.5)), hence a corresponding tuple of multipliers (λ̄, μ̄, τ̄, ν̄, ρ) satisfying (3.4) and (3.5), such that

         s^T ∇²_xx L̄(x*, λ̄, μ̄, τ̄, ν̄, ρ) s ≥ s^T ∇²_xx L(x*, λ*, μ*, τ*, ν*) s ≥ σ/2,

where σ is from Definition 2.7. For all s ∈ S but not in the set (3.8), we have ∇H_i(x*)^T s > ε and ∇G_i(x*)^T s > ε for at least one i ∈ J_G^0 ∩ J_H^0, so that

         sup_{(λ̄,μ̄,τ̄,ν̄,ρ)} s^T ∇²_xx L̄(x*, λ̄, μ̄, τ̄, ν̄, ρ) s ≥ sup_{(λ*,μ*,τ*,ν*)} s^T ∇²_xx L(x*, λ*, μ*, τ*, ν*) s + 2 ( min_{i ∈ J_G^0 ∩ J_H^0} ρ_i ) ε²,

where, here and below, the supremum at left (right) is taken over the multipliers for (3.1) (MPEC multipliers, respectively). In addition to (3.5), we now require that ρ_i ≥ ρ̂ for all i ∈ J_G^0 ∩ J_H^0, where ρ̂ is large enough that the following condition holds:

         inf_{s ∈ S} sup_{(λ*,μ*,τ*,ν*)} s^T ∇²_xx L(x*, λ*, μ*, τ*, ν*) s + 2 ρ̂ ε² ≥ σ/2.

Under these additional conditions on ρ, we have that

         sup_{(λ̄,μ̄,τ̄,ν̄,ρ)} s^T ∇²_xx L̄(x*, λ̄, μ̄, τ̄, ν̄, ρ) s ≥ σ/2, for all s ∈ S.

Hence, second-order sufficient conditions for (3.1) are satisfied at x*, so the fourth condition of [3, Theorem 5.57] is also satisfied. The result now follows immediately from [3, Theorem 5.57].

When the RNLP-SOSC replaces MPEC-SOSC and MPEC-LICQ replaces MPEC-MFCQ, we can strengthen the bound to ‖x(t) − x*‖ = O(t).

Theorem 3.3.
Suppose that x* is a strongly stationary point of (1.1) at which MPEC-LICQ and RNLP-SOSC are satisfied, and let r̂_0 be the positive constant defined in Theorem 3.2. Then there is a value t_3 > 0 and a constant M_3 such that for all t ∈ (0, t_3], the global solution x(t) of the localized problem Reg(t) with the additional ball constraint ‖x − x*‖ ≤ r̂_0/2 that lies closest to x* satisfies ‖x(t) − x*‖ ≤ M_3 t.

Proof. We prove the result by invoking [3, Theorem 4.55]. Our task is to show that the three conditions of this theorem are satisfied by the limiting problem (3.1). We discuss these three conditions in the order (i), (iii), (ii). To make the connections with the notation in [3], we write Reg(t) in the following general form:

(3.9)    min_x f(x) subject to C(x, t) ∈ K,

where K in our case is a polyhedral convex cone (a Cartesian product of zeros and half-lines), and t appears in the constraints C(x, t) as the linear term tv, where v is a vector consisting of zeros, except for 1 in the locations corresponding to the constraints G_i(x)H_i(x) − t ≤ 0.

Condition (i) of the cited theorem requires the Lagrange multiplier set for (3.1) to be nonempty and a directional regularity condition to be satisfied. We verified existence of Lagrange multipliers already in the proof of Theorem 3.2, while the directional regularity condition reduces for this problem to Gollan's condition, which has also been verified in our earlier proof. Condition (iii) is automatic for our problem since K above is polyhedral and convex; see [3, Remark 4.59].

We turn now to condition (ii), which is a second-order sufficient condition [3, (4.139)]. Note first that the σ term in [3, (4.139)] can be ignored because of the polyhedral convex nature of our set K in (3.9). We start by expanding on results in the proof of Theorem 3.2, and then discuss the set of optimal multipliers for Reg(0) and define the linearized dual problem for Reg(t) in terms of this set. Let us introduce the Lagrangian L̄ for Reg(t), where

(3.10)   L̄(x, t, λ, μ, τ, ν, ρ) = f(x) − λ^T g(x) − μ^T h(x) − τ^T G(x) − ν^T H(x) + Σ_{i=1}^m ρ_i (G_i(x)H_i(x) − t).

Note that when t = 0, we have

(3.11)   L̄(x, 0, λ, μ, τ, ν, ρ) = L̄(x, λ, μ, τ, ν, ρ),

for L̄ defined in the proof of Theorem 3.2.

As shown there, the set of optimal multipliers for Reg(0) can be defined by taking the union, over all MPEC multipliers (λ*, μ*, τ*, ν*) (satisfying (2.5)), of the corresponding multipliers (λ̄, μ̄, τ̄, ν̄, ρ) defined in (3.4), (3.5), where ρ ≥ ρ̄ and the components of ρ̄ are defined in (3.5). Since we assume MPEC-LICQ, the MPEC multiplier (λ*, μ*, τ*, ν*) is in fact unique, so the multipliers (λ̄, μ̄, τ̄, ν̄, ρ) depend only on ρ, a dependence we indicate explicitly by writing (λ̄(ρ), μ̄(ρ), τ̄(ρ), ν̄(ρ), ρ). The linearized dual problem for Reg(0), following the general definition in [3, (4.46)], is as follows:

(3.12)   max_{(λ̄(ρ), μ̄(ρ), τ̄(ρ), ν̄(ρ), ρ) : ρ ≥ ρ̄}  D_t L̄(x*, 0, λ̄(ρ), μ̄(ρ), τ̄(ρ), ν̄(ρ), ρ).

From the definition (3.10), this problem reduces to

         min_{(λ̄(ρ), μ̄(ρ), τ̄(ρ), ν̄(ρ), ρ) : ρ ≥ ρ̄}  Σ_{i=1}^m ρ_i,

whose (unique) solution is obviously (λ̄(ρ̄), μ̄(ρ̄), τ̄(ρ̄), ν̄(ρ̄), ρ̄).

The condition [3, (4.139)] now reduces to the following:

(3.13)   s^T ∇²_xx L̄(x*, 0, λ̄(ρ̄), μ̄(ρ̄), τ̄(ρ̄), ν̄(ρ̄), ρ̄) s > 0, for all s ∈ S,

since, as we mentioned in the proof of Theorem 3.2, the critical direction set for (3.1) is the same as the critical direction set S (2.9) for the RNLP (2.4). From (3.7) and (3.11), we have that

         ∇²_xx L̄(x*, 0, λ̄(ρ̄), μ̄(ρ̄), τ̄(ρ̄), ν̄(ρ̄), ρ̄) = ∇²_xx L(x*, λ*, μ*, τ*, ν*) + Σ_{i=1}^m ρ̄_i ( ∇G_i(x*) ∇H_i(x*)^T + ∇H_i(x*) ∇G_i(x*)^T ),

so that

         s^T ∇²_xx L̄(x*, 0, λ̄(ρ̄), μ̄(ρ̄), τ̄(ρ̄), ν̄(ρ̄), ρ̄) s = s^T ∇²_xx L(x*, λ*, μ*, τ*, ν*) s + 2 Σ_{i=1}^m ρ̄_i ( ∇G_i(x*)^T s ) ( ∇H_i(x*)^T s ).

Because s ∈ S, and because (λ*, μ*, τ*, ν*) is the unique multiplier satisfying (2.5), we have by RNLP-SOSC (Definition 2.7) that the first term on the right-hand side of this equation is at least σ > 0. Moreover, since ρ̄ ≥ 0, ∇G(x*)^T s ≥ 0, and ∇H(x*)^T s ≥ 0, the summation in the final term is nonnegative. We conclude that (3.13), and hence condition (ii) of [3, Theorem 4.55], is satisfied.

We conclude that the three conditions of [3, Theorem 4.55] are satisfied, so our result follows directly from the cited theorem.

The next result follows immediately from Theorem 3.3 when we note that the MPEC-SOSC and RNLP-SOSC conditions are identical when PSC holds.

Corollary 3.4. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-LICQ and MPEC-SOSC are satisfied, and that the partial strict complementarity (PSC) condition holds. Then there is a value t_3 > 0 and a constant M_3 such that for all t ∈ (0, t_3], the global solution x(t) of the localized problem Reg(t) with the additional ball constraint ‖x − x*‖ ≤ r̂_0/2 that lies closest to x* satisfies ‖x(t) − x*‖ ≤ M_3 t, where r̂_0 is as defined in Theorem 3.2.

We conclude this subsection by illustrating the difference between Theorems 3.2 and 3.3 using Example 1. There we can take x(t) = (√t, √t), hence ‖x(t) − x*‖ = O(t^{1/2}). The O(t) estimate of Theorem 3.3 does not hold because RNLP-SOSC is not satisfied.

3.2. Boundedness of Lagrange Multipliers in Reg(t).
We now establish a companion result to Theorem 3.3 and Corollary 3.4, concerning boundedness of the Lagrange multipliers at the solutions of Reg(t) described in those results. The main result, Proposition 3.6, is proved after the following simple technical preliminary.

Lemma 3.5. Consider any i ∈ I_G ∩ I_H and suppose that ∇G_i(x*) and ∇H_i(x*) are nonzero vectors. Then there exist a neighborhood U_i of x* and a positive constant c_i such that for any x ∈ U_i and t ≥ 0 with G_i(x)H_i(x) = t, we have ‖x − x*‖ ≥ c_i √t.

Proof. Suppose for contradiction that there is a sequence t_k ↓ 0, and corresponding x_k → x* with G_i(x_k)H_i(x_k) = t_k, G_i(x_k) > 0, H_i(x_k) > 0, such that

(3.14)    √t_k / ‖x_k − x*‖ → ∞.

By taking a subsequence if necessary, we have that either G_i(x_k) ≤ √t_k for all k, or a similar bound holds for H_i(x_k). In the former case, for all k sufficiently large, we have from ∇G_i(x*) ≠ 0 that

√t_k ≥ G_i(x_k) = G_i(x_k) − G_i(x*) = ∇G_i(x*)^T (x_k − x*) + o(‖x_k − x*‖) ≤ 2 ‖∇G_i(x*)‖ ‖x_k − x*‖,

which contradicts (3.14). A similar contradiction occurs in the latter case.

Proposition 3.6. Let x* be a strongly stationary point of (1.1) at which the MPEC-LICQ holds. If the regularized solution x(t) satisfies ‖x(t) − x*‖ = O(t) for small positive t, then
(i) G_i(x(t))H_i(x(t)) < t for each small positive t and each i ∈ I_G ∩ I_H; and
(ii) the Lagrange multipliers corresponding to x(t) are bounded as t ↓ 0.

Proof. Because of MPEC-LICQ, the gradients ∇G_i(x*) and ∇H_i(x*) of each biactive pair are linearly independent, and in particular nonzero. Apply Lemma 3.5 to each i ∈ I_G ∩ I_H and combine the results to obtain a neighborhood U of x* and positive constants ĉ and t̂ with the following property: If 0 ≤ t ≤ t̂, x ∈ U, and G_i(x)H_i(x) = t for some biactive index i ∈ I_G ∩ I_H, then ‖x − x*‖ ≥ ĉ√t. Since ‖x(t) − x*‖ = O(t), we have for small t > 0 that x(t) ∈ U, 0 ≤ t ≤ t̂, and ‖x(t) − x*‖ < ĉ√t. Hence the constraint G_i(x(t))H_i(x(t)) ≤ t must be inactive, proving (i).

It follows from (i) that δ_i(t) = 0 for all i ∈ I_G ∩ I_H and all t sufficiently small. From Scholtes [28, Theorem 3.1], we have the following convergence result for the multipliers of Reg(t):

(3.15a)    λ_i(t) → λ*_i, for all i ∈ I_g,
(3.15b)    µ(t) → µ*,
(3.15c)    τ_i(t) − δ_i(t)H_i(x(t)) → τ*_i, for all i ∈ I_G,
(3.15d)    ν_i(t) − δ_i(t)G_i(x(t)) → ν*_i, for all i ∈ I_H.

Since δ_i(t) = 0 for i ∈ I_G ∩ I_H, it follows from (3.15c) and (3.15d) that τ_i(t) → τ*_i and ν_i(t) → ν*_i for these indices. For i ∈ I_G \ I_H, we cannot have both G_i(x(t)) = 0 and G_i(x(t))H_i(x(t)) = t, so at least one of τ_i(t) and δ_i(t) must be zero. Checking (3.15c) in each case shows that the resulting multipliers τ_i(t) and δ_i(t) must be bounded. Boundedness of ν_i(t) and δ_i(t) likewise follows from (3.15d), for i ∈ I_H \ I_G.
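The separation guaranteed by Lemma 3.5 is easy to check numerically for a model biactive pair. Take G_i(x) = x_1 and H_i(x) = x_2 with x* = (0, 0) (an illustrative choice, not taken from the paper); then any x with x_1, x_2 > 0 and x_1 x_2 = t satisfies ‖x − x*‖² = x_1² + x_2² ≥ 2 x_1 x_2 = 2t, so the lemma holds with c_i = √2. The sketch below samples random points on the constraint surface under these assumptions:

```python
import math
import random

def min_ratio(samples=1000):
    """Sample points on the curve G_i(x)*H_i(x) = t for the model pair
    G_i(x) = x1, H_i(x) = x2 (biactive at x* = (0, 0)) and return the
    smallest observed value of ||x - x*|| / sqrt(t).  Lemma 3.5 says this
    ratio is bounded below by a positive constant; here the exact bound
    is sqrt(2), attained at x1 = x2 = sqrt(t)."""
    random.seed(0)
    worst = float("inf")
    for _ in range(samples):
        t = 10.0 ** random.uniform(-8.0, -1.0)   # regularization parameter
        x1 = 10.0 ** random.uniform(-6.0, 1.0)   # arbitrary point on x1*x2 = t
        x2 = t / x1
        worst = min(worst, math.hypot(x1, x2) / math.sqrt(t))
    return worst

print(min_ratio())  # never drops below sqrt(2)
```

Any point satisfying the regularized constraint with equality is thus at distance at least c_i √t from the biactive point, which is exactly why an O(t) estimate on ‖x(t) − x*‖ forces the constraint to be inactive, as in part (i) of the proposition above.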
This completes the proof of (ii).

3.3. Local Uniqueness of Solutions to Reg(t). In this subsection, we present a further refinement of Scholtes' result [28, Theorem 4.1], which has been mentioned several times above. The main difference between Theorem 3.7, below, and the existence results of Section 3.1 is that, in addition to an O(t) bound on ‖x* − x(t)‖, it provides local uniqueness of x(t) under RNLP-SSOSC. While a strong second-order sufficient condition is to be expected as a sufficient condition for uniqueness, one might hope to use the weaker MPEC-SSOSC. However, Example 1 dispels this hope: the MPEC-LICQ and MPEC-SSOSC hold at the strongly stationary point x* = (0, 0), but the solution of Reg(t) is not unique for positive t.

We present two main results below. In the first, Theorem 3.7, we weaken the assumptions of [28, Theorem 4.1] by dropping LSC altogether, while retaining similar conclusions. The second result, Corollary 3.9, assumes MPEC-SSOSC instead of RNLP-SSOSC and replaces LSC by the weaker PSC condition.

Theorem 3.7. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-LICQ and RNLP-SSOSC are satisfied. Then there exist a neighborhood U of x*, a scalar t̄ > 0, and a piecewise smooth function z : (−t̄, t̄) → U such that x(t) = z(t) is the unique stationary point of Reg(t) in U for every t ∈ [0, t̄). A consequence of piecewise smoothness is that, for s, t ∈ [0, t̄), we have ‖x(s) − x(t)‖ = O(|s − t|); in particular, ‖x(t) − x*‖ = O(t).

The proof of the theorem relies first on showing that x(t) is a selection of finitely many local solution mappings of strongly stable NLPs and, second, on a somewhat involved argument to establish uniqueness of x(t) within a neighborhood of x*. By contrast, under LSC, the good behavior (including uniqueness) of x(t) follows immediately by observing that it is the solution of a single, strongly stable nonlinear program whose constraints are identified by the signs of the multipliers of the active constraints on G and H; see [28].

A key step toward the proof of Theorem 3.7 is the following technical result.

Lemma 3.8. Let f, g, h, γ, and φ be functions taking (x, t) ∈ IR^n × IR to IR, IR^l, IR^m, IR, and IR, respectively. Suppose that each of the following parametric problems is strongly stable about (x*, 0), meaning that there is a neighborhood of x* such that, for small perturbations of t about zero, the parametric problem has a unique solution (stationary point or local minimizer) in that neighborhood:

(3.16a)    min_x f(x, t) subject to g(x, t) ≤ 0, h(x, t) = 0;
(3.16b)    min_x f(x, t) subject to g(x, t) ≤ 0, h(x, t) = 0, γ(x, t) ≥ 0;
(3.16c)    min_x f(x, t) subject to g(x, t) ≤ 0, h(x, t) = 0, φ(x, t) ≥ 0.

Suppose further, for each x near x* with g(x, t) ≤ 0 and h(x, t) = 0, that γ(x, t) ≤ 0 implies φ(x, t) ≥ 0, and φ(x, t) ≤ 0 implies γ(x, t) ≥ 0. Then the problem

(3.17)    min_x f(x, t) subject to g(x, t) ≤ 0, h(x, t) = 0, γ(x, t) ≥ 0, φ(x, t) ≥ 0

is also strongly stable at (x*, 0), and the local solution mapping x(t) for this problem is a selection of the local solution mappings for the previous problems.

Proof. Let x_1(t), x_2(t), and x_3(t) denote the local solution mappings of (3.16a), (3.16b), and (3.16c), respectively.
We discuss existence and uniqueness of the solution x(t) of (3.17) in turn.

a) Existence. If any one of x_1(t), x_2(t), and x_3(t) is feasible for (3.17), then it is a solution of this problem, because the feasible set of (3.17) is contained in the feasible set of each of the other problems. Suppose x_1(t) is not feasible for (3.17), for example γ(x_1(t)) < 0 and, of course, x_1(t) ≠ x_2(t). Then γ(x_2(t)) = 0; otherwise γ(x_2(t)) > 0, in which case x_2(t) is a local minimizer of both (3.16a) and (3.16b), which implies x_1(t) = x_2(t) by uniqueness, a contradiction. Now use the relationship between γ and φ, which requires that φ(x_2(t)) ≥ 0, i.e., x_2(t) is a solution of (3.17). A similar argument exchanging the roles of γ and φ shows that x_3(t) is a solution of (3.17) if φ(x_1(t)) < 0.

b) Uniqueness. Let x_4(t) be a solution of (3.17) near x*, for t near 0. If γ(x_4(t)) and φ(x_4(t)) are both positive, then x_4(t) is also a solution of (3.16a), hence coincides with x_1(t) by uniqueness of the latter. Similarly, x_4(t) = x_2(t) if γ(x_4(t)) = 0 < φ(x_4(t)), and x_4(t) = x_3(t) if γ(x_4(t)) > 0 = φ(x_4(t)). That is, x_4(t) is a selection of {x_1(t), x_2(t), x_3(t)}.

If x_1(t) and x_2(t) are both solutions of (3.17), then obviously the former is also a solution of (3.16b), and they coincide by uniqueness of the latter. Likewise, if x_1(t) and x_3(t) are both solutions of (3.17), then they coincide. Finally, let x_2(t) and x_3(t) be solutions of (3.17). We show by contradiction that x_1(t) must be feasible for this problem, hence x_4(t) = x_1(t) = x_2(t) = x_3(t). Assume x_1(t) is infeasible for (3.17), say γ(x_1(t)) < 0. The relationship between γ and φ requires that φ(x_1(t)) ≥ 0, i.e., x_1(t) is feasible for (3.16c) and therefore coincides with x_3(t). But x_3(t) is feasible for (3.17), a contradiction. A similar argument yields a contradiction if we assume φ(x_1(t)) < 0.

Proof of Theorem 3.7. To unburden notation, we assume without loss of generality, by exchanging G_i with H_i if necessary, that I_G = {1, ..., m}. Define I_0 = {i ∈ I_G \ I_H : τ*_i = 0}; note that the corresponding set {i ∈ I_H \ I_G : ν*_i = 0} is empty. Define the minimal core constraints as follows:

g(x) ≤ 0, h(x) = 0,
G_i(x) ≥ 0, if i ∈ I_G ∩ I_H or τ*_i > 0,
H_i(x) ≥ 0, if i ∈ I_G ∩ I_H or ν*_i > 0,
F_i(x) ≤ t, if τ*_i + ν*_i < 0.

Define core constraints as any set composed of the minimal core as well as, for each i ∈ I_G \ I_H with τ*_i = 0, at most one of G_i(x) ≥ 0 and F_i(x) ≤ t. Choose any set of core constraints and consider the corresponding core NLP, which is parametric in t:

min_x f(x) subject to x satisfies the chosen core constraints.

When t = 0, because of MPEC-LICQ, x* is a solution of this core NLP at which LICQ and SSOSC hold; hence classical perturbation theory [3, 26] says that the core NLP is strongly stable at (x*, 0) and its local solution mapping is piecewise smooth in t. Call this problem NLP(1). Take i ∈ I_0 such that neither G_i(x) ≥ 0 nor F_i(x) ≤ t is in the core.
Define NLP(2) by adding the constraint G_i(x) ≥ 0 to NLP(1), and NLP(3) by adding the constraint F_i(x) ≤ t to NLP(1). Then each of NLP(1)–(3) is a core NLP (using a different set of core constraints), hence is strongly stable at (x*, 0). It is easy to see that Lemma 3.8 can be applied by taking (3.16a), (3.16b), (3.16c) to be NLP(1), NLP(2), NLP(3), respectively, yielding strong stability of the new problem (corresponding to (3.17)):

min_x f(x) subject to the constraints of NLP(1) and also the pair of constraints

(3.18)    G_i(x) ≥ 0 and F_i(x) ≤ t.

The lemma also says that the local solution mapping for this fourth problem (call it x^(4)(t)) is a selection of the local solution mappings of NLP(1)–(3); therefore x^(4)(t) is also piecewise smooth. Thus we have fulfilled the following induction hypothesis for k = 1.

Induction Hypothesis k:(1) Choose any distinct i_1, ..., i_k ∈ I_0 and any set of core constraints that includes neither G_i(x) ≥ 0 nor F_i(x) ≤ t for i = i_1, ..., i_k. Then the NLP with constraints given by the chosen core and (3.18) for all i = i_1, ..., i_k is strongly stable at (x*, 0), and the associated local solution mapping is piecewise smooth in t.

Let k be at least one and less than the cardinality of I_0. We now show that the induction hypothesis holds for k + 1. Choose any distinct i_1, ..., i_{k+1} ∈ I_0 and any set of core constraints that includes neither G_i(x) ≥ 0 nor F_i(x) ≤ t for i = i_1, ..., i_{k+1}. Consider three NLPs, each with the objective function f. The first problem, NLP(i), has constraints given by the chosen core with the additional constraints (3.18) for i = i_1, ..., i_k. The second (respectively, third) problem NLP(ii) (respectively, NLP(iii)) is derived from NLP(i) by adding the constraint G_{i_{k+1}}(x) ≥ 0 (respectively, F_{i_{k+1}}(x) ≤ t). The constraints of each of NLP(i)–(iii) can be written as the union of a core set together with (3.18) for i = i_1, ..., i_k, i.e., in the form of the NLP specified in Induction Hypothesis k. This is obvious for NLP(i). For NLP(ii), take the core to be the chosen core together with G_{i_{k+1}}(x) ≥ 0; for NLP(iii), the chosen core together with F_{i_{k+1}}(x) ≤ t. Hence each of NLP(i)–(iii) is strongly stable at (x*, 0). Lemma 3.8 says that the NLP with objective f and constraints consisting of the chosen core and the pairs (3.18) for all i = i_1, ..., i_{k+1} is also strongly stable at (x*, 0), and that its local solution mapping, denoted x^(iv)(t), is a selection of the local solution mappings of NLP(i)–(iii); so x^(iv)(t) is also piecewise smooth. This establishes the induction hypothesis for k + 1, and the induction is complete.

(1) The assumption I_G = {1, ..., m} means we need not also consider pairs of constraints H_i(x) ≥ 0 and F_i(x) ≤ t.

The last result here follows from the above theorem simply because, under PSC, MPEC-SSOSC implies (indeed, is equivalent to) RNLP-SSOSC.

Corollary 3.9. The conclusions of Theorem 3.7 hold if x* is a strongly stationary point of (1.1) at which MPEC-LICQ, MPEC-SSOSC, and PSC hold.

4. Alternative Regularized Formulations.
We now consider the alternative regularized formulations RegComp(t) and RegEq(t), and discuss the possibility of results like Theorems 3.2 and 3.3 holding for these formulations.

4.1. Properties of Solutions of RegComp(t). For RegComp(t), in which the individual constraints G_i(x)H_i(x) ≤ t are replaced by a single approximate complementarity constraint G(x)^T H(x) ≤ t, the feasible region contains that of the original problem (1.1) and is a subset of the feasible region for Reg(t). Analogs of Theorems 3.2 and 3.3 hold, with RegComp(t) replacing Reg(t), and the proofs are quite similar. (We omit the details.) However, local uniqueness of the solution of RegComp(t) is difficult to ensure. Scholtes [28] mentions a private communication of Hu which shows that [28, Theorem 4.1] does not extend to RegComp(t). In Hu's counterexample, which is presented in [11, Example 2.3.2], all conditions of [28, Theorem 4.1], hence of Theorem 3.7 above, are shown to hold, but the solutions of RegComp(t) are not unique.

4.2. Properties of Solutions of RegEq(t). A result like Theorem 3.2 holds for the RegEq(t) formulation (1.4) as well, but only if the O(t^{1/2}) estimate is replaced by a weaker O(t^{1/4}) estimate. The following result also differs from Theorem 3.2 in that MPEC-LICQ is assumed in place of MPEC-MFCQ. The result [3, Theorem 5.57] cannot be applied here, as Gollan's directional regularity condition (a constraint qualification) does not hold for this formulation. Our proof is based on more elementary results.

Theorem 4.1. Suppose that x* is a strongly stationary point of (1.1) at which MPEC-LICQ and MPEC-SOSC are satisfied. Then there are positive constants r̂_2, t_4, and M_7 such that for all t ∈ (0, t_4], the global solution x(t) of the localized problem RegEq(t) with the additional ball constraint ‖x − x*‖ ≤ r̂_2 that lies closest to x* satisfies ‖x(t) − x*‖ ≤ M_7 t^{1/4}.

Proof. Our strategy is to define two balls about x* with the following properties:
- The inner ball has radius O(t^{1/4}), while the outer ball has a constant radius;
- There is at least one feasible point z(t) for RegEq(t) in the inner ball;
- All feasible points for RegEq(t) in the annulus between the two balls have a larger objective value than f(z(t)).

It follows from these facts that the minimizer x(t) described in the statement of the theorem lies inside the inner ball, so the O(t^{1/4}) estimate is satisfied.

Consider first the following projection problem, a nonlinear program parametrized by t:

(4.1)    min_x (1/2) ‖x − x*‖² subject to g(x) ≤ 0, h(x) = 0, G_i(x) = t^{1/2}, H_i(x) = t^{1/2} (i ∈ I_G ∩ I_H), G_i(x)H_i(x) = t (i ∉ I_G ∩ I_H).

When t = 0, the solution is x*, and the gradients of the active constraints are linearly independent, by the MPEC-LICQ assumption (Definition 2.3). Since the objective is strongly convex, standard perturbation theory shows that the solution z(t) of this problem satisfies

(4.2)    ‖z(t) − x*‖ ≤ M_6 t^{1/2},

for some constant M_6 > 0 and all t sufficiently small. We now choose r̂_2 such that the following properties hold:

(4.3a)    r̂_2 ≤ r_0,
(4.3b)    r̂_2 ≤ r_1,
(4.3c)    ‖∇f(x)‖ ≤ 2 ‖∇f(x*)‖ for all x with ‖x − x*‖ ≤ r̂_2,

where r_0 is defined in Theorem 2.9 and r_1 is defined in Lemma A.2. We now define a constant M_7 large enough that the following are true:

(4.4a)    M_7 ≥ 2 M_1,
(4.4b)    σ̂ M_7² ≥ 16 ‖∇f(x*)‖ M_1,
(4.4c)    σ̂ M_7² ≥ 32 ‖∇f(x*)‖ M_6,

where M_1 is defined in Lemma A.2 and σ̂ is defined in Theorem 2.9. We further define t_4 small enough that the following conditions hold:

(4.5a)    t_4 ≤ 1,
(4.5b)    M_6 t_4^{1/2} ≤ r̂_2/2,
(4.5c)    M_7 t_4^{1/4} < r̂_2/2,
(4.5d)    M_1 t_4^{1/2} ≤ r̂_2/2.

From (4.2), (4.5b), and (4.3c), we have

(4.6)    f(z(t)) ≤ f(x*) + 2 M_6 t^{1/2} ‖∇f(x*)‖, for all t ∈ (0, t_4].

For a given t ≤ t_4, we define the radius of the inner ball to be M_7 t^{1/4} and that of the outer ball to be r̂_2/2. (Because of (4.5c), the inner ball is strictly contained in the outer ball.) Now let x be any point in the annulus between the two balls that is feasible for RegEq(t). Since ‖x − x*‖ ≤ r̂_2/2 < r_1, we have from Lemma A.2 that there is a z feasible for (1.1) such that

(4.7)    ‖z − x‖ ≤ M_1 t^{1/2}.

Since from (4.5d) we have

‖z − x*‖ ≤ ‖z − x‖ + ‖x − x*‖ ≤ M_1 t^{1/2} + r̂_2/2 ≤ r̂_2/2 + r̂_2/2 = r̂_2,

we have, using (4.3c) again, that

(4.8)    f(z) ≥ f(x) − 2 ‖∇f(x*)‖ ‖z − x‖ ≥ f(x) − 2 ‖∇f(x*)‖ M_1 t^{1/2}.

Moreover, we have from (4.7) and the definition of x that

‖z − x*‖ ≥ ‖x − x*‖ − ‖z − x‖ ≥ M_7 t^{1/4} − M_1 t^{1/2} > 0,

where the final inequality follows from (4.5a) and (4.4a). Hence, from Theorem 2.9 and (4.8), we have

(4.9)    f(x) − f(x*) = [f(z) − f(x*)] − [f(z) − f(x)] ≥ σ̂ ‖z − x*‖² − 2 ‖∇f(x*)‖ M_1 t^{1/2} ≥ σ̂ [M_7 t^{1/4} − M_1 t^{1/2}]² − 2 ‖∇f(x*)‖ M_1 t^{1/2}.

Because of (4.5a) and (4.4a), we have M_1 t^{1/2} ≤ (1/2) M_7 t^{1/4}, so

σ̂ [M_7 t^{1/4} − M_1 t^{1/2}]² ≥ (1/4) σ̂ M_7² t^{1/2}.

By substituting into (4.9) and using (4.4b), we have

(4.10)    f(x) − f(x*) ≥ (1/4) σ̂ M_7² t^{1/2} − 2 ‖∇f(x*)‖ M_1 t^{1/2} ≥ (1/8) σ̂ M_7² t^{1/2}.

By comparing with (4.6), and using (4.4c), we have

f(x) ≥ f(x*) + (1/8) σ̂ M_7² t^{1/2} ≥ f(z(t)) − 2 M_6 t^{1/2} ‖∇f(x*)‖ + (1/8) σ̂ M_7² t^{1/2} ≥ f(z(t)) + (1/16) σ̂ M_7² t^{1/2},

thereby confirming that any feasible point for RegEq(t) in the annulus between the two balls has a higher objective value than the point z(t) defined by (4.1), which is feasible for RegEq(t) and which lies inside the inner ball. This observation establishes the result.

The stronger O(t^{1/2}) estimate of Theorem 3.2 cannot apply, at least not under the assumptions of Theorem 4.1, as the following example shows.

Example 2. The simple MPEC

min_x x_1 + x_2² subject to 0 ≤ x_1 ⊥ x_2 ≥ 0

has a strongly stationary point x* = (0, 0) at which MPEC-LICQ and RNLP-SSOSC (hence MPEC-SOSC) hold, with MPEC multipliers τ* = 1 and ν* = 0. RegEq(t) is

min_x x_1 + x_2² subject to x_1 x_2 = t, x_1, x_2 > 0.
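Taking Example 2 to have the objective x_1 + x_2² (our reading of the statement, chosen to be consistent with the stated multipliers τ* = 1 and ν* = 0), the behavior of RegEq(t) can be worked out in closed form: eliminating x_1 = t/x_2 and minimizing t/x_2 + x_2² over x_2 > 0 gives x_2 = (t/2)^{1/3}, so ‖x(t) − x*‖ behaves like t^{1/3}. The sketch below, under these assumptions, checks that this distance is not O(t^{1/2}) but is comfortably O(t^{1/4}):

```python
import math

def reg_eq_solution(t):
    """Closed-form solution of RegEq(t) for the toy MPEC
    min x1 + x2**2 s.t. x1*x2 = t, x1, x2 > 0 (our reading of Example 2).
    Eliminating x1 = t/x2 and minimizing t/x2 + x2**2 gives
    x2 = (t/2)**(1/3) and x1 = t/x2; the ball constraint of the
    localized problem is inactive once t is small."""
    x2 = (t / 2.0) ** (1.0 / 3.0)
    return t / x2, x2

for t in (1e-3, 1e-6, 1e-9):
    x1, x2 = reg_eq_solution(t)
    d = math.hypot(x1, x2)              # distance to x* = (0, 0)
    print(t, d / t ** 0.5, d / t ** 0.25)
# d / t**0.5 grows without bound as t -> 0 (so no O(t**(1/2)) estimate holds),
# while d / t**0.25 shrinks, consistent with the O(t**(1/4)) bound of Theorem 4.1.
```

The t^{1/3} rate sits strictly between the t^{1/2} estimate of Theorem 3.2 and the t^{1/4} guarantee of Theorem 4.1, which is exactly the gap the example is meant to exhibit.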


More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006 Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in

More information

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints

Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints Comput Optim Appl (2009) 42: 231 264 DOI 10.1007/s10589-007-9074-4 Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints A.F. Izmailov M.V. Solodov Received:

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Second Order Optimality Conditions for Constrained Nonlinear Programming

Second Order Optimality Conditions for Constrained Nonlinear Programming Second Order Optimality Conditions for Constrained Nonlinear Programming Lecture 10, Continuous Optimisation Oxford University Computing Laboratory, HT 2006 Notes by Dr Raphael Hauser (hauser@comlab.ox.ac.uk)

More information

1. Introduction. We consider the following mathematical program with equilibrium constraints (MPEC), all of whose constraint functions are linear:

1. Introduction. We consider the following mathematical program with equilibrium constraints (MPEC), all of whose constraint functions are linear: MULTIPLIER CONVERGENCE IN TRUST-REGION METHODS WITH APPLICATION TO CONVERGENCE OF DECOMPOSITION METHODS FOR MPECS GIOVANNI GIALLOMBARDO AND DANIEL RALPH Abstract. We study piecewise decomposition methods

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS

DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS 1 DUALITY, OPTIMALITY CONDITIONS AND PERTURBATION ANALYSIS Alexander Shapiro 1 School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205, USA, E-mail: ashapiro@isye.gatech.edu

More information

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general

More information

GENERALIZED second-order cone complementarity

GENERALIZED second-order cone complementarity Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the

More information

A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized

More information

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Complementarity Formulations of l 0 -norm Optimization Problems 1 Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Abstract: In a number of application areas, it is desirable to

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter May 17, 2016 Abstract In a number of application areas, it is desirable

More information

Lecture 13: Constrained optimization

Lecture 13: Constrained optimization 2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: COMPLEMENTARITY CONSTRAINTS

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: COMPLEMENTARITY CONSTRAINTS INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: COMPLEMENTARITY CONSTRAINTS HANDE Y. BENSON, DAVID F. SHANNO, AND ROBERT J. VANDERBEI Operations Research and Financial Engineering Princeton

More information

1. Introduction. Consider the following parametric mathematical program with geometric constraints: s.t. F (x, p) Λ,

1. Introduction. Consider the following parametric mathematical program with geometric constraints: s.t. F (x, p) Λ, SIAM J. OPTIM. Vol. 22, No. 3, pp. 1151 1176 c 2012 Society for Industrial and Applied Mathematics STABILITY ANALYSIS FOR PARAMETRIC MATHEMATICAL PROGRAMS WITH GEOMETRIC CONSTRAINTS AND ITS APPLICATIONS

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

Combinatorial Structures in Nonlinear Programming

Combinatorial Structures in Nonlinear Programming Combinatorial Structures in Nonlinear Programming Stefan Scholtes April 2002 Abstract Non-smoothness and non-convexity in optimization problems often arise because a combinatorial structure is imposed

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell,Jong-Shi Pang, Xin Shen, Andreas Wächter Original submission: September 23, 2013. Revised January 8, 2015

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Solving Multi-Leader-Follower Games

Solving Multi-Leader-Follower Games ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 Solving Multi-Leader-Follower Games Sven Leyffer and Todd Munson Mathematics and Computer Science Division Preprint ANL/MCS-P1243-0405

More information

Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems. Hirokazu KATO

Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems. Hirokazu KATO Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems Guidance Professor Masao FUKUSHIMA Hirokazu KATO 2004 Graduate Course in Department of Applied Mathematics and

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

Convergence of Stationary Points of Sample Average Two-Stage Stochastic Programs: A Generalized Equation Approach

Convergence of Stationary Points of Sample Average Two-Stage Stochastic Programs: A Generalized Equation Approach MATHEMATICS OF OPERATIONS RESEARCH Vol. 36, No. 3, August 2011, pp. 568 592 issn 0364-765X eissn 1526-5471 11 3603 0568 doi 10.1287/moor.1110.0506 2011 INFORMS Convergence of Stationary Points of Sample

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Gerd Wachsmuth. January 22, 2016

Gerd Wachsmuth. January 22, 2016 Strong stationarity for optimization problems with complementarity constraints in absence of polyhedricity With applications to optimization with semidefinite and second-order-cone complementarity constraints

More information

SECTION C: CONTINUOUS OPTIMISATION LECTURE 11: THE METHOD OF LAGRANGE MULTIPLIERS

SECTION C: CONTINUOUS OPTIMISATION LECTURE 11: THE METHOD OF LAGRANGE MULTIPLIERS SECTION C: CONTINUOUS OPTIMISATION LECTURE : THE METHOD OF LAGRANGE MULTIPLIERS HONOUR SCHOOL OF MATHEMATICS OXFORD UNIVERSITY HILARY TERM 005 DR RAPHAEL HAUSER. Examples. In this lecture we will take

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L.

Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L. Tilburg University Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints Ilker Birbil, S.; Gurkan, Gul; Listes, O.L. Publication date: 2004 Link to publication

More information

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY

CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY CHARACTERIZATIONS OF LIPSCHITZIAN STABILITY IN NONLINEAR PROGRAMMING 1 A. L. DONTCHEV Mathematical Reviews, Ann Arbor, MI 48107 and R. T. ROCKAFELLAR Dept. of Math., Univ. of Washington, Seattle, WA 98195

More information

arxiv:math/ v1 [math.oc] 20 Dec 2000

arxiv:math/ v1 [math.oc] 20 Dec 2000 Preprint ANL/MCS-P865-1200, December, 2000 Mathematics and Computer Science Division Argonne National Laboratory arxiv:math/0012209v1 [math.oc] 20 Dec 2000 Stephen J. Wright Constraint Identification and

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

c 2012 Society for Industrial and Applied Mathematics

c 2012 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 22, No. 4, pp. 1579 166 c 212 Society for Industrial and Applied Mathematics GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS,

More information

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns

More information

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm

A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Journal name manuscript No. (will be inserted by the editor) A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Rene Kuhlmann Christof Büsens Received: date / Accepted:

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

First order optimality conditions for mathematical programs with second-order cone complementarity constraints

First order optimality conditions for mathematical programs with second-order cone complementarity constraints First order optimality conditions for mathematical programs with second-order cone complementarity constraints Jane J. Ye and Jinchuan Zhou April 9, 05 Abstract In this paper we consider a mathematical

More information

Fakultät für Mathematik und Informatik

Fakultät für Mathematik und Informatik Fakultät für Mathematik und Informatik Preprint 2018-03 Patrick Mehlitz Stationarity conditions and constraint qualifications for mathematical programs with switching constraints ISSN 1433-9307 Patrick

More information

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm

More information

Priority Programme 1962

Priority Programme 1962 Priority Programme 1962 An Example Comparing the Standard and Modified Augmented Lagrangian Methods Christian Kanzow, Daniel Steck Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation

More information

Date: July 5, Contents

Date: July 5, Contents 2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........

More information

Constraint qualifications for nonlinear programming

Constraint qualifications for nonlinear programming Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

Chap 2. Optimality conditions

Chap 2. Optimality conditions Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1

More information

Preprint ANL/MCS-P , Dec 2002 (Revised Nov 2003, Mar 2004) Mathematics and Computer Science Division Argonne National Laboratory

Preprint ANL/MCS-P , Dec 2002 (Revised Nov 2003, Mar 2004) Mathematics and Computer Science Division Argonne National Laboratory Preprint ANL/MCS-P1015-1202, Dec 2002 (Revised Nov 2003, Mar 2004) Mathematics and Computer Science Division Argonne National Laboratory A GLOBALLY CONVERGENT LINEARLY CONSTRAINED LAGRANGIAN METHOD FOR

More information

A Local Convergence Analysis of Bilevel Decomposition Algorithms

A Local Convergence Analysis of Bilevel Decomposition Algorithms A Local Convergence Analysis of Bilevel Decomposition Algorithms Victor DeMiguel Decision Sciences London Business School avmiguel@london.edu Walter Murray Management Science and Engineering Stanford University

More information

An Accelerated Newton Method for Equations with Semismooth Jacobians and Nonlinear Complementarity Problems

An Accelerated Newton Method for Equations with Semismooth Jacobians and Nonlinear Complementarity Problems UW Optimization Technical Report 06-0, April 006 Christina Oberlin Stephen J. Wright An Accelerated Newton Method for Equations with Semismooth Jacobians and Nonlinear Complementarity Problems Received:

More information

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009

UC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009 UC Berkeley Department of Electrical Engineering and Computer Science EECS 227A Nonlinear and Convex Optimization Solutions 5 Fall 2009 Reading: Boyd and Vandenberghe, Chapter 5 Solution 5.1 Note that

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Algorithms for nonlinear programming problems II

Algorithms for nonlinear programming problems II Algorithms for nonlinear programming problems II Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects

More information

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis

Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained

More information