Preprint: S. Dempe, A. B. Zemkoho, On the Karush-Kuhn-Tucker reformulation of the bilevel optimization problem. ISSN



S. Dempe, A. B. Zemkoho: On the Karush-Kuhn-Tucker reformulation of the bilevel optimization problem. TU Bergakademie Freiberg, Fakultät für Mathematik und Informatik, Institut für Numerische Mathematik und Optimierung, Akademiestr. 6 (Mittelbau), Freiberg

ISSN. Publisher (Herausgeber): Dean of the Fakultät für Mathematik und Informatik. Production (Herstellung): Medienzentrum der TU Bergakademie Freiberg

On the Karush-Kuhn-Tucker reformulation of the bilevel optimization problem

S. Dempe, A. B. Zemkoho
Institute of Numerical Mathematics and Optimization, Technical University Bergakademie Freiberg, Freiberg, Germany
Email addresses: dempe@math.tu-freiberg.de (S. Dempe), zemkoho@student.tu-freiberg.de (A. B. Zemkoho). The second author acknowledges the support of the Deutscher Akademischer Austausch Dienst (DAAD).
Preprint submitted to Elsevier, November 1, 2010

Abstract. This paper is mainly concerned with the classical KKT reformulation and the primal KKT reformulation (also known as an optimization problem with generalized equation constraint, or OPEC) of the optimistic bilevel optimization problem. A generalization of the MFCQ to an optimization problem with operator constraint is applied to each of these reformulations, leading to new constraint qualifications (CQs) and optimality conditions for the bilevel optimization problem. Strikingly, the optimality conditions of an OPEC turn out to coincide with those of a certain mathematical program with complementarity constraints (MPCC), which is a new result. A concept of partial calmness, known for the optimal value reformulation, is also introduced for the primal KKT reformulation and, as expected, is shown to be strictly implied by the basic CQ.

Key words: bilevel optimization problem, KKT reformulation, set-valued mapping, basic CQ, optimality conditions

1. Introduction

We consider the optimistic bilevel programming problem

    minimize F(x,y) subject to x ∈ X ⊆ R^n, y ∈ S(x),    (1.1)

also called the upper level problem, where F : R^n × R^m → R is a continuously differentiable function and the set-valued mapping S : R^n ⇉ R^m describes the solution set of the following parametric optimization problem, also called the lower level problem:

    minimize f(x,y) subject to y ∈ K(x),    (1.2)

where K(x) is a closed subset of R^m for all x ∈ X and the function f : R^n × R^m → R is twice continuously differentiable. We assume that the upper and lower level feasible

sets are given as

    X = {x ∈ R^n : G(x) ≤ 0} and K(x) = {y ∈ R^m : g(x,y) ≤ 0} for all x ∈ X,    (1.3)

respectively, the functions G : R^n → R^k and g : R^n × R^m → R^p being continuously and twice continuously differentiable, respectively. Also, unless otherwise stated, the functions f(x,·) and g_i(x,·), i = 1,...,p, are assumed to be convex for all x ∈ X. Hence, it is well known from convex optimization that the lower level problem is equivalent to the parametric generalized equation

    0 ∈ ∇_y f(x,y) + N_{K(x)}(y),    (1.4)

where N_{K(x)}(y) denotes the normal cone (in the sense of convex analysis) to K(x) at y. If we assume that the Mangasarian-Fromovitz constraint qualification (MFCQ) is satisfied at all y ∈ K(x), x ∈ X, then we have

    N_{K(x)}(y) = {∇_y g(x,y)⊤ u : u ≥ 0, u⊤ g(x,y) = 0},    (1.5)

cf. [1, Theorem 4.3]. One can then deduce the so-called KKT reformulation of the bilevel optimization problem (1.1):

    minimize F(x,y) subject to G(x) ≤ 0, L(u,x,y) = 0, u ≥ 0, g(x,y) ≤ 0, u⊤ g(x,y) = 0,    (1.6)

where L(u,x,y) = ∇_y f(x,y) + ∇_y g(x,y)⊤ u. The MFCQ is implicitly assumed to be satisfied at all y ∈ K(x), x ∈ X, whenever problem (1.6), which we call the classical KKT reformulation, is considered in the sequel.

A second possibility to write problem (1.1) as a one-level optimization problem consists of maintaining the generalized equation (1.4) without explicitly computing the normal cone N_{K(x)}. This yields the following problem, which we call the primal KKT reformulation (the terminology is justified by the fact that the generalized equation (1.4) can be considered as a compact form of the KKT conditions of the lower level problem):

    minimize F(x,y) subject to G(x) ≤ 0, g(x,y) ≤ 0, (x, y, −∇_y f(x,y)) ∈ gph N_K,    (1.7)

where gph N_K denotes the graph of the set-valued mapping N_K : R^n × R^m ⇉ R^m defined by N_K(x,y) = N_{K(x)}(y) if y ∈ K(x), and N_K(x,y) = ∅ otherwise.
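To fix ideas before the reformulations are analyzed, the optimistic scheme behind (1.1)-(1.2) can be mimicked numerically: for each upper level choice x, compute the lower level solution set S(x), then minimize F over the resulting pairs. The following brute-force sketch does this on a toy instance of our own (F(x,y) = (x-1)^2 + (y-1)^2, f(x,y) = (y-x)^2, K(x) = [0,2] discretized); it only illustrates the definitions and is not a solution method.

```python
# Brute-force illustration of the optimistic bilevel problem (1.1)-(1.2)
# on toy data of our own choosing (not from the paper).
import numpy as np

def lower_solution_set(x, ys, tol=1e-9):
    """Approximate S(x) = argmin of f(x, .) = (y - x)^2 over the grid ys."""
    vals = (ys - x) ** 2
    return ys[vals <= vals.min() + tol]

def solve_optimistic_bilevel(xs, ys):
    """Scan x in X, y in S(x), and keep the pair with the smallest F(x, y)."""
    best = None
    for x in xs:
        for y in lower_solution_set(x, ys):
            Fxy = (x - 1.0) ** 2 + (y - 1.0) ** 2
            if best is None or Fxy < best[0]:
                best = (Fxy, x, y)
    return best

xs = np.linspace(0.0, 2.0, 201)   # X = [0, 2] for the illustration
ys = np.linspace(0.0, 2.0, 201)   # grid for the lower-level variable
F, x, y = solve_optimistic_bilevel(xs, ys)
print(F, x, y)                    # optimum near x = y = 1
```

Here S(x) = {x}, so the upper level effectively minimizes 2(x-1)^2 over X, with optimum at (1,1).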
This reformulation, which corresponds to an optimization problem with (parametric) generalized equation constraint (OPEC), where K is a moving set, has recently been studied in [2]. If the lower level feasible set is independent of the upper level parameter x, i.e., for all x ∈ X,

    K(x) = K = {y ∈ R^m : g(y) ≤ 0}    (1.8)

(with g : R^m → R^p being continuously differentiable), then the above problem reduces to the following optimization problem with a (classical) generalized equation constraint,

as introduced by Robinson [3]:

    minimize F(x,y) subject to G(x) ≤ 0, g(y) ≤ 0, (y, −∇_y f(x,y)) ∈ gph N_K,    (1.9)

where gph N_K denotes the graph of the set-valued mapping defined by N_K(y) being the normal cone (in the sense of convex analysis) to K at y if y ∈ K, and N_K(y) = ∅ otherwise. Such problems have been studied, for example, in [4, 5].

A third possibility to write problem (1.1) as a one-level optimization problem is the following optimal value reformulation:

    minimize F(x,y) subject to x ∈ X, y ∈ K(x), f(x,y) ≤ ϕ(x),    (1.10)

where ϕ is the optimal value function of problem (1.2), defined as ϕ(x) = min{f(x,y) : y ∈ K(x)}. Problem (1.10) is globally and locally equivalent to the bilevel programming problem (1.1). For an extensive review of constraint qualifications and optimality conditions for the optimal value reformulation, the interested reader is referred to [6]; as far as the algorithmic approach is concerned, see [7].

Our main concern in this paper, though, is the classical and primal KKT reformulations of the bilevel optimization problem. First of all, let us mention that problem (1.6) is a special instance of the mathematical programming problem with complementarity constraints (MPCC). Hence, in the literature, the bilevel programming problem has generally been considered as embedded in the MPCC family. But recently, it was shown in [8] that the bilevel programming problem is not a special case of the MPCC. The main reason is that a local optimal solution of problem (1.6) may fail to be a local optimal solution of the initial bilevel programming problem [8]. One consequence of the widespread idea that the bilevel optimization problem is a special case of the MPCC is that, in the literature, very little attention has been given to dual optimality conditions for problem (1.6), although a great amount of work has been devoted to dual optimality conditions for the MPCC; see, e.g., [9, 10, 11].
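The mechanism of the optimal value reformulation (1.10) can be seen numerically: the constraint f(x,y) ≤ ϕ(x) discards every y that is lower-level feasible but not lower-level optimal. The sketch below uses toy data of our own (f(x,y) = (y-x)^2, K(x) = [0,2] discretized by a grid).

```python
# Numeric sketch of the optimal value reformulation (1.10) on toy data
# (our own illustration, not from the paper).
import numpy as np

ys = np.linspace(0.0, 2.0, 2001)          # grid standing in for K(x)

def f(x, y):
    return (y - x) ** 2                   # toy lower-level objective

def phi(x):
    """Optimal value function of the lower level (1.2) on the grid."""
    return np.min(f(x, ys))

x = 0.7
# y = 0.7 is lower-level optimal; y = 1.2 is feasible but suboptimal, so the
# value-function constraint f(x, y) <= phi(x) rules it out:
feasible_in_1_10 = [y for y in (0.7, 1.2) if f(x, y) <= phi(x) + 1e-9]
print(feasible_in_1_10)                   # [0.7]
```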
Given the observation in [8], it seems necessary that a specific study be devoted to (1.6), taking into account its particularities. This is one of the aims of the current paper. As far as primal optimality conditions for problem (1.6) are concerned, a number of interesting results can be found in [12, 13, 14, 15]. In particular, for the special class of linear bilevel programming problems, polynomial-time verifiable primal optimality conditions for problem (1.6) were given in [15], something which is unusual for bilevel programming [13]. When studying dual-form or KKT-type optimality conditions for problem (1.6), and for the MPCC in general, the main concern raised in the literature is the failure of most of the well-known CQs [9, 11, 16, 17].

The first step of our approach consists of rewriting the feasible set of (1.6) as an operator constraint (this appellation is borrowed from Mordukhovich [18]). The basic

CQ, which can be seen as an extension of the MFCQ to a certain geometric constraint, is then applied to the new problem; this leads to new CQs and optimality conditions for the problem. The first major advantage offered by the operator constraint formulation is the flexibility in choosing Ω, Λ, and ψ (see (3.1)), which helps circumvent the failure of the usual MFCQ. Another interesting feature of the operator constraint formulation is that it allows one to study the (one-level) reformulations of the bilevel optimization problem in a unified manner. This has already been the case for the optimal value reformulation, where it helped design new CQs [6]. In this paper, the primal KKT reformulation (1.7) is also studied from this perspective. Not only could we derive dual optimality conditions, which coincide with those obtained for the classical KKT reformulation, but we also introduce a concept of partial calmness substantially weaker than the initial basic CQ. This has also inspired new CQs for problem (1.7), and hence for the bilevel optimization problem.

In the next section of the paper, we present the basic tools (i.e., the Mordukhovich normal cone, subdifferential and coderivative) and their relevant properties. Some Lipschitzian properties of set-valued mappings are also recalled. Section 3 is concerned with the optimization problem with operator constraint. We derive KKT-type optimality conditions for this problem, from a perspective different from the one already known in [1, 18, 19]. The partial calmness concept is also introduced and characterized in this section. Sections 4 and 5 are mainly concerned with applications of the results from Section 3, leading to new CQs and optimality conditions for the bilevel programming problem.

We now introduce some notation that will simplify the presentation of the paper. Let N be the set of positive integers. For l ∈ N, R^{n_1+...+n_l} = R^{n_1} × ... × R^{n_l}, and 0_{n_1+...+n_l} is the origin of this space. For vectors a^i ∈ R^{n_i}, i = 1,...,l, the joint vector (a^1,...,a^l) may be used instead of its transposed counterpart. For a^i ∈ R^{n_i}, i = 1,2, (a^1)⊤ a^2 = ⟨a^1, a^2⟩ is used for the inner product of a^1 and a^2. For two properties a and b, "a ∨ b" means that either a or b is satisfied. For A ⊆ R^l, bd A denotes the topological boundary of A, while d_A(a) = d(a, A) is the distance from the point a to A.

Finally, if (x,y) is a local optimal solution of the bilevel optimization problem (1.1), then the main part of the dual optimality conditions derived through the classical or primal KKT reformulation will have the form

    ∇_x F(x,y) + ∇G(x)⊤ α + ∇_x g(x,y)⊤ ζ(β) + ∇_x L(u,x,y)⊤ γ = 0    (1.11)
    ∇_y F(x,y) + ∇_y g(x,y)⊤ ζ(β) + ∇_y L(u,x,y)⊤ γ = 0    (1.12)
    α ≥ 0, α⊤ G(x) = 0    (1.13)
    u ≥ 0, u⊤ g(x,y) = 0, L(u,x,y) = 0    (1.14)

with ζ(·) being an affine function whose expression depends on the CQ used to derive the optimality conditions. In the case where K is defined as in (1.8), the above

conditions reduce to

    ∇_x F(x,y) + ∇G(x)⊤ α + ∇_x L_o(u,x,y)⊤ γ = 0    (1.15)
    ∇_y F(x,y) + ∇g(y)⊤ ζ(β) + ∇_y L_o(u,x,y)⊤ γ = 0    (1.16)
    α ≥ 0, α⊤ G(x) = 0    (1.17)
    u ≥ 0, u⊤ g(y) = 0, L_o(u,x,y) = 0    (1.18)

where L(u,x,y) = L_o(u,x,y) = ∇_y f(x,y) + ∇g(y)⊤ u. In the case of the classical KKT reformulation, one could have dropped condition (1.14) (resp. (1.18)), since it is part of the feasible set, as are the upper and lower level constraints. But we include this condition purposely, since one will notice (cf. Section 5) that obtaining it in the case of the primal KKT reformulation is not trivial.

2. Basic definitions and background material

We first consider the Fréchet normal cone to a closed set A ⊆ R^l at some point z ∈ A:

    N̂_A(z) = {z* ∈ R^l : ⟨z*, z' − z⟩ ≤ o(‖z' − z‖) for all z' ∈ A}.

This cone is known (see [20, Theorem 6.28]) to be the polar of the Bouligand tangent cone

    T_A(z) = {z' ∈ R^l : ∃ t_k ↓ 0, z'_k → z' with z + t_k z'_k ∈ A}.

The Mordukhovich normal cone to A at z is the Kuratowski-Painlevé upper limit ([18]) of the Fréchet normal cone, i.e.,

    N_A(z) = {z* ∈ R^l : ∃ z_k → z (z_k ∈ A), z*_k → z* with z*_k ∈ N̂_A(z_k)}.

The Mordukhovich subdifferential can also be defined from the Fréchet subdifferential, as in the case of the normal cone. But in order to save some space, we simply consider the well-known interplay between most of the normal cone and subdifferential objects in the literature. That is, for a lower semicontinuous function f : R^l → R̄, the Mordukhovich subdifferential of f at some point z ∈ dom f is

    ∂f(z) = {z* ∈ R^l : (z*, −1) ∈ N_{epi f}(z, f(z))},

where epi f denotes the epigraph of f. It is important to mention that if f is a continuously differentiable function, then ∂f coincides with the gradient of f. Considering two functions, the sum and chain rules are obtained respectively as:

Theorem 2.1. [21, Corollary 4.6] Let the functions f, g : R^l → R be locally Lipschitz continuous around z. Then ∂(f + g)(z) ⊆ ∂f(z) + ∂g(z). Equality holds if f or g is continuously differentiable.

Theorem 2.2. [22, Proposition 2.10] Let f : R^{l_1} → R^{l_2} be Lipschitz continuous around z, and g : R^{l_2} → R Lipschitz continuous around w = f(z) ∈ dom g. Then

    ∂(g ∘ f)(z) ⊆ ⋃ {∂⟨w*, f⟩(z) : w* ∈ ∂g(w)}.

In the next result, we recall the necessary optimality condition for a Lipschitz optimization problem with geometric constraint.

Theorem 2.3. [20, Theorem 8.15] Let f : R^l → R be a locally Lipschitz continuous function and A ⊆ R^l a closed set. For z to be a local minimizer of f on A, it is necessary that 0 ∈ ∂f(z) + N_A(z).

For a set-valued mapping Φ : R^{l_1} ⇉ R^{l_2}, a derivative-like object, called the coderivative and introduced by Mordukhovich (see [18]), can also be defined. Let (u,z) ∈ gph Φ; the coderivative of Φ at (u,z) is the positively homogeneous set-valued mapping D*Φ(u,z) : R^{l_2} ⇉ R^{l_1} such that for any z* ∈ R^{l_2} we have

    D*Φ(u,z)(z*) = {u* ∈ R^{l_1} : (u*, −z*) ∈ N_{gph Φ}(u,z)}.    (2.1)

If Φ reduces to a single-valued Lipschitz continuous function, then z can be omitted and the coderivative of Φ reduces to D*Φ(u)(z*) = ∂⟨z*, Φ⟩(u) for all z* ∈ R^{l_2}, with ⟨z*, Φ⟩(u) = ⟨z*, Φ(u)⟩ and ∂ being the basic subdifferential defined above. It clearly follows that, in case Φ is single-valued and continuously differentiable,

    D*Φ(u)(z*) = {∇Φ(u)⊤ z*} for all z* ∈ R^{l_2},

where ∇Φ(u) denotes the Jacobian matrix of Φ. The set-valued mapping Φ is said to be pseudo-Lipschitz continuous at (u,z) ∈ gph Φ if there exist neighborhoods U of u and V of z, and a number L > 0, such that

    Φ(u') ∩ V ⊆ Φ(u'') + L ‖u' − u''‖ B for all u', u'' ∈ U.    (2.2)

This property, often called the Aubin property, was introduced by Aubin [23] and further studied, for example, in [18, 20, 22]. It is worth mentioning that the Aubin property is a natural extension of the Lipschitz continuity known for single-valued functions. If we fix u'' = u in (2.2), then we obtain the inclusion

    Φ(u') ∩ V ⊆ Φ(u) + L ‖u' − u‖ B for all u' ∈ U,    (2.3)

which defines the calmness, or upper pseudo-Lipschitz continuity, of the set-valued mapping Φ at (u,z). Hence, it is obvious that the Aubin property implies the calmness property.
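In the smooth single-valued case, the coderivative formula D*Φ(u)(z*) = {∇Φ(u)⊤ z*} is a plain adjoint-Jacobian action and can be checked numerically. The map Φ below is a toy example of our own.

```python
# Coderivative of a smooth single-valued map: D*Phi(u)(z*) = {J(u)^T z*},
# illustrated on the toy map Phi(u) = (u1^2, u1*u2) (our own example).
import numpy as np

def jacobian(u):
    """Jacobian of Phi(u) = (u1**2, u1*u2)."""
    return np.array([[2.0 * u[0], 0.0],
                     [u[1],       u[0]]])

def coderivative(u, zstar):
    """Single element of D*Phi(u)(z*) for a smooth map: the adjoint action."""
    return jacobian(u).T @ zstar

u = np.array([1.0, 2.0])
print(coderivative(u, np.array([1.0, 0.0])))   # [2. 0.]
print(coderivative(u, np.array([0.0, 1.0])))   # [2. 1.]
```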
As shown in the next theorem, the calmness property can be very useful in computing the normal cone to a subset of R^l defined by finitely many inequalities and equalities.

Theorem 2.4. [24, Theorem 5] Consider the set A = {z ∈ R^l : g(z) ≤ 0, h(z) = 0}, where g : R^l → R^{l_1} and h : R^l → R^{l_2} are continuously differentiable functions. Let z ∈ A; then

    N_A(z) = {∇g(z)⊤ λ + ∇h(z)⊤ µ : λ ≥ 0, λ⊤ g(z) = 0},

provided the following set-valued mapping is calm at (0, 0, z):

    M(t_1, t_2) = {z ∈ R^l : g(z) + t_1 ≤ 0, h(z) + t_2 = 0}.
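In the simplest case of a single affine inequality (where the calmness assumption on M holds automatically), the formula of Theorem 2.4 gives a ray, which is easy to test numerically. The set below is a toy example of our own.

```python
# Theorem 2.4 for A = {z : a'z <= 0} (our own toy set): at a boundary point
# the constraint is active and N_A(z) = {lambda * a : lambda >= 0}.
import numpy as np

def in_normal_cone_halfspace(a, zstar, tol=1e-9):
    """Is zstar = lambda * a for some lambda >= 0?"""
    lam = np.dot(a, zstar) / np.dot(a, a)          # least-squares multiplier
    return lam >= -tol and np.linalg.norm(zstar - lam * a) <= tol

a = np.array([1.0, 2.0])          # gradient of g(z) = z1 + 2*z2
print(in_normal_cone_halfspace(a, np.array([0.5, 1.0])))    # True  (lambda = 0.5)
print(in_normal_cone_halfspace(a, np.array([-1.0, -2.0])))  # False (lambda < 0)
print(in_normal_cone_halfspace(a, np.array([1.0, 0.0])))    # False (not on the ray)
```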

3. Optimization problem with operator constraint

The optimization problem with operator constraint, which may be seen as a special optimization problem with geometric constraint, is to

    minimize F(z) subject to z ∈ Ω ∩ ψ^{-1}(Λ),    (3.1)

where F : R^l → R and ψ : R^l → R^{l_1} are locally Lipschitz continuous functions, and the sets Ω ⊆ R^l and Λ ⊆ R^{l_1} are closed. To derive KKT-type dual optimality conditions for problem (3.1), we consider the basic CQ. Let z be a feasible point of problem (3.1); the basic CQ, which may have been introduced in [25] and studied, for example, in [1, 20, 26], is said to be satisfied at z if

    [0 ∈ ∂⟨u*, ψ⟩(z) + N_Ω(z), u* ∈ N_Λ(ψ(z))]  ⟹  u* = 0.    (3.2)

Some CQs closely related to the basic CQ have also been studied in [27].

Remark 3.1. If Ω = R^l, Λ = R^{l_2}_− × {0_{l_1 − l_2}} and ψ is a continuously differentiable function, then the basic CQ coincides with the dual form of the well-known MFCQ, cf. [28]. Hence, the basic CQ is a generalization of the MFCQ to the optimization problem with operator constraint.

We consider the following perturbation map of the operator constraint:

    Ψ(u) = {z ∈ Ω : ψ(z) + u ∈ Λ}.    (3.3)

It is shown in the next lemma that this set-valued mapping is pseudo-Lipschitz continuous at (0,z) ∈ gph Ψ if the basic CQ is satisfied at z.

Lemma 3.2. Assume that z ∈ Ψ(0). Then

    D*Ψ(0,z)(z*) ⊆ {u* ∈ N_Λ(ψ(z)) : −z* ∈ ∂⟨u*, ψ⟩(z) + N_Ω(z)}.    (3.4)

If, in addition, the basic CQ is satisfied at z, then Ψ is pseudo-Lipschitz continuous at (0,z).

Proof. It is clear that Ψ can be equivalently written as Ψ(u) = {z ∈ R^l : ψ'(u,z) ∈ Λ, (u,z) ∈ Ω'}, where ψ'(u,z) = ψ(z) + u and Ω' = R^{l_1} × Ω. Hence, one can easily check that equalities (6.20) and (6.21) in [21, Theorem 6.10] are satisfied, implying that inclusion (3.4) holds. On the other hand, one can also verify that condition (4.11) in [26, Corollary 4.2] reduces, in our case, to the basic CQ. Hence, the pseudo-Lipschitz continuity of Ψ at (0,z) follows.
Since we are only interested in deriving optimality conditions for local optimal solutions, it is necessary to show that the Lipschitz continuity properties (pseudo-Lipschitz continuity and calmness) defined in the previous section are locally preserved for the set-valued mapping Ψ.

Lemma 3.3. Let z ∈ Ψ(u) and let V be a neighborhood of z. If Ψ is calm (resp. pseudo-Lipschitz continuous) at (u,z), then the set-valued mapping

    Ψ_V(u') = {z' ∈ Ω ∩ V : ψ(z') + u' ∈ Λ}

is also calm (resp. pseudo-Lipschitz continuous) at (u,z).

Proof. Set ψ_{u'}(z') = ψ(z') + u'. Then Ψ(u') = Ω ∩ ψ_{u'}^{-1}(Λ) and Ψ_V(u') = V ∩ Ω ∩ ψ_{u'}^{-1}(Λ); the result follows from the definition of calmness (2.3) (resp. pseudo-Lipschitz continuity (2.2)) by noting that, for A, B, C ⊆ R^l, A ⊆ B implies A ∩ C ⊆ B ∩ C.

We are now ready to state a KKT-type optimality condition for problem (3.1) under the basic CQ. The technique utilized in the proof is inspired by [5, Lemma 3.1], in the framework of OPECs; for the latter class of optimization problems, this approach has also been used in [18, 29]. The statement of this result and many proofs exist in the literature, cf. [1, 18, 19, 20, 30], but we were unable to find any reference where the multiplier u* is bounded, except, as already mentioned, for the OPEC. This is the reason why we include the proof here.

Proposition 3.1. Let z̄ be a local optimal solution of problem (3.1) and assume that the basic CQ is satisfied at z̄. Then there exists µ > 0 such that for any r ≥ µ, one can find u* ∈ rB ∩ N_Λ(ψ(z̄)) such that

    0 ∈ ∂F(z̄) + ∂⟨u*, ψ⟩(z̄) + N_Ω(z̄).    (3.5)

Proof. Let z̄ be an optimal solution of problem (3.1) on the closed neighborhood V of z̄. It follows from Lemma 3.2 and Lemma 3.3 that, under the basic CQ (3.2), Ψ_V is pseudo-Lipschitz continuous at (0,z̄). Denote by L_V and L_F the Lipschitz moduli of Ψ_V and F, respectively. We claim that for any r ≥ µ = L_V L_F, the point (0,z̄) is a local optimal solution of the problem

    minimize F_r(u,z) subject to (u,z) ∈ gph Ψ_V,    (3.6)

with F_r(u,z) = F(z) + r‖u‖. In fact, the pseudo-Lipschitz continuity of Ψ_V at (0,z̄) implies, by definition, that there exist neighborhoods U_1 of 0 and V_1 of z̄ such that

    for all u ∈ U_1 and z ∈ Ψ(u) ∩ V_1, there exists z_1 ∈ Ψ_V(0) with ‖z − z_1‖ ≤ L_V ‖u‖.    (3.7)

In addition, F(z̄) ≤ F(z_1), given that z_1 ∈ Ψ_V(0) = Ω ∩ V ∩ ψ^{-1}(Λ). Now, let r ≥ µ and (u,z) ∈ gph Ψ_V ∩ (U_1 × V_1); then we have

    F_r(0,z̄) = F(z̄) ≤ F(z_1) = [F(z_1) − F(z)] + F(z)
             ≤ L_F ‖z − z_1‖ + F(z)           (Lipschitz continuity of F)
             ≤ L_V L_F ‖u‖ + F(z) ≤ F_r(u,z)  (inequality (3.7) and r ≥ L_V L_F),

which proves the claim.

Applying Theorem 2.3 to problem (3.6), we have

    (0,0) ∈ rB × ∂F(z̄) + N_{gph Ψ_V}(0, z̄).

Hence, there exist u* ∈ rB and z* ∈ ∂F(z̄) such that (−u*, −z*) ∈ N_{gph Ψ_V}(0, z̄), i.e., −u* ∈ D*Ψ_V(0,z̄)(z*). It then follows from the inclusion of Lemma 3.2 that −u* ∈ N_Λ(ψ(z̄)) and −z* ∈ ∂⟨−u*, ψ⟩(z̄) + N_Ω(z̄); relabeling −u* as u* (the ball rB is symmetric) yields (3.5), which concludes the proof.

Remark 3.4. Under the setting of Remark 3.1, the basic CQ coincides with the usual MFCQ. Further, it is well known that under the MFCQ the set of Lagrange multipliers is bounded; but the bound is not usually provided by the classical technique of deriving KKT conditions via the MFCQ. Herein lies the interesting feature of the approach in Proposition 3.1.

For the rest of this section, we assume that ψ is a real-valued function and Λ = R_−. Then the optimization problem with operator constraint takes the form

    minimize F(z) subject to z ∈ Ω, ψ(z) ≤ 0.    (3.8)

Next, we define the concept of partial calmness for problem (3.8), as introduced in [31] for the optimal value reformulation of the bilevel programming problem.

Definition 3.5. Let z be a local optimal solution of problem (3.8). Problem (3.8) is partially calm at z if there are a neighborhood U of (0,z) and a number λ > 0 such that

    F(z') − F(z) + λ|t| ≥ 0 for all (t,z') ∈ U with z' ∈ Ω and ψ(z') + t ≤ 0.

The next result shows that the concept of partial calmness is nothing but a partial exact penalization of problem (3.8), where z ↦ |ψ(z)| represents the penalty function and λ the penalty coefficient.

Theorem 3.6. [17, Proposition 2.2] Let z be a local optimal solution of problem (3.8). Problem (3.8) is partially calm at z if and only if there exists a number λ > 0 such that z is a local optimal solution of the problem to minimize F(z') + λ|ψ(z')| subject to z' ∈ Ω.

In a more general sense, the concept of partial calmness corresponds to moving a disturbing constraint (disturbing in the sense of leading to the failure of a CQ) into the objective function.
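The exact-penalization content of Theorem 3.6 can be observed numerically: once the penalty coefficient passes a threshold, the penalized minimizer coincides with the constrained one. The instance of (3.8) below is our own toy data, for which the threshold is λ = 2.

```python
# Toy illustration of partial exact penalization (Theorem 3.6), our own data:
#   minimize F(z) = (z - 1)^2  s.t.  z in Omega = R,  psi(z) = z <= 0,
# whose solution is z = 0 with psi(z) = 0.  The penalized problem
#   minimize F(z) + lam * |psi(z)|
# recovers z = 0 once lam >= 2, but not for smaller lam.
import numpy as np

zs = np.linspace(-2.0, 2.0, 4001)

def penalized_minimizer(lam):
    vals = (zs - 1.0) ** 2 + lam * np.abs(zs)
    return zs[np.argmin(vals)]

print(penalized_minimizer(3.0))   # approximately 0.0 (exact penalty)
print(penalized_minimizer(1.0))   # approximately 0.5 (coefficient too small)
```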
To further ease the presentation of the paper, set ϑ(z) = |ψ(z)|; we will then say that problem (3.8) is ϑ-partially calm at z if it is partially calm at z in the sense of Definition 3.5. To close this section, we give some sufficient conditions for problem (3.8) to be ϑ-partially calm.

Proposition 3.2. Let z be a local optimal solution of problem (3.8) such that ψ(z) = 0 (without loss of generality). Problem (3.8) is ϑ-partially calm at z if one of the following assumptions holds:

1. The set-valued mapping Ψ in (3.3) (with Λ = R_−) is calm at (0,z).
2. ∂ψ(z) ∩ (−N_Ω(z)) = ∅.

Proof. One can easily verify that the basic CQ applied to problem (3.8) reduces to the condition in assumption 2 if z is a feasible point satisfying ψ(z) = 0. On the other hand, it follows from Lemma 3.2 that assumption 2 implies assumption 1. Hence, it suffices to prove that problem (3.8) is ϑ-partially calm if assumption 1 holds. First, recall that a set-valued mapping Φ is calm at (u,z) ∈ gph Φ if and only if there exist neighborhoods U of u and V of z and a number L > 0 such that

    d(z', Φ(u)) ≤ L ‖u' − u‖ for all u' ∈ U and z' ∈ Φ(u') ∩ V;

cf. [32]. Hence, under assumption 1, there exist neighborhoods U of 0 and V of z and a constant L > 0 such that

    d(z', Ψ(0)) ≤ L |t| for all t ∈ U and z' ∈ Ψ(t) ∩ V.    (3.9)

On the other hand, we know from [33, Proposition 2.4.3] that, since z is a local optimal solution of problem (3.8), there exist ε > 0 and a neighborhood W of z such that

    F(z) ≤ F(z') + ε d(z', Ψ(0)) for all z' ∈ W.    (3.10)

Combining (3.9) and (3.10) gives the result.

The proof of this result under assumption 1 was already given in [6] for the optimal value reformulation (1.10) of the bilevel programming problem, with (x,y) ↦ f(x,y) − ϕ(x) considered as the penalty function; in that same case, a different proof was suggested in [34]. To close this section, it is worth recalling that the main concern in the following sections is to apply Propositions 3.1 and 3.2 to problems (1.6) and (1.7)-(1.9), in various instances. Hence, the function ψ and the sets Ω and Λ in (3.1) will take various values in the sequel.
4. The classical KKT reformulation

We start this section by recalling that a point (x,y) is a global optimal solution of the bilevel optimization problem (1.1) if and only if there exists u ∈ Λ(x,y), with

    Λ(x,y) = {u ∈ R^p_+ : L(u,x,y) = 0, u⊤ g(x,y) = 0},

such that (u,x,y) is a global optimal solution of the classical KKT reformulation (1.6), cf. [8]. For local solutions, we have already mentioned in the Introduction that a local optimal solution of problem (1.6) may fail to induce a local optimal solution of the initial problem. Nevertheless, if (x,y) is a local optimal solution of problem (1.1),

then for all u ∈ Λ(x,y), the point (u,x,y) is also a local optimal solution of (1.6). Based on this, our main concern in this section is to derive dual necessary optimality conditions for local optimal solutions of problem (1.1) via its classical KKT reformulation (1.6).

Deriving the classical KKT reformulation from the generalized equation (1.4), it is clear that if K(x) = R^m, the MFCQ remains applicable; otherwise, it fails at every feasible point of problem (1.6), cf. [9, 11, 16, 17]. In the framework of MPCCs, the Guignard CQ has been shown to be one of the few CQs directly applicable to (1.6) considered as a usual nonlinear optimization problem, cf. [11]. In the next result, we recall the optimality conditions for problem (1.6) under the Guignard CQ, which holds at a point (u,x,y) ∈ C (C being the feasible set of problem (1.6)) if the Fréchet normal cone to C at (u,x,y) coincides with the polar of the linearization cone

    {d = (d_u, d_x, d_y) ∈ R^p × R^{n+m} : ∇G_i(x)⊤ d_x ≤ 0 for i with G_i(x) = 0;
        ∇g_i(x,y)⊤ (d_x, d_y) ≤ 0 for i with g_i(x,y) = 0;
        ∇L_i(u,x,y)⊤ d = 0 for i = 1,...,m;
        ∇σ(u,x,y)⊤ d = 0},

where σ denotes the function defined by σ(u,x,y) = u⊤ g(x,y).

Theorem 4.1. [35, Theorem 3.5] Let (u,x,y) be a local optimal solution of problem (1.6) and assume that the Guignard CQ is satisfied at (u,x,y). Then there exists a vector (α,β,γ,λ) ∈ R^{k+p+m+1} such that the following conditions hold:

    (1.11)-(1.14), with ζ(β) = β − λu    (4.1)
    β_i ≥ 0, β_i g_i(x,y) = 0, i = 1,...,p    (4.2)
    u_i (∇_y g_i(x,y)⊤ γ) = 0, i = 1,...,p    (4.3)
    λ g_i(x,y) − ∇_y g_i(x,y)⊤ γ ≤ 0, i = 1,...,p.    (4.4)

An example of a bilevel optimization problem for which the Guignard CQ is satisfied can be found in [35]; for more on this CQ and its application to MPCCs, see [11]. The basic CQ, which can be seen as a generalization of the MFCQ (cf. Remark 3.1), may well still be applicable to the classical KKT reformulation, provided the feasible set is written differently.
To motivate our discussion, recall that the failure of the MFCQ is due to the complementarity system

    u ≥ 0, g(x,y) ≤ 0, u⊤ g(x,y) = 0.    (4.5)

We show in the next example that the basic CQ is applicable to problem (1.6) if we assume that the function g is linear in (x,y) and the feasible set is reformulated as an operator constraint, with ψ(u,x,y) = (G(x), L(u,x,y)), Λ = R^k_− × {0_m}, and Ω denoting the set of points (u,x,y) solving the complementarity problem (4.5).

Example 4.1. We consider the bilevel optimization problem to

    minimize x² + y² subject to x ≥ 0, y ∈ S(x),

where S(x) represents the solution set of

    minimize xy + y subject to y ≥ 0.
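The solution of this example can be verified by brute force (the grids below are our own discretization): the lower level objective is xy + y = (x+1)y, so for x ≥ 0 it is minimized over y ≥ 0 at y = 0, and the upper level then minimizes x² over x ≥ 0.

```python
# Numeric check of Example 4.1: S(x) = {0} for x >= 0, the upper level then
# gives (x, y) = (0, 0), and the lower-level stationarity condition
# L(u, x, y) = x + 1 - u = 0 yields the multiplier u = 1.
import numpy as np

xs = np.linspace(0.0, 2.0, 201)       # grid for the upper-level variable
ys = np.linspace(0.0, 2.0, 201)       # grid for the lower-level variable

def S(x, tol=1e-12):
    vals = (x + 1.0) * ys             # lower-level objective x*y + y
    return ys[vals <= vals.min() + tol]

candidates = [(x ** 2 + y ** 2, x, y) for x in xs for y in S(x)]
F, x, y = min(candidates)
u = x + 1.0                           # from L(u, x, y) = x + 1 - u = 0
print((x, y, u))                      # (0.0, 0.0, 1.0)
```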

One can easily check that (0,0) is the optimal solution of the above problem. The classical KKT reformulation of this problem is:

    minimize x² + y² subject to x ≥ 0, x − u + 1 = 0, u ≥ 0, y ≥ 0, uy = 0.

Obviously, the lower level multiplier corresponding to the optimal solution is u = 1; hence the MFCQ fails to hold at (0,0,1). We now show that the basic CQ is satisfied if we set ψ(x,y,u) = (−x, x − u + 1), Λ = R_− × {0}, and Ω = {(x,y,u) ∈ R³ : y ≥ 0, u ≥ 0, yu = 0}. For a point (α,β) ∈ N_Λ(ψ(0,0,1)), i.e., (α,β) ∈ R_+ × R, we have

    (0,0,0) ∈ ∇ψ(0,0,1)⊤ (α,β) + N_Ω(0,0,1)

if and only if α = β and (0,β) ∈ N_Θ(0,1) (with Θ = {(y,u) ∈ R² : y ≥ 0, u ≥ 0, yu = 0}). It follows from Lemma 4.2 that β = 0, and hence that α = 0. This shows that the basic CQ holds at (0,0,1).

By setting v = −g(x,y), and hence introducing a new (dummy) variable into the problem, the idea of the above example can be extended to the more general problem (1.6). The point of this transformation is that the new constraint g(x,y) + v = 0 is moved to the function ψ, so that only the normal cone to the polyhedral set

    Θ = {(v,u) ∈ R^{2p} : v ≥ 0, u ≥ 0, v⊤u = 0}    (4.6)

has to be computed, which is possible without any qualification condition:

Lemma 4.2. [36, Proposition 2.1] Let (v,u) ∈ Θ; then

    N_Θ(v,u) = {(v*,u*) ∈ R^{2p} : v*_i = 0, if v_i > 0 = u_i;
                u*_i = 0, if v_i = 0 < u_i;
                (v*_i < 0 and u*_i < 0) or v*_i u*_i = 0, if v_i = 0 = u_i}.

Thanks to the aforementioned transformation, a different kind of optimality conditions can be derived for the classical KKT reformulation of the bilevel programming problem.

Theorem 4.3. Let (u,x,y) be a local optimal solution of problem (1.6). Then there exists (α,β,γ) ∈ R^{k+p+m}, with ‖(α,β,γ)‖ ≤ r (for some r > 0), such that the following conditions hold:

    (1.11)-(1.14), with ζ(β) = β    (4.7)
    β_i = 0 for all i with g_i(x,y) < 0 = u_i    (4.8)
    ∇_y g_i(x,y)⊤ γ = 0 for all i with g_i(x,y) = 0 < u_i    (4.9)
    (β_i > 0 and ∇_y g_i(x,y)⊤ γ > 0) or β_i (∇_y g_i(x,y)⊤ γ) = 0 for all i with g_i(x,y) = 0 = u_i,    (4.10)

provided there is no nonzero vector (α,β,γ) ∈ R^{k+p+m} such that

    ∇G(x)⊤ α + ∇_x g(x,y)⊤ β + ∇_x L(u,x,y)⊤ γ = 0    (4.11)
    ∇_y g(x,y)⊤ β + ∇_y L(u,x,y)⊤ γ = 0    (4.12)
    α ≥ 0, α⊤ G(x) = 0    (4.13)
    β_i = 0 for all i with g_i(x,y) < 0 = u_i    (4.14)
    ∇_y g_i(x,y)⊤ γ = 0 for all i with g_i(x,y) = 0 < u_i    (4.15)
    (β_i > 0 and ∇_y g_i(x,y)⊤ γ > 0) or β_i (∇_y g_i(x,y)⊤ γ) = 0 for all i with g_i(x,y) = 0 = u_i.    (4.16)

Proof. Set ψ(v,u,x,y) = (G(x), g(x,y) + v, L(u,x,y)), Λ = R^k_− × {0_{p+m}} and Ω = Θ × R^{n+m}. Let (u,x,y) be a local optimal solution of problem (1.6). One can easily verify that there is a vector v such that (v,u,x,y) is a local optimal solution of the problem to

    minimize F(x,y) subject to (v,u,x,y) ∈ Ω ∩ ψ^{-1}(Λ).    (4.17)

We have

    N_Ω(v,u,x,y) = N_Θ(v,u) × {0_{n+m}}    (4.18)
    N_Λ(ψ(v,u,x,y)) = {(α,β,γ) ∈ R^{k+p+m} : α ≥ 0, α⊤ G(x) = 0}    (4.19)
    ∇ψ(v,u,x,y)⊤ (α,β,γ) = (β, A(α,β,γ)),    (4.20)

where

    A(α,β,γ) = ( ∇_y g(x,y) γ,
                 ∇G(x)⊤ α + ∇_x g(x,y)⊤ β + ∇_x L(u,x,y)⊤ γ,
                 ∇_y g(x,y)⊤ β + ∇_y L(u,x,y)⊤ γ ).    (4.21)

It follows from equalities (4.18)-(4.20) that the basic CQ applied to problem (4.17) at (v,u,x,y) can equivalently be formulated as: there is no nonzero vector (α,β,γ) ∈ R^{k+p+m} such that (4.11)-(4.13) and the inclusion

    (−β, −∇_y g(x,y) γ) ∈ N_Θ(v,u)    (4.22)

are satisfied. Noting that v_i = −g_i(x,y) for i = 1,...,p, it follows from Lemma 4.2 that the basic CQ applied to problem (4.17) is equivalent to: there is no nonzero vector (α,β,γ) ∈ R^{k+p+m} such that (4.11)-(4.16) hold. Hence, from Proposition 3.1, there exists (α,β,γ) ∈ R^{k+p+m}, with ‖(α,β,γ)‖ ≤ r (for some r > 0), such that (4.7) and (4.22) are satisfied, given that the objective function of problem (4.17) is independent of (v,u). The result then follows by interpreting inclusion (4.22), as already done above.

The technique used in this proof, i.e., transforming the nonlinear complementarity system (4.5) into a linear one, has been used on various occasions for the MPCC; see, e.g., [10, 36].
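The case analysis of Lemma 4.2, which drives the interpretation of inclusion (4.22) above, can be transcribed directly into a membership test; the sketch below is our own, with the three branches mirroring the three index sets of the lemma.

```python
# Membership test for the Mordukhovich normal cone to the complementarity set
# Theta = {(v, u) : v >= 0, u >= 0, v'u = 0}, following Lemma 4.2.
def in_normal_cone_theta(v, u, vstar, ustar, tol=1e-12):
    """Check (vstar, ustar) in N_Theta(v, u) componentwise, (v, u) in Theta."""
    for vi, ui, vs, us in zip(v, u, vstar, ustar):
        if vi > tol and abs(ui) <= tol:        # v_i > 0 = u_i: v*_i = 0
            if abs(vs) > tol:
                return False
        elif abs(vi) <= tol and ui > tol:      # v_i = 0 < u_i: u*_i = 0
            if abs(us) > tol:
                return False
        else:                                  # biactive: v_i = 0 = u_i
            if not ((vs < -tol and us < -tol) or abs(vs * us) <= tol):
                return False
    return True

# biactive pair (v, u) = (0, 0): both-negative or product-zero directions
print(in_normal_cone_theta([0.0], [0.0], [-1.0], [-2.0]))  # True
print(in_normal_cone_theta([0.0], [0.0], [1.0], [1.0]))    # False
# the pair (v, u) = (0, 1) from Example 4.1: forces u*-component to vanish
print(in_normal_cone_theta([0.0], [1.0], [5.0], [0.0]))    # True
print(in_normal_cone_theta([0.0], [1.0], [0.0], [0.1]))    # False
```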
One can easily check that the optimality conditions (4.7)-(4.10) are identical to those obtained in [5] or [37] under various CQs, among which CQ (b) of [5, Theorem 4.1], or (b) of [37, Theorem 5.1], coincides with the CQ in Theorem 4.3. It should be mentioned, though, that in the latter case this CQ is recovered from a perspective

different from that of [5, 37], where an enhanced generalized equation formulation of the KKT conditions of the lower level problem was used to design the CQ.

We now introduce a different way to choose ψ, Ω and Λ that leads to a new CQ allowing us to obtain the same optimality conditions as in Theorem 4.3. To proceed, recall that the complementarity system (4.5) is equivalent to

    u_i ≥ 0, g_i(x,y) ≤ 0, u_i g_i(x,y) = 0, i = 1,...,p,

meaning that

    (u_i, g_i(x,y)) ∈ Λ_i = {(a,b) ∈ R² : a ≥ 0, b ≤ 0, ab = 0}, i = 1,...,p.

Proposition 4.1. Let (u,x,y) be a local optimal solution of problem (1.6). Then there exists (α,β,γ) ∈ R^{k+p+m}, with ‖β‖ ≤ r (for some r > 0), such that (4.7)-(4.10) are satisfied, provided the following conditions hold:

1. The following set-valued mapping is calm at (0,0,u,x,y):

    M_1(t_1,t_2) = {(u,x,y) ∈ R^{p+n+m} : G(x) + t_1 ≤ 0, L(u,x,y) + t_2 = 0}.

2. β = 0 and ∇_y g(x,y) γ = 0 for every vector (α,β,γ) ∈ R^{k+p+m} satisfying (4.11)-(4.16).

Proof. We consider the set Ω = {(u,x,y) ∈ R^{p+n+m} : G(x) ≤ 0, L(u,x,y) = 0} and the function ψ(u,x,y) = (u_i, g_i(x,y))_{i=1,...,p}. If (u,x,y) is a local optimal solution of problem (1.6), then, in other words, (u,x,y) is a local optimal solution of the problem to

    minimize F(x,y) subject to (u,x,y) ∈ Ω ∩ ψ^{-1}(Λ),    (4.23)

where Λ = Λ_1 × ... × Λ_p, with Λ_i = {(a,b) ∈ R² : a ≥ 0, b ≤ 0, ab = 0} for i = 1,...,p. Applying Proposition 3.1 to (4.23), there exists a vector (δ,β) ∈ R^{2p} with ‖(δ,β)‖ ≤ r (for some r > 0) such that

    (δ,β) ∈ N_Λ(ψ(u,x,y))    (4.24)
    0 ∈ (0_p, ∇F(x,y)) + (δ, ∇g(x,y)⊤ β) + N_Ω(u,x,y),    (4.25)

provided there is no nonzero vector (δ,β) ∈ R^{2p} such that

    (δ,β) ∈ N_Λ(ψ(u,x,y))    (4.26)
    0 ∈ (δ, ∇g(x,y)⊤ β) + N_Ω(u,x,y).    (4.27)

It follows, under assumption 1 (see Theorem 2.4), that

    N_Ω(u,x,y) = {A(α,β,γ) − (0, ∇_x g(x,y)⊤ β, ∇_y g(x,y)⊤ β) : α ≥ 0, α⊤ G(x) = 0},

where A(α,β,γ) denotes the vector given in (4.21). Hence, either from (4.25) or from (4.27), it follows that there exists γ ∈ R^m such that δ = −∇_y g(x,y) γ; a fortiori,

either (4.24) or (4.26) implies that there exists γ ∈ R^m such that (−∇_y g(x,y)γ, β) ∈ N_Λ(ψ(u,x,y)). The result then follows by noting that N_Λ(ψ(u,x,y)) is obtained from Lemma 4.2, since

N_Λ(ψ(u,x,y)) = N_{Λ₁}(u₁, −g₁(x,y)) × ... × N_{Λ_p}(u_p, −g_p(x,y)).

The approach in the above result is similar to the one used in [38] for a mathematical program with vanishing constraints.

For the rest of this section, we focus our attention on the concept of partial calmness. Precisely, we start by showing how a combination of partial calmness and the basic CQ can lead to the optimality conditions of Theorem 4.1. The principle of this result is very simple: since the failure of the MFCQ is due to the complementarity system, moving the function σ to the upper level objective function paves the way to the application of the same CQ.

Proposition 4.2. Let (u,x,y) be a local optimal solution of problem (1.6). Then there exists (α,β,γ,λ) ∈ R^{k+p+m+1}, with ‖(α,β,γ)‖ ≤ r (for some r > 0) and λ > 0, such that (4.1)-(4.4) are satisfied, provided the following conditions hold:
1. Problem (1.6) is σ-partially calm at (u,x,y).
2. There is no nonzero vector (α,β,γ) ∈ R^{k+p+m} such that (4.11)-(4.13) and the following conditions are fulfilled:

β_i ≥ 0, β_i g_i(x,y) = 0, i = 1,...,p,  (4.28)
u_i (∇_y g_i(x,y)γ) = 0, i = 1,...,p,  (4.29)
∇_y g_i(x,y)γ ≥ 0, i = 1,...,p.  (4.30)

Proof. Let (u,x,y) be a local optimal solution of problem (1.6) and let assumption 1 of the theorem be satisfied. Then it follows from Theorem 3.6 that there exists λ > 0 such that (u,x,y) is also a local optimal solution of the problem

minimize F(x,y) − λuᵀg(x,y) subject to (u,x,y) ∈ Ω ∩ ψ⁻¹(Λ),  (4.31)

where Ω = R^p_+ × R^{n+m}, Λ = R^k_− × R^p_− × {0_m} and ψ(u,x,y) = (G(x), g(x,y), L(u,x,y)). One can easily check that:

N_Ω(u,x,y) = {η ∈ R^p | η ≤ 0, ηᵀu = 0} × {0_{n+m}},  (4.32)
N_Λ(ψ(u,x,y)) = {(α,β,γ) ∈ R^{k+p+m} | (4.2) and (4.13) are satisfied},  (4.33)
∇ψ(u,x,y)ᵀ(α,β,γ) = A(α,β,γ),  (4.34)

where A(α,β,γ) is the matrix in (4.21).
It follows from equalities (4.32)-(4.34) that the basic CQ applied to problem (4.31) at (u,x,y) can equivalently be formulated as: there is no nonzero vector (α,β,γ) ∈ R^{k+p+m} and vector η ∈ R^p (dummy multiplier) such that (4.11)-(4.13), (4.28) and

∇_y g(x,y)γ + η = 0, η ≤ 0, ηᵀu = 0  (4.35)

are satisfied, respectively. It clearly follows that assumption 2 corresponds to the basic CQ applied to problem (4.31), where (4.29)-(4.30) are recovered from (4.35). Hence, applying Proposition 3.1 to problem (4.31), it follows from (4.32)-(4.34) that there exists (α,β,γ) ∈ R^{k+p+m}, with ‖(α,β,γ)‖ ≤ r (for some r > 0) and λ > 0, such that (4.1)-(4.2) and

−λg(x,y) + ∇_y g(x,y)γ + η = 0, η ≤ 0, ηᵀu = 0  (4.36)

are satisfied. Hence, (4.3)-(4.4) are regained from system (4.36) while noting that the feasibility of (u,x,y) implies that uᵀg(x,y) = 0.

In the next result, we show that the CQ in assumption 2 of the previous theorem can be weakened if the perturbation map of the joint upper and lower level feasible set is calm.

Theorem 4.4. Let (u,x,y) be a local optimal solution of problem (1.6). Then there exists (α,β,γ,λ) ∈ R^{k+p+m+1}, with ‖γ‖ ≤ r (for some r > 0) and λ > 0, such that (4.1)-(4.4) are satisfied, provided that the following assumptions hold:
1. Problem (1.6) is σ-partially calm at (u,x,y).
2. The following set-valued mapping is calm at (0,0,x,y):
M₂(t₁,t₂) = {(x,y) ∈ R^{n+m} | G(x)+t₁ ≤ 0, g(x,y)+t₂ ≤ 0}.
3. There is no nonzero vector γ ∈ R^m such that conditions (4.11)-(4.13) and (4.28)-(4.30) are fulfilled.

Proof. Set Ω = {(u,x,y) ∈ R^{p+n+m} | u ≥ 0, G(x) ≤ 0, g(x,y) ≤ 0}, and let (u,x,y) be a local optimal solution of problem (1.6). Then, under assumption 1, there exists λ > 0 such that (u,x,y) is also a local optimal solution of

minimize F(x,y) − λuᵀg(x,y) subject to (u,x,y) ∈ Ω ∩ L⁻¹(0).

Hence, it follows from Proposition 3.1 that if

[0 ∈ ∇L(u,x,y)ᵀγ + N_Ω(u,x,y), γ ∈ R^m] ⟹ γ = 0,  (4.37)

then there exists γ ∈ R^m with ‖γ‖ ≤ r (for some r > 0) such that

0 ∈ (−λg(x,y), ∇F(x,y)) + ∇L(u,x,y)ᵀγ + N_Ω(u,x,y).  (4.38)

Now, observe that Ω = R^p_+ × Ω̃ with Ω̃ = {(x,y) ∈ R^{n+m} | G(x) ≤ 0, g(x,y) ≤ 0}. It follows from Theorem 2.4 that

N_Ω̃(x,y) = {(∇G(x)ᵀα + ∇_x g(x,y)ᵀβ, ∇_y g(x,y)ᵀβ) | (1.13) and (4.2) hold}  (4.39)

under assumption 2.
Hence, we regain assumption 3 and the desired optimality conditions by applying the last equality to (4.37) and (4.38), while noting that u* ∈ N_{R^p_+}(u) if and only if u* ≤ 0 and u*ᵀu = 0.

To close this section, it seems important to mention some sufficient conditions for the partial calmness used in Proposition 4.2 and Theorem 4.4. For this purpose, a slightly modified notion of uniform weak sharp minimum was introduced in [6]. We recall that the initial definition was introduced in [31].

Definition 4.5. The family {(1.2) | x ∈ X} is said to have a uniformly weak sharp minimum if there exist c > 0 and a neighborhood N(x) of S(x), x ∈ X, such that

f(x,y) − ϕ(x) ≥ c d(y, S(x)), ∀y ∈ K(x) ∩ N(x), ∀x ∈ X.

If we set N(x) = R^m, we obtain the definition in [31]. A uniform weak sharp minimum, as given in the above definition, was shown to exist under the uniform calmness; see [6].

Theorem 4.6. Let (u,x,y) be a local optimal solution of problem (1.6). Then problem (1.6) is σ-partially calm at (u,x,y), provided one of the following assumptions holds:
1. The family {(1.2) | x ∈ X} has a uniform weak sharp minimum.
2. The set A = {(u,x,y) ∈ R^p_+ × R^{n+m} | G(x) ≤ 0, L(u,x,y) = 0, g(x,y) ≤ 0} is convex and (u, g(x,y)) ∉ bd N_A(u,x,y).

Proof. Under assumption 1, the result follows from [17]. Under assumption 2, it follows from [32] that the set-valued mapping

M₃(t) = {(u,x,y) ∈ A | uᵀg(x,y) + t ≤ 0}

is calm at (0,u,x,y). Hence, the result follows from Proposition 3.2.

5. The primal KKT reformulation

In this section, we are interested in the primal KKT reformulation (1.7) of the bilevel programming problem. Unlike the classical KKT reformulation, the primal KKT reformulation is equivalent to the initial problem for local and global optimal solutions, cf. [35]. It is worth mentioning the strategic role of the apparently redundant constraint g(x,y) ≤ 0, present in the primal KKT reformulation (1.7), which helps to avoid situations where N_{K(x)}(y) = ∅; such situations may be harmful for the mentioned close relationship with problem (1.1). From now on, we focus our attention on deriving KKT-type optimality conditions for problem (1.7).
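Definition 4.5 can be illustrated on a toy family of lower-level problems. The example below is our own, not one from the paper: for f(x,y) = |y − x| and K(x) = R we have S(x) = {x} and ϕ(x) = 0, and the weak sharp minimum inequality holds globally (so even with N(x) = R^m, as in [31]) with modulus c = 1.

```python
# Toy illustration (our own example): the family min_y f(x,y) with
# f(x,y) = |y - x| and K(x) = R satisfies
#   f(x,y) - phi(x) >= c * d(y, S(x))   for all y, x,  with c = 1,
# since S(x) = {x}, phi(x) = 0 and f(x,y) = d(y, S(x)) exactly.

def f(x, y):
    return abs(y - x)

def phi(x):           # optimal value function of the toy lower level
    return 0.0

def dist_to_S(x, y):  # d(y, S(x)) with S(x) = {x}
    return abs(y - x)

c = 1.0
samples = [(x / 7.0, y / 5.0) for x in range(-20, 21) for y in range(-20, 21)]
assert all(f(x, y) - phi(x) >= c * dist_to_S(x, y) - 1e-12 for x, y in samples)
```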
Hence, the following notation will be standard:

ψ(x,y) = (x, y, −∇_y f(x,y)), Ω = {(x,y) | G(x) ≤ 0, g(x,y) ≤ 0} and Λ = gph N_K.  (5.1)

To obtain the closedness of Λ, required in order to apply the basic CQ, we introduce the concept of inner semicontinuity for a set-valued mapping; see [18] for more details. The set-valued mapping Φ is said to be inner semicontinuous at (x,y) ∈ gph Φ if for every sequence x_k → x there is a sequence y_k ∈ Φ(x_k) such that y_k → y. Φ is said to be inner semicontinuous if it is inner semicontinuous at every point of its graph. In the following result, we show that Λ is closed if the lower level feasible set mapping K is inner semicontinuous.

Proposition 5.1. Assume that K is inner semicontinuous. Then gph N_K is closed as a subset of gph K × R^m, i.e., if (x_k,y_k) → (x,y) (with (x_k,y_k) ∈ gph K) and z_k → z with z_k ∈ N_K(x_k,y_k), then z ∈ N_K(x,y).

Proof. Let (x_k,y_k) → (x,y) (with (x_k,y_k) ∈ gph K) and z_k → z with z_k ∈ N_K(x_k,y_k); then z_k ∈ N_{K(x_k)}(y_k). Since K(x_k) is assumed to be convex for all k, we have

⟨z_k, u_k − y_k⟩ ≤ 0, ∀u_k ∈ K(x_k), k ∈ N.  (5.2)

Given that K is inner semicontinuous, for an arbitrary v ∈ K(x) there exists v_k ∈ K(x_k) such that v_k → v. It follows from (5.2) that ⟨z_k, v_k − y_k⟩ ≤ 0 for all k ∈ N. This implies that ⟨z, v − y⟩ ≤ 0 for all v ∈ K(x), which concludes the proof.

This result can be seen as an extension to parametric sets of the result stated in [1, Proposition 3.3], with the difference that our cones are defined in the sense of convex analysis. But it may well be extended to the case where N_{K(x)}(y) is the more general Mordukhovich normal cone used in [1], when K(x) = K for all x. Considering the case where the set-valued mapping K is defined as in (1.3), the concept of inner semicontinuity can be brought to usual terms through the following well-known result; see e.g. [39]. Beforehand, let us mention that the MFCQ will be said to hold at y ∈ K(x), x ∈ X (resp. at x), if the MFCQ is satisfied at y for the constraint g(x,y) ≤ 0 (resp. at x for the constraint G(x) ≤ 0).

Lemma 5.1. If the MFCQ holds at y ∈ K(x), x ∈ X, then K is inner semicontinuous at (x,y).

We are almost ready to state the optimality conditions of the primal KKT reformulation under the basic CQ. In order to simplify the understanding of the proof, the following lemma, implicitly used in [39], is necessary. Since the proof is obvious, we do not include it here.

Lemma 5.2. The MFCQ holds at (x,y) for the constraint G(x,y) := (G(x), g(x,y)) ≤ 0 if the MFCQ holds at x and at y ∈ K(x), x ∈ X, respectively.

Theorem 5.3. Let (x,y) be a local optimal solution of problem (1.7). Then there exists (α,β,γ) ∈ R^{k+p+m}, with ‖γ‖ ≤ r (for some r > 0), such that (1.13), (4.2) and

(0,0) ∈ (∇_x F(x,y) + ∇G(x)ᵀα + ∇_x g(x,y)ᵀβ + ∇²_{xy} f(x,y)ᵀγ, ∇_y F(x,y) + ∇_y g(x,y)ᵀβ + ∇²_{yy} f(x,y)ᵀγ) + D*N_K(ψ(x,y))(γ)

are satisfied, provided that the following conditions hold:
1. The MFCQ holds at x;
2.
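The limiting argument in the proof of Proposition 5.1 can be checked numerically on a simple moving set. The choice K(x) = [x, +∞) and all helper names below are our own illustration, not data from the paper; for this K, one has N_{K(x)}(y) = {0} if y > x and N_{K(x)}(y) = (−∞, 0] if y = x.

```python
# Numerical illustration of Proposition 5.1 on the toy moving set
# K(x) = [x, +inf): K is inner semicontinuous, and limits of normals
# z_k in N_{K(x_k)}(y_k) remain normals at the limit point.

def in_normal_cone(x, y, z, tol=1e-9):
    """z in N_{K(x)}(y) for K(x) = [x, inf): z <= 0 and z*(y - x) = 0."""
    return y >= x - tol and z <= tol and abs(z * (y - x)) <= tol

# Sequence (x_k, y_k) -> (0, 0) on gph K with constant normals z_k = -1.
seq = [(1.0 / k, 1.0 / k, -1.0) for k in range(1, 200)]
assert all(in_normal_cone(x, y, z) for x, y, z in seq)

# The limit of the normals is again a normal at the limit point:
assert in_normal_cone(0.0, 0.0, -1.0)

# The variational inequality <z, v - y> <= 0 also holds for sampled v in K(x):
x, y, z = 0.0, 0.0, -1.0
assert all(z * (v - y) <= 1e-12 for v in [x + t / 10.0 for t in range(0, 50)])
```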
The MFCQ holds at all y ∈ K(x), x ∈ X;
3. The basic CQ (for ψ, Ω, Λ in (5.1)) holds at (x,y).

Proof. Under assumption 2, the optimization problem

minimize F(x,y) subject to (x,y) ∈ Ω ∩ ψ⁻¹(Λ)

is well-defined as an optimization problem with operator constraint (i.e., Λ is closed), cf. Proposition 5.1 and Lemma 5.1. Hence, for a local optimal solution (x,y), assumption 3 implies that there exists (δ,η,γ) ∈ R^{n+2m}, with ‖γ‖ ≤ r (for some r > 0), such that

(δ,η,γ) ∈ N_Λ(ψ(x,y)),  (5.3)
0 ∈ ∇F(x,y) + ∇ψ(x,y)ᵀ(δ,η,γ) + N_Ω(x,y).  (5.4)

Condition (5.4) is equivalent to

(0,0) ∈ (∇_x F(x,y) + δ + ∇²_{xy} f(x,y)ᵀ(−γ), ∇_y F(x,y) + η + ∇²_{yy} f(x,y)ᵀ(−γ)) + N_Ω(x,y).

Under assumptions 1 and 2, it follows from Lemma 5.2 that the expression of N_Ω(x,y) is obtained as in (4.39). Hence, in addition to (δ,η,γ), there also exists (α,β) ∈ R^{k+p} satisfying (1.13), (4.2) and such that

(0,0) ∈ (∇_x F(x,y) + ∇G(x)ᵀα + ∇_x g(x,y)ᵀβ + ∇²_{xy} f(x,y)ᵀ(−γ), ∇_y F(x,y) + ∇_y g(x,y)ᵀβ + ∇²_{yy} f(x,y)ᵀ(−γ)) + {(δ,η)}.

The result follows by observing that, from the definition of the coderivative (see (2.1)), (5.3) implies that (δ,η) ∈ D*N_K(ψ(x,y))(−γ). The result then follows by replacing −γ by γ.

Optimality conditions similar or closely related to those of the above theorem have also been given in [2, 5, 14, 29, 40, 41, 42]. One interesting feature of Theorem 5.3 is that it results from an application of Proposition 3.1; additionally, it addresses the issue of the closedness of Λ, which is also important in some of the aforementioned papers but seems not to have been explicitly raised there. We also recall that, in the case where the set-valued mapping K takes more general values, assumption 2 can simply be replaced by the inner semicontinuity, which can be guaranteed via the coderivative criterion of Mordukhovich [18, 20]. In the case where K is defined as in (1.3), the interested reader is referred to [2] for the computation of D*N_K(ψ(x,y))(γ).

For the rest of this section, we focus on the case where K is independent of x, i.e., when K is defined as in (1.8). The estimation of the coderivative term D*N_K(ψ(x,y))(γ) in the latter case has been the special focus of the recent paper [43]. It is clear that we will henceforth concentrate on problem (1.9), where

ψ(x,y) = (y, −∇_y f(x,y)), Ω = {(x,y) | G(x) ≤ 0, g(y) ≤ 0} and Λ = gph N_K.  (5.5)

Unless otherwise stated, the basic CQ will, from here on, refer to the CQ in (3.2), with the data in (5.5). It is worth mentioning that in this case the set Λ = gph N_K is automatically closed as a subset of K × R^m; cf.
[1, Proposition 3.3].

Proposition 5.2. Let (χ,v) ∈ gph N_{R^p_−}. Then

N_{gph N_{R^p_−}}(χ,v) = {(χ*,v*) ∈ R^{2p} | (−χ*,v*) ∈ N_Θ(−χ,v)},

where Θ is given in (4.6).

Proof. We start by noting that

gph N_{R^p_−} = {(χ,v) ∈ R^{2p} | χ ≤ 0, v ≥ 0, χᵀv = 0} = {(χ,v) ∈ R^{2p} | (−χ,v) ∈ Θ}.
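For concreteness, the branches of the limiting normal cone to Θ = {(a,b) | a ≥ 0, b ≥ 0, ab = 0} that underlie the result above can be sketched as follows. This is the standard complementarity-set formula; the function name and tolerance are our own, and the sketch is for illustration only.

```python
# Hedged sketch of the well-known branch formula for the limiting
# (Mordukhovich) normal cone to Theta = {(a,b): a >= 0, b >= 0, ab = 0}:
#   a > 0 = b:  N = {0} x R
#   b > 0 = a:  N = R x {0}
#   a = 0 = b:  N = {(s,t): st = 0} U {(s,t): s <= 0, t <= 0}

def in_N_Theta(a, b, s, t, tol=1e-10):
    """(s, t) in N_Theta(a, b), for (a, b) in Theta."""
    if a > tol and abs(b) <= tol:        # branch a > 0 = b
        return abs(s) <= tol             # N = {0} x R
    if abs(a) <= tol and b > tol:        # branch b > 0 = a
        return abs(t) <= tol             # N = R x {0}
    # biactive branch a = 0 = b
    return abs(s * t) <= tol or (s <= tol and t <= tol)

assert in_N_Theta(2.0, 0.0, 0.0, -5.0)   # a > 0: normals are vertical
assert not in_N_Theta(2.0, 0.0, 1.0, 0.0)
assert in_N_Theta(0.0, 0.0, -1.0, -1.0)  # biactive: third quadrant ...
assert in_N_Theta(0.0, 0.0, 3.0, 0.0)    # ... or one zero component
assert not in_N_Theta(0.0, 0.0, 1.0, 1.0)
```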

This means that gph N_{R^p_−} = ϑ⁻¹(Θ), where ϑ(χ,v) = (−χ,v), and for (χ,v) ∈ R^{2p} one obviously has

∇ϑ(χ,v) = [−I_p, O; O, I_p],

with I_p and O denoting the p×p identity and zero matrix, respectively. Hence, the Jacobian matrix ∇ϑ(χ,v) is square and nonsingular, and it follows from [22, Corollary 2.12] that

N_{gph N_{R^p_−}}(χ,v) = ∇ϑ(χ,v)ᵀ N_Θ(ϑ(χ,v)),

given that gph N_{R^p_−} and Θ are closed sets. The result then follows.

Exploiting the polyhedrality of gph N_{R^p_−} and Θ, the equality in the above result can also be proven, at least in the case where χ = 0, by using a combination of [44, Proposition 1] and [24, Theorem 5]. We now describe an element of D*N_K(ψ(x,y))(γ) in a way different from that of [43].

Corollary 5.4. Consider (x,y) ∈ R^{n+m} such that ψ(x,y) ∈ Λ. Let δ ∈ D*N_K(ψ(x,y))(γ), and assume that the following conditions hold:
1. The MFCQ is satisfied at y;
2. The following set-valued mapping is calm at (0,y,u), for all u ∈ Λ(x,y):
M_o(t) = {(y,u) ∈ R^{m+p} | (g(y),u) + t ∈ gph N_{R^p_−}}.
Then there exists u ∈ Λ(x,y) such that δ = (∇²(uᵀg)(y))γ + ∇g(y)ᵀβ, where

β_i = 0 for all i: g_i(y) < 0 = u_i,  (5.6)
∇g_i(y)γ = 0 for all i: g_i(y) = 0 < u_i,  (5.7)
either (β_i > 0 and ∇g_i(y)γ > 0) or β_i(∇g_i(y)γ) = 0 for all i: g_i(y) = 0 = u_i.  (5.8)

Proof. Let δ ∈ D*N_K(ψ(x,y))(γ). It follows from [43, Theorem 3.3] that, under assumptions 1 and 2, there exists u ∈ Λ(x,y) such that

δ ∈ (∇²(uᵀg)(y))γ + ∇g(y)ᵀ D*N_{R^p_−}(g(y),u)(∇g(y)γ);

i.e., there exists β ∈ R^p such that δ = (∇²(uᵀg)(y))γ + ∇g(y)ᵀβ, with (β, −∇g(y)γ) ∈ N_{gph N_{R^p_−}}(g(y),u). The result then follows from Proposition 5.2.

It should nevertheless be mentioned that the above expression of an element of D*N_K(ψ(x,y))(γ) must be equivalent to that of [43], given that the difference relies on the interpretation of an element of D*N_{R^p_−}(g(y),u)(∇g(y)γ), which in our case results from Proposition 5.2.
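The index-set conditions (5.6)-(5.8) lend themselves to a small per-index membership check. The sketch below is our own illustration (names and tolerance are assumptions, not from the paper), under the sign conventions g_i(y) ≤ 0, u_i ≥ 0.

```python
def branch_ok(g_i, u_i, beta_i, dg_gamma_i, tol=1e-10):
    """Check conditions (5.6)-(5.8) for one index i, given g_i(y), u_i,
    beta_i and the scalar product of grad g_i(y) with gamma (`dg_gamma_i`)."""
    if g_i < -tol and abs(u_i) <= tol:          # (5.6): g_i < 0 = u_i
        return abs(beta_i) <= tol
    if abs(g_i) <= tol and u_i > tol:           # (5.7): g_i = 0 < u_i
        return abs(dg_gamma_i) <= tol
    if abs(g_i) <= tol and abs(u_i) <= tol:     # (5.8): g_i = 0 = u_i
        return (beta_i > tol and dg_gamma_i > tol) or \
               abs(beta_i * dg_gamma_i) <= tol
    return False                                # pair not feasible

assert branch_ok(-1.0, 0.0, 0.0, 7.0)   # inactive constraint: beta_i = 0
assert not branch_ok(-1.0, 0.0, 2.0, 0.0)
assert branch_ok(0.0, 3.0, 4.0, 0.0)    # strongly active: gradient term vanishes
assert branch_ok(0.0, 0.0, 2.0, 5.0)    # biactive, first alternative of (5.8)
assert branch_ok(0.0, 0.0, 0.0, -3.0)   # biactive, product-zero alternative
```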
The above result allows us to derive optimality conditions for problem (1.9) which are identical to those in Theorem 4.3, obtained for the classical KKT reformulation, provided the additional constraint g(y) ≤ 0 is deleted. To the best of our knowledge, these conditions are new for an OPEC.

Theorem 5.5. Let (x,y) be a local optimal solution of problem (1.9). Assume that the following assumptions are satisfied:
1. Assumption 1 in Theorem 5.3, and assumptions 1 and 2 in Corollary 5.4;
2. The basic CQ holds at (x,y).
Then there exists (α,β,γ,ξ,u) ∈ R^{k+p+m+2p}, with ‖γ‖ ≤ r (for some r > 0), such that (1.15)-(1.18) and (5.6)-(5.8) are satisfied, with β therein replaced by

β + ξ,  (5.9)
where ξ ≥ 0, ξᵀg(y) = 0.  (5.10)

Proof. Proceeding as in the proof of Theorem 5.3, it follows that assumption 2 implies that there exists (δ,γ) ∈ R^{2m}, with ‖γ‖ ≤ r (for some r > 0), such that

(δ,γ) ∈ N_Λ(ψ(x,y)),  (5.11)
0 ∈ ∇F(x,y) + ∇ψ(x,y)ᵀ(δ,γ) + N_Ω(x,y).  (5.12)

Condition (5.12) is equivalent to

(0,0) ∈ (∇_x F(x,y) + ∇²_{xy} f(x,y)ᵀ(−γ), ∇_y F(x,y) + δ + ∇²_{yy} f(x,y)ᵀ(−γ)) + N_Ω(x,y),

with δ ∈ D*N_K(ψ(x,y))(−γ), cf. (5.11). The result then follows from Corollary 5.4.

It is clear that for ξ = 0 we recover the optimality conditions in Theorem 4.3, provided K is independent of x. As mentioned above, one can have ξ = 0 if the additional constraint g(y) ≤ 0 is dropped; something which is possible when only necessary optimality conditions for the bilevel optimization problem (1.1) are sought.

We now introduce a concept of partial calmness for the primal KKT reformulation of the bilevel programming problem that will allow us to substantially weaken the basic CQ, while still being able to obtain the optimality conditions in Theorem 5.5. To bring the concept of partial calmness of Definition 3.5 to the primal KKT reformulation (1.9), we observe that

ψ(x,y) ∈ Λ ⟺ ρ(x,y) := (d_Λ ∘ ψ)(x,y) = 0,

where d_Λ denotes the distance function to Λ. Hence, the primal KKT reformulation can equivalently be written as

minimize F(x,y) subject to (x,y) ∈ Ω, ρ(x,y) = 0.  (5.13)

Proposition 5.3. Let (x,y) be a local optimal solution of problem (1.9). Then the optimality conditions in Theorem 5.5 are satisfied under the following conditions:
1. Assumption 1 in Theorem 5.5;
2. ρ-partial calmness at (x,y).
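The function ρ can be made concrete in one dimension. Below is a minimal sketch, assuming K = R_+ so that Λ = gph N_{R_+} = {(a,0) | a ≥ 0} ∪ {(0,v) | v ≤ 0}, paired with a map of the form ψ(x,y) = (y, −x − 1) as in Example 5.1 further below (up to our reconstruction of its signs); both choices are for illustration only.

```python
import math

def dist_gph_N_Rplus(p, q):
    """Distance from (p, q) to Lambda = gph N_{R_+}
       = {(a, 0): a >= 0} U {(0, v): v <= 0} (union of two closed rays)."""
    d1 = math.hypot(min(p, 0.0), q)   # distance to the branch {(a,0): a >= 0}
    d2 = math.hypot(p, max(q, 0.0))   # distance to the branch {(0,v): v <= 0}
    return min(d1, d2)

def rho(x, y):
    """rho(x, y) = d_Lambda(psi(x, y)) for the toy map psi(x, y) = (y, -x - 1)."""
    return dist_gph_N_Rplus(y, -x - 1.0)

assert rho(0.0, 0.0) == 0.0           # psi(0,0) = (0,-1) lies in Lambda
assert abs(dist_gph_N_Rplus(1.0, -2.0) - 1.0) < 1e-12
assert dist_gph_N_Rplus(3.0, 0.0) == 0.0
```

Points with ρ(x,y) = 0 are exactly those satisfying the lower-level generalized equation, which is what the equivalent reformulation (5.13) exploits.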

Proof. Let (x,y) be a local optimal solution of (1.9). Obviously, (x,y) is also a local optimal solution of problem (5.13). Under assumption 2, it follows from Theorem 3.6 that there exists r > 0 such that (x,y) is a local optimal solution of

minimize F(x,y) + rρ(x,y) subject to (x,y) ∈ Ω.

Hence, from Theorem 2.3 and Theorem 2.1,

0 ∈ ∇F(x,y) + r∂ρ(x,y) + N_Ω(x,y).  (5.14)

Applying Theorem 2.2 to ρ, it follows that

∂ρ(x,y) ⊆ {∇⟨u*,ψ⟩(x,y) | u* ∈ B ∩ N_Λ(ψ(x,y))},

since ∂d_Λ(a) = B ∩ N_Λ(a) for any a ∈ Λ, cf. [20, Example 8.53]. Thus, (5.14) implies that there exists u* ∈ rB ∩ N_Λ(ψ(x,y)) such that

0 ∈ ∇F(x,y) + ∇⟨u*,ψ⟩(x,y) + N_Ω(x,y),

and the rest of the proof follows as in that of Theorem 5.5.

Next, we state a result that will allow us to deduce that the basic CQ is stronger than the ρ-partial calmness.

Theorem 5.6. Consider the set-valued mapping Ψ (cf. (3.3), with ψ, Ω, Λ as in (5.5)) and let (x,y) ∈ Ψ(0). Then Ψ is calm at (0,x,y) if and only if the set-valued mapping

Ψ̃(t) = {(x,y) ∈ Ω | ρ(x,y) ≤ t}  (5.15)

is calm at (0,x,y).

This result, established in [24] for the case where Ω corresponds to a normed space, remains valid in our setting. Additionally, one can easily check that the calmness of Ψ̃ is equivalent to the calmness of the set-valued mapping obtained by replacing ρ(x,y) ≤ t in (5.15) by ρ(x,y) + t ≤ 0. Hence, by the combination of Proposition 3.2 (cf. assumption 1) and Theorem 5.6, it is clear that the basic CQ is stronger than the ρ-partial calmness. In order to show that the ρ-partial calmness is substantially weaker, we consider the following example.

Example 5.1. We consider the bilevel optimization problem in Example 4.1; its primal KKT reformulation then takes the form

minimize x² + y² subject to (x,y) ∈ Ω ∩ ψ⁻¹(Λ),  (5.16)

with Ω = R²_+, Λ = gph N_{R_+} and ψ(x,y) = (y, −x − 1). We have ∇ψ(0,0)ᵀ(λ,µ) = (−µ,λ) for (λ,µ) ∈ R²; N_Ω(0,0) = R²_− and N_Λ(ψ(0,0)) = N_Λ(0,−1) = R × {0}, cf. [37, Proposition 3.7].
It then follows that

{∇ψ(0,0)ᵀ(λ,µ) | (λ,µ) ∈ N_Λ(ψ(0,0)) \ {(0,0)}} ∩ (−N_Ω(0,0)) = {(0,λ) | λ ∈ (−∞,0) ∪ (0,∞)} ∩ R²_+ = {(0,λ) | λ > 0},


More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY R. T. Rockafellar* Abstract. A basic relationship is derived between generalized subgradients of a given function, possibly nonsmooth and nonconvex,

More information

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018 EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,

More information

APPLICATIONS OF DIFFERENTIABILITY IN R n.

APPLICATIONS OF DIFFERENTIABILITY IN R n. APPLICATIONS OF DIFFERENTIABILITY IN R n. MATANIA BEN-ARTZI April 2015 Functions here are defined on a subset T R n and take values in R m, where m can be smaller, equal or greater than n. The (open) ball

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

GENERALIZED DIFFERENTIATION WITH POSITIVELY HOMOGENEOUS MAPS: APPLICATIONS IN SET-VALUED ANALYSIS AND METRIC REGULARITY

GENERALIZED DIFFERENTIATION WITH POSITIVELY HOMOGENEOUS MAPS: APPLICATIONS IN SET-VALUED ANALYSIS AND METRIC REGULARITY GENERALIZED DIFFERENTIATION WITH POSITIVELY HOMOGENEOUS MAPS: APPLICATIONS IN SET-VALUED ANALYSIS AND METRIC REGULARITY C.H. JEFFREY PANG Abstract. We propose a new concept of generalized dierentiation

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Martin Luther Universität Halle Wittenberg Institut für Mathematik

Martin Luther Universität Halle Wittenberg Institut für Mathematik Martin Luther Universität Halle Wittenberg Institut für Mathematik Lagrange necessary conditions for Pareto minimizers in Asplund spaces and applications T. Q. Bao and Chr. Tammer Report No. 02 (2011)

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

A New Fenchel Dual Problem in Vector Optimization

A New Fenchel Dual Problem in Vector Optimization A New Fenchel Dual Problem in Vector Optimization Radu Ioan Boţ Anca Dumitru Gert Wanka Abstract We introduce a new Fenchel dual for vector optimization problems inspired by the form of the Fenchel dual

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

On the Midpoint Method for Solving Generalized Equations

On the Midpoint Method for Solving Generalized Equations Punjab University Journal of Mathematics (ISSN 1016-56) Vol. 40 (008) pp. 63-70 On the Midpoint Method for Solving Generalized Equations Ioannis K. Argyros Cameron University Department of Mathematics

More information

Epiconvergence and ε-subgradients of Convex Functions

Epiconvergence and ε-subgradients of Convex Functions Journal of Convex Analysis Volume 1 (1994), No.1, 87 100 Epiconvergence and ε-subgradients of Convex Functions Andrei Verona Department of Mathematics, California State University Los Angeles, CA 90032,

More information

Radius Theorems for Monotone Mappings

Radius Theorems for Monotone Mappings Radius Theorems for Monotone Mappings A. L. Dontchev, A. Eberhard and R. T. Rockafellar Abstract. For a Hilbert space X and a mapping F : X X (potentially set-valued) that is maximal monotone locally around

More information

The limiting normal cone to pointwise defined sets in Lebesgue spaces

The limiting normal cone to pointwise defined sets in Lebesgue spaces The limiting normal cone to pointwise defined sets in Lebesgue spaces Patrick Mehlitz Gerd Wachsmuth February 3, 2017 We consider subsets of Lebesgue spaces which are defined by pointwise constraints.

More information

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2. ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n

More information

Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization

Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization Preprint manuscript No. (will be inserted by the editor) Aubin Property and Uniqueness of Solutions in Cone Constrained Optimization Diethard Klatte Bernd Kummer submitted 6 January 2012, revised 11 June

More information

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter

Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Complementarity Formulations of l 0 -norm Optimization Problems 1 Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter Abstract: In a number of application areas, it is desirable to

More information

arxiv: v1 [math.oc] 13 Mar 2019

arxiv: v1 [math.oc] 13 Mar 2019 SECOND ORDER OPTIMALITY CONDITIONS FOR STRONG LOCAL MINIMIZERS VIA SUBGRADIENT GRAPHICAL DERIVATIVE Nguyen Huy Chieu, Le Van Hien, Tran T. A. Nghia, Ha Anh Tuan arxiv:903.05746v [math.oc] 3 Mar 209 Abstract

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy ON THE THEORY OF VECTOR OPTIMIZATION AND VARIATIONAL INEQUALITIES. IMAGE SPACE ANALYSIS AND SEPARATION 1 Franco Giannessi, Giandomenico Mastroeni Department of Mathematics University of Pisa, Pisa, Italy

More information

Centre d Economie de la Sorbonne UMR 8174

Centre d Economie de la Sorbonne UMR 8174 Centre d Economie de la Sorbonne UMR 8174 On alternative theorems and necessary conditions for efficiency Do Van LUU Manh Hung NGUYEN 2006.19 Maison des Sciences Économiques, 106-112 boulevard de L'Hôpital,

More information

9. Birational Maps and Blowing Up

9. Birational Maps and Blowing Up 72 Andreas Gathmann 9. Birational Maps and Blowing Up In the course of this class we have already seen many examples of varieties that are almost the same in the sense that they contain isomorphic dense

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

1. Introduction. Consider the following parameterized optimization problem:

1. Introduction. Consider the following parameterized optimization problem: SIAM J. OPTIM. c 1998 Society for Industrial and Applied Mathematics Vol. 8, No. 4, pp. 940 946, November 1998 004 NONDEGENERACY AND QUANTITATIVE STABILITY OF PARAMETERIZED OPTIMIZATION PROBLEMS WITH MULTIPLE

More information

2. Intersection Multiplicities

2. Intersection Multiplicities 2. Intersection Multiplicities 11 2. Intersection Multiplicities Let us start our study of curves by introducing the concept of intersection multiplicity, which will be central throughout these notes.

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell, Jong-Shi Pang, Xin Shen, Andreas Wächter May 17, 2016 Abstract In a number of application areas, it is desirable

More information

Lecture 7: Semidefinite programming

Lecture 7: Semidefinite programming CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

More information

Gerd Wachsmuth. January 22, 2016

Gerd Wachsmuth. January 22, 2016 Strong stationarity for optimization problems with complementarity constraints in absence of polyhedricity With applications to optimization with semidefinite and second-order-cone complementarity constraints

More information

Complementarity Formulations of l 0 -norm Optimization Problems

Complementarity Formulations of l 0 -norm Optimization Problems Complementarity Formulations of l 0 -norm Optimization Problems Mingbin Feng, John E. Mitchell,Jong-Shi Pang, Xin Shen, Andreas Wächter Original submission: September 23, 2013. Revised January 8, 2015

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

MODIFYING SQP FOR DEGENERATE PROBLEMS

MODIFYING SQP FOR DEGENERATE PROBLEMS PREPRINT ANL/MCS-P699-1097, OCTOBER, 1997, (REVISED JUNE, 2000; MARCH, 2002), MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY MODIFYING SQP FOR DEGENERATE PROBLEMS STEPHEN J. WRIGHT

More information

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Noname manuscript No. (will be inserted by the editor Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Truong Q. Bao Suvendu R. Pattanaik

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems Chapter 1 Optimality Conditions: Unconstrained Optimization 1.1 Differentiable Problems Consider the problem of minimizing the function f : R n R where f is twice continuously differentiable on R n : P

More information

Hölder Metric Subregularity with Applications to Proximal Point Method

Hölder Metric Subregularity with Applications to Proximal Point Method Hölder Metric Subregularity with Applications to Proximal Point Method GUOYIN LI and BORIS S. MORDUKHOVICH February, 01 Abstract This paper is mainly devoted to the study and applications of Hölder metric

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG INSTITUT FÜR ANGEWANDTE MATHEMATIK

FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG INSTITUT FÜR ANGEWANDTE MATHEMATIK FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG INSTITUT FÜR ANGEWANDTE MATHEMATIK Optimality conditions for vector optimization problems with variable ordering structures by G. Eichfelder & T.X.D. Ha

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Applied Mathematics Letters

Applied Mathematics Letters Applied Mathematics Letters 25 (2012) 974 979 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml On dual vector equilibrium problems

More information

Implicit Multifunction Theorems

Implicit Multifunction Theorems Implicit Multifunction Theorems Yuri S. Ledyaev 1 and Qiji J. Zhu 2 Department of Mathematics and Statistics Western Michigan University Kalamazoo, MI 49008 Abstract. We prove a general implicit function

More information

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function

Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function Characterizations of Solution Sets of Fréchet Differentiable Problems with Quasiconvex Objective Function arxiv:1805.03847v1 [math.oc] 10 May 2018 Vsevolod I. Ivanov Department of Mathematics, Technical

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets

Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets Characterizations of Strong Regularity for Variational Inequalities over Polyhedral Convex Sets A. L. Dontchev Mathematical Reviews, Ann Arbor, MI 48107 and R. T. Rockafellar Dept. of Math., Univ. of Washington,

More information

Variational Analysis of the Ky Fan k-norm

Variational Analysis of the Ky Fan k-norm Set-Valued Var. Anal manuscript No. will be inserted by the editor Variational Analysis of the Ky Fan k-norm Chao Ding Received: 0 October 015 / Accepted: 0 July 016 Abstract In this paper, we will study

More information

Quasi-relative interior and optimization

Quasi-relative interior and optimization Quasi-relative interior and optimization Constantin Zălinescu University Al. I. Cuza Iaşi, Faculty of Mathematics CRESWICK, 2016 1 Aim and framework Aim Framework and notation The quasi-relative interior

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

On the Moreau-Yosida regularization of the vector k-norm related functions

On the Moreau-Yosida regularization of the vector k-norm related functions On the Moreau-Yosida regularization of the vector k-norm related functions Bin Wu, Chao Ding, Defeng Sun and Kim-Chuan Toh This version: March 08, 2011 Abstract In this paper, we conduct a thorough study

More information

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT EMIL ERNST AND MICHEL VOLLE Abstract. This article addresses a general criterion providing a zero duality gap for convex programs in the setting of

More information

NONSMOOTH ANALYSIS AND PARAMETRIC OPTIMIZATION. R. T. Rockafellar*

NONSMOOTH ANALYSIS AND PARAMETRIC OPTIMIZATION. R. T. Rockafellar* NONSMOOTH ANALYSIS AND PARAMETRIC OPTIMIZATION R. T. Rockafellar* Abstract. In an optimization problem that depends on parameters, an important issue is the effect that perturbations of the parameters

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark.

Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary

More information