ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS

Gerd Wachsmuth. January 22, 2016


Abstract. Kyparisis proved in 1985 that a strict version of the Mangasarian-Fromovitz constraint qualification (MFCQ) is equivalent to the uniqueness of Lagrange multipliers. However, the definition of this strict version of MFCQ requires the existence of a Lagrange multiplier and is not a constraint qualification (CQ) itself. In this note we show that LICQ is the weakest CQ which ensures (existence and) uniqueness of Lagrange multipliers. We also recall the relations between other CQs and properties of the set of Lagrange multipliers.

1. Introduction

In this paper we study the optimization problem

  Minimize f(x), such that g(x) ≤ 0 and h(x) = 0.   (P)

Here, f : ℝⁿ → ℝ, g : ℝⁿ → ℝᵐ and h : ℝⁿ → ℝᵏ are given continuously differentiable functions. Strongly related to this problem is the Lagrangian

  L(x, λ, µ) = f(x) + (λ, g(x))_ℝᵐ + (µ, h(x))_ℝᵏ   (1)

with λ ∈ ℝᵐ and µ ∈ ℝᵏ. Here, (·, ·)_ℝˡ refers to the standard inner product in ℝˡ. In fact, if x̄ is a local optimum of (P) and if an additional assumption is satisfied for g and h, then there exist multipliers (λ̄, µ̄) ∈ ℝᵐ × ℝᵏ such that the Karush-Kuhn-Tucker system (KKT system)

  ∇ₓ L(x̄, λ̄, µ̄) = 0   (2a)
  λ̄ ≥ 0   (2b)
  (λ̄, g(x̄))_ℝᵐ = 0   (2c)

is satisfied. This first-order necessary optimality condition generalizes Fermat's theorem, which states ∇f(x̄) = 0 for unconstrained minimizers x̄ of f.

In Section 2 we recall why (2) cannot be a necessary optimality condition without further assumptions on g and h. The conditions which render (2) an optimality system are called constraint qualifications (CQs). Three important CQs are recalled in Section 3, see also [6]. Depending on the CQ which is satisfied, one has additional information about the set of multipliers (λ̄, µ̄) satisfying (2). We review the known relationships between constraint qualifications and properties of the set of multipliers satisfying (2) in Section 4. Moreover, we show that the linear independence CQ (LICQ) is the weakest CQ which implies uniqueness of the Lagrange multipliers, see Theorem 2. Finally, we give some comments on the strict Mangasarian-Fromovitz condition in Section 5. In particular, we highlight that this condition is not a CQ.

Key words and phrases. Lagrange multiplier, constraint qualification.
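As a numerical illustration (an editorial addition, not part of the original note), the KKT system (2) can be verified for a small hypothetical instance of (P): minimize f(x) = x₁² + x₂² subject to the single inequality g(x) = 1 − x₁ − x₂ ≤ 0. The minimizer is x̄ = (1/2, 1/2) with multiplier λ̄ = 1:

```python
import numpy as np

# Hypothetical instance: minimize x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0.
grad_f = lambda x: 2 * x                      # gradient of the objective
grad_g = lambda x: np.array([-1.0, -1.0])     # gradient of the constraint
g = lambda x: 1.0 - x[0] - x[1]

x_bar = np.array([0.5, 0.5])   # candidate minimizer
lam = 1.0                      # candidate Lagrange multiplier

# KKT system (2): stationarity, sign condition, complementarity.
stationarity = grad_f(x_bar) + lam * grad_g(x_bar)
print(np.allclose(stationarity, 0.0))    # (2a) holds
print(lam >= 0.0)                        # (2b) holds
print(np.isclose(lam * g(x_bar), 0.0))   # (2c) holds
```

All three conditions check out, so (x̄, λ̄) is a KKT point of this instance.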

2. Necessary optimality conditions

In this section we review well-known facts about the optimality system (2). We denote the feasible set of the problem (P) by

  K = {x ∈ ℝⁿ : g(x) ≤ 0 and h(x) = 0}.

An immediate consequence of the local optimality of x̄ for (P) is

  (∇f(x̄), d)_ℝⁿ ≥ 0 for all d ∈ T_K(x̄),   (3)

where

  T_K(x̄) = {d ∈ ℝⁿ : there exist {xₙ} ⊂ K and {tₙ} ⊂ ℝ such that xₙ → x̄, tₙ ↘ 0 and (xₙ − x̄)/tₙ → d}

is the tangent cone of K at x̄. Using the polar cone

  T_K(x̄)° = {u ∈ ℝⁿ : (u, d)_ℝⁿ ≤ 0 for all d ∈ T_K(x̄)},

condition (3) is equivalent to

  −∇f(x̄) ∈ T_K(x̄)°.   (4)

On the other hand, using Farkas' lemma, we find that the KKT system (2) is equivalent to

  −∇f(x̄) ∈ T_K^lin(x̄)°.   (5)

Here,

  T_K^lin(x̄) = {d ∈ ℝⁿ : (∇h_i(x̄), d)_ℝⁿ = 0 for i = 1, …, k and (∇g_i(x̄), d)_ℝⁿ ≤ 0 for all i ∈ A(x̄)}

is the linearized tangent cone and

  A(x̄) = {i ∈ ℕ : 1 ≤ i ≤ m and g_i(x̄) = 0}

is the set of active indices. Note that T_K^lin(x̄) actually depends on the representation of K via g and h and not on K itself. It is easy to show that T_K(x̄) ⊆ T_K^lin(x̄) and hence T_K^lin(x̄)° ⊆ T_K(x̄)°. Moreover, it is not hard to construct examples where T_K^lin(x̄) is significantly larger than T_K(x̄). This implies that condition (5) is stronger than (4) in the general case. Therefore, (5) cannot be a necessary optimality condition without any further assumptions.

3. Constraint qualifications

Constraint qualifications (CQs) are assumptions on the constraints g and h which ensure that condition (5) is a necessary optimality condition for the problem (P). The weakest CQ is Guignard's constraint qualification

  T_K(x̄)° = T_K^lin(x̄)°,   (GCQ)

see [3]. It immediately implies that (4) is equivalent to (5) and hence, that (5) is a necessary optimality condition. Note that Theorem 1 (ii) indeed shows that GCQ is the weakest possible CQ.
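As a hypothetical illustration of the gap between the two cones (an editorial addition, not an example from the original text), consider K = {x ∈ ℝ : x² ≤ 0} = {0}. At x̄ = 0 we have T_K(0) = {0} while T_K^lin(0) = ℝ, so GCQ fails; consequently, minimizing f(x) = x over K has the minimizer x̄ = 0 yet no multiplier satisfies the KKT system:

```python
import numpy as np

# Hypothetical example: K = {x in R : x^2 <= 0} = {0}; minimize f(x) = x over K.
grad_f0 = 1.0   # derivative of f(x) = x at x = 0
grad_g0 = 0.0   # derivative of g(x) = x^2 at x = 0

# The linearization of the constraint vanishes, so T_K^lin(0) = R,
# while T_K(0) = {0} since K is a single point: GCQ fails.
lin_cone_is_R = (grad_g0 == 0.0)
print(lin_cone_is_R)

# No multiplier lam >= 0 satisfies stationarity (2a): 1 + lam * 0 is never 0.
residuals = [abs(grad_f0 + lam * grad_g0) for lam in np.linspace(0.0, 1e6, 5)]
print(min(residuals))   # the residual stays at 1.0 for every lam
```

This matches the discussion above: (5) fails at a genuine minimizer once the constraint qualification is violated.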
A stronger CQ is the constraint qualification of Mangasarian and Fromovitz:

  the set {∇h_i(x̄)}ᵢ₌₁ᵏ is linearly independent, and there exists d ∈ ℝⁿ such that
  (∇g_i(x̄), d)_ℝⁿ < 0 for i ∈ A(x̄) and (∇h_i(x̄), d)_ℝⁿ = 0 for i = 1, …, k,   (MFCQ)

see [5, (3.4)–(3.6)]. The strongest CQ is the linear independence constraint qualification:

  the set {∇h_i(x̄)}ᵢ₌₁ᵏ ∪ {∇g_i(x̄)}_{i ∈ A(x̄)} is linearly independent.   (LICQ)

It is easy to see that (LICQ) implies (MFCQ).
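As a rough numerical sketch (an editorial addition; the helper name is made up), the linear-independence part of (LICQ) amounts to a rank condition on the matrix of active gradients, and a direction d certifying (MFCQ) can be exhibited directly:

```python
import numpy as np

def licq_holds(active_gradients):
    """Linear-independence check for LICQ: the stacked active gradients
    (grad h_i for all i and grad g_i for i in A(x_bar)) must have full
    row rank."""
    A = np.vstack(active_gradients)
    return np.linalg.matrix_rank(A) == A.shape[0]

# Hypothetical point in R^2 with two active inequality constraints:
g1 = np.array([-1.0, 0.0])   # grad g_1(x_bar)
g2 = np.array([0.0, -1.0])   # grad g_2(x_bar)
print(licq_holds([g1, g2]))  # independent gradients: LICQ holds

# MFCQ certificate: d = (1, 1) gives (grad g_i, d) < 0 for both constraints.
d = np.array([1.0, 1.0])
print(all(gi @ d < 0 for gi in (g1, g2)))

# Duplicating a constraint destroys LICQ (the same d still certifies MFCQ):
print(licq_holds([g1, g1]))
```

This also previews the next section: the duplicated-constraint case satisfies MFCQ but not LICQ.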

We remark that many other CQs (e.g. the Abadie CQ or the constant rank CQ) can be found in the literature. However, we restrict ourselves to the above ones, since they imply certain properties of the set of Lagrange multipliers, as explained in the next section.

4. Connections between CQs and properties of the set of Lagrange multipliers

Let us remark that all constraint qualifications above are independent of the objective function f. Hence, if a CQ implies a certain property of the multipliers satisfying (2), this property holds for all objective functions f (such that x̄ is a local minimum). We fix an arbitrary feasible point x̄ ∈ K and consider all objective functions f such that x̄ is a local minimum. We define

  F = {f ∈ C¹(ℝⁿ; ℝ) : x̄ is a local minimizer of (P)}.

Moreover, for f ∈ F we define the set of Lagrange multipliers

  Λ(f) = {(λ̄, µ̄) ∈ ℝᵐ × ℝᵏ : (2) is satisfied}.

The following connections between CQs and the set of multipliers Λ(f) are well known.

Theorem 1.
(i) The set Λ(f) is closed and convex for all f ∈ F.
(ii) The set Λ(f) is non-empty for all f ∈ F if and only if (GCQ) is satisfied.
(iii) Let (MFCQ) be satisfied. Then the set Λ(f) is compact for all f ∈ F.
(iv) If there exists f ∈ F such that Λ(f) is non-empty and compact, then (MFCQ) is satisfied.
(v) Let (LICQ) be satisfied. Then the set Λ(f) is a singleton for all f ∈ F.

Assertions (i) and (v) are easy to prove. The proof of (ii) can be found in [2, Section 4], and [1] shows (iii) and (iv).

Some comments on Theorem 1 are in order. Note that each of (GCQ), (MFCQ) and (LICQ) implies the corresponding property for all objectives f ∈ F. For (GCQ) the converse is also true, i.e., if Λ(f) is non-empty for all f ∈ F, then (GCQ) holds. The relation between the compactness of Λ(f) and (MFCQ) is slightly different: the non-emptiness and compactness of Λ(f) for some arbitrary f ∈ F already implies (MFCQ), which in turn gives the compactness of Λ(f) for all f ∈ F.
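To make Theorem 1 (iii)–(v) concrete, here is a small hypothetical example (an editorial addition): minimize f(x) = x over the duplicated constraints g₁(x) = −x ≤ 0 and g₂(x) = −x ≤ 0 in ℝ. At x̄ = 0, MFCQ holds but LICQ fails, and the multiplier set Λ(f) = {(λ₁, λ₂) ≥ 0 : λ₁ + λ₂ = 1} is a compact segment rather than a singleton:

```python
import numpy as np

# Hypothetical problem: minimize x  s.t.  -x <= 0 and -x <= 0 (duplicated).
# At x_bar = 0 both constraints are active with gradient -1 each.
grad_f, grad_g1, grad_g2 = 1.0, -1.0, -1.0

def is_multiplier(l1, l2, tol=1e-12):
    """Check the KKT system (2) for this instance: stationarity (2a) and
    sign condition (2b); complementarity (2c) is automatic since g(0) = 0."""
    stationarity = grad_f + l1 * grad_g1 + l2 * grad_g2
    return abs(stationarity) < tol and l1 >= 0 and l2 >= 0

# Every point of the segment {l1 + l2 = 1, l >= 0} is a multiplier ...
print(all(is_multiplier(t, 1.0 - t) for t in np.linspace(0.0, 1.0, 11)))

# ... but points off the segment are not:
print(is_multiplier(1.0, 1.0))
```

This is consistent with Theorem 1: the set Λ(f) is closed, convex, non-empty and compact (MFCQ holds), but not a singleton (LICQ fails).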
As announced in the introduction, we show a converse of Theorem 1 (v).

Theorem 2. Let x̄ be a feasible point of (P). The set of Lagrange multipliers Λ(f) is a singleton for all f ∈ F if and only if (LICQ) is satisfied.

Proof. If (LICQ) is satisfied, Theorem 1 (v) implies the uniqueness of Lagrange multipliers. To prove the converse, we assume that Λ(f) is a singleton for all f ∈ F. We set

  f = −Σ_{i ∈ A(x̄)} g_i.

Since f(x̄) = 0 and f(x) ≥ 0 for all feasible x, x̄ is a local minimizer of f, i.e., f ∈ F. It is easy to see that

  λ̄_i = 1 if i ∈ A(x̄), λ̄_i = 0 otherwise, and µ̄ = 0

are Lagrange multipliers. Let us take numbers a ∈ ℝᵐ and b ∈ ℝᵏ such that

  Σ_{i ∈ A(x̄)} a_i ∇g_i(x̄) + Σ_{i=1}^{k} b_i ∇h_i(x̄) = 0 and a_i = 0 for all i ∉ A(x̄).

We define C = 1 + max_{i ∈ A(x̄)} |a_i|. It is easy to see that the multipliers

  λ = λ̄ + a/C and µ = µ̄ + b/C

satisfy (2). Since the Lagrange multipliers are unique by assumption, this shows a = 0 and b = 0, giving in turn (LICQ).

5. Uniqueness of multipliers and the strict Mangasarian-Fromovitz condition

In this section, we recall the strict Mangasarian-Fromovitz condition (SMFC) and its relation to the uniqueness of Lagrange multipliers. Let x̄ be a feasible point of (P) such that Lagrange multipliers (λ̄, µ̄) ∈ Λ(f) exist. We say that SMFC is satisfied at x̄ with multiplier λ̄ if

  the set {∇h_i(x̄)}ᵢ₌₁ᵏ ∪ {∇g_i(x̄)}_{i ∈ A₊(x̄, λ̄)} is linearly independent, and there exists d ∈ ℝⁿ such that
  (∇g_i(x̄), d)_ℝⁿ < 0 for i ∈ A₀(x̄, λ̄),
  (∇g_i(x̄), d)_ℝⁿ = 0 for i ∈ A₊(x̄, λ̄), and
  (∇h_i(x̄), d)_ℝⁿ = 0 for i = 1, …, k,   (SMFC(λ̄))

where A₊(x̄, λ̄) = {i ∈ A(x̄) : λ̄_i > 0} denotes the strictly active inequality constraints and A₀(x̄, λ̄) = {i ∈ A(x̄) : λ̄_i = 0} = A(x̄) \ A₊(x̄, λ̄) denotes the weakly active inequality constraints. Note that this condition depends on the multiplier λ̄. Since (SMFC(λ̄)) relies on the existence of Lagrange multipliers and since it depends (indirectly) on the objective f, we refrain from calling it a constraint qualification, as is sometimes done in the literature. The relation between (SMFC(λ̄)) and the uniqueness of Lagrange multipliers is given in the following theorem.

Theorem 3 ([4, Proposition 1.1]). Let x̄ ∈ ℝⁿ be given such that Lagrange multipliers (λ̄, µ̄) ∈ Λ(f) exist. The strict Mangasarian-Fromovitz condition (SMFC(λ̄)) is equivalent to Λ(f) being a singleton, i.e., to the uniqueness of Lagrange multipliers.

Let us highlight the difference between this theorem and Theorem 2. Theorem 3 shows that, if we already know multipliers (λ̄, µ̄) such that (SMFC(λ̄)) is satisfied, then these multipliers are unique. As already said, we have to assume the existence of multipliers a priori, since (SMFC(λ̄)) is not a CQ.
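As a hypothetical sketch (an editorial addition, not from the note), the two parts of (SMFC(λ̄)) can be checked for a concrete instance in ℝ²: take ∇g₁(x̄) = (−1, 0) with λ̄₁ > 0 (strictly active), ∇g₂(x̄) = (0, −1) with λ̄₂ = 0 (weakly active), and no equality constraints. The direction d = (0, 1) certifies the condition:

```python
import numpy as np

# Hypothetical data at a feasible point x_bar in R^2 (no equality constraints):
grad_g = {1: np.array([-1.0, 0.0]), 2: np.array([0.0, -1.0])}
lam = {1: 2.0, 2: 0.0}          # given multiplier: g_1 strictly, g_2 weakly active
A_plus = [i for i in grad_g if lam[i] > 0]    # strictly active indices
A_zero = [i for i in grad_g if lam[i] == 0]   # weakly active indices

# Part 1 of SMFC: gradients indexed by A_plus are linearly independent.
M = np.vstack([grad_g[i] for i in A_plus])
part1 = np.linalg.matrix_rank(M) == M.shape[0]

# Part 2 of SMFC: a direction d with (grad g_i, d) = 0 on A_plus
# and (grad g_i, d) < 0 on A_zero; here d = (0, 1) works.
d = np.array([0.0, 1.0])
part2 = all(grad_g[i] @ d == 0 for i in A_plus) and \
        all(grad_g[i] @ d < 0 for i in A_zero)

print(part1 and part2)   # SMFC holds with this multiplier
```

By Theorem 3, SMFC with this multiplier implies that it is the unique one for the given objective.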
On the other hand, (LICQ) is a CQ and ensures both the existence and the uniqueness of Lagrange multipliers.

References

[1] J. Gauvin. A necessary and sufficient regularity condition to have bounded multipliers in nonconvex programming. Mathematical Programming, 12(1):136–138, 1977.
[2] F. J. Gould and J. W. Tolle. A necessary and sufficient qualification for constrained optimization. SIAM Journal on Applied Mathematics, 20:164–172, 1971.
[3] M. Guignard. Generalized Kuhn-Tucker conditions for mathematical programming in a Banach space. SIAM Journal on Control and Optimization, 7(2):232–241, 1969.
[4] J. Kyparisis. On uniqueness of Kuhn-Tucker multipliers in nonlinear programming. Mathematical Programming, 32(2):242–246, 1985.
[5] O. L. Mangasarian and S. Fromovitz. The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17:37–47, 1967.

[6] M. V. Solodov. Constraint qualifications. In J. J. Cochran, L. A. Cox, P. Keskinocak, J. P. Kharoufeh, and J. C. Smith, editors, Wiley Encyclopedia of Operations Research and Management Science. John Wiley & Sons, Inc., 2010.

Faculty of Mathematics, Chemnitz University of Technology, Reichenhainer Strasse 41, 09126 Chemnitz, Germany (gerd.wachsmuth@mathematik.tu-chemnitz.de)