Constraint qualifications for nonlinear programming

Consider the standard nonlinear program

\[
\min f(x) \quad \text{s.t.} \quad g_i(x) \le 0 \quad (i = 1, \dots, m), \qquad h_j(x) = 0 \quad (j = 1, \dots, p), \tag{NLP}
\]

with continuously differentiable functions $f, g_i, h_j : \mathbb{R}^n \to \mathbb{R}$. The feasible set of (NLP) will be denoted by $\Omega$, i.e.,

\[
\Omega := \{ x \in \mathbb{R}^n \mid g_i(x) \le 0 \ (i = 1, \dots, m), \ h_j(x) = 0 \ (j = 1, \dots, p) \}.
\]

For a feasible point $\bar x \in \Omega$, the so-called tangent cone for (NLP) at $\bar x$ is just the tangent cone of $\Omega$ at $\bar x$, i.e.,

\[
T(\bar x) = T_\Omega(\bar x) := \Bigl\{ d \ \Big|\ \exists \{x^k\} \subseteq \Omega, \ \{t_k\} \downarrow 0 : \ x^k \to \bar x \ \text{and} \ \frac{x^k - \bar x}{t_k} \to d \Bigr\} = \bigl\{ d \ \big|\ \exists \{d^k\} \to d, \ \{t_k\} \downarrow 0 : \ \bar x + t_k d^k \in \Omega \ \forall k \bigr\},
\]

and the linearized cone for (NLP) at $\bar x$ is

\[
L(\bar x) := \bigl\{ d \ \big|\ \nabla g_i(\bar x)^T d \le 0 \ (i : g_i(\bar x) = 0), \ \nabla h_j(\bar x)^T d = 0 \ (j = 1, \dots, p) \bigr\}.
\]

Note that both the tangent cone and the linearized cone are closed cones; the latter is moreover polyhedral convex, which is not necessarily true for the tangent cone. In addition, it is easy to see that the inclusion

\[
T(\bar x) \subseteq L(\bar x) \tag{1}
\]

holds for all $\bar x \in \Omega$. The condition that equality holds in (1) is known as the Abadie constraint qualification (ACQ), which we formally state in the following definition.

Definition 1 (ACQ) Let $\bar x$ be feasible for (NLP). We say that the Abadie constraint qualification holds at $\bar x$ (and write ACQ($\bar x$)) if $T(\bar x) = L(\bar x)$.

ACQ plays a key role in establishing necessary optimality conditions for (NLP), due to the well-known Karush-Kuhn-Tucker theorem, see [1, Theorem 5.1.3].

Theorem 2 (KKT conditions) Let $\bar x$ be a local minimizer of (NLP) such that ACQ($\bar x$) holds. Then there exist multipliers $\lambda \in \mathbb{R}^m$ and $\mu \in \mathbb{R}^p$ such that

\[
0 = \nabla f(\bar x) + \sum_{i=1}^m \lambda_i \nabla g_i(\bar x) + \sum_{j=1}^p \mu_j \nabla h_j(\bar x) \tag{2}
\]

and

\[
\lambda_i \ge 0, \qquad \lambda_i g_i(\bar x) = 0 \qquad (i = 1, \dots, m). \tag{3}
\]
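Since the KKT system (2)-(3) consists of purely algebraic conditions, it is straightforward to check numerically whether a given point and candidate multipliers satisfy it. The following minimal sketch (Python with NumPy; all function and variable names are our own choices, not from [1]) computes the KKT residuals for the toy problem $\min x_1 + x_2$ s.t. $x_1^2 + x_2^2 \le 2$, whose minimizer $\bar x = (-1, -1)^T$ admits the multiplier $\lambda = 1/2$:

```python
import numpy as np

def kkt_residuals(grad_f, grad_g, grad_h, g_vals, lam, mu):
    """Residuals of the KKT system (2)-(3): stationarity,
    complementarity, and violation of the sign condition on lambda."""
    stationarity = grad_f + grad_g.T @ lam + grad_h.T @ mu   # equation (2)
    complementarity = lam * g_vals                           # lambda_i g_i = 0
    sign_violation = np.minimum(lam, 0.0)                    # lambda_i >= 0
    return (np.linalg.norm(stationarity),
            np.linalg.norm(complementarity),
            np.linalg.norm(sign_violation))

# min x1 + x2  s.t.  x1^2 + x2^2 - 2 <= 0, evaluated at xbar = (-1, -1):
grad_f = np.array([1.0, 1.0])
grad_g = np.array([[-2.0, -2.0]])   # gradient of the single inequality at xbar
grad_h = np.zeros((0, 2))           # no equality constraints
print(kkt_residuals(grad_f, grad_g, grad_h,
                    np.array([0.0]),   # g(xbar) = 0, i.e. the constraint is active
                    np.array([0.5]),   # candidate multiplier lambda = 1/2
                    np.zeros(0)))      # no mu; all three residuals vanish
```

All three residuals are zero, confirming that $(\bar x, \lambda)$ is a KKT pair for this problem.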

In fact, the same result can be established under the following weaker condition.

Definition 3 (GCQ) Let $\bar x$ be feasible for (NLP). We say that the Guignard constraint qualification holds at $\bar x$ (and write GCQ($\bar x$)) if $T(\bar x)^\circ = L(\bar x)^\circ$, i.e., if the polar of the tangent cone equals the polar of the linearized cone. Here, the polar of a cone $C \subseteq \mathbb{R}^n$ is given by $C^\circ := \{ v \mid v^T d \le 0 \ \forall d \in C \}$.

In general, we call a property of the feasible set a constraint qualification if it guarantees that the KKT conditions hold at every local minimizer. GCQ is, in a sense (see [2]), the weakest constraint qualification and, as the following example shows, may be strictly weaker than ACQ.

Example 4 Consider

\[
\min x_1^2 + x_2^2 \quad \text{s.t.} \quad x_1, x_2 \ge 0, \quad x_1 x_2 = 0. \tag{4}
\]

The global minimizer is $\bar x = (0, 0)^T$. We compute

\[
T(\bar x) = \{ d \mid d_1, d_2 \ge 0, \ d_1 d_2 = 0 \} \subsetneq \{ d \mid d_1, d_2 \ge 0 \} = L(\bar x).
\]

[Figure 1: Tangent cone, linearized cone, and their polars for (4) at $\bar x$.]

Hence, ACQ is violated at $\bar x$. On the other hand, we have

\[
T(\bar x)^\circ = \{ v \mid v_1, v_2 \le 0 \} = L(\bar x)^\circ,
\]

and hence GCQ is satisfied at $\bar x$. Note, however, that ACQ (and hence GCQ) is satisfied at every other feasible point of (4).
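The mechanism behind Example 4 is that polarity cannot separate a cone from its convex hull: for any cone $C$ one has $C^\circ = (\operatorname{conv} C)^\circ$, and here $L(\bar x) = \operatorname{conv} T(\bar x)$. Since membership in the polar only needs to be tested against the generating rays of the cone, this can be verified in a few lines (a sketch assuming NumPy; the helper name is ours):

```python
import numpy as np

def in_polar(v, generators, tol=1e-12):
    """Test v in C° = {v : v^T d <= 0 for all d in C}.  If C is the union
    (or conic hull) of the rays spanned by `generators`, it suffices to
    test the generators themselves."""
    return all(float(v @ d) <= tol for d in generators)

# Example 4 at xbar = (0,0): T (the two axes) and L (the quadrant) share
# the generators e1, e2, so their polars coincide even though T != L.
gens = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(in_polar(np.array([-1.0, -2.0]), gens))   # True:  lies in T° = L°
print(in_polar(np.array([0.5, -1.0]), gens))    # False: outside the polar
```

Both cones share the generators $e^1$ and $e^2$, so the membership tests agree for every $v$, which is exactly GCQ at $\bar x$.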

ACQ and thus GCQ are considered rather mild assumptions, which have a good chance of being satisfied. On the other hand, they are hard to verify, since the tangent cone may be difficult to compute. Furthermore, they are in general not strong enough to guarantee convergence of algorithms for solving (NLP). For these purposes, the following two constraint qualifications are the most useful.

Definition 5 Let $\bar x$ be feasible for (NLP) and put $I(\bar x) := \{ i \mid g_i(\bar x) = 0 \}$. We say that

a) the linear independence constraint qualification (LICQ) holds at $\bar x$ (and write LICQ($\bar x$)) if the gradients

\[
\nabla g_i(\bar x) \ (i \in I(\bar x)), \qquad \nabla h_j(\bar x) \ (j = 1, \dots, p)
\]

are linearly independent;

b) the Mangasarian-Fromovitz constraint qualification (MFCQ) holds at $\bar x$ (and write MFCQ($\bar x$)) if the gradients $\nabla h_j(\bar x)$ $(j = 1, \dots, p)$ are linearly independent and there exists a vector $d \in \mathbb{R}^n$ such that

\[
\nabla g_i(\bar x)^T d < 0 \ (i \in I(\bar x)), \qquad \nabla h_j(\bar x)^T d = 0 \ (j = 1, \dots, p).
\]

Note that LICQ and MFCQ are violated at every feasible point of the nonlinear program from Example 4. In contrast to ACQ and GCQ, both conditions are algorithmically checkable, as the sketch below illustrates.
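A minimal computational sketch of both tests (assuming NumPy and SciPy are available; the function names are ours): LICQ is a rank condition on the stacked gradients, and the strict system in MFCQ becomes, after normalizing the right-hand side, an LP feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def check_licq(G_act, H):
    """LICQ: the active inequality and equality gradients
    (stacked as rows) are linearly independent."""
    M = np.vstack([G_act, H])
    return np.linalg.matrix_rank(M) == M.shape[0]

def check_mfcq(G_act, H):
    """MFCQ: rows of H linearly independent and exists d with
    G_act d < 0, H d = 0.  By positive homogeneity the strict system
    is solvable iff  G_act d <= -1, H d = 0  is feasible."""
    if np.linalg.matrix_rank(H) < H.shape[0]:
        return False
    res = linprog(c=np.zeros(G_act.shape[1]),          # pure feasibility LP
                  A_ub=G_act, b_ub=-np.ones(G_act.shape[0]),
                  A_eq=H, b_eq=np.zeros(H.shape[0]),
                  bounds=[(None, None)] * G_act.shape[1])
    return res.status == 0

# Example 4 at xbar = (0,0): active gradients of g = (-x1, -x2),
# and grad h = (x2, x1) vanishes at the origin, so both CQs fail.
G_act = np.array([[-1.0, 0.0], [0.0, -1.0]])
H = np.array([[0.0, 0.0]])
print(check_licq(G_act, H), check_mfcq(G_act, H))   # False False
```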

In order to establish our main theorem on the relations between the constraint qualifications introduced above, we need the following auxiliary result.

Lemma 6 Let $\bar x \in \Omega$ be such that MFCQ is satisfied at $\bar x$, and let $d \in \mathbb{R}^n$ be a vector with $\nabla g_i(\bar x)^T d < 0$ $(i \in I(\bar x))$ and $\nabla h_j(\bar x)^T d = 0$ $(j = 1, \dots, p)$. Then there exist $\varepsilon > 0$ and a $C^1$-curve $x : (-\varepsilon, \varepsilon) \to \mathbb{R}^n$ such that $x(t) \in \Omega$ for all $t \in [0, \varepsilon)$, $x(0) = \bar x$ and $x'(0) = d$.

Proof. Define $H : \mathbb{R}^p \times \mathbb{R} \to \mathbb{R}^p$ by

\[
H_i(y, t) := h_i\bigl(\bar x + t d + h'(\bar x)^T y\bigr), \quad i = 1, \dots, p,
\]

where $h'(\bar x)$ denotes the Jacobian of $h$ at $\bar x$. The nonlinear equation $H(y, t) = 0$ has the solution $(\bar y, \bar t) = (0, 0)$ with

\[
H'_y(0, 0) = h'(\bar x) h'(\bar x)^T,
\]

and the latter matrix is nonsingular (even positive definite) due to the linear independence of the vectors $\nabla h_j(\bar x)$ $(j = 1, \dots, p)$. The implicit function theorem yields $\varepsilon > 0$ and a $C^1$-function $y : (-\varepsilon, \varepsilon) \to \mathbb{R}^p$ such that $y(0) = 0$, $H(y(t), t) = 0$ and

\[
y'(t) = -H'_y(y(t), t)^{-1} H'_t(y(t), t)
\]

for all $t \in (-\varepsilon, \varepsilon)$. Hence, since $H'_t(0, 0) = h'(\bar x) d = 0$, we have

\[
y'(0) = -H'_y(0, 0)^{-1} H'_t(0, 0) = -H'_y(0, 0)^{-1} h'(\bar x) d = 0.
\]

Now put $x(t) := \bar x + t d + h'(\bar x)^T y(t)$ for all $t \in (-\varepsilon, \varepsilon)$. Reducing $\varepsilon$ if necessary, $x : (-\varepsilon, \varepsilon) \to \mathbb{R}^n$ has all the desired properties: obviously $x \in C^1$, $x(0) = \bar x$, $x'(0) = d + h'(\bar x)^T y'(0) = d$, and $h_i(x(t)) = 0$ for all $t \in (-\varepsilon, \varepsilon)$. Moreover, by continuity we have $g_i(x(t)) < 0$ for all $i \notin I(\bar x)$ and $t$ sufficiently small. For $i \in I(\bar x)$ we have $g_i(x(0)) = g_i(\bar x) = 0$ and

\[
\frac{d}{dt} g_i(x(t)) \Big|_{t = 0} = \nabla g_i(\bar x)^T d < 0,
\]

and hence $g_i(x(t)) < 0$ for all $t > 0$ sufficiently small. $\square$

Theorem 7 Let $\bar x$ be feasible for (NLP). Then the following implications hold:

\[
\text{LICQ}(\bar x) \implies \text{MFCQ}(\bar x) \implies \text{ACQ}(\bar x) \implies \text{GCQ}(\bar x). \tag{5}
\]

Proof. LICQ($\bar x$) $\implies$ MFCQ($\bar x$): Obviously, the vectors $\nabla h_j(\bar x)$ $(j = 1, \dots, p)$ are linearly independent. It remains to find a suitable vector $d$. For this purpose, consider the matrix

\[
\begin{pmatrix} \nabla g_i(\bar x)^T \ (i \in I(\bar x)) \\ \nabla h_j(\bar x)^T \ (j = 1, \dots, p) \end{pmatrix} \in \mathbb{R}^{(|I(\bar x)| + p) \times n},
\]

which has full row rank by LICQ($\bar x$). Hence we can add rows to obtain a nonsingular matrix $A(\bar x) \in \mathbb{R}^{n \times n}$, and the linear equation

\[
A(\bar x) d = \begin{pmatrix} -e \\ 0 \end{pmatrix},
\]

with $e \in \mathbb{R}^{|I(\bar x)|}$ the vector of all ones, has a solution $\hat d$, which fulfills the requirements of MFCQ($\bar x$).

MFCQ($\bar x$) $\implies$ ACQ($\bar x$): In view of (1), it suffices to show that $L(\bar x) \subseteq T(\bar x)$. Hence, let $d \in L(\bar x)$, and let $\hat d$ be the vector given by MFCQ($\bar x$), i.e.,

\[
\nabla g_i(\bar x)^T \hat d < 0 \ (i \in I(\bar x)), \qquad \nabla h_j(\bar x)^T \hat d = 0 \ (j = 1, \dots, p).
\]

Put $d(\delta) := d + \delta \hat d$ for $\delta > 0$. Then for all $\delta > 0$ we have

\[
\nabla g_i(\bar x)^T d(\delta) < 0 \ (i \in I(\bar x)), \qquad \nabla h_j(\bar x)^T d(\delta) = 0 \ (j = 1, \dots, p).
\]

We claim that this implies $d(\delta) \in T(\bar x)$ for all $\delta > 0$: by Lemma 6 there exists a $C^1$-curve $x : (-\varepsilon, \varepsilon) \to \mathbb{R}^n$ such that $x(t) \in \Omega$ for all $t \in [0, \varepsilon)$, $x(0) = \bar x$ and $x'(0) = d(\delta)$. For an arbitrary sequence $\{t_k\} \downarrow 0$ and $x^k := x(t_k)$ we hence infer that $x^k \in \Omega$ with $x^k \to \bar x$, and thus

\[
d(\delta) = x'(0) = \lim_{k \to \infty} \frac{x(t_k) - \bar x}{t_k} = \lim_{k \to \infty} \frac{x^k - \bar x}{t_k} \in T(\bar x).
\]

Since $T(\bar x)$ is closed, this implies $d = \lim_{\delta \downarrow 0} d(\delta) \in T(\bar x)$.

ACQ($\bar x$) $\implies$ GCQ($\bar x$): Follows immediately from the definitions, since $T(\bar x) = L(\bar x)$ implies $T(\bar x)^\circ = L(\bar x)^\circ$. $\square$

We would like to point out that a variety of other constraint qualifications for (NLP) occur in the literature. A very comprehensive survey of this topic is given in [3].
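The completion argument in the first step of the proof is fully constructive. One way to realize it numerically (a sketch assuming NumPy; appending an orthonormal basis of the null space of the gradient matrix is our particular choice, any completion to a nonsingular matrix works) is the following:

```python
import numpy as np

def mfcq_direction_from_licq(G_act, H):
    """Construct an MFCQ direction as in the proof of Theorem 7:
    complete the gradient matrix (full row rank under LICQ) to a
    nonsingular A and solve A d = (-e, 0)."""
    M = np.vstack([G_act, H])
    k, n = M.shape
    # under LICQ, rank(M) = k, so the last n - k rows of Vt form an
    # orthonormal basis of ker(M); appending them makes A nonsingular
    _, _, Vt = np.linalg.svd(M)
    A = np.vstack([M, Vt[k:]])
    rhs = np.concatenate([-np.ones(G_act.shape[0]),
                          np.zeros(H.shape[0] + (n - k))])
    return np.linalg.solve(A, rhs)

# x1^2 + x2^2 - 2 <= 0, active at xbar = (-1, -1), no equalities:
G_act = np.array([[-2.0, -2.0]])
H = np.zeros((0, 2))
d = mfcq_direction_from_licq(G_act, H)
print(G_act @ d)   # [-1.]: strictly negative, as required by MFCQ
```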

References

[1] J. V. Burke: Numerical Optimization. Course Notes, AMath/Math 516, University of Washington, Spring Term 2012.

[2] F. J. Gould and J. W. Tolle: A necessary and sufficient qualification for constrained optimization. SIAM J. Appl. Math. 20, 1971, pp. 164-172.

[3] D. W. Peterson: A review of constraint qualifications in finite-dimensional spaces. SIAM Review 15, 1973, pp. 639-654.