SECTION C: CONTINUOUS OPTIMISATION LECTURE 11: THE METHOD OF LAGRANGE MULTIPLIERS


HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY, HILARY TERM 2005. DR RAPHAEL HAUSER.

1. Examples. In this lecture we take a closer look at some examples and illustrate how the optimality conditions are applied in the method of Lagrange multipliers.

Example 11.1. Use the method of Lagrange multipliers to solve the problem

    min_{x ∈ R^2}  ‖x‖                                          (11.1)
    s.t.  ‖x − [0, 1]^T‖ ≤ 1,
          ‖x − [0, 1/2]^T‖ ≥ √3/2.                              (11.2)

[Fig. 11.1. The feasible domain F is shaded; the points x̂, x̄ and x̃ are marked.]

Clearly the problem is equivalent to

    min_{x ∈ R^2}  f(x) = x_1^2 + x_2^2
    s.t.  g_1(x) = 1 − x_1^2 − (x_2 − 1)^2 ≥ 0,
          g_2(x) = x_1^2 + (x_2 − 1/2)^2 − 3/4 ≥ 0.

We have

    ∇f(x)   = [2x_1, 2x_2]^T,
    ∇g_1(x) = [−2x_1, −2(x_2 − 1)]^T,
    ∇g_2(x) = [2x_1, 2x_2 − 1]^T,

and the Lagrangian is

    L(x, λ) = x_1^2 + x_2^2 − λ_1 (1 − x_1^2 − (x_2 − 1)^2) − λ_2 (x_1^2 + (x_2 − 1/2)^2 − 3/4),

with gradient

    ∇_x L(x, λ) = [ 2x_1 (1 + λ_1 − λ_2),  2x_2 + 2λ_1 (x_2 − 1) − λ_2 (2x_2 − 1) ]^T.
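As a quick numerical illustration before working through the KKT conditions, one can evaluate f over feasible grid points and confirm that the smallest value is close to 1 and is attained near (±√3/2, 1/2). The sketch below assumes NumPy is available; the grid bounds and resolution are arbitrary choices made only for this check.

import numpy as np

# Problem data of Example 11.1 in the g_i(x) >= 0 form used above.
f  = lambda x: x[0]**2 + x[1]**2
g1 = lambda x: 1.0 - x[0]**2 - (x[1] - 1.0)**2
g2 = lambda x: x[0]**2 + (x[1] - 0.5)**2 - 0.75

# Crude grid search over the feasible set (grid choice is arbitrary).
xs = np.linspace(-1.5, 1.5, 601)
ys = np.linspace(-0.5, 2.5, 601)
X, Y = np.meshgrid(xs, ys)
feasible = (g1([X, Y]) >= 0) & (g2([X, Y]) >= 0)
F = np.where(feasible, f([X, Y]), np.inf)
i, j = np.unravel_index(np.argmin(F), F.shape)
print("best feasible grid point:", (X[i, j], Y[i, j]), "f =", F[i, j])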

The KKT conditions are the following:

    2x_1 (1 + λ_1 − λ_2) = 0                                    (11.3)
    2x_2 + 2λ_1 (x_2 − 1) − λ_2 (2x_2 − 1) = 0                  (11.4)
    1 − x_1^2 − (x_2 − 1)^2 ≥ 0                                 (11.5)
    x_1^2 + (x_2 − 1/2)^2 − 3/4 ≥ 0                             (11.6)
    λ_1 (1 − x_1^2 − (x_2 − 1)^2) = 0                           (11.7)
    λ_2 (x_1^2 + (x_2 − 1/2)^2 − 3/4) = 0                       (11.8)
    λ_1 ≥ 0                                                     (11.9)
    λ_2 ≥ 0.                                                    (11.10)

Let us find all the KKT points. We need to distinguish four cases.

If A(x) = ∅ then λ = 0, and (11.3), (11.4) imply x = 0, which violates (11.6). Thus there are no KKT points that correspond to A(x) = ∅.

If A(x) = {1} then λ_2 = 0, and (11.3), (11.4) together with (11.5) at equality imply

    2x_1 (1 + λ_1) = 0                                          (11.11)
    2x_2 + 2λ_1 (x_2 − 1) = 0                                   (11.12)
    x_1^2 + (x_2 − 1)^2 = 1.                                    (11.13)

(11.11) implies that either x_1 = 0 or λ_1 = −1. The second case contradicts (11.9), so we may assume that the first case holds. But then (11.13) implies x_2 ∈ {0, 2}. If x_2 = 0 then x violates (11.6). Thus we must have x_2 = 2. But then (11.12) implies λ_1 = −2, which contradicts (11.9). Thus there are no KKT points corresponding to A(x) = {1}.

If A(x) = {2} then λ_1 = 0, and (11.3), (11.4) together with (11.6) at equality become

    2x_1 (1 − λ_2) = 0                                          (11.14)
    2x_2 − λ_2 (2x_2 − 1) = 0                                   (11.15)
    x_1^2 + (x_2 − 1/2)^2 = 3/4.                                (11.16)

λ_2 = 1 is impossible, since it reduces (11.15) to 1 = 0, so (11.14) forces x_1 = 0, and (11.16) then gives x_2 = (1 ± √3)/2. Only x_2 = (1 + √3)/2 satisfies (11.5), and (11.15) then yields λ_2 = 1 + √3/3. Thus

    x̂ = [0, (1 + √3)/2]^T,    λ̂ = [0, 1 + √3/3]^T.

It is easily checked that (x̂, λ̂) satisfies (11.3)–(11.10) and hence is a KKT point. Moreover, the LICQ holds at x̂ because ∇g_2(x̂) = [0, √3]^T ≠ 0.
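The case A(x) = {2} can also be double-checked numerically. In the sketch below (NumPy assumed; the helper grad_L is introduced only for this check), stationarity at x̂ and activity of the constraints are verified, and the discarded root of (11.16) is seen to violate (11.5).

import numpy as np

g1 = lambda x: 1.0 - x[0]**2 - (x[1] - 1.0)**2
g2 = lambda x: x[0]**2 + (x[1] - 0.5)**2 - 0.75

def grad_L(x, lam):
    # gradient of L = f - lam_1*g_1 - lam_2*g_2 for Example 11.1
    return np.array([2*x[0]*(1 + lam[0] - lam[1]),
                     2*x[1] + 2*lam[0]*(x[1] - 1.0) - lam[1]*(2*x[1] - 1.0)])

x_hat   = np.array([0.0, (1 + np.sqrt(3)) / 2])
lam_hat = np.array([0.0, 1 + np.sqrt(3) / 3])
print(grad_L(x_hat, lam_hat))     # (0, 0): stationarity holds at x_hat
print(g1(x_hat), g2(x_hat))       # g1 > 0 (inactive), g2 = 0 (active)

x_other = np.array([0.0, (1 - np.sqrt(3)) / 2])
print(g1(x_other))                # negative: the second root of (11.16) violates (11.5)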

If A(x) = {1, 2} then (11.5) and (11.6) must hold at equality, that is,

    x_1^2 + (x_2 − 1)^2 = 1,
    x_1^2 + (x_2 − 1/2)^2 = 3/4.

This system of equations implies x_2 = 1/2 and x_1 = ±√3/2. Let us analyse the case x̄ = [√3/2, 1/2]^T only, as the two cases are similar. (11.3) and (11.4) imply

    √3 (1 + λ_1 − λ_2) = 0,
    1 − λ_1 = 0,

which implies λ̄ = [1, 2]^T. It is easily checked that (x̄, λ̄) satisfies (11.3)–(11.10) and hence is a KKT point. Likewise, (x̃, λ̃) is a KKT point, where x̃ = [−√3/2, 1/2]^T and λ̃ = λ̄. Furthermore, the LICQ holds at both points, because ∇g_1(x̄) = [−√3, 1]^T and ∇g_2(x̄) = [√3, 0]^T are linearly independent, and likewise ∇g_1(x̃) = [√3, 1]^T and ∇g_2(x̃) = [−√3, 0]^T are linearly independent.

All in all we have found three KKT points. It would be easy to evaluate f at all three points to find that x̄ and x̃ are global minimisers of (11.1), since f(x̄) = f(x̃) = 1 while f(x̂) = 1 + √3/2. It can also be seen by inspection that x̂ is not a local minimiser. Let us now show that this information can also be derived from second order information.

Since the LICQ holds at x̂, the feasible exit directions from x̂ are characterised by

    d ≠ 0,    d^T ∇g_2(x̂) ≥ 0,

which is equivalent to

    d ≠ 0,    d_2 ≥ 0.

If d_2 > 0 then λ̂_2 d^T ∇g_2(x̂) = (1 + √3) d_2 > 0, so we do not need to check any additional conditions for such directions d. However, if d_2 = 0 then λ̂_2 d^T ∇g_2(x̂) = 0, and since A(x̂) = {2}, this shows that d is a feasible exit direction for which the second order necessary optimality condition

    d^T D^2_{xx} L(x̂, λ̂) d ≥ 0

has to be satisfied. But note that this condition is violated, because

    d^T D^2_{xx} L(x̂, λ̂) d = d_1^2 ∂^2_{x_1 x_1} L(x̂, λ̂) = −(2/√3) d_1^2 < 0.

Since x̂ fails to satisfy the second order necessary optimality conditions, it cannot be a local minimiser of (11.1).
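The second order computation at x̂, and the multipliers found at x̄, can be confirmed in the same spirit. The sketch below (NumPy assumed; grad_L and hess_L are illustrative helpers) uses the identity D^2_{xx} L(x, λ) = 2 (1 + λ_1 − λ_2) I, which follows directly from the problem data of Example 11.1.

import numpy as np

def grad_L(x, lam):
    # gradient of L = f - lam_1*g_1 - lam_2*g_2 for Example 11.1
    return np.array([2*x[0]*(1 + lam[0] - lam[1]),
                     2*x[1] + 2*lam[0]*(x[1] - 1.0) - lam[1]*(2*x[1] - 1.0)])

hess_L = lambda lam: 2.0 * (1 + lam[0] - lam[1]) * np.eye(2)   # Hessian of the Lagrangian

x_bar, lam_bar = np.array([np.sqrt(3)/2, 0.5]), np.array([1.0, 2.0])
x_hat, lam_hat = np.array([0.0, (1 + np.sqrt(3))/2]), np.array([0.0, 1 + np.sqrt(3)/3])

print(grad_L(x_bar, lam_bar))      # (0, 0): x_bar is stationary with multipliers (1, 2)

d = np.array([1.0, 0.0])           # critical direction at x_hat (d_2 = 0)
print(d @ hess_L(lam_hat) @ d)     # approx -1.155 < 0: x_hat fails the necessary condition

# At x_bar both multipliers are positive and grad g_1(x_bar), grad g_2(x_bar) are
# linearly independent, so the only direction with d^T grad g_i = 0 for i = 1, 2 is d = 0.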

Now for x̄, where the LICQ holds, the set of feasible exit directions is characterised by

    d ≠ 0,    d^T ∇g_1(x̄) ≥ 0,    d^T ∇g_2(x̄) ≥ 0,

which is equivalent to

    d ≠ 0,    d_1 ≥ 0,    d_2 ≥ √3 d_1.

But for any d that satisfies these conditions we have

    λ̄_1 d^T ∇g_1(x̄) + λ̄_2 d^T ∇g_2(x̄) = (d_2 − √3 d_1) + 2√3 d_1 = √3 d_1 + d_2 > 0,

and since both terms on the left-hand side are nonnegative, at least one of them must be strictly positive. Thus the set of feasible exit directions that satisfy Condition (10.7) from Lecture 10 is the empty set. This shows that the sufficient optimality conditions are satisfied at x̄ and that this must be a strict local minimiser. Likewise, one finds that the sufficient optimality conditions hold at x̃.

Example 11.2. Consider the minimisation problem

    min  0.1 (x_1 − 4)^3 + x_2^2
    s.t.  x_1^2 − x_2^2 ≥ 64.

(i) Does this problem have a global minimiser?
(ii) Set up the KKT conditions for this problem.
(iii) Find x* and a Lagrange multiplier λ* so that (x*, λ*) satisfies the KKT conditions.
(iv) Is the LICQ satisfied at x*?
(v) Characterise the set of feasible exit directions from x*.
(vi) Check that the sufficient optimality conditions hold at x*, to show that x* is a local minimiser.

(i) The objective function is unbounded below on the feasible half-line {x : x_2 = 0, x_1 ≤ −8}. Thus no global solution exists, but we can find a local minimum with the method of Lagrange multipliers.

(ii) Writing the constraint as g_1(x) = x_1^2 − x_2^2 − 64 ≥ 0, we get

    ∇_x L(x, λ) = [0.3 (x_1 − 4)^2 − 2λ x_1,  2x_2 + 2λ x_2]^T,
    D^2_{xx} L(x, λ) = diag(0.6 (x_1 − 4) − 2λ,  2 + 2λ).

The KKT conditions are

    0.3 (x_1 − 4)^2 − 2λ x_1 = 0
    2x_2 + 2λ x_2 = 0
    x_1^2 − x_2^2 − 64 ≥ 0
    λ (x_1^2 − x_2^2 − 64) = 0
    λ ≥ 0.

(iii) For A(x) = ∅ we have λ = 0 and hence x_1 = 4, x_2 = 0, which violates the constraint. Thus there are no KKT points corresponding to this case. If A(x) = {1} then the second stationarity equation reads 2x_2 (1 + λ) = 0 and forces x_2 = 0, so the active constraint gives x_1 = ±8. For x_1 = −8 the first stationarity equation would require λ = −2.7 < 0, so the unique solution is

    x* = [8, 0]^T,    λ* = 0.3.

(iv) The LICQ holds at x* because ∇g_1(x*) = [16, 0]^T ≠ 0.

(v) Since λ* > 0, the set of feasible exit directions from x* for which Condition (10.7) from Lecture 10 holds is

    {d ∈ R^2 : d_1 = 0, d_2 ≠ 0}.

(vi) For any d from that set we have

    d^T D^2_{xx} L(x*, λ*) d = [0, d_2] diag(1.8, 2.6) [0, d_2]^T = 2.6 d_2^2 > 0.

Therefore the sufficient optimality conditions are satisfied, and x* is a strict local minimiser.
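As a sanity check on Example 11.2, the problem can be handed to a general-purpose solver. The sketch below uses scipy.optimize.minimize with the SLSQP method; SciPy is assumed to be available, and the starting point is an arbitrary feasible point on the right-hand branch of the constraint. The solver should return a point close to x* = (8, 0) with objective value close to 6.4.

import numpy as np
from scipy.optimize import minimize

f = lambda x: 0.1 * (x[0] - 4.0)**3 + x[1]**2
g = lambda x: x[0]**2 - x[1]**2 - 64.0          # constraint in the form g(x) >= 0

res = minimize(f, x0=np.array([10.0, 1.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": g}])
print(res.x, res.fun)                            # approximately (8, 0) and 6.4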

Example 11.3. Consider the half space defined by H = {x ∈ R^n : a^T x + b ≥ 0}, where a ∈ R^n and b ∈ R are given. Formulate and solve the optimisation problem of finding the point x in H that has the smallest Euclidean norm.

The problem is of course trivial to solve directly, but we want to see how the Lagrange multiplier approach solves the problem blindly. The problem is equivalent to solving

    min  x^T x
    s.t.  g(x) = a^T x + b ≥ 0.

We may assume that a ≠ 0; otherwise the problem is trivial. The Lagrangian of this problem is

    L(x, λ) = x^T x − λ (a^T x + b).

The gradient ∇g(x) = a is nonzero everywhere, and hence the LICQ holds at all feasible points. The KKT conditions are

    2x − λ a = 0                                                (11.17)
    λ (a^T x + b) = 0                                           (11.18)
    a^T x + b ≥ 0                                               (11.19)
    λ ≥ 0.                                                      (11.20)

If λ = 0 then x = 0, and then (11.19) implies b ≥ 0. Either this is true, and then (x, λ) = (0, 0) satisfies the KKT conditions, or else b < 0, and then λ = 0 is not a viable choice. If λ > 0 then x = (λ/2) a ≠ 0 and a^T x + b = (λ/2) ‖a‖^2 + b = 0, and then b < 0, which is either true, in which case (x, λ) = (−(b/‖a‖^2) a, −2b/‖a‖^2) satisfies the KKT conditions, or else λ > 0 is not a viable choice. Thus we have found that both in the case b ≥ 0 and in the case b < 0 there is exactly one point satisfying the KKT conditions, and since the KKT conditions must hold at the minimum of our optimisation problem, the resulting points must be local minimisers. Since the problem is convex, these are also the global minimisers in both cases.
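The closed-form KKT solution of Example 11.3 is easily wrapped into a small routine. In the sketch below, the function name nearest_point_in_halfspace and the test data a and b are illustrative choices only; NumPy is assumed.

import numpy as np

def nearest_point_in_halfspace(a, b):
    # KKT solution of min x^T x  s.t.  a^T x + b >= 0  (a != 0 assumed)
    if b >= 0:
        return np.zeros_like(a)            # the origin is already feasible
    return -(b / np.dot(a, a)) * a         # projection onto the hyperplane a^T x + b = 0

a, b = np.array([1.0, 2.0, -2.0]), -3.0
x = nearest_point_in_halfspace(a, b)
print(x, a @ x + b)                        # x = (1/3, 2/3, -2/3), constraint active (value 0)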

Our last example illustrates that even in the case with only equality constraints the method of Lagrange multipliers is the right tool to solve constrained optimisation problems: eliminating some of the variables can lead to the wrong result.

Example 11.4. Solve the problem

    min_{x ∈ R^2}  x_1 + x_2
    s.t.  x_1^2 + x_2^2 = 1

by eliminating the variable x_2. Show that the choice of sign for a square root operation during the elimination process is critical; the wrong choice leads to an incorrect answer.

A sketch reveals that (−1/√2, −1/√2) is the unique local minimiser. Let us eliminate x_2 = √(1 − x_1^2) and rename x_1 as x. The problem becomes

    min_{x ∈ R}  f(x) = x + √(1 − x^2).

Then f′(x) = 1 − x/√(1 − x^2), and f′(x) = 0 for √(1 − x^2) = x, which is satisfied for x = 1/√2, leading to the point (x_1, x_2) = (1/√2, 1/√2). This is the constrained maximiser, not the minimiser we are looking for: the minimiser does not lie on the branch of the constraint chosen for the elimination. The elimination x_2 = −√(1 − x_1^2), on the other hand, leads to f(x) = x − √(1 − x^2) with f′(x) = 1 + x/√(1 − x^2), whose unique zero x = −1/√2 gives the point (−1/√2, −1/√2). Keeping track of both signs, x_2 = ±√(1 − x_1^2), thus produces the solutions (x_1, x_2) = (1/√2, 1/√2) and (x_1, x_2) = (−1/√2, −1/√2), both of which are true stationary points of the constrained problem, one of them a minimiser, the other a maximiser. The method of Lagrange multipliers recovers both points directly, without any choice of branch.
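Finally, the effect of the branch choice in Example 11.4 can be seen numerically. The sketch below (NumPy assumed; the grid size is an arbitrary choice) minimises the two reduced one-variable functions over [−1, 1] and compares the objective values at the two stationary points.

import numpy as np

# Reduced objectives for the sign choices x2 = +sqrt(1 - x1^2) and x2 = -sqrt(1 - x1^2).
f_plus  = lambda t: t + np.sqrt(1 - t**2)
f_minus = lambda t: t - np.sqrt(1 - t**2)

t = np.linspace(-1.0, 1.0, 200001)
print("best on upper branch:", t[np.argmin(f_plus(t))])    # an endpoint, t = -1, i.e. the point (-1, 0)
print("best on lower branch:", t[np.argmin(f_minus(t))])   # t = -1/sqrt(2), the true minimiser

s = 1 / np.sqrt(2)
print("f at ( s,  s) =",  2 * s)    #  sqrt(2): the stationary point on the wrong branch (the maximiser)
print("f at (-s, -s) =", -2 * s)    # -sqrt(2): the minimiser, which the multiplier method finds directly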