Lectures 9 and 10: Constrained optimization problems and their optimality conditions


Coralia Cartis, Mathematical Institute, University of Oxford

C6.2/B2: Continuous Optimization

Problems and solutions

minimize $f(x)$ subject to $x \in \Omega \subseteq \mathbb{R}^n$, where $f : \Omega \to \mathbb{R}$ is (sufficiently) smooth. $f$: objective; $x$: variables. $\Omega$: feasible set, determined by finitely many (equality and/or inequality) constraints.

$x^*$ global minimizer of $f$ over $\Omega$ $\iff$ $f(x) \ge f(x^*)$ for all $x \in \Omega$.

$x^*$ local minimizer of $f$ over $\Omega$ $\iff$ there exists $\mathcal{N}(x^*,\delta)$ such that $f(x) \ge f(x^*)$ for all $x \in \Omega \cap \mathcal{N}(x^*,\delta)$, where $\mathcal{N}(x^*,\delta) := \{x \in \mathbb{R}^n : \|x - x^*\| \le \delta\}$.

Example problem in one dimension

Example: $\min f(x)$ subject to $a \le x \le b$.

[Figure: graph of $f$ over the interval $[a,b]$, with interior points $x_1$ and $x_2$ marked.]

The feasible region $\Omega$ is the interval $[a,b]$. The point $x_1$ is the global minimizer; $x_2$ is a local (non-global) minimizer; $x = a$ is a constrained local minimizer.

An example of a nonlinear constrained problem

$$\min_{x \in \mathbb{R}^2} \; (x_1 - 2)^2 + \left(x_2 - \tfrac{1}{2}(3 - \sqrt{5})\right)^2 \quad \text{subject to} \quad c_1(x) := x_2 - x_1^2 \ge 0, \;\; c_2(x) := -x_1 - x_2 + 1 \ge 0.$$

[Figure: contours of $f$ and the feasible set $\Omega$, bounded by the parabola $c_1 = 0$ and the line $c_2 = 0$.]

$x^* = \tfrac{1}{2}(-1 + \sqrt{5},\; 3 - \sqrt{5})$; $\Omega$ feasible set.
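As a quick numerical cross-check of this example, here is a minimal sketch using scipy.optimize.minimize with the SLSQP method; the solver, starting point and tolerances are our choices, not part of the lecture:

```python
# A sketch: solve the example problem numerically and compare with the
# analytic solution x* = 0.5(-1 + sqrt(5), 3 - sqrt(5)) from the slides.
import numpy as np
from scipy.optimize import minimize

r5 = np.sqrt(5.0)

def f(x):
    return (x[0] - 2.0)**2 + (x[1] - 0.5*(3.0 - r5))**2

# SciPy's 'ineq' constraints have the form c(x) >= 0, matching (CP).
cons = [{'type': 'ineq', 'fun': lambda x: x[1] - x[0]**2},       # c1(x) >= 0
        {'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[1]}]    # c2(x) >= 0

res = minimize(f, x0=np.zeros(2), method='SLSQP', constraints=cons)
x_star = 0.5 * np.array([-1.0 + r5, 3.0 - r5])
print(res.x, x_star)   # the two should agree to solver tolerance
```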

Optimality conditions for constrained problems == algebraic characterizations of solutions suitable for computations. They:

provide a way to guarantee that a candidate point is optimal (sufficient conditions);
indicate when a point is not optimal (necessary conditions).

$$\text{(CP)} \qquad \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad c_E(x) = 0, \; c_I(x) \ge 0,$$

where $f : \mathbb{R}^n \to \mathbb{R}$, $c_E : \mathbb{R}^n \to \mathbb{R}^m$ and $c_I : \mathbb{R}^n \to \mathbb{R}^p$ are (sufficiently) smooth; $c_I(x) \ge 0 \iff c_i(x) \ge 0$, $i \in I$.

$\Omega := \{x : c_E(x) = 0, \; c_I(x) \ge 0\}$: feasible set of the problem.

Optimality conditions for constrained problems

Unconstrained problem: $\hat x$ stationary point ($\nabla f(\hat x) = 0$). Constrained problem: $\hat x$ Karush-Kuhn-Tucker (KKT) point.

Definition: $\hat x$ is a KKT point of (CP) if there exist $\hat y \in \mathbb{R}^m$ and $\hat\lambda \in \mathbb{R}^p$ such that $(\hat x, \hat y, \hat\lambda)$ satisfies

$$\nabla f(\hat x) = \sum_{j \in E} \hat y_j \nabla c_j(\hat x) + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x),$$
$$c_E(\hat x) = 0, \quad c_I(\hat x) \ge 0, \quad \hat\lambda_i \ge 0 \;\text{ and }\; \hat\lambda_i c_i(\hat x) = 0 \text{ for all } i \in I.$$

Let $\mathcal{A} := E \cup \{i \in I : c_i(\hat x) = 0\}$ be the index set of active constraints at $\hat x$; $c_j(\hat x) > 0$ (inactive constraint at $\hat x$) $\implies \hat\lambda_j = 0$. Then $\sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x) = \sum_{i \in I \cap \mathcal{A}} \hat\lambda_i \nabla c_i(\hat x)$.

Let $J(x) = \left(\nabla c_i(x)^T\right)_i$ denote the Jacobian matrix of the constraints $c$. Thus $\sum_{j \in E} \hat y_j \nabla c_j(\hat x) = J_E(\hat x)^T \hat y$ and $\sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x) = J_I(\hat x)^T \hat\lambda$.

Optimality conditions for constrained problems...

$\hat x$ KKT point: $\hat y$ and $\hat\lambda$ are the Lagrange multipliers of the equality and inequality constraints, respectively. $\hat y$ and $\hat\lambda$: sensitivity analysis.

$L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$, the Lagrangian function of (CP):
$$L(x, y, \lambda) := f(x) - y^T c_E(x) - \lambda^T c_I(x), \quad x \in \mathbb{R}^n.$$

Thus $\nabla_x L(x, y, \lambda) = \nabla f(x) - J_E(x)^T y - J_I(x)^T \lambda$, and

$\hat x$ KKT point of (CP) $\implies \nabla_x L(\hat x, \hat y, \hat\lambda) = 0$ (i.e., $\hat x$ is a stationary point of $L(\cdot, \hat y, \hat\lambda)$). $\to$ duality theory...

An illustration of the KKT conditions

$$(*) \qquad \min_{x \in \mathbb{R}^2} (x_1 - 2)^2 + \left(x_2 - \tfrac{1}{2}(3 - \sqrt 5)\right)^2 \quad \text{subject to} \quad c_1(x) = x_2 - x_1^2 \ge 0, \; c_2(x) = -x_1 - x_2 + 1 \ge 0.$$

$x^* = \tfrac{1}{2}(-1 + \sqrt 5, 3 - \sqrt 5)$: global solution of $(*)$, KKT point of $(*)$.

$\nabla f(x^*) = (-5 + \sqrt 5, 0)$, $\nabla c_1(x^*) = (1 - \sqrt 5, 1)$, $\nabla c_2(x^*) = (-1, -1)$.

[Figure: $\Omega$ with the gradients $\nabla f(x^*)$, $\nabla c_1(x^*)$, $\nabla c_2(x^*)$ drawn at $x^*$.]

$\nabla f(x^*) = \lambda_1^* \nabla c_1(x^*) + \lambda_2^* \nabla c_2(x^*)$, with $\lambda_1^* = \lambda_2^* = \sqrt 5 - 1 > 0$.

$c_1(x^*) = c_2(x^*) = 0$: both constraints are active at $x^*$.
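These quantities are easy to verify numerically. A minimal sketch, assuming the $c_1$, $c_2$ labelling above: since both constraints are active at $x^*$, the multipliers solve the two-by-two stationarity system, which we solve by least squares.

```python
# Verifying the KKT conditions at x* for the example above (a sketch).
import numpy as np

r5 = np.sqrt(5.0)
x = 0.5 * np.array([-1.0 + r5, 3.0 - r5])         # candidate KKT point

grad_f  = np.array([2.0*(x[0] - 2.0), 2.0*(x[1] - 0.5*(3.0 - r5))])
grad_c1 = np.array([-2.0*x[0], 1.0])              # gradient of c1(x) = x2 - x1^2
grad_c2 = np.array([-1.0, -1.0])                  # gradient of c2(x) = 1 - x1 - x2

# Both constraints are active, so solve grad_f = lam1*grad_c1 + lam2*grad_c2.
A = np.column_stack([grad_c1, grad_c2])
lam, *_ = np.linalg.lstsq(A, grad_f, rcond=None)

print(lam)                                 # both approx sqrt(5) - 1 > 0
print(np.linalg.norm(grad_f - A @ lam))    # stationarity residual approx 0
```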

An illustration of the KKT conditions...

$$(*) \qquad \min_{x \in \mathbb{R}^2} (x_1 - 2)^2 + \left(x_2 - \tfrac{1}{2}(3 - \sqrt 5)\right)^2 \quad \text{subject to} \quad c_1(x) = x_2 - x_1^2 \ge 0, \; c_2(x) = -x_1 - x_2 + 1 \ge 0.$$

$x := (0, 0)$ is NOT a KKT point of $(*)$!

$c_1(x) = 0$: active at $x$. $c_2(x) = 1$: inactive at $x$ $\implies \lambda_2 = 0$ and $\nabla f(x) = \lambda_1 \nabla c_1(x)$, with $\lambda_1 \ge 0$.

[Figure: $\nabla f(0)$, $\nabla c_1(0)$ and $\nabla c_2(0)$ drawn at the origin.]

Contradiction, since $\nabla f(x) = (-4, \sqrt 5 - 3)$ is not a nonnegative multiple of $\nabla c_1(x) = (0, 1)$.

Optimality conditions for constrained problems...

In general, we need the constraints/feasible set of (CP) to satisfy a regularity assumption, called a constraint qualification, in order to derive optimality conditions.

Theorem 19 (First order necessary conditions) Under suitable constraint qualifications, $x^*$ local minimizer of (CP) $\implies$ $x^*$ KKT point of (CP).

Proof of Theorem 19 (for equality constraints only): Let $I = \emptyset$, and so we must show that $c_E(x^*) = 0$ (which is trivial, as $x^*$ is feasible) and $\nabla f(x^*) = J_E(x^*)^T y^*$ for some $y^* \in \mathbb{R}^m$.

Consider feasible perturbations/paths $x(\alpha)$ around $x^*$, where $\alpha$ is a (small) scalar, $x(\cdot)$ is twice continuously differentiable, $x(0) = x^*$ and $c(x(\alpha)) = 0$. $(*)$ [$(*)$ requires constraint qualifications]

Then, by Taylor expansion, $x(\alpha) = x^* + \alpha s + \tfrac{1}{2}\alpha^2 p + O(\alpha^3)$. $(**)$ [$(**)$: the $\alpha^2$ and higher-order terms are not needed here; only for the 2nd order conditions later]

Optimality conditions for constrained problems...

Proof of Theorem 19 (for equality constraints only, continued): For any $i \in E$, by Taylor's theorem for $c_i(x(\alpha))$ around $x^*$,

$$0 = c_i(x(\alpha)) = c_i\!\left(x^* + \alpha s + \tfrac{1}{2}\alpha^2 p + O(\alpha^3)\right) = c_i(x^*) + \nabla c_i(x^*)^T\!\left(\alpha s + \tfrac{1}{2}\alpha^2 p\right) + \tfrac{1}{2}\alpha^2 s^T \nabla^2 c_i(x^*) s + O(\alpha^3)$$
$$= \alpha \nabla c_i(x^*)^T s + \tfrac{1}{2}\alpha^2\!\left[\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s\right] + O(\alpha^3),$$

where we used $c_i(x^*) = 0$. Thus, for all $i \in E$, $\nabla c_i(x^*)^T s = 0$ and $\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s = 0$, and so $J_E(x^*) s = 0$.

Now, expanding $f$, we deduce

$$f(x(\alpha)) = f(x^*) + \nabla f(x^*)^T\!\left(\alpha s + \tfrac{1}{2}\alpha^2 p\right) + \tfrac{1}{2}\alpha^2 s^T \nabla^2 f(x^*) s + O(\alpha^3) = f(x^*) + \alpha \nabla f(x^*)^T s + \tfrac{1}{2}\alpha^2\!\left[\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s\right] + O(\alpha^3).$$

[The bracketed $\alpha^2$ terms are only needed for the 2nd order optimality conditions later.]

As $x(\alpha)$ is feasible, $f$ is unconstrained along $x(\alpha)$, and so $f'(x(0)) = \nabla f(x^*)^T s = 0$, since $x^*$ is a local minimizer along $x(\alpha)$. Thus $\nabla f(x^*)^T s = 0$ for all $s$ such that $J_E(x^*) s = 0$. (1)

Optimality conditions for constrained problems...

Proof of Theorem 19 (for equality constraints only, continued): Let $Z$ be a basis for the null space of $J_E(x^*)$. Then there exist $y^*$ and $\bar s$ such that

$$\nabla f(x^*) = J_E(x^*)^T y^* + Z \bar s. \quad (2)$$

From (1), $Z^T \nabla f(x^*) = 0$, and so from (2), $0 = Z^T J_E(x^*)^T y^* + Z^T Z \bar s$; furthermore, since $J_E(x^*) Z = 0$, we must have $Z^T Z \bar s = 0$. As $Z$ is a basis, it has full rank, and so $\bar s = 0$. We conclude from (2) that $\nabla f(x^*) = J_E(x^*)^T y^*$. $\square$

For (CP) with equalities only ($I = \emptyset$): $s$ is a feasible descent direction at $x \in \Omega$ if $\nabla f(x)^T s < 0$ and $J_E(x) s = 0$.

For (CP) in general: $s$ is a feasible descent direction at $x \in \Omega$ if $\nabla f(x)^T s < 0$, $J_E(x) s = 0$ and $\nabla c_i(x)^T s \ge 0$ for all $i \in I \cap \mathcal{A}(x)$.
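The null-space argument is easy to see numerically: at an equality-constrained minimizer, the reduced gradient $Z^T \nabla f(x^*)$ vanishes. A minimal sketch (our own check, not from the slides) on the problem $\min x_1 + x_2$ s.t. $x_1^2 + x_2^2 - 2 = 0$, which reappears as the example two slides below; its minimizer is $x^* = (-1, -1)$:

```python
# Reduced-gradient check at an equality-constrained minimizer (a sketch).
import numpy as np
from scipy.linalg import null_space

x = np.array([-1.0, -1.0])                # minimizer of x1 + x2 on the circle
grad_f = np.array([1.0, 1.0])
J = np.array([[2.0*x[0], 2.0*x[1]]])      # Jacobian of c(x) = x1^2 + x2^2 - 2

Z = null_space(J)                         # columns form a basis of null(J)
print(Z.T @ grad_f)                       # approx 0: grad_f lies in range(J^T)

# Recover the multiplier y* from grad_f = J^T y*: here y* = -1/2.
y, *_ = np.linalg.lstsq(J.T, grad_f, rcond=None)
print(y)
```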

Constraint qualifications

Proof of Th 19: we used (first-order) Taylor expansions to linearize $f$ and $c_i$ along feasible paths/perturbations $x(\alpha)$, etc. This is only correct if the linearized approximation captures the essential geometry of the feasible set. CQs ensure this is the case. Examples:

(CP) satisfies the Slater Constraint Qualification (SCQ) if there exists $\bar x$ such that $c_E(\bar x) = A\bar x - b = 0$ and $c_I(\bar x) > 0$ (i.e., $c_i(\bar x) > 0$ for all $i \in I$).

(CP) satisfies the Linear Independence Constraint Qualification (LICQ) if $\nabla c_i(x)$, $i \in \mathcal{A}(x)$, are linearly independent (at the relevant $x$).

Both SCQ and LICQ fail for $\Omega = \{(x_1, x_2) : c_1(x) = 1 - x_1^2 - (x_2 - 1)^2 \ge 0; \; c_2(x) = -x_2 \ge 0\} = \{(0,0)\}$: at $x = (0,0)$ we have $T_\Omega(x) = \{(0,0)\}$ and $\mathcal{F}(x) = \{(s_1, 0) : s_1 \in \mathbb{R}\}$. Thus $T_\Omega(x) \ne \mathcal{F}(x)$.
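A minimal numerical sketch of why LICQ fails here (our own check): both constraints are active at the origin, but their gradients are parallel.

```python
# LICQ check at x = (0,0) for the failing example (a sketch): the active
# constraint gradients span only a line, so the linearized directions F(x)
# (a whole line) overestimate the tangent cone T_Omega(x) = {0}.
import numpy as np

x = np.array([0.0, 0.0])
# c1(x) = 1 - x1^2 - (x2 - 1)^2 >= 0 and c2(x) = -x2 >= 0, both active at x.
grad_c1 = np.array([-2.0*x[0], -2.0*(x[1] - 1.0)])   # = (0, 2)
grad_c2 = np.array([0.0, -1.0])

G = np.vstack([grad_c1, grad_c2])
print(np.linalg.matrix_rank(G))   # 1 < 2 active constraints => LICQ fails
```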

Constraint qualifications... [See Chapter 12, Nocedal & Wright]

Tangent cone to $\Omega$ at $x$ [captures the geometry of $\Omega$]:
$$T_\Omega(x) = \{s : s \text{ is a limiting direction of a feasible sequence}\} = \left\{s : s = \lim_{k \to \infty} \frac{z_k - x}{t_k}, \text{ where } z_k \in \Omega, \; t_k > 0, \; t_k \to 0 \text{ and } z_k \to x \text{ as } k \to \infty\right\}.$$

Set of linearized feasible directions [captures the algebra of $\Omega$]:
$$\mathcal{F}(x) = \{s : s^T \nabla c_i(x) = 0, \; i \in E; \;\; s^T \nabla c_i(x) \ge 0, \; i \in I \cap \mathcal{A}(x)\}.$$

We want $T_\Omega(x) = \mathcal{F}(x)$ [ensured if a CQ holds].

Example: $\min_{(x_1, x_2)} x_1 + x_2$ s.t. $x_1^2 + x_2^2 - 2 = 0$.

Optimality conditions for constrained problems...

If the constraints of (CP) are linear in the variables, no constraint qualification is required.

Theorem 20 (First order necessary conditions for linearly constrained problems) Let $(c_E, c_I)(x) := Ax - b$ in (CP). Then $x^*$ local minimizer of (CP) $\implies$ $x^*$ KKT point of (CP).

Let $A = (A_E, A_I)$ and $b = (b_E, b_I)$, corresponding to the equality and inequality constraints. KKT conditions for linearly-constrained (CP): $x^*$ KKT point $\iff$ there exists $(y^*, \lambda^*)$ such that

$$\nabla f(x^*) = A_E^T y^* + A_I^T \lambda^*, \quad A_E x^* - b_E = 0, \quad A_I x^* - b_I \ge 0, \quad \lambda^* \ge 0, \quad (\lambda^*)^T (A_I x^* - b_I) = 0.$$

Optimality conditions for convex problems

(CP) is a convex programming problem if and only if $f(x)$ is a convex function, $c_i(x)$ is a concave function for all $i \in I$ and $c_E(x) = Ax - b$. [$c_i$ is a concave function $\iff$ $(-c_i)$ is a convex function.]

(CP) convex problem $\implies$ $\Omega$ is a convex set.

(CP) convex problem $\implies$ any local minimizer of (CP) is global.

First order necessary conditions are also sufficient for optimality when (CP) is convex.

Theorem 21 (Sufficient optimality conditions for convex problems) Let (CP) be a convex programming problem. Then $\hat x$ KKT point of (CP) $\implies$ $\hat x$ is a (global) minimizer of (CP).

Optimality conditions for convex problems

Proof of Theorem 21. $f$ convex $\implies$ $f(x) \ge f(\hat x) + \nabla f(\hat x)^T (x - \hat x)$ for all $x \in \mathbb{R}^n$. (1)

(1) + [$\nabla f(\hat x) = A^T \hat y + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x)$] $\implies$

$$f(x) \ge f(\hat x) + (A^T \hat y)^T (x - \hat x) + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x)^T (x - \hat x) = f(\hat x) + \hat y^T A (x - \hat x) + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x)^T (x - \hat x). \quad (2)$$

Let $x \in \Omega$ be arbitrary $\implies$ $Ax = b$ and $c_I(x) \ge 0$. $Ax = b$ and $A\hat x = b$ $\implies$ $A(x - \hat x) = 0$. (3)

$c_i$ concave $\implies$ $c_i(x) \le c_i(\hat x) + \nabla c_i(\hat x)^T (x - \hat x)$ $\implies$ $\nabla c_i(\hat x)^T (x - \hat x) \ge c_i(x) - c_i(\hat x)$ $\implies$ $\hat\lambda_i \nabla c_i(\hat x)^T (x - \hat x) \ge \hat\lambda_i \left(c_i(x) - c_i(\hat x)\right) = \hat\lambda_i c_i(x) \ge 0$, since $\hat\lambda \ge 0$, $\hat\lambda_i c_i(\hat x) = 0$ and $c_I(x) \ge 0$.

Thus, from (2) and (3), $f(x) \ge f(\hat x)$. $\square$

Optimality conditions for nonconvex problems

When (CP) is not convex, the KKT conditions are not in general sufficient for optimality; one also needs a positive definite Hessian of the Lagrangian function along feasible directions. More on second-order optimality conditions later on.

Optimality conditions for QP problems

A Quadratic Programming (QP) problem has the form

$$\text{(QP)} \qquad \min_{x \in \mathbb{R}^n} \; q + c^T x + \tfrac{1}{2} x^T H x \quad \text{s.t.} \quad Ax = b, \; \tilde A x \ge \tilde b.$$

$H$ positive semidefinite $\implies$ (QP) convex problem.

The KKT conditions for (QP): $\hat x$ KKT point of (QP) $\iff$ there exists $(\hat y, \hat\lambda) \in \mathbb{R}^m \times \mathbb{R}^p$ such that

$$H \hat x + c = A^T \hat y + \tilde A^T \hat\lambda, \quad A \hat x = b, \quad \tilde A \hat x \ge \tilde b, \quad \hat\lambda \ge 0, \quad \hat\lambda^T (\tilde A \hat x - \tilde b) = 0.$$

The earlier example of a nonlinear constrained problem is convex; removing the constraint $x_2 - x_1^2 \ge 0$ turns it into a convex (QP).
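For the equality-constrained case (no $\tilde A$), the KKT conditions above are a single linear system in $(\hat x, \hat y)$. A minimal sketch on made-up data; the problem instance is ours, purely illustrative:

```python
# Solving an equality-constrained convex QP via its KKT linear system
# (a sketch): min 1/2 x^T H x + c^T x  s.t.  Ax = b.
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 2.0]])    # positive definite => convex QP
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])                # single equality constraint x1 + x2 = 1
b = np.array([1.0])

# Stationarity Hx + c = A^T y and feasibility Ax = b, stacked as one system.
m = A.shape[0]
K = np.block([[H, -A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_hat, y_hat = sol[:2], sol[2:]
print(x_hat, y_hat)                       # x_hat = (-0.25, 1.25), y_hat = -2.5
```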

Linear Programming (LP)

Given $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$,

$$\text{(P)} \qquad \min_{x \in \mathbb{R}^n} \; c^T x \quad \text{subject to} \quad Ax = b, \; x \ge 0.$$

The objective and all constraints are linear in the variables. (P) is a special case of convex QP, but it is better to exploit structure by solving it as an LP. Any local solution of (P) is global.

Example: $\min_{x \in \mathbb{R}^3} x_1 + x_2$ subject to $x_2 + x_3 = 2$, $x \ge 0$.

Solution set: $\{x^* = (0, 0, 2)\}$; $x^*$ is a vertex of the feasible set of the LP. [Figure: the feasible set and the solution $x^*$.]
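A minimal sketch solving this example with scipy.optimize.linprog; the solver choice is ours (SciPy's default LP method is HiGHS), not part of the lecture:

```python
# Solve the example LP: min x1 + x2  s.t.  x2 + x3 = 2, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0, 0.0])          # objective x1 + x2
A_eq = np.array([[0.0, 1.0, 1.0]])     # constraint x2 + x3 = 2
b_eq = np.array([2.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)    # approx (0, 0, 2), a vertex of the feasible set
```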

Optimality conditions for LP problems

LP is a convex programming problem $\implies$ KKT conditions are necessary and sufficient for optimality. Constraints are linear $\implies$ no need for regularity assumptions (cf. Th. 20).

Theorem 22 (Necessary and sufficient optimality conditions for LP) $\hat x$ solution of (P) $\iff$ there exists $(\hat y, \hat\lambda) \in \mathbb{R}^m \times \mathbb{R}^n$:

$$\text{(OPT)} \qquad c = A^T \hat y + \hat\lambda, \quad A \hat x = b, \quad \hat x \ge 0, \quad \hat\lambda \ge 0, \quad \hat x_i \hat\lambda_i = 0, \; i = 1, \dots, n.$$

Proof. Just apply Theorems 20 and 21 to (P). $\square$

$A\hat x = b$, $\hat x \ge 0$: primal feasibility of $\hat x$.
$\hat x_i \hat\lambda_i = 0$, $i = 1, \dots, n$: complementary slackness equations.
$A^T \hat y + \hat\lambda = c$, $\hat\lambda \ge 0$: ??? (see the next slide)

Some duality theory for LP

The (OPT) system gives also the KKT conditions of the LP

$$\text{(D)} \qquad \max_{(y, \lambda) \in \mathbb{R}^m \times \mathbb{R}^n} \; b^T y \quad \text{subject to} \quad A^T y + \lambda = c, \; \lambda \ge 0.$$

(P): primal problem; (D): dual problem. $A^T \hat y + \hat\lambda = c$, $\hat\lambda \ge 0$: dual feasibility of $(\hat y, \hat\lambda)$.

Let $(x, y, \lambda)$ be feasible for (P) and (D). Then $(x, y, \lambda)$ is a solution of (P) and (D) if and only if $c^T x = b^T y$.

Example: the dual of the LP Example is

$$\max_{(y, \lambda) \in \mathbb{R} \times \mathbb{R}^3} \; 2y \quad \text{subject to} \quad \lambda_1 = 1, \; y + \lambda_2 = 1, \; y + \lambda_3 = 0, \; \lambda \ge 0.$$

The (D) solution: $y^* = 0$, $\lambda^* = (1, 1, 0)$.
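A minimal sketch, under the same assumptions as the primal sketch above, solving (D) and checking strong duality and complementary slackness. Eliminating $\lambda$ from the dual constraints leaves $\max 2y$ s.t. $y \le 1$, $y \le 0$:

```python
# Solve the dual of the example LP and verify c^T x = b^T y (a sketch).
import numpy as np
from scipy.optimize import linprog

# linprog minimizes, so minimize -2y subject to y <= 1 and y <= 0.
res = linprog(c=[-2.0], A_ub=[[1.0], [1.0]], b_ub=[1.0, 0.0],
              bounds=[(None, None)])
y = res.x[0]
lam = np.array([1.0, 1.0 - y, -y])          # recover lam from A^T y + lam = c
x = np.array([0.0, 0.0, 2.0])               # primal solution from the slides

print(y, lam)                               # y* = 0, lam* = (1, 1, 0)
print(np.dot([1.0, 1.0, 0.0], x), 2.0*y)    # c^T x = b^T y = 0: strong duality
print(x * lam)                              # complementary slackness: all zeros
```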

Second-order optimality conditions

When (CP) is not convex, the KKT conditions are not in general sufficient for optimality. Assume some CQ holds. Then, at a given point $x^*$, the set of feasible directions for (CP) at $x^*$ is

$$\mathcal{F}(x^*) = \left\{s : J_E(x^*) s = 0, \; s^T \nabla c_i(x^*) \ge 0, \; i \in \mathcal{A}(x^*) \cap I\right\}.$$

If $x^*$ is a KKT point, then for any $s \in \mathcal{F}(x^*)$, either $s^T \nabla f(x^*) > 0$, so $f$ can only increase while staying feasible along $s$, or $s^T \nabla f(x^*) = 0$, in which case we cannot decide from 1st order information whether $f$ increases or not along such $s$.

Let
$$\mathcal{F}(\lambda^*) = \left\{s \in \mathcal{F}(x^*) : s^T \nabla c_i(x^*) = 0, \; \forall i \in \mathcal{A}(x^*) \cap I \text{ with } \lambda_i^* > 0\right\},$$
where $\lambda^*$ is a Lagrange multiplier of the inequality constraints. Then note that $s^T \nabla f(x^*) = 0$ for all $s \in \mathcal{F}(\lambda^*)$.

Second-order optimality conditions...

Theorem 23 (Second-order necessary conditions) Let some CQ hold for (CP). Let $x^*$ be a local minimizer of (CP), and $(y^*, \lambda^*)$ Lagrange multipliers of the KKT conditions at $x^*$. Then

$$s^T \nabla^2_{xx} L(x^*, y^*, \lambda^*) s \ge 0 \quad \text{for all } s \in \mathcal{F}(\lambda^*),$$

where $L(x, y, \lambda) = f(x) - y^T c_E(x) - \lambda^T c_I(x)$ is the Lagrangian function.

Theorem 24 (Second-order sufficient conditions) Assume that $x^*$ is a feasible point of (CP) and $(y^*, \lambda^*)$ are such that the KKT conditions are satisfied by $(x^*, y^*, \lambda^*)$. If

$$s^T \nabla^2_{xx} L(x^*, y^*, \lambda^*) s > 0 \quad \text{for all } s \in \mathcal{F}(\lambda^*), \; s \ne 0,$$

then $x^*$ is a local minimizer of (CP).
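A minimal sketch checking Theorem 24 on the circle example $\min x_1 + x_2$ s.t. $x_1^2 + x_2^2 - 2 = 0$ at $x^* = (-1, -1)$ with $y^* = -1/2$ (the values computed in the earlier null-space sketch); for equality constraints only, $\mathcal{F}(\lambda^*)$ is just the null space of $J_E(x^*)$:

```python
# Second-order sufficient condition check (a sketch): is the Hessian of the
# Lagrangian positive definite on the null space of the constraint Jacobian?
import numpy as np
from scipy.linalg import null_space

x, y = np.array([-1.0, -1.0]), -0.5
hess_f = np.zeros((2, 2))                 # f(x) = x1 + x2 is linear
hess_c = 2.0 * np.eye(2)                  # Hessian of c(x) = x1^2 + x2^2 - 2
hess_L = hess_f - y * hess_c              # Hessian of the Lagrangian: here = I

J = np.array([[2.0*x[0], 2.0*x[1]]])
Z = null_space(J)                         # basis of F(lambda*) = null(J_E(x*))
print(np.linalg.eigvalsh(Z.T @ hess_L @ Z))   # all > 0 => Theorem 24 applies
```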

Second-order optimality conditions...

Proof of Theorem 23 (for equality constraints only) [NON-EXAMINABLE]: Let $I = \emptyset$, so that $\mathcal{F}(x^*) = \mathcal{F}(\lambda^*)$. We have to show that $s^T \nabla^2_{xx} L(x^*, y^*) s \ge 0$ for all $s$ such that $J_E(x^*) s = 0$.

Recall the proof of Theorem 19: along any feasible path of the form $x(\alpha) = x^* + \alpha s + \tfrac{1}{2}\alpha^2 p + O(\alpha^3)$ (for any such $s$ and $p$), we showed that $J_E(x^*) s = 0$ and $\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s = 0$, $i \in E$, and that

$$f(x(\alpha)) = f(x^*) + \tfrac{1}{2}\alpha^2\!\left[\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s\right] + O(\alpha^3)$$

(the first-order term vanishes, since $\nabla f(x^*)^T s = 0$ here). As $x^*$ is a local minimizer, we must have that

$$\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s \ge 0. \quad (*)$$

From the KKT conditions, $\nabla f(x^*) = J_E(x^*)^T y^*$, and so

$$\nabla f(x^*)^T p = (y^*)^T J_E(x^*) p = -\sum_{i \in E} y_i^* \, s^T \nabla^2 c_i(x^*) s. \quad (**)$$

Second-order optimality conditions...

Proof of Theorem 23 (for equality constraints only, continued): From (*) and (**), we deduce

$$0 \le s^T \nabla^2 f(x^*) s - \sum_{i \in E} y_i^* \, s^T \nabla^2 c_i(x^*) s = s^T \Big[\nabla^2 f(x^*) - \sum_{i \in E} y_i^* \nabla^2 c_i(x^*)\Big] s = s^T \nabla^2_{xx} L(x^*, y^*) s. \quad \square$$

Some simple approaches for solving (CP)

Equality-constrained problems: direct elimination (a simple approach that may help/work sometimes; cannot be automated in general).

Method of Lagrange multipliers: using the KKT and second order conditions to find minimizers (again, cannot be automated in general). [See Pb Sheet 5]