Lectures 9 and 10: Constrained optimization problems and their optimality conditions
1 Lectures 9 and 10: Constrained optimization problems and their optimality conditions
Coralia Cartis, Mathematical Institute, University of Oxford
C6.2/B2: Continuous Optimization
2 Problems and solutions
minimize $f(x)$ subject to $x \in \Omega \subseteq \mathbb{R}^n$.
$f : \Omega \to \mathbb{R}$ is (sufficiently) smooth. $f$: objective; $x$: variables. $\Omega$: feasible set, determined by finitely many (equality and/or inequality) constraints.
$x^*$ global minimizer of $f$ over $\Omega$: $f(x) \ge f(x^*)$ for all $x \in \Omega$.
$x^*$ local minimizer of $f$ over $\Omega$: there exists $N(x^*,\delta)$ such that $f(x) \ge f(x^*)$ for all $x \in \Omega \cap N(x^*,\delta)$, where $N(x^*,\delta) := \{x \in \mathbb{R}^n : \|x - x^*\| \le \delta\}$.
3 Example problem in one dimension
Example: $\min f(x)$ subject to $a \le x \le b$.
[Figure: graph of $f(x)$ over the interval $[a,b]$, with the points $x_1$ and $x_2$ marked.]
The feasible region $\Omega$ is the interval $[a,b]$. The point $x_1$ is the global minimizer; $x_2$ is a local (non-global) minimizer; $x = a$ is a constrained local minimizer.
4 An example of a nonlinear constrained problem
$\min_{x \in \mathbb{R}^2} (x_1 - 2)^2 + (x_2 - 0.5(3 - \sqrt{5}))^2$ subject to $c_1(x) = x_2 - x_1^2 \ge 0$, $c_2(x) = 1 - x_1 - x_2 \ge 0$.
[Figure: contours of $f$, the feasible set $\Omega$ bounded by the curves $c_1 = 0$ and $c_2 = 0$, and the solution $x^*$.]
$x^* = 0.5(-1 + \sqrt{5},\ 3 - \sqrt{5})$; $\Omega$: feasible set.
5 Optimality conditions for constrained problems
Optimality conditions = algebraic characterizations of solutions suitable for computations. They provide a way to guarantee that a candidate point is optimal (sufficient conditions) and indicate when a point is not optimal (necessary conditions).
$\min_{x \in \mathbb{R}^n} f(x)$ subject to $c_E(x) = 0$, $c_I(x) \ge 0$.   (CP)
$f : \mathbb{R}^n \to \mathbb{R}$, $c_E : \mathbb{R}^n \to \mathbb{R}^m$ and $c_I : \mathbb{R}^n \to \mathbb{R}^p$ (sufficiently) smooth; $c_I(x) \ge 0 \iff c_i(x) \ge 0$, $i \in I$.
$\Omega := \{x : c_E(x) = 0,\ c_I(x) \ge 0\}$: feasible set of the problem.
6 Optimality conditions for constrained problems
Unconstrained problem: $\hat x$ stationary point ($\nabla f(\hat x) = 0$). Constrained problem: $\hat x$ Karush-Kuhn-Tucker (KKT) point.
Definition: $\hat x$ is a KKT point of (CP) if there exist $\hat y \in \mathbb{R}^m$ and $\hat\lambda \in \mathbb{R}^p$ such that $(\hat x, \hat y, \hat\lambda)$ satisfies
$\nabla f(\hat x) = \sum_{j \in E} \hat y_j \nabla c_j(\hat x) + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x)$,
$c_E(\hat x) = 0$, $c_I(\hat x) \ge 0$,
$\hat\lambda \ge 0$, $\hat\lambda_i c_i(\hat x) = 0$ for all $i \in I$.
Let $A := E \cup \{i \in I : c_i(\hat x) = 0\}$ be the index set of active constraints at $\hat x$; $c_j(\hat x) > 0$ (inactive constraint at $\hat x$) $\implies$ $\hat\lambda_j = 0$. Then $\sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x) = \sum_{i \in I \cap A} \hat\lambda_i \nabla c_i(\hat x)$.
$J(x) = \big(\nabla c_i(x)^T\big)_i$ is the Jacobian matrix of the constraints $c$. Thus $\sum_{j \in E} \hat y_j \nabla c_j(\hat x) = J_E(\hat x)^T \hat y$ and $\sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x) = J_I(\hat x)^T \hat\lambda$.
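As a complement to the definition, the sketch below (not part of the original notes; the function name and calling convention are my own) shows how the four blocks of the KKT conditions translate into residuals that can be checked numerically, given the gradient of $f$, the constraint values and Jacobians, and candidate multipliers.

```python
import numpy as np

def kkt_residuals(grad_f, jac_E, c_E_val, jac_I, c_I_val, y, lam):
    """Residuals of the KKT conditions of (CP) at a candidate point.

    grad_f        : gradient of f at x, shape (n,)
    jac_E, c_E_val: Jacobian (m, n) and values (m,) of the equality constraints at x
    jac_I, c_I_val: Jacobian (p, n) and values (p,) of the inequality constraints at x
    y, lam        : candidate multipliers for the equalities (m,) and inequalities (p,)
    All returned arrays should be (close to) zero at a KKT point.
    """
    stationarity    = grad_f - jac_E.T @ y - jac_I.T @ lam  # grad_x L(x, y, lam)
    primal_eq       = c_E_val                               # c_E(x) = 0
    primal_ineq     = np.minimum(c_I_val, 0.0)              # violation of c_I(x) >= 0
    dual_sign       = np.minimum(lam, 0.0)                  # violation of lam >= 0
    complementarity = lam * c_I_val                         # lam_i * c_i(x) = 0
    return stationarity, primal_eq, primal_ineq, dual_sign, complementarity
```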
7 Optimality conditions for constrained problems...
$\hat x$ KKT point: $\hat y$ and $\hat\lambda$ are the Lagrange multipliers of the equality and inequality constraints, respectively. $\hat y$ and $\hat\lambda$: sensitivity analysis.
$L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$, the Lagrangian function of (CP),
$L(x,y,\lambda) := f(x) - y^T c_E(x) - \lambda^T c_I(x)$, $x \in \mathbb{R}^n$.
Thus $\nabla_x L(x,y,\lambda) = \nabla f(x) - J_E(x)^T y - J_I(x)^T \lambda$, and
$\hat x$ KKT point of (CP) $\implies$ $\nabla_x L(\hat x, \hat y, \hat\lambda) = 0$ (i.e., $\hat x$ is a stationary point of $L(\cdot, \hat y, \hat\lambda)$).
duality theory...
8 An illustration of the KKT conditions
$\min_{x \in \mathbb{R}^2} (x_1 - 2)^2 + (x_2 - 0.5(3 - \sqrt{5}))^2$ subject to $c_1(x) = x_2 - x_1^2 \ge 0$, $c_2(x) = 1 - x_1 - x_2 \ge 0$.   (*)
$x^* = \tfrac12(-1+\sqrt{5},\ 3-\sqrt{5})$: global solution of (*), KKT point of (*).
$\nabla f(x^*) = (\sqrt{5} - 5,\ 0)$, $\nabla c_1(x^*) = (1-\sqrt{5},\ 1)$, $\nabla c_2(x^*) = (-1,\ -1)$.
[Figure: feasible set $\Omega$, the point $x^*$, and the vectors $\nabla f(x^*)$, $\nabla c_1(x^*)$, $\nabla c_2(x^*)$.]
$\nabla f(x^*) = \lambda_1^* \nabla c_1(x^*) + \lambda_2^* \nabla c_2(x^*)$, with $\lambda_1^* = \lambda_2^* = \sqrt{5} - 1 > 0$.
$c_1(x^*) = c_2(x^*) = 0$: both constraints are active at $x^*$.
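A quick numerical confirmation of these values (my own check, written with NumPy; the constraint formulas are as reconstructed on this slide):

```python
import numpy as np

r5 = np.sqrt(5.0)
x_star = 0.5 * np.array([-1.0 + r5, 3.0 - r5])        # candidate solution from the slide

# Gradients at x* for f(x) = (x1-2)^2 + (x2-0.5(3-sqrt5))^2, c1(x) = x2-x1^2, c2(x) = 1-x1-x2
grad_f  = np.array([2.0*(x_star[0] - 2.0), 2.0*(x_star[1] - 0.5*(3.0 - r5))])
grad_c1 = np.array([-2.0*x_star[0], 1.0])
grad_c2 = np.array([-1.0, -1.0])

# Both constraints are active at x*, so solve grad_f = lam1*grad_c1 + lam2*grad_c2.
lam = np.linalg.solve(np.column_stack([grad_c1, grad_c2]), grad_f)
print(lam)                      # both entries approx sqrt(5) - 1 = 1.236...
assert np.all(lam > 0)          # nonnegative multipliers: the KKT conditions hold at x*
```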
9 An illustration of the KKT conditions...
$\min_{x \in \mathbb{R}^2} (x_1 - 2)^2 + (x_2 - 0.5(3 - \sqrt{5}))^2$ subject to $c_1(x) = x_2 - x_1^2 \ge 0$, $c_2(x) = 1 - x_1 - x_2 \ge 0$.   (*)
$x := (0,0)$ is NOT a KKT point of (*)!
$c_1(x) = 0$: active at $x$. $c_2(x) = 1$: inactive at $x$ $\implies$ $\lambda_2 = 0$, so we would need $\nabla f(x) = \lambda_1 \nabla c_1(x)$ with $\lambda_1 \ge 0$.
[Figure: the point $x = (0,0)$ on the boundary of $\Omega$, with $\nabla f(0)$, $\nabla c_1(0)$ and $\nabla c_2(0)$.]
Contradiction, since $\nabla f(x) = (-4,\ \sqrt{5} - 3)$ and $\nabla c_1(x) = (0,\ 1)$.
10 Optimality conditions for constrained problems...
In general, the constraints/feasible set of (CP) need to satisfy a regularity assumption, called a constraint qualification, in order to derive optimality conditions.
Theorem 19 (First order necessary conditions). Under suitable constraint qualifications, $x^*$ local minimizer of (CP) $\implies$ $x^*$ KKT point of (CP).
Proof of Theorem 19 (for equality constraints only): Let $I = \emptyset$, and so we must show that $c_E(x^*) = 0$ (which is trivial as $x^*$ is feasible) and $\nabla f(x^*) = J_E(x^*)^T y^*$ for some $y^* \in \mathbb{R}^m$.
Consider feasible perturbations/paths $x(\alpha)$ around $x^*$, where $\alpha$ is a (small) scalar, the path $x(\cdot)$ is $C^2$, $x(0) = x^*$ and $c(x(\alpha)) = 0$ (*). [(*) requires constraint qualifications.]
Then by Taylor expansion, $x(\alpha) = x^* + \alpha s + \tfrac12 \alpha^2 p + O(\alpha^3)$ (**). [(**) The $\alpha^2$ and higher order terms are not needed here; only for the 2nd order conditions later.]
11 Optimality conditions for constrained problems...
Proof of Theorem 19 (for equality constraints only, continued):
For any $i \in E$, by Taylor's theorem for $c_i(x(\alpha))$ around $x^*$,
$0 = c_i(x(\alpha)) = c_i\big(x^* + \alpha s + \tfrac12\alpha^2 p + O(\alpha^3)\big) = c_i(x^*) + \nabla c_i(x^*)^T\big(\alpha s + \tfrac12\alpha^2 p\big) + \tfrac12\alpha^2 s^T \nabla^2 c_i(x^*) s + O(\alpha^3) = \alpha\, \nabla c_i(x^*)^T s + \tfrac12\alpha^2\big[\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s\big] + O(\alpha^3)$,
where we used $c_i(x^*) = 0$. Thus for all $i \in E$, $\nabla c_i(x^*)^T s = 0$ and $\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s = 0$, and so $J_E(x^*) s = 0$.
Now expanding $f$, we deduce
$f(x(\alpha)) = f(x^*) + \nabla f(x^*)^T\big(\alpha s + \tfrac12\alpha^2 p\big) + \tfrac12\alpha^2 s^T \nabla^2 f(x^*) s + O(\alpha^3) = f(x^*) + \alpha\, \nabla f(x^*)^T s + \tfrac12\alpha^2\big[\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s\big] + O(\alpha^3)$.
[The $\alpha^2$ terms are only needed for the 2nd order optimality conditions later.]
As $x(\alpha)$ is feasible, $f$ is unconstrained along $x(\alpha)$, and since $x^*$ is a local minimizer along $x(\alpha)$, we must have $\frac{d}{d\alpha} f(x(\alpha))\big|_{\alpha=0} = \nabla f(x^*)^T s = 0$.
Thus $\nabla f(x^*)^T s = 0$ for all $s$ such that $J_E(x^*) s = 0$. (1)
12 Optimality conditions for constrained problems...
Proof of Theorem 19 (for equality constraints only, continued):
If we let the columns of $Z$ form a basis for the null space of $J_E(x^*)$, we deduce there exist $y^*$ and $s^*$ such that
$\nabla f(x^*) = J_E(x^*)^T y^* + Z s^*$. (2)
From (1), $Z^T \nabla f(x^*) = 0$ and so, from (2), $0 = Z^T J_E(x^*)^T y^* + Z^T Z s^*$; furthermore, since $J_E(x^*) Z = 0$, we must have $Z^T Z s^* = 0$. As $Z$ is a basis, it has full rank and so $s^* = 0$. We conclude from (2) that $\nabla f(x^*) = J_E(x^*)^T y^*$.
For (CP) with equalities only ($I = \emptyset$): $s$ is a feasible descent direction at $x \in \Omega$ if $\nabla f(x)^T s < 0$ and $J_E(x) s = 0$.
For general (CP): $s$ is a feasible descent direction at $x \in \Omega$ if $\nabla f(x)^T s < 0$, $J_E(x) s = 0$ and $\nabla c_i(x)^T s \ge 0$ for all $i \in I \cap A(x)$.
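The null-space argument at the heart of the proof is easy to check numerically. The sketch below is my own illustration on a hypothetical equality-constrained problem, $\min x_1^2 + x_2^2$ s.t. $x_1 + x_2 = 1$, whose minimizer is $x^* = (1/2, 1/2)$:

```python
import numpy as np
from scipy.linalg import null_space

x_star = np.array([0.5, 0.5])
grad_f = 2.0 * x_star                    # gradient of x1^2 + x2^2 at x*
J_E    = np.array([[1.0, 1.0]])          # Jacobian of the constraint x1 + x2 - 1 = 0

Z = null_space(J_E)                      # columns span the null space of J_E(x*)
print(Z.T @ grad_f)                      # approx 0: this is condition (1)

# Hence grad_f lies in range(J_E^T); recover y* from grad_f = J_E(x*)^T y* by least squares.
y_star, *_ = np.linalg.lstsq(J_E.T, grad_f, rcond=None)
print(y_star)                            # approx [1.0]
```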
13 Constraint qualifications
Proof of Th 19: we used (first-order) Taylor expansions to linearize $f$ and $c_i$ along feasible paths/perturbations $x(\alpha)$, etc. This is only correct if the linearized approximation captures the essential geometry of the feasible set. CQs ensure this is the case. Examples:
(CP) satisfies the Slater Constraint Qualification (SCQ) if there exists $\bar x$ such that $c_E(\bar x) = A\bar x - b = 0$ and $c_I(\bar x) > 0$ (i.e., $c_i(\bar x) > 0$ for all $i \in I$).
(CP) satisfies the Linear Independence Constraint Qualification (LICQ) if $\nabla c_i(x)$, $i \in A(x)$, are linearly independent (at the relevant $x$).
Both SCQ and LICQ fail for $\Omega = \{(x_1,x_2) : c_1(x) = 1 - x_1^2 - (x_2-1)^2 \ge 0,\ c_2(x) = -x_2 \ge 0\}$, whose only feasible point is $x = (0,0)$. There $T_\Omega(x) = \{(0,0)\}$ while $F(x) = \{(s_1,0) : s_1 \in \mathbb{R}\}$, so $T_\Omega(x) \ne F(x)$.
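A short verification of the claimed sets (my own working, using the definitions of $T_\Omega(x)$ and $F(x)$ given on the next slide): at $x = (0,0)$ both constraints are active, and
\[
\nabla c_1(x) = \begin{pmatrix} -2x_1 \\ -2(x_2-1) \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}, \qquad
\nabla c_2(x) = \begin{pmatrix} 0 \\ -1 \end{pmatrix},
\]
so that
\[
F(x) = \{ s : 2 s_2 \ge 0,\ -s_2 \ge 0 \} = \{ (s_1, 0) : s_1 \in \mathbb{R} \},
\]
while every feasible sequence must remain at the single feasible point $(0,0)$, hence $T_\Omega(x) = \{(0,0)\}$.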
14 Constraint qualifications...
Tangent cone to $\Omega$ at $x$ [see Chapter 12, Nocedal & Wright]; this captures the geometry of $\Omega$:
$T_\Omega(x) = \{s : s \text{ is a limiting direction of a feasible sequence}\}$, i.e. $s = \lim_{k\to\infty} \frac{z_k - x}{t_k}$, where $z_k \in \Omega$, $t_k > 0$, $t_k \to 0$ and $z_k \to x$ as $k \to \infty$.
Set of linearized feasible directions; this captures the algebra of $\Omega$:
$F(x) = \{s : s^T \nabla c_i(x) = 0,\ i \in E;\ s^T \nabla c_i(x) \ge 0,\ i \in I \cap A(x)\}$.
Want $T_\Omega(x) = F(x)$ [ensured if a CQ holds].
Example: $\min_{(x_1,x_2)} x_1 + x_2$ s.t. $x_1^2 + x_2^2 - 2 = 0$; see the sketch below.
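The example at the end of the slide can be solved by writing down its first-order (KKT) system and solving it symbolically; a minimal sketch with SymPy (my own, not from the notes):

```python
import sympy as sp

x1, x2, y = sp.symbols("x1 x2 y", real=True)

f = x1 + x2
c = x1**2 + x2**2 - 2          # single equality constraint
L = f - y * c                  # Lagrangian

# Stationarity of L in x together with feasibility gives the first-order conditions.
sols = sp.solve([sp.diff(L, x1), sp.diff(L, x2), c], [x1, x2, y], dict=True)
print(sols)
# Two candidates: (x1, x2, y) = (-1, -1, -1/2) and (1, 1, 1/2).
# (-1, -1) is the minimizer and (1, 1) the maximizer; first-order information alone
# cannot distinguish them (see the second-order conditions later).
```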
15 Optimality conditions for constrained problems...
If the constraints of (CP) are linear in the variables, no constraint qualification is required.
Theorem 20 (First order necessary conditions for linearly constrained problems). Let $(c_E, c_I)(x) := Ax - b$ in (CP). Then $x^*$ local minimizer of (CP) $\implies$ $x^*$ KKT point of (CP).
Let $A = (A_E, A_I)$ and $b = (b_E, b_I)$ correspond to the equality and inequality constraints. KKT conditions for linearly-constrained (CP): $x^*$ is a KKT point iff there exists $(y^*, \lambda^*)$ such that
$\nabla f(x^*) = A_E^T y^* + A_I^T \lambda^*$,
$A_E x^* - b_E = 0$, $A_I x^* - b_I \ge 0$,
$\lambda^* \ge 0$, $(\lambda^*)^T (A_I x^* - b_I) = 0$.
16 Optimality conditions for convex problems
(CP) is a convex programming problem if and only if $f(x)$ is a convex function, $c_i(x)$ is a concave function for all $i \in I$ and $c_E(x) = Ax - b$.
$c_i$ is a concave function $\iff$ $(-c_i)$ is a convex function.
(CP) convex problem $\implies$ $\Omega$ is a convex set.
(CP) convex problem $\implies$ any local minimizer of (CP) is global.
First order necessary conditions are also sufficient for optimality when (CP) is convex.
Theorem 21 (Sufficient optimality conditions for convex problems). Let (CP) be a convex programming problem. Then $\hat x$ KKT point of (CP) $\implies$ $\hat x$ is a (global) minimizer of (CP).
17 Optimality conditions for convex problems
Proof of Theorem 21. $f$ convex $\implies$ $f(x) \ge f(\hat x) + \nabla f(\hat x)^T (x - \hat x)$ for all $x \in \mathbb{R}^n$. (1)
(1) + [$\nabla f(\hat x) = A^T \hat y + \sum_{i \in I} \hat\lambda_i \nabla c_i(\hat x)$] $\implies$
$f(x) \ge f(\hat x) + (A^T \hat y)^T (x - \hat x) + \sum_{i \in I} \hat\lambda_i \big(\nabla c_i(\hat x)^T (x - \hat x)\big) = f(\hat x) + \hat y^T A (x - \hat x) + \sum_{i \in I} \hat\lambda_i \big(\nabla c_i(\hat x)^T (x - \hat x)\big)$. (2)
Let $x \in \Omega$ be arbitrary $\implies$ $Ax = b$ and $c_I(x) \ge 0$.
$Ax = b$ and $A\hat x = b$ $\implies$ $A(x - \hat x) = 0$. (3)
$c_i$ concave $\implies$ $c_i(x) \le c_i(\hat x) + \nabla c_i(\hat x)^T (x - \hat x)$ $\implies$ $\nabla c_i(\hat x)^T (x - \hat x) \ge c_i(x) - c_i(\hat x)$ $\implies$ $\hat\lambda_i \big(\nabla c_i(\hat x)^T (x - \hat x)\big) \ge \hat\lambda_i \big(c_i(x) - c_i(\hat x)\big) = \hat\lambda_i c_i(x) \ge 0$, since $\hat\lambda \ge 0$, $\hat\lambda_i c_i(\hat x) = 0$ and $c_I(x) \ge 0$.
Thus, from (2) and (3), $f(x) \ge f(\hat x)$.
18 Optimality conditions for nonconvex problems
When (CP) is not convex, the KKT conditions are not in general sufficient for optimality: one additionally needs the Hessian of the Lagrangian function to be positive definite along feasible directions. More on second-order optimality conditions later on.
19 Optimality conditions for QP problems
A Quadratic Programming (QP) problem has the form
$\min_{x \in \mathbb{R}^n}\ q + c^T x + \tfrac12 x^T H x$ s.t. $Ax = b$, $\tilde A x \ge \tilde b$.   (QP)
$H$ positive semidefinite $\implies$ (QP) is a convex problem.
The KKT conditions for (QP): $\hat x$ is a KKT point of (QP) iff there exists $(\hat y, \hat\lambda) \in \mathbb{R}^m \times \mathbb{R}^p$ such that
$H\hat x + c = A^T \hat y + \tilde A^T \hat\lambda$, $A\hat x = b$, $\tilde A \hat x \ge \tilde b$, $\hat\lambda \ge 0$, $\hat\lambda^T(\tilde A \hat x - \tilde b) = 0$.
The example of a nonlinear constrained problem on slide 4 is convex; removing the constraint $x_2 \ge x_1^2$ makes it a convex (QP). A sketch of solving an equality-constrained QP via its KKT system follows below.
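For the equality-constrained case (no $\tilde A$), the KKT conditions of (QP) reduce to a single linear system, which can be solved directly. A minimal sketch with hypothetical toy data ($H$, $c$, $A$, $b$ below are my own, not from the notes):

```python
import numpy as np

# min q + c^T x + 1/2 x^T H x  s.t.  A x = b   via the KKT system
#   H x - A^T y = -c,   A x = b.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-4.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = H.shape[0], A.shape[0]
KKT = np.block([[H, -A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
x_hat, y_hat = sol[:n], sol[n:]
print(x_hat, y_hat)            # x_hat = [1.25, -0.25], y_hat = [-1.5]
```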
20 Linear Programming (LP)
Given $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$,
$\min_{x \in \mathbb{R}^n}\ c^T x$ subject to $Ax = b$, $x \ge 0$.   (P)
The objective and all constraints are linear in the variables.
(P) is a special case of convex QP, but it is better to exploit the structure by solving it as an LP. Any local solution of (P) is global.
Example: minimize $x_1 + x_2$ subject to $x_2 + x_3 = 2$, $x \ge 0$. Solution set: $\{x^* = (0,0,2)\}$; $x^*$ is a vertex of the feasible set of the LP.
[Figure: feasible set of the example LP with the vertex $x^*$ marked.]
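The example LP can be solved directly with SciPy's linprog, which handles the non-negativity bounds explicitly; a minimal sketch:

```python
from scipy.optimize import linprog

# min x1 + x2  s.t.  x2 + x3 = 2,  x >= 0   (the example from this slide)
c    = [1.0, 1.0, 0.0]
A_eq = [[0.0, 1.0, 1.0]]
b_eq = [2.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)          # approx [0. 0. 2.] and 0.0, matching x* = (0, 0, 2)
```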
21 Optimality conditions for LP problems
LP is a convex programming problem $\implies$ KKT conditions are necessary and sufficient for optimality. Constraints are linear $\implies$ no need for regularity assumptions (cf. Th. 20).
Theorem 22 (Necessary & sufficient optimality conditions for LP). $\hat x$ is a solution of (P) $\iff$ there exists $(\hat y, \hat\lambda) \in \mathbb{R}^m \times \mathbb{R}^n$ such that
$c = A^T \hat y + \hat\lambda$, $A\hat x = b$, $\hat x \ge 0$, $\hat\lambda \ge 0$, $\hat x_i \hat\lambda_i = 0$, $i = 1,\dots,n$.   (OPT)
Proof. Just apply Theorems 20 and 21 to (P).
$A\hat x = b$, $\hat x \ge 0$: primal feasibility of $\hat x$.
$\hat x_i \hat\lambda_i = 0$, $i = 1,\dots,n$: complementary slackness equations.
$A^T \hat y + \hat\lambda = c$, $\hat\lambda \ge 0$: ??? (see the next slide)
22 Some duality theory for LP
The (OPT) system is also the set of KKT conditions of the LP
$\max_{(y,\lambda) \in \mathbb{R}^m \times \mathbb{R}^n}\ b^T y$ subject to $A^T y + \lambda = c$, $\lambda \ge 0$.   (D)
(P): primal problem, (D): dual problem.
$A^T \hat y + \hat\lambda = c$, $\hat\lambda \ge 0$: dual feasibility of $(\hat y, \hat\lambda)$.
Let $(x, y, \lambda)$ be feasible for (P) and (D). Then $(x, y, \lambda)$ solves (P) and (D) if and only if $c^T x = b^T y$.
Example: the dual of the LP example (slide 20) is
$\max_{(y,\lambda) \in \mathbb{R} \times \mathbb{R}^3}\ 2y$ subject to $\lambda_1 = 1$, $y + \lambda_2 = 1$, $y + \lambda_3 = 0$, $\lambda \ge 0$.
The (D) solution: $y = 0$, $\lambda = (1,1,0)$.
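A quick NumPy check (my own) that the claimed primal/dual pair for the example satisfies the (OPT) conditions and has equal objective values:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0]]);  b = np.array([2.0]);  c = np.array([1.0, 1.0, 0.0])

x   = np.array([0.0, 0.0, 2.0])   # primal solution (slide 20)
y   = np.array([0.0])             # dual solution claimed on this slide
lam = np.array([1.0, 1.0, 0.0])

assert np.allclose(A @ x, b) and np.all(x >= 0)            # primal feasibility
assert np.allclose(A.T @ y + lam, c) and np.all(lam >= 0)  # dual feasibility
assert np.allclose(x * lam, 0.0)                           # complementary slackness
assert np.isclose(c @ x, b @ y)                            # c^T x = b^T y (both equal 0)
```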
23 Second-order optimality conditions
When (CP) is not convex, the KKT conditions are not in general sufficient for optimality. Assume some CQ holds. Then at a given point $x^*$, the set of (linearized) feasible directions for (CP) at $x^*$ is
$F(x^*) = \{s : J_E(x^*) s = 0,\ s^T \nabla c_i(x^*) \ge 0,\ i \in A(x^*) \cap I\}$.
If $x^*$ is a KKT point, then for any $s \in F(x^*)$, either
$s^T \nabla f(x^*) > 0$, so $f$ can only increase and stay feasible along $s$, or
$s^T \nabla f(x^*) = 0$, in which case we cannot decide from 1st order information whether $f$ increases or not along such $s$.
$F(\lambda^*) = \{s \in F(x^*) : s^T \nabla c_i(x^*) = 0,\ i \in A(x^*) \cap I \text{ with } \lambda_i^* > 0\}$, where $\lambda^*$ is a Lagrange multiplier of the inequality constraints. Then note that $s^T \nabla f(x^*) = 0$ for all $s \in F(\lambda^*)$.
24 Second-order optimality conditions...
Theorem 23 (Second-order necessary conditions). Let some CQ hold for (CP). Let $x^*$ be a local minimizer of (CP), and $(y^*, \lambda^*)$ Lagrange multipliers of the KKT conditions at $x^*$. Then
$s^T \nabla^2_{xx} L(x^*, y^*, \lambda^*) s \ge 0$ for all $s \in F(\lambda^*)$,
where $L(x,y,\lambda) = f(x) - y^T c_E(x) - \lambda^T c_I(x)$ is the Lagrangian function.
Theorem 24 (Second-order sufficient conditions). Assume that $x^*$ is a feasible point of (CP) and $(y^*, \lambda^*)$ are such that the KKT conditions are satisfied by $(x^*, y^*, \lambda^*)$. If
$s^T \nabla^2_{xx} L(x^*, y^*, \lambda^*) s > 0$ for all $s \in F(\lambda^*)$, $s \ne 0$,
then $x^*$ is a local minimizer of (CP).
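A small numerical illustration (my own) of Theorems 23 and 24 on the equality-constrained example from slide 14, $\min x_1 + x_2$ s.t. $x_1^2 + x_2^2 = 2$: project the Hessian of the Lagrangian onto the null space of the constraint Jacobian at each KKT point.

```python
import numpy as np
from scipy.linalg import null_space

def reduced_hessian(x, y):
    hess_f = np.zeros((2, 2))                        # the objective x1 + x2 is linear
    hess_c = 2.0 * np.eye(2)                         # Hessian of x1^2 + x2^2 - 2
    hess_L = hess_f - y * hess_c                     # Hessian in x of the Lagrangian
    Z = null_space(np.array([[2.0*x[0], 2.0*x[1]]])) # basis of {s : J_E(x) s = 0}
    return Z.T @ hess_L @ Z

print(reduced_hessian(np.array([-1.0, -1.0]), -0.5))  # [[1.]]  > 0: local minimizer
print(reduced_hessian(np.array([ 1.0,  1.0]),  0.5))  # [[-1.]] < 0: not a minimizer
```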
25 Second-order optimality conditions...
Proof of Theorem 23 (for equality constraints only) [NON-EXAMINABLE]:
Let $I = \emptyset$, and so $F(x^*) = F(\lambda^*)$. We have to show that $s^T \nabla^2_{xx} L(x^*, y^*) s \ge 0$ for all $s$ such that $J_E(x^*) s = 0$.
Recall the proof of Theorem 19: along any feasible path of the form $x(\alpha) = x^* + \alpha s + \tfrac12 \alpha^2 p + O(\alpha^3)$ (for any such $s$ and $p$), we showed that $J_E(x^*) s = 0$ and $\nabla c_i(x^*)^T p + s^T \nabla^2 c_i(x^*) s = 0$, $i \in E$, and that
$f(x(\alpha)) = f(x^*) + \tfrac12 \alpha^2 \big[\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s\big] + O(\alpha^3)$.
As $x^*$ is a local minimizer, we must have that $\nabla f(x^*)^T p + s^T \nabla^2 f(x^*) s \ge 0$. (*)
From the KKT conditions, $\nabla f(x^*) = J_E(x^*)^T y^*$ and so
$\nabla f(x^*)^T p = (y^*)^T J_E(x^*) p = -\sum_{i \in E} y_i^*\, s^T \nabla^2 c_i(x^*) s$. (**)
26 Second-order optimality conditions...
Proof of Theorem 23 (for equality constraints only, continued):
From (*) and (**), we deduce
$0 \le s^T \nabla^2 f(x^*) s - \sum_{i \in E} y_i^*\, s^T \nabla^2 c_i(x^*) s = s^T \big[\nabla^2 f(x^*) - \sum_{i \in E} y_i^* \nabla^2 c_i(x^*)\big] s = s^T \nabla^2_{xx} L(x^*, y^*) s$.
27 Some simple approaches for solving (CP)
Equality-constrained problems: direct elimination (a simple approach that may help/work sometimes; cannot be automated in general).
Method of Lagrange multipliers: using the KKT and second order conditions to find minimizers (again, cannot be automated in general) [see Pb Sheet 5].
Both approaches are illustrated in the sketch below.
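A sketch of both approaches on a small hypothetical problem, $\min x_1^2 + 2x_2^2$ s.t. $x_1 + x_2 = 3$ (the problem and the SymPy code are my own, not from the problem sheet):

```python
import sympy as sp

x1, x2, y = sp.symbols("x1 x2 y", real=True)
f = x1**2 + 2*x2**2
c = x1 + x2 - 3

# Direct elimination: substitute x1 = 3 - x2 and minimize over x2 alone.
phi = f.subs(x1, 3 - x2)
x2_star = sp.solve(sp.diff(phi, x2), x2)[0]
print(3 - x2_star, x2_star)                    # x* = (2, 1)

# Method of Lagrange multipliers: solve grad_x L = 0 together with c = 0.
L = f - y * c
print(sp.solve([sp.diff(L, x1), sp.diff(L, x2), c], [x1, x2, y], dict=True))
# [{x1: 2, x2: 1, y: 4}]
```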