Lecture, September 2006

Review of Lecture #10: Second-order optimality conditions: necessary condition, sufficient condition. If the necessary condition is violated, the point cannot be a local minimum point. If the sufficiency condition is not met, the point may not be an isolated minimum point.

Comments on the use of optimality conditions: always use the standard form of the NLP problem; make sure to check all the KKT conditions; check regularity of the candidate solution points.

Review for midterm exam. The exam will be on Thursday, 9/28/06.

Duality in nonlinear programming: Lagrangian duality, or local duality.

Equality-constrained problem: $x^*$ is a local minimum for the equality-constrained problem as well as for the Lagrangian function at $u^*$. Given the optimum $u$, the optimum $x$ can be found by minimizing the Lagrangian. Given a $u$ in the neighborhood of its optimum, the $x$ found by minimizing the Lagrangian is also in the neighborhood of its optimum value. Thus there is a (locally) unique correspondence between $u$ and $x$: $x = x(u)$, and $x(u)$ is a differentiable function of $u$.

Dual function: definition of the dual function; gradient and Hessian of the dual function. Local duality theorem: maximize the dual function. Example problem.

Generalization to the inequality-constrained problem: maximize the dual function subject to non-negativity of the dual variables. Strong duality theorem; weak duality theorem. Saddle points; saddle point theorem. Example problem.

Read: Duality in NLP.

53:235 Applied Optimal Design

Project #2: report due today.

HW#9: Solve Exercise 5.4 using the KKT optimality conditions; check the duality assumption; calculate the dual function; maximize the dual function; show that $x^* = x(u^*)$ and $f(x^*) = \phi(u^*)$. No need to submit.

4.8 DUALITY IN NONLINEAR PROGRAMMING (J.S. Arora)

Introduction

Given a nonlinear programming problem, there is another nonlinear programming problem closely associated with it. The former is called the primal problem, and the latter is called the Lagrangian dual problem, or simply the dual problem. Under certain convexity assumptions, the primal and dual problems have equal optimal cost values, and therefore it is possible to solve the primal problem indirectly by solving the dual problem. As a by-product of one of the duality theorems, we obtain the saddle point necessary optimality conditions that are explained later.

In recent years, duality has played a very important role in the development of optimization theory and numerical methods. Development of the duality theory requires assumptions about convexity of the problem. However, to be broadly applicable, the theory should require a minimum of convexity assumptions. This leads to the concept of requiring only local convexity and thus to the local duality theory. In this section, we shall present only the local duality theory and discuss its computational aspects. The theory can be used to develop computational methods for solving optimization problems. We shall see later that it can be used to develop the so-called multiplier or augmented Lagrangian methods.

Local Duality

EQUALITY CONSTRAINT CASE. For the sake of developing the local duality theory, we consider the equality-constrained problem first:

Problem PE: Minimize $f(x)$, $x \in R^n$  (4.8.1)

subject to $g_i(x) = 0$; $i = 1$ to $p$  (4.8.2)

Later on we will extend the theory to problems with both equality and inequality constraints. The theory we are going to present is sometimes called strong duality or Lagrangian duality. We will assume that $f$ and $g_i \in C^2$, $i = 1$ to $p$.

Let $x^*$ be a local minimum of Problem PE that is also a regular point of the constraint set. Then there exists a unique Lagrange multiplier vector $u^* \in R^p$ such that

$\nabla_x f(x^*) + \nabla_x g(x^*)\,u^* = 0$  (4.8.3)

where $\nabla_x g$ is an $n \times p$ matrix whose columns are the gradients of the constraints. Also, the Hessian of the Lagrange function $\nabla_x^2 L(x^*, u^*)$, where

$L(x,u) = f(x) + (u, g(x))$  (4.8.4)

must be at least positive semidefinite (second-order necessary condition) on the tangent subspace

$M = \{\, y \in R^n : (\nabla_x g_i(x^*), y) = 0,\ i = 1 \text{ to } p;\ y \ne 0 \,\}$  (4.8.5)

or,

$(y, \nabla_x^2 L(x^*, u^*)\, y) \ge 0$ for all $y \in M$  (4.8.6)

Now we introduce the assumption that $\nabla_x^2 L(x^*, u^*)$ is actually positive definite, i.e.,

$(y, \nabla_x^2 L(x^*, u^*)\, y) > 0$ for all $y \in R^n$, $y \ne 0$  (4.8.7)

This assumption is necessary for development of the local duality theory. It guarantees that the Lagrangian of Eq. (4.8.4) is locally convex at $x^*$, and it also satisfies the sufficiency condition for $x^*$ to be an isolated local minimum of Problem PE. With this assumption, the point $x^*$ is not only a local minimum of Problem PE, it is also a local minimum of the unconstrained problem

Minimize $L(x, u^*) = f(x) + (u^*, g(x))$  (4.8.8)

where $u^*$ is the vector of Lagrange multipliers at $x^*$. The necessary and sufficient conditions for this unconstrained problem are the same as those for the constrained Problem PE (with $\nabla_x^2 L(x^*, u^*)$ positive definite). In addition, for any $u$ sufficiently close to $u^*$, the Lagrange function $f(x) + (u, g(x))$ will have a local minimum at a point $x$ near $x^*$. We now establish that $x(u)$ exists and is a differentiable function of $u$. The Karush-Kuhn-Tucker necessary condition is

$\nabla_x L(x,u) \equiv \nabla_x f + (\nabla_x g)\,u = 0$  (4.8.9)
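To make conditions (4.8.5) through (4.8.7) concrete, the following minimal Python sketch checks positive definiteness of the Hessian of the Lagrangian on the tangent subspace $M$ by projecting it onto a basis of the null space of $\nabla_x g^T$. The matrices below are illustrative assumptions, not data from the text.

```python
# A minimal numeric sketch of conditions (4.8.5)-(4.8.7); the matrices below
# are illustrative assumptions, not data from the text.
import numpy as np
from scipy.linalg import null_space

hess_L = np.array([[2.0, -1.0],
                   [-1.0, 2.0]])   # Hessian of L at (x*, u*)
grad_g = np.array([[1.0],
                   [1.0]])         # n x p matrix of constraint gradients

# Tangent subspace M of Eq. (4.8.5): the null space of grad_g^T.
Z = null_space(grad_g.T)           # columns of Z span M
reduced = Z.T @ hess_L @ Z         # Hessian restricted to M
print(np.linalg.eigvalsh(reduced)) # all > 0: condition (4.8.6) holds
print(np.linalg.eigvalsh(hess_L))  # all > 0: the stronger assumption (4.8.7)
```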

Since $\nabla_x^2 L(x^*, u^*)$ is positive definite, it is nonsingular. Also, because of positive definiteness, $\nabla_x^2 L(x,u)$ is nonsingular in a neighborhood of $(x^*, u^*)$. This is a generalization of a theorem from calculus: if a continuous function is positive at a point, it is positive in a neighborhood of that point. $\nabla_x^2 L(x,u)$ is also the Jacobian of the necessary conditions of Eq. (4.8.9) with respect to $x$. Therefore, Eq. (4.8.9) has a solution $x$ near $x^*$ when $u$ is near $u^*$. Thus, locally there is a unique correspondence between $u$ and $x$ through the solution of the unconstrained problem

Minimize $L(x,u) = f(x) + (u, g(x))$  (4.8.10)

Furthermore, for a given $u$, $x(u)$ is a differentiable function (by the Implicit Function Theorem of calculus). The necessary condition for the problem (4.8.10) can be written as

$\nabla_x f(x) + (\nabla_x g(x))\,u = 0$  (4.8.11)

and $\nabla_x^2 L(x,u)$ is positive definite since $\nabla_x^2 L(x^*, u^*)$ is positive definite.

Def (Dual Function): Near $u^*$, we define the dual function $\phi$ by the equation

$\phi(u) = \min_x \left[ f(x) + (u, g(x)) \right] = \min_x L(x,u)$  (4.8.12)

In this definition, the minimum is taken locally with respect to $x$ near $x^*$. With this definition of the dual function, we can show that locally the original constrained Problem PE is equivalent to unconstrained local maximization of the dual function $\phi$ with respect to $u$. Thus, we can establish equivalence between a constrained problem in $x$ and an unconstrained problem in $u$. To establish the duality relation, we must prove two lemmas.

Lemma 4.8.1: The dual function $\phi(u)$ has gradient

$\nabla_u \phi(u) = g(x(u))$  (4.8.13)

Proof: Let $x(u)$ represent a local minimum of the Lagrange function

$L(x) = f(x) + (u, g(x))$  (4.8.14)

The dual function can then be written explicitly from Eq. (4.8.12) as

$\phi(u) = f(x(u)) + (u, g(x(u)))$  (4.8.15)
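As a quick illustration of definition (4.8.12), the following Python sketch evaluates $\phi(u)$ by local unconstrained minimization. The problem data are an assumption for illustration (not the lecture's example): minimize $f(x) = x_1^2 + x_2^2$ subject to $g(x) = x_1 + x_2 - 2 = 0$, for which $x^* = (1,1)$, $u^* = -2$, and $f(x^*) = \phi(u^*) = 2$.

```python
# Sketch of the dual function phi(u) = min_x L(x, u) of Eq. (4.8.12).
# Toy problem (an assumption for illustration, not from the text):
#   minimize f(x) = x1^2 + x2^2  subject to  g(x) = x1 + x2 - 2 = 0,
# with known solution x* = (1, 1), u* = -2, f(x*) = 2.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2
g = lambda x: x[0] + x[1] - 2.0

def phi(u, x0=np.zeros(2)):
    """Evaluate phi(u) by locally minimizing the Lagrangian from x0."""
    res = minimize(lambda x: f(x) + u * g(x), x0)
    return res.fun, res.x          # dual value and the minimizer x(u)

val, x_u = phi(-2.0)
print(val, x_u)                    # ~2.0 and ~[1. 1.]: phi(u*) = f(x*)
```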

Therefore,

$\nabla_u \phi(u) \equiv \dfrac{d\phi}{du} = \dfrac{\partial \phi}{\partial u} + \left(\dfrac{dx}{du}\right)^T \dfrac{\partial \phi}{\partial x} = g(x(u)) + \left(\dfrac{dx}{du}\right)^T \dfrac{\partial L}{\partial x}$  (4.8.16)

But $\partial L / \partial x$ in Eq. (4.8.16) is zero because $x(u)$ minimizes the Lagrange function of Eq. (4.8.14). This proves the result of Eq. (4.8.13).

Lemma 4.8.1 is of extreme practical importance, since it shows that the gradient of the dual function is quite simple to calculate. Once the dual function is evaluated by minimization with respect to $x$, the corresponding $g(x)$, which is the gradient of $\phi(u)$, can be evaluated without any further calculation.

Lemma 4.8.2: The Hessian of the dual function is

$\nabla_u^2 \phi(u) = -\nabla_x g^T(x)\, [\nabla_x^2 L(x)]^{-1}\, \nabla_x g(x)$  (4.8.17)

Proof: By Lemma 4.8.1,

$\nabla_u^2 \phi(u) = \nabla_u (\nabla_u \phi) = \nabla_u g(x(u)) = (\nabla_u x)\, \nabla_x g(x)$  (4.8.18)

To calculate $\nabla_u x$, we observe that

$\nabla_x L(x,u) \equiv \nabla_x f(x) + \nabla_x g(x)\, u = 0$  (4.8.19)

where $L(x,u)$ is defined in Eq. (4.8.14). Differentiating Eq. (4.8.19) with respect to $u$,

$\nabla_u (\nabla_x L) \equiv \nabla_x g(x)^T + (\nabla_u x)\, \nabla_x^2 L(x) = 0 \quad\Rightarrow\quad \nabla_u x = -\nabla_x g(x)^T \left[\nabla_x^2 L(x)\right]^{-1}$  (4.8.20)

Substituting Eq. (4.8.20) into Eq. (4.8.18), we obtain the result of Eq. (4.8.17), which was to be proved.

Since $[\nabla_x^2 L(x)]^{-1}$ is positive definite, and since $\nabla_x g(x)$ is of full column rank near $x^*$, the $p \times p$ matrix $\nabla_u^2 \phi(u)$ (the Hessian of $\phi$) is negative definite. This observation, and the Hessian of $\phi$ itself, play a dominant role in the analysis of dual methods.

Theorem 4.8.1 (Local Duality Theorem): Consider the problem

Minimize $f(x)$ subject to $g(x) = 0$
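Continuing the toy problem above, Lemmas 4.8.1 and 4.8.2 can be checked by finite differences; this is a sketch reusing `f`, `g`, and `phi` from the previous snippet. For that assumed problem, $\nabla_x^2 L = 2I$ and $\nabla_x g = (1,1)^T$, so Eq. (4.8.17) predicts $\nabla_u^2 \phi = -1$.

```python
# Finite-difference check of Lemmas 4.8.1 and 4.8.2, reusing f, g, phi from
# the previous sketch. For that toy problem, Hess_x L = 2I and
# grad_g = [1, 1]^T, so Eq. (4.8.17) gives Hess_u phi = -1.
h, u = 1e-3, -1.5                             # step size; any u near u*

dphi = (phi(u + h)[0] - phi(u - h)[0]) / (2 * h)
print(dphi, g(phi(u)[1]))                     # both ~ g(x(u)): Lemma 4.8.1

d2phi = (phi(u + h)[0] - 2 * phi(u)[0] + phi(u - h)[0]) / h**2
print(d2phi)                                  # ~ -1.0: Lemma 4.8.2
```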

Let (i) $x^*$ be a local minimum, (ii) $x^*$ be a regular point, (iii) $u^*$ be the Lagrange multipliers at $x^*$, and (iv) $\nabla_x^2 L(x^*, u^*)$ be positive definite. Then the dual problem

Maximize $\phi(u)$

has a local solution at $u^*$ with $x^* = x(u^*)$. The maximum value of the dual function is equal to the minimum value of $f(x)$; i.e., $\phi(u^*) = f(x^*)$.

Proof: It is clear that $x^* = x(u^*)$ by the definition of $\phi$. Now at $u^*$ we have, by Lemma 4.8.1,

$\nabla_u \phi(u^*) = g(x^*) = 0$

and by Lemma 4.8.2 the Hessian of $\phi$ is negative definite. Thus, $u^*$ satisfies the first-order necessary and second-order sufficiency conditions for an unconstrained maximum point of $\phi$. Substituting $u^*$ into the definition of $\phi$ in Eq. (4.8.15),

$\phi(u^*) = f(x(u^*)) + (u^*, g(x(u^*))) = f(x^*) + (u^*, g(x^*)) = f(x^*)$

which was to be proved.

INEQUALITY CONSTRAINT CASE. Consider the inequality-constrained problem:

Problem P: Minimize $f(x)$ subject to $x \in S$

$S = \{\, x \in R^n : g_i(x) = 0,\ i = 1 \text{ to } p;\ g_i(x) \le 0,\ i = p+1 \text{ to } m \,\}$  (4.8.21)

Define the Lagrange function as

$L(x,u) = f(x) + (u, g(x))$ with $u_i \ge 0$, $i > p$  (4.8.22)

The dual function for Problem P is defined as

$\phi(u) = \min_x L(x,u)$; $u_i \ge 0$, $i > p$  (4.8.23)

The dual problem is defined as

Maximize $\phi(u)$ subject to $u_i \ge 0$, $i > p$  (4.8.24)

Theorem 4.8.2 (Strong Duality Theorem): Let (i) $x^*$ be a local minimum of Problem P, (ii) $x^*$ be a regular point, (iii) $\nabla_x^2 L(x^*)$ be positive definite, and (iv) $u^*$ be the Lagrange multipliers at the optimum point $x^*$. Then $u^*$ solves the dual problem defined in Eq. (4.8.24), with $f(x^*) = \phi(u^*)$ and $x^* = x(u^*)$.

If the assumption of positive definiteness of $\nabla_x^2 L(x^*)$ is not made, we get the weak duality theorem.

Theorem 4.8.3 (Weak Duality Theorem): Let $x$ be a feasible solution to Problem P and let $u$ be a feasible solution to the dual problem defined in Eq. (4.8.24); i.e., $g_i(x) = 0$, $i = 1$ to $p$; $g_i(x) \le 0$, $i = p+1$ to $m$; and $u_i \ge 0$, $i = p+1$ to $m$. Then $\phi(u) \le f(x)$.

Proof: By definition,

$\phi(u) = \min_x L(x,u) = \min_x [f(x) + (u, g(x))] \le f(x) + (u, g(x)) \le f(x)$

since $u_i \ge 0$ and $g_i(x) \le 0$ for $i = p+1$ to $m$, and $g_i(x) = 0$ for $i = 1$ to $p$.

From Theorem 4.8.3, we obtain the following results:

1. Minimum $[f(x)$ with $x \in S] \ge$ Maximum $[\phi(u)$ with $u_i \ge 0$, $i = p+1$ to $m]$.
2. If $f(x^*) = \phi(u^*)$ with $u_i^* \ge 0$, $i = p+1$ to $m$, and $x^* \in S$, then $x^*$ and $u^*$ solve the primal and dual problems, respectively.
3. If Minimum $[f(x)$ for $x \in S] = -\infty$, then the dual is infeasible, and vice versa (i.e., if the dual is infeasible, the primal is unbounded).
4. If Maximum $[\phi(u)$; $u_i \ge 0$, $i > p] = \infty$, then the primal problem has no feasible solution, and vice versa (i.e., if the primal is infeasible, the dual is unbounded).
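A one-variable sketch of Theorem 4.8.3, where the problem data are an assumption for illustration: minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$. Here $L = x^2 + u(1-x)$, $x(u) = u/2$, and $\phi(u) = u - u^2/4$ for $u \ge 0$, so every dual-feasible value bounds every primal-feasible value from below.

```python
# Weak duality check on an assumed toy problem:
#   minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0  (i.e., x >= 1),
# where L = x^2 + u(1 - x), x(u) = u/2, and phi(u) = u - u^2/4 for u >= 0.
import numpy as np

f = lambda x: x**2
phi = lambda u: u - u**2 / 4.0               # closed-form dual function

u = np.linspace(0.0, 5.0, 11)                # dual-feasible points (u >= 0)
x = np.linspace(1.0, 3.0, 11)                # primal-feasible points (x >= 1)
assert np.all(phi(u)[:, None] <= f(x)[None, :] + 1e-12)
print(phi(u).max(), f(x).min())              # both 1.0: phi(u*) = f(x*) here
```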

Lemma (Lower Bound for Primal Cost Function): Let $u \in R^m$. Then for any $u$ with $u_i \ge 0$, $i = p+1$ to $m$,

$\phi(u) \le f(x^*)$

Proof: $\phi(u) \le$ Maximum $[\phi(u)$; $u_i \ge 0$, $i = p+1$ to $m] = f(x^*)$.

This lemma is quite useful in practical applications: it tells us how to find a lower bound on the optimal primal cost. The dual function, for arbitrary $u_i$, $i = 1$ to $p$, and $u_i \ge 0$, $i = p+1$ to $m$, provides a lower bound on the optimal cost; for any $x \in S$, $f(x)$ provides an upper bound on the optimal cost.

Def (Saddle Points): Let $L(x,u)$ be the Lagrange function with $u \in R^m$. $L$ has a saddle point at $(x^*, u^*)$ subject to $u_i \ge 0$, $i = p+1$ to $m$, if

$L(x^*, u) \le L(x^*, u^*) \le L(x, u^*)$

holds for all $x$ near $x^*$ and all $u$ near $u^*$ with $u_i \ge 0$ for $i = p+1$ to $m$.

Theorem (Saddle Point Theorem): Consider the NLP problem: Minimize $f(x)$ with $x \in S$. Let $f, g_i \in C^2$, $i = 1$ to $m$, and let $L(x,u)$ be defined as

$L(x,u) = f(x) + (u, g(x))$

Let $L(x^*, u^*)$ exist with $u_i^* \ge 0$, $i = p+1$ to $m$. Also let $\nabla_x^2 L(x^*, u^*)$ be positive definite. Then $x^*$, satisfying a suitable constraint qualification, is a local minimum of the NLP problem if and only if $(x^*, u^*)$ is a saddle point of the Lagrangian, i.e.,

$L(x^*, u) \le L(x^*, u^*) \le L(x, u^*)$

for all $x$ near $x^*$ and all $u$ near $u^*$ with $u_i \ge 0$ for $i = p+1$ to $m$. See Bazaraa and Shetty (1979), p. 185, for a proof.

Example: Consider the following problem in two variables (Ref. ):

minimize $f = -x_1 x_2$ subject to $(x_1 - 3)^2 + x_2^2 = 5$

Let us first solve the problem using the optimality conditions. The Lagrangian for the problem is defined as

$L = -x_1 x_2 + u\left[ (x_1 - 3)^2 + x_2^2 - 5 \right]$  (a)

The first-order necessary conditions are

$-x_2 + (2x_1 - 6)u = 0$  (b)

$-x_1 + 2 x_2 u = 0$  (c)

together with the equality constraint. These equations have the solution

$x_1^* = 4$, $x_2^* = 2$, $u^* = 1$, $f^* = -8$  (d)

The Hessian of the Lagrangian at this point is

$\nabla_x^2 L = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$  (e)

Since this is positive definite, we conclude that the solution obtained is an isolated local minimum. Since $\nabla_x^2 L(x^*)$ is positive definite, we can apply the local duality theory near the solution. Define the dual function as

$\phi(u) = \min_x L(x,u)$  (f)

Solving Eqs. (b) and (c), we get $x_1$ and $x_2$ in terms of $u$ as

$x_1 = \dfrac{12u^2}{4u^2 - 1}$  (g)

$x_2 = \dfrac{6u}{4u^2 - 1}$  (h)

provided $4u^2 - 1 \ne 0$. Substituting Eqs. (g) and (h) into Eq. (f), the dual function is given as

$\phi(u) = \dfrac{4u^3 + 4u - 80u^5}{(4u^2 - 1)^2}$  (i)

valid for $u \ne \pm 1/2$. This $\phi$ has a local maximum at $u^* = 1$. Substituting $u = 1$ into Eqs. (g) and (h), we get the same solution as in Eq. (d). Note that $\phi(u^*) = -8\ (= f^*)$.
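The example can also be verified numerically; here is a sketch assuming SciPy, which solves the KKT system (b), (c) plus the constraint, then maximizes the dual function (i) near $u^* = 1$.

```python
# Numeric check of the example: solve the KKT system, then maximize phi(u).
import numpy as np
from scipy.optimize import fsolve, minimize_scalar

def kkt(z):
    x1, x2, u = z
    return [-x2 + (2 * x1 - 6) * u,          # Eq. (b)
            -x1 + 2 * x2 * u,                # Eq. (c)
            (x1 - 3)**2 + x2**2 - 5]         # equality constraint

x1, x2, u = fsolve(kkt, [3.5, 1.5, 0.8])     # start near the solution
print(x1, x2, u, -x1 * x2)                   # 4.0, 2.0, 1.0, f* = -8.0

phi = lambda u: (4*u**3 + 4*u - 80*u**5) / (4*u**2 - 1)**2   # Eq. (i)
res = minimize_scalar(lambda u: -phi(u), bounds=(0.6, 1.5), method='bounded')
print(res.x, phi(res.x))                     # u* = 1.0, phi(u*) = -8.0 = f*
```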
