Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming

1 Lecture 11 and 12: Penalty methods and augmented Lagrangian methods for nonlinear programming
Coralia Cartis, Mathematical Institute, University of Oxford
C6.2/B2: Continuous Optimization

2 Penalty methods for nonlinear programming

3 Nonlinear equality-constrained problems

$\min_{x \in \mathbb{R}^n} f(x)$ subject to $c(x) = 0$,   (ecp)

where $f : \mathbb{R}^n \to \mathbb{R}$ and $c = (c_1,\ldots,c_m) : \mathbb{R}^n \to \mathbb{R}^m$ are smooth.

We attempt to find local solutions (at least KKT points). Constrained optimization involves a conflict of requirements: minimization of the objective versus feasibility of the solution. It is easier to generate feasible iterates for linear equality and general inequality constrained problems; it is very hard, even impossible in general, when general equality constraints are present.

$\Rightarrow$ form a single, parametrized and unconstrained objective whose minimizers approach the solutions of the initial problem as the parameters vary.

4 A penalty function for (ecp)

$\min_{x \in \mathbb{R}^n} f(x)$ subject to $c(x) = 0$.   (ecp)

The quadratic penalty function:

$\min_{x \in \mathbb{R}^n} \Phi_\sigma(x) = f(x) + \frac{1}{2\sigma}\|c(x)\|^2$,   (ecp$_\sigma$)

where $\sigma > 0$ is the penalty parameter; $1/(2\sigma)$ weights the penalty on infeasibility, and letting $\sigma \to 0$ forces the constraint to be satisfied while achieving optimality for $f$.

$\Phi_\sigma$ may have other stationary points that are not solutions of (ecp); e.g., when $c(x) = 0$ is inconsistent.

5 Contours of the penalty function $\Phi_\sigma$ -- an example

[Contour plots of $\Phi_\sigma$ for $\sigma = 100$ and $\sigma = 1$.]

The quadratic penalty function for $\min_x x_1^2 + x_2^2$ subject to $x_1 + x_2^2 = 1$.

6 Contours of the penalty function $\Phi_\sigma$ -- an example

[Contour plots of $\Phi_\sigma$ for $\sigma = 0.1$ and $\sigma = 0.01$.]

The quadratic penalty function for $\min_x x_1^2 + x_2^2$ subject to $x_1 + x_2^2 = 1$.
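To reproduce contour pictures like the ones above, the short script below evaluates $\Phi_\sigma$ on a grid for the example problem and draws its contours for several values of $\sigma$. This is a minimal illustrative sketch, not part of the lecture; it assumes NumPy and Matplotlib are available.

```python
import numpy as np
import matplotlib.pyplot as plt

def phi(x1, x2, sigma):
    """Quadratic penalty Phi_sigma for min x1^2 + x2^2 s.t. x1 + x2^2 = 1."""
    f = x1**2 + x2**2
    c = x1 + x2**2 - 1.0
    return f + c**2 / (2.0 * sigma)

# grid over which to draw the contours
X1, X2 = np.meshgrid(np.linspace(-1.5, 2.0, 400), np.linspace(-1.5, 1.5, 400))

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, sigma in zip(axes, [100.0, 1.0, 0.1, 0.01]):
    ax.contour(X1, X2, phi(X1, X2, sigma), levels=30)
    ax.set_title(f"sigma = {sigma}")
# as sigma decreases, the minimizers of Phi_sigma approach the
# constrained solution x* = (1/2, +/- 1/sqrt(2))
plt.show()
```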

7 A quadratic penalty method

Given $\sigma_0 > 0$, let $k = 0$. Until convergence do:
  Choose $0 < \sigma_{k+1} < \sigma_k$.
  Starting from $x^k_0$ (possibly, $x^k_0 := x^k$), use an unconstrained minimization algorithm to find an approximate minimizer $x^{k+1}$ of $\Phi_{\sigma_{k+1}}$.
  Let $k := k + 1$.

Must have $\sigma_k \to 0$ as $k \to \infty$; e.g., $\sigma_{k+1} := 0.1\,\sigma_k$, $\sigma_{k+1} := (\sigma_k)^2$, etc.

Algorithms for minimizing $\Phi_\sigma$: linesearch and trust-region methods. When $\sigma$ is small, $\Phi_\sigma$ is very steep in the direction of the constraint gradients, and so $\Phi_\sigma$ changes rapidly for steps in such directions; this has implications for the shape of the trust region.
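The loop above is straightforward to prototype. The sketch below is a minimal illustration, not the lecture's reference implementation: it uses scipy.optimize.minimize with BFGS as the inner unconstrained solver, the example problem from the contour slides, and illustrative choices such as the name quadratic_penalty_method and the update $\sigma_{k+1} = 0.1\,\sigma_k$.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  return x[0]**2 + x[1]**2
def c(x):  return np.array([x[0] + x[1]**2 - 1.0])

def quadratic_penalty_method(x0, sigma0=1.0, tol=1e-8, max_outer=30):
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for k in range(max_outer):
        sigma *= 0.1                                   # choose 0 < sigma_{k+1} < sigma_k
        phi = lambda z: f(z) + c(z) @ c(z) / (2.0 * sigma)
        # warm-start the unconstrained solver from the previous iterate
        x = minimize(phi, x, method="BFGS", options={"gtol": 1e-8}).x
        y = -c(x) / sigma                              # Lagrange multiplier estimates
        if np.linalg.norm(c(x)) < tol:                 # crude convergence test
            break
    return x, y

x, y = quadratic_penalty_method([2.0, 1.0])
print(x, y)   # expect x ~ (0.5, +/- 1/sqrt(2)), y ~ 1
```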

8 A convergence result for the penalty method

Theorem 25. (Global convergence of the penalty method) Apply the basic quadratic penalty method to (ecp). Assume that $f, c \in C^1$, that $y^k_i = -c_i(x^k)/\sigma_k$, $i = 1,\ldots,m$, and $\|\nabla\Phi_{\sigma_k}(x^k)\| \leq \epsilon_k$, where $\epsilon_k \to 0$ as $k \to \infty$, and also $\sigma_k \to 0$ as $k \to \infty$. Moreover, assume that $x^k \to x^*$, where $\nabla c_i(x^*)$, $i = 1,\ldots,m$, are linearly independent. Then $x^*$ is a KKT point of (ecp) and $y^k \to y^*$, where $y^*$ is the vector of Lagrange multipliers of the (ecp) constraints.

$\nabla c_i(x^*)$, $i = 1,\ldots,m$, linearly independent $\iff$ the Jacobian matrix $J(x^*)$ of the constraints has full row rank, and so $m \leq n$.

If $J(x^*)$ does not have full rank, then $x^*$ (locally) minimizes the infeasibility $\|c(x)\|$. [Let $\|y^k\| \to \infty$ in $(\ast)$ on the next slide.]

9 A convergence result for the penalty method

Proof of Theorem 25. $J(x^*)$ full rank $\Rightarrow$ $J(x^*)^+ = (J(x^*)J(x^*)^T)^{-1}J(x^*)$ is the pseudo-inverse. As $x^k \to x^*$ and $J$ is continuous, $\|J(x^k)^+\|$ is bounded above and $J^+$ is continuous for all sufficiently large $k$.

Let $y^k = -c(x^k)/\sigma_k$ and $y^* = J(x^*)^+\nabla f(x^*)$. Then

$\|\nabla\Phi_{\sigma_k}(x^k)\| = \|\nabla f(x^k) - J(x^k)^T y^k\| \leq \epsilon_k$,   $(\ast)$

and so

$\|J(x^k)^+\nabla f(x^k) - y^k\| = \|J(x^k)^+(\nabla f(x^k) - J(x^k)^T y^k)\| \leq \|J(x^k)^+\|\,\|\nabla f(x^k) - J(x^k)^T y^k\| \leq \{\|J(x^k)^+ - J(x^*)^+\| + \|J(x^*)^+\|\}\,\epsilon_k \leq 2\|J(x^*)^+\|\,\epsilon_k$,   $(\ast\ast)$

where in the last inequality we used $x^k \to x^*$ and the continuity of $J^+$. The triangle inequality (adding and subtracting $J(x^k)^+\nabla f(x^k)$) and the definition of $y^*$ give

$\|y^k - y^*\| \leq \|J(x^k)^+\nabla f(x^k) - J(x^*)^+\nabla f(x^*)\| + \|J(x^k)^+\nabla f(x^k) - y^k\|$.

Thus $y^k \to y^*$, since $x^k \to x^*$, $J^+$ and $\nabla f$ are continuous, $(\ast\ast)$ holds and $\epsilon_k \to 0$. Using all these again in $(\ast)$ as $k \to \infty$: $\nabla f(x^*) - J(x^*)^T y^* = 0$. As $c(x^k) = -\sigma_k y^k$, $\sigma_k \to 0$ and $y^k \to y^*$, we get $c(x^*) = 0$. Thus $x^*$ is a KKT point. $\Box$

10 Derivatives of the penalty function

Recall $\Phi_\sigma(x) = f(x) + \frac{1}{2\sigma}\|c(x)\|^2$. Let $y(\sigma) := -c(x)/\sigma$ (estimates of the Lagrange multipliers) and let $L$ be the Lagrangian function of (ecp), $L(x,y) := f(x) - y^T c(x)$. Then

$\nabla\Phi_\sigma(x) = \nabla f(x) + \frac{1}{\sigma}J(x)^T c(x) = \nabla_x L(x, y(\sigma))$,

where $J(x)$ is the $m \times n$ Jacobian matrix of the constraints $c(x)$, and

$\nabla^2\Phi_\sigma(x) = \nabla^2 f(x) + \frac{1}{\sigma}\sum_{i=1}^m c_i(x)\nabla^2 c_i(x) + \frac{1}{\sigma}J(x)^T J(x) = \nabla^2_{xx} L(x, y(\sigma)) + \frac{1}{\sigma}J(x)^T J(x)$.

As $\sigma \to 0$: generally, $c_i(x) \to 0$ at the same rate as $\sigma$ for all $i$, so usually $\nabla^2_{xx} L(x, y(\sigma))$ is well-behaved. But $J(x)^T J(x)/\sigma \to J(x^*)^T J(x^*)/0 = \infty$.

11 Ill-conditioning of the penalty's Hessian

Fact [cf. Th 5.2, Gould ref.]: $m$ eigenvalues of $\nabla^2\Phi_{\sigma_k}(x^k)$ are $O(1/\sigma_k)$ and hence tend to infinity as $k \to \infty$ (i.e., $\sigma_k \to 0$); the remaining $n - m$ are $O(1)$ in the limit. Hence the condition number of $\nabla^2\Phi_{\sigma_k}(x^k)$ is $O(1/\sigma_k)$ $\Rightarrow$ it blows up as $k \to \infty$.

$\Rightarrow$ we may worry that we cannot compute changes to $x^k$ accurately. Namely, whether using linesearch or trust-region methods, asymptotically we want to minimize $\Phi_{\sigma_{k+1}}(x)$ by taking Newton steps, i.e., solve the system

$\nabla^2\Phi_\sigma(x)\,dx = -\nabla\Phi_\sigma(x)$,   $(\ast)$

for $dx$, from some current $x = x^{k,i}$ and $\sigma = \sigma_{k+1}$.

Despite the ill-conditioning present, we can still solve for $dx$ accurately!
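The eigenvalue blow-up is easy to observe numerically. The following sketch (illustrative; it assumes NumPy and uses the example problem from the contour slides) builds $\nabla^2\Phi_\sigma$ from the formula on the previous slide and prints its eigenvalues and condition number as $\sigma$ decreases.

```python
import numpy as np

def penalty_hessian(x, sigma):
    """Hessian of Phi_sigma for min x1^2 + x2^2 s.t. x1 + x2^2 = 1 (example problem)."""
    c   = x[0] + x[1]**2 - 1.0
    J   = np.array([[1.0, 2.0 * x[1]]])                # 1 x 2 Jacobian
    H_f = 2.0 * np.eye(2)                               # Hessian of f
    H_c = np.array([[0.0, 0.0], [0.0, 2.0]])            # Hessian of the constraint
    return H_f + (c * H_c + J.T @ J) / sigma

x_star = np.array([0.5, 1.0 / np.sqrt(2.0)])            # constrained minimizer
for sigma in [1.0, 1e-2, 1e-4, 1e-6]:
    eig = np.linalg.eigvalsh(penalty_hessian(x_star, sigma))
    print(f"sigma = {sigma:8.1e}  eigenvalues = {eig}  cond = {eig[-1] / eig[0]:.1e}")
# one eigenvalue grows like O(1/sigma); the other stays O(1)
```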

12 Solving accurately for the Newton direction

Due to the computed formulas for the derivatives, $(\ast)$ is equivalent to

$\left(\nabla^2_{xx} L(x, y(\sigma)) + \frac{1}{\sigma}J(x)^T J(x)\right) dx = -\left(\nabla f(x) + \frac{1}{\sigma}J(x)^T c(x)\right)$,

where $y(\sigma) = -c(x)/\sigma$. Define the auxiliary variable $w$,

$w = \frac{1}{\sigma}\left(J(x)\,dx + c(x)\right)$.

Then the Newton system $(\ast)$ can be re-written as

$\begin{pmatrix} \nabla^2_{xx} L(x, y(\sigma)) & J(x)^T \\ J(x) & -\sigma I \end{pmatrix}\begin{pmatrix} dx \\ w \end{pmatrix} = -\begin{pmatrix} \nabla f(x) \\ c(x) \end{pmatrix}$.

This system is essentially independent of $\sigma$ for small $\sigma$ $\Rightarrow$ it cannot suffer from ill-conditioning due to $\sigma \to 0$.

We still need to be careful about minimizing $\Phi_\sigma$ for small $\sigma$. E.g., when using trust-region methods, use $\|dx\|_B$ for the trust-region constraint, where $B$ takes into account the ill-conditioned terms of the Hessian so as to encourage equal model decrease in all directions.
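To see the two formulations side by side, the sketch below (an illustration using the same example problem; NumPy assumed) solves the Newton system both via the ill-conditioned $n \times n$ form and via the augmented $(n+m) \times (n+m)$ form, at a nearly feasible point so that $c(x) = O(\sigma)$ as in the asymptotic regime, and compares steps and condition numbers.

```python
import numpy as np

# example problem: f = x1^2 + x2^2,  c = x1 + x2^2 - 1
def grad_f(x): return 2.0 * x
def cfun(x):   return np.array([x[0] + x[1]**2 - 1.0])
def jac(x):    return np.array([[1.0, 2.0 * x[1]]])
H_f = 2.0 * np.eye(2)                                  # Hessian of f
H_c = np.array([[0.0, 0.0], [0.0, 2.0]])               # Hessian of the constraint

# nearly feasible point: c(x) = O(sigma), so y(sigma) stays bounded
sigma = 1e-8
x = np.array([0.5 + sigma, 1.0 / np.sqrt(2.0)])
c, J = cfun(x), jac(x)
y = -c / sigma                                         # multiplier estimate y(sigma)
H_L = H_f - y[0] * H_c                                 # Lagrangian Hessian at (x, y(sigma))

# (a) ill-conditioned n x n Newton system
A = H_L + (J.T @ J) / sigma
dx_direct = np.linalg.solve(A, -(grad_f(x) + J.T @ c / sigma))

# (b) equivalent augmented system in (dx, w), well-conditioned for small sigma
K = np.block([[H_L, J.T], [J, -sigma * np.eye(1)]])
rhs = -np.concatenate([grad_f(x), c])
dx_aug = np.linalg.solve(K, rhs)[:2]

print(dx_direct, dx_aug)                               # the two steps agree (up to roundoff)
print(np.linalg.cond(A), np.linalg.cond(K))            # cond(A) ~ 1/sigma, cond(K) modest
```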

13 Perturbed optimality conditions

$\min_{x \in \mathbb{R}^n} f(x)$ subject to $c(x) = 0$.   (ecp)

(ecp) satisfies the KKT conditions (dual feasibility) $\nabla f(x) = J(x)^T y$ and (primal feasibility) $c(x) = 0$. Consider the perturbed system

$\nabla f(x) - J(x)^T y = 0$,
$c(x) + \sigma y = 0$.   (ecp$_p$)

Find roots of the nonlinear system (ecp$_p$) as $\sigma \to 0$ ($\sigma > 0$); use Newton's method for root finding.

14 Perturbed optimality conditions...

Newton's method for the system (ecp$_p$) computes the change $(dx, dy)$ to $(x, y)$ from

$\begin{pmatrix} \nabla^2_{xx} L(x,y) & -J(x)^T \\ J(x) & \sigma I \end{pmatrix}\begin{pmatrix} dx \\ dy \end{pmatrix} = -\begin{pmatrix} \nabla f(x) - J(x)^T y \\ c(x) + \sigma y \end{pmatrix}$.

Eliminating $dy$ gives

$\left(\nabla^2_{xx} L(x,y) + \frac{1}{\sigma}J(x)^T J(x)\right) dx = -\left(\nabla f(x) + \frac{1}{\sigma}J(x)^T c(x)\right)$

$\Rightarrow$ the same as Newton for the quadratic penalty! What's different?

15 Perturbed optimality conditions...

Primal:

$\left(\nabla^2_{xx} L(x, y(\sigma)) + \frac{1}{\sigma}J(x)^T J(x)\right) dx_p = -\left(\nabla f(x) + \frac{1}{\sigma}J(x)^T c(x)\right)$,

where $y(\sigma) = -c(x)/\sigma$.

Primal-dual:

$\left(\nabla^2_{xx} L(x, y) + \frac{1}{\sigma}J(x)^T J(x)\right) dx_{pd} = -\left(\nabla f(x) + \frac{1}{\sigma}J(x)^T c(x)\right)$.

The difference is the freedom to choose $y$ in $\nabla^2_{xx} L(x,y)$ in primal-dual methods; it makes a big difference computationally.
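In code the two directions differ only in which multiplier enters the Lagrangian Hessian, as the following hypothetical snippet (same example problem and data as in the earlier sketches) makes explicit.

```python
import numpy as np

def newton_step(x, y_in_hessian, sigma):
    """Common form of both steps: only the y used in the Lagrangian Hessian differs."""
    c = np.array([x[0] + x[1]**2 - 1.0])
    J = np.array([[1.0, 2.0 * x[1]]])
    H_L = 2.0 * np.eye(2) - y_in_hessian * np.array([[0.0, 0.0], [0.0, 2.0]])
    rhs = -(2.0 * x + J.T @ c / sigma)
    return np.linalg.solve(H_L + J.T @ J / sigma, rhs)

x, y, sigma = np.array([0.4, 0.8]), 0.7, 1e-4
c = x[0] + x[1]**2 - 1.0
dx_primal      = newton_step(x, -c / sigma, sigma)   # y(sigma) = -c(x)/sigma
dx_primal_dual = newton_step(x, y, sigma)            # any current dual estimate y
print(dx_primal, dx_primal_dual)
```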

16 Other penalty functions

Consider the general problem (CP):

minimize $f(x)$ over $x \in \mathbb{R}^n$ subject to $c_E(x) = 0$, $c_I(x) \geq 0$.   (CP)

Exact penalty function: $\Phi(x,\sigma)$ is exact if there is $\bar\sigma > 0$ such that, whenever $\sigma < \bar\sigma$, any local solution of (CP) is a local minimizer of $\Phi(x,\sigma)$. (The quadratic penalty is inexact.)

Examples:

$\ell_2$-penalty function: $\Phi(x,\sigma) = f(x) + \frac{1}{\sigma}\|c_E(x)\|$.

$\ell_1$-penalty function: letting $[z]^- = \min\{z, 0\}$,
$\Phi(x,\sigma) = f(x) + \frac{1}{\sigma}\sum_{i \in E}|c_i(x)| + \frac{1}{\sigma}\sum_{i \in I}\big|[c_i(x)]^-\big|$.

Extension of the quadratic penalty to (CP):
$\Phi(x,\sigma) = f(x) + \frac{1}{2\sigma}\|c_E(x)\|^2 + \frac{1}{2\sigma}\sum_{i \in I}\left([c_i(x)]^-\right)^2$
(may no longer be sufficiently smooth; it is inexact).
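For concreteness, the sketch below evaluates the three merit functions listed above. It is illustrative only: the callables f, cE and cI stand for generic problem data with $c_E(x) = 0$ and $c_I(x) \geq 0$, and the toy problem is not from the slides.

```python
import numpy as np

def l2_penalty(f, cE, x, sigma):
    """f(x) + ||c_E(x)|| / sigma  (exact, nonsmooth at feasibility)."""
    return f(x) + np.linalg.norm(cE(x)) / sigma

def l1_penalty(f, cE, cI, x, sigma):
    """f(x) + sum_E |c_i(x)|/sigma + sum_I |[c_i(x)]^-|/sigma, with [z]^- = min(z, 0)."""
    viol_I = np.minimum(cI(x), 0.0)
    return f(x) + (np.sum(np.abs(cE(x))) + np.sum(np.abs(viol_I))) / sigma

def quadratic_penalty(f, cE, cI, x, sigma):
    """Extension of the quadratic penalty to (CP); smoother but inexact."""
    viol_I = np.minimum(cI(x), 0.0)
    return f(x) + (np.sum(cE(x)**2) + np.sum(viol_I**2)) / (2.0 * sigma)

# toy data: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0,  x1 >= 0
f  = lambda x: x[0] + x[1]
cE = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
cI = lambda x: np.array([x[0]])
x = np.array([0.5, -1.0])
print(l2_penalty(f, cE, x, 0.1), l1_penalty(f, cE, cI, x, 0.1), quadratic_penalty(f, cE, cI, x, 0.1))
```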

17 Augmented Lagrangian methods for nonlinear programming

18 Nonlinear equality-constrained problems

$\min_{x \in \mathbb{R}^n} f(x)$ subject to $c(x) = 0$,   (ecp)

where $f : \mathbb{R}^n \to \mathbb{R}$ and $c = (c_1,\ldots,c_m) : \mathbb{R}^n \to \mathbb{R}^m$ are smooth.

Another example of a merit function and method for (ecp): the augmented Lagrangian function

$\Phi(x,u,\sigma) = f(x) - u^T c(x) + \frac{1}{2\sigma}\|c(x)\|^2$,

where $u \in \mathbb{R}^m$ and $\sigma > 0$ are auxiliary parameters. Two interpretations:
shifted quadratic penalty function;
convexification of the Lagrangian function.

Aim: adjust $u$ and $\sigma$ to encourage convergence.

19 Derivatives of the augmented Lagrangian function

Let $J(x)$ be the Jacobian of the constraints $c(x) = (c_1(x),\ldots,c_m(x))$. Then

$\nabla_x\Phi(x,u,\sigma) = \nabla f(x) - J(x)^T u + \frac{1}{\sigma}J(x)^T c(x)$
$\Rightarrow \nabla_x\Phi(x,u,\sigma) = \nabla f(x) - J(x)^T y(x) = \nabla_x L(x, y(x))$,

where $y(x) = u - \frac{c(x)}{\sigma}$ are the Lagrange multiplier estimates, and

$\nabla^2_{xx}\Phi(x,u,\sigma) = \nabla^2 f(x) - \sum_{i=1}^m u_i\nabla^2 c_i(x) + \frac{1}{\sigma}\sum_{i=1}^m c_i(x)\nabla^2 c_i(x) + \frac{1}{\sigma}J(x)^T J(x)$
$\Rightarrow \nabla^2_{xx}\Phi(x,u,\sigma) = \nabla^2 f(x) - \sum_{i=1}^m y_i(x)\nabla^2 c_i(x) + \frac{1}{\sigma}J(x)^T J(x)$
$\Rightarrow \nabla^2_{xx}\Phi(x,u,\sigma) = \nabla^2_{xx} L(x, y(x)) + \frac{1}{\sigma}J(x)^T J(x)$.

[Lagrangian: $L(x,y) = f(x) - y^T c(x)$.]
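As a quick sanity check of these identities, the sketch below (illustrative; example problem as before, NumPy assumed) compares the analytic gradient $\nabla_x\Phi(x,u,\sigma) = \nabla f(x) - J(x)^T y(x)$ against a central finite-difference estimate.

```python
import numpy as np

def f(x):  return x[0]**2 + x[1]**2
def c(x):  return np.array([x[0] + x[1]**2 - 1.0])
def J(x):  return np.array([[1.0, 2.0 * x[1]]])

def aug_lagrangian(x, u, sigma):
    return f(x) - u @ c(x) + c(x) @ c(x) / (2.0 * sigma)

def aug_lagrangian_grad(x, u, sigma):
    y = u - c(x) / sigma                      # Lagrange multiplier estimates y(x)
    return 2.0 * x - J(x).T @ y               # grad f(x) - J(x)^T y(x)

x, u, sigma = np.array([0.3, 0.9]), np.array([0.5]), 0.25
g = aug_lagrangian_grad(x, u, sigma)

# central finite differences as an independent check
h, g_fd = 1e-6, np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = h
    g_fd[i] = (aug_lagrangian(x + e, u, sigma) - aug_lagrangian(x - e, u, sigma)) / (2 * h)
print(g, g_fd)                                # should agree to roughly 1e-8
```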

20 A convergence result for the augmented Lagrangian

Theorem 26. (Global convergence of the augmented Lagrangian) Assume that $f, c \in C^1$ in (ecp), let $y^k = u^k - \frac{c(x^k)}{\sigma_k}$ for given $u^k \in \mathbb{R}^m$, and assume that $\|\nabla_x\Phi(x^k, u^k, \sigma_k)\| \leq \epsilon_k$, where $\epsilon_k \to 0$ as $k \to \infty$. Moreover, assume that $x^k \to x^*$, where $\nabla c_i(x^*)$, $i = 1,\ldots,m$, are linearly independent. Then $y^k \to y^*$ as $k \to \infty$, with $y^*$ satisfying $\nabla f(x^*) - J(x^*)^T y^* = 0$.

If, additionally, either $\sigma_k \to 0$ for bounded $u^k$, or $u^k \to y^*$ for bounded $\sigma_k$, then $x^*$ is a KKT point of (ecp) with associated Lagrange multipliers $y^*$.

21 A convergence result for the augmented Lagrangian

Proof of Theorem 26. The first part of Th 26, namely convergence of $y^k$ to $y^* = J(x^*)^+\nabla f(x^*)$, follows exactly as in the proof of Theorem 25 (penalty method convergence). (Note that the assumption $\sigma_k \to 0$ is not needed for this part of the proof of Th 25.)

It remains to show that, under the additional assumptions on $u^k$ and $\sigma_k$, $x^*$ is feasible for the constraints. To see this, use the definition of $y^k$ to deduce $c(x^k) = \sigma_k(u^k - y^k)$, and so

$\|c(x^k)\| = \sigma_k\|u^k - y^k\| \leq \sigma_k\|y^k - y^*\| + \sigma_k\|u^k - y^*\|$.

Thus $c(x^k) \to 0$ as $k \to \infty$, due to $y^k \to y^*$ (cf. the first part of the theorem) and the additional assumptions on $u^k$ and $\sigma_k$. As $x^k \to x^*$ and $c$ is continuous, we deduce that $c(x^*) = 0$. $\Box$

Note that the augmented Lagrangian may converge to KKT points without $\sigma_k \to 0$, which limits the ill-conditioning.

22 Contours of the augmented Lagrangian -- an example

[Contour plots for $u = 0.5$ and $u = 0.9$.]

The augmented Lagrangian function for $\min_x x_1^2 + x_2^2$ subject to $x_1 + x_2^2 = 1$, for fixed $\sigma = 1$.

23 Contours of the augmented Lagrangian -- an example

[Contour plots for $u = 0.99$ and $u = y^* = 1$.]

The augmented Lagrangian function for $\min_x x_1^2 + x_2^2$ subject to $x_1 + x_2^2 = 1$, for fixed $\sigma = 1$.

24 Augmented Lagrangian methods

Th 26 $\Rightarrow$ convergence guaranteed if $u^k$ is fixed and $\sigma_k \to 0$ [similar to quadratic penalty methods] $\Rightarrow$ $y^k \to y^*$ and $c(x^k) \to 0$.

Check whether $\|c(x^k)\| \leq \eta_k$, where $\eta_k \to 0$:
  if so, set $u^{k+1} = y^k$ and $\sigma_{k+1} = \sigma_k$ [recall the expression for $y^k$ in Th 26];
  if not, set $u^{k+1} = u^k$ and $\sigma_{k+1} = \tau\sigma_k$ for some $\tau \in (0,1)$.
Reasonable choice: $\eta_k = (\sigma_k)^j$, where $j$ counts the iterations since $\sigma_k$ last changed.

Under such rules, one can ensure that $\sigma_k$ is eventually unchanged under modest assumptions, and obtain (fast) linear convergence. We also need to ensure that $1/\sigma_k$ is sufficiently large (i.e., $\sigma_k$ sufficiently small) that the Hessian $\nabla^2_{xx}\Phi(x^k, u^k, \sigma_k)$ is positive (semi-)definite.

25 A basic augmented Lagrangian method

Given $\sigma_0 > 0$ and $u^0$, let $k = 0$. Until convergence do:
  Set $\eta_k$ and $\epsilon_{k+1}$.
  If $\|c(x^k)\| \leq \eta_k$, set $u^{k+1} = y^k$ and $\sigma_{k+1} = \sigma_k$. Otherwise, set $u^{k+1} = u^k$ and $\sigma_{k+1} = \tau\sigma_k$.
  Starting from $x^k_0$ (possibly, $x^k_0 := x^k$), use an unconstrained minimization algorithm to find an approximate minimizer $x^{k+1}$ of $\Phi(\cdot, u^{k+1}, \sigma_{k+1})$ for which $\|\nabla_x\Phi(x^{k+1}, u^{k+1}, \sigma_{k+1})\| \leq \epsilon_{k+1}$.
  Let $k := k + 1$.

Often choose $\tau = \min(0.1, \sigma_k)$.
Reasonable choice: $\epsilon_k = (\sigma_k)^{j+1}$, where $j$ counts the iterations since $\sigma_k$ last changed.
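A runnable sketch of the method above is given below, using the multiplier and penalty updates of the previous slide. It is illustrative rather than prescriptive: it assumes NumPy and SciPy, uses the example problem from the contour slides, and the forcing-sequence constants are placeholder choices rather than the lecture's.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  return x[0]**2 + x[1]**2
def c(x):  return np.array([x[0] + x[1]**2 - 1.0])
def J(x):  return np.array([[1.0, 2.0 * x[1]]])

def augmented_lagrangian_method(x0, u0, sigma0=0.1, tau=0.1, max_outer=50, tol=1e-8):
    x, u, sigma = np.asarray(x0, float), np.asarray(u0, float), sigma0
    j = 0                                                  # iterations since sigma last changed
    for k in range(max_outer):
        # illustrative forcing sequences (exact constants are a placeholder choice)
        eta, eps = sigma ** (0.1 + 0.9 * j), sigma ** (j + 1)
        if np.linalg.norm(c(x)) <= eta:
            u, j = u - c(x) / sigma, j + 1                 # u_{k+1} = y^k, sigma unchanged
        else:
            sigma, j = tau * sigma, 0                      # u unchanged, shrink sigma
        phi = lambda z, u=u, s=sigma: f(z) - u @ c(z) + c(z) @ c(z) / (2.0 * s)
        x = minimize(phi, x, method="BFGS", options={"gtol": eps}).x
        y = u - c(x) / sigma                               # multiplier estimates
        if np.linalg.norm(c(x)) < tol and np.linalg.norm(2 * x - J(x).T @ y) < tol:
            break
    return x, y

x, y = augmented_lagrangian_method([2.0, 1.0], [0.0])
print(x, y)        # expect x ~ (0.5, +/- 1/sqrt(2)), y ~ 1
```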

Reference: N. I. M. Gould, Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization, Rutherford Appleton Laboratory (RAL); Part C course on continuous optimization.