A polynomial time interior point method for problems with nonconvex constraints

1 A polynomial time interior point method for problems with nonconvex constraints. Oliver Hinder, Yinyu Ye. Department of Management Science and Engineering, Stanford University. June 28, 2018

2 The problem

- Consider the problem
  $$\min_{x \in \mathbb{R}^d} f(x) \quad \text{s.t.} \quad a(x) \le 0,$$
  where $f : \mathbb{R}^d \to \mathbb{R}$ and $a : \mathbb{R}^d \to \mathbb{R}^m$ have Lipschitz first and second derivatives.
- Captures a huge variety of practical problems: drinking water networks, electricity networks, trajectory optimization, engineering design, etc.
- In the worst case, finding a global optimum takes an exponential amount of time.
- Instead we want to find an approximate local optimum, more precisely a Fritz John point.
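To make the setting concrete, here is a toy instance of this problem class (illustrative, not from the talk): a nonconvex saddle objective over a disk intersected with a half-space, written as the plain NumPy callables that the later sketches assume.

```python
import numpy as np

# Toy instance of min f(x) s.t. a(x) <= 0 with d = 2, m = 2.
# All functions have Lipschitz first and second derivatives on bounded sets.

def f(x):
    return x[0] ** 2 - x[1] ** 2               # nonconvex (saddle) objective

def f_grad(x):
    return np.array([2.0 * x[0], -2.0 * x[1]])

def a_val(x):                                   # constraint values a(x), shape (m,)
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,  # stay inside a disk of radius 2
                     x[1] - 1.0])                   # stay below the line x_2 = 1

def a_jac(x):                                   # Jacobian of a, shape (m, d)
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [0.0, 1.0]])
```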

3 µ-approximate Fritz John point

$$a(x) < 0 \quad \text{(primal feasibility)}$$
$$\|\nabla_x f(x) + \nabla a(x)^T y\|_2 \le \mu \sqrt{\|y\|_1 + 1}, \quad y > 0 \quad \text{(dual feasibility)}$$
$$-y_i a_i(x) \in \mu\,[1/2,\, 3/2] \quad \forall i \in \{1, \dots, m\} \quad \text{(complementarity)}$$

- Fritz John is a necessary condition for local optimality.
- If the MFCQ constraint qualification holds, i.e., the dual multipliers are bounded, then a Fritz John point is a KKT point.

[Figure: local minimizers lie inside the set of Fritz John points; when the dual multipliers are bounded, Fritz John points coincide with KKT points.]
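A minimal NumPy sketch (not from the slides) of how one might verify these three conditions numerically; the function name and the callable interface (f_grad, a_val, a_jac, as in the toy instance above) are illustrative assumptions.

```python
import numpy as np

def is_approx_fritz_john(x, y, mu, f_grad, a_val, a_jac):
    """Check the mu-approximate Fritz John conditions at the pair (x, y)."""
    a = a_val(x)
    primal = np.all(a < 0)                         # strict primal feasibility
    residual = f_grad(x) + a_jac(x).T @ y          # Lagrangian stationarity residual
    dual = (np.linalg.norm(residual) <= mu * np.sqrt(np.sum(np.abs(y)) + 1.0)
            and np.all(y > 0))
    comp = np.all((-y * a >= 0.5 * mu) & (-y * a <= 1.5 * mu))
    return bool(primal and dual and comp)
```

Note that at any strictly feasible x, the multipliers y = mu / (-a_val(x)) satisfy the complementarity condition with equality, so they are the natural candidate for y.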

4 How do we solve this problem?

- One popular approach: move the constraints into a log barrier,
  $$\psi_\mu(x) := f(x) - \mu \sum_{i=1}^m \log(-a_i(x)),$$
  and apply (modified) Newton's method to find an approximate local optimum.
- Used for real systems, e.g., drinking water networks, electricity networks, trajectory control, etc.
- Examples of practical codes using this method include IPOPT, KNITRO, LOQO, One-Phase (my code), etc.
- Local superlinear convergence is known.
- Global convergence: most results show convergence in the limit but provide no explicit runtime bound; implicitly, the runtime bound is exponential in $1/\mu$.
- Goal: a runtime polynomial in $1/\mu$ to find a µ-approximate Fritz John point.
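For reference, a short NumPy sketch (illustrative, not the talk's code) of the barrier value and gradient. The key identity is $\nabla \psi_\mu(x) = \nabla f(x) + \nabla a(x)^T y$ with $y = \mu / (-a(x))$ elementwise, linking the barrier gradient to the dual feasibility condition above.

```python
import numpy as np

def barrier(x, mu, f, f_grad, a_val, a_jac):
    """Value and gradient of psi_mu(x) = f(x) - mu * sum_i log(-a_i(x))."""
    a = a_val(x)
    assert np.all(a < 0), "the log barrier is only defined at strictly feasible points"
    y = mu / -a                                # implied dual multipliers, y > 0
    psi = f(x) - mu * np.sum(np.log(-a))
    grad = f_grad(x) + a_jac(x).T @ y          # grad psi = grad f + (grad a)^T y
    return psi, grad
```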

5 Brief history of IPM theory

Linear programming:
- Birth of interior point methods [Karmarkar, 1984], with an $O(m \log(1/\epsilon))$ iteration bound.
- Log barrier + Newton [Renegar, 1988].
- Successful implementations: MOSEK, GUROBI, ...
- Good theoretical understanding.

Conic optimization:
- Self-concordant barriers [Nesterov, Nemirovskii, 1994], with an $O(\sqrt{\nu} \log(1/\epsilon))$ iteration bound.
- Successful implementations: SEDUMI, MOSEK, GUROBI, ...
- Good theoretical understanding.

General objective/constraints:
- Successful implementations: LOQO, KNITRO, IPOPT, ...
- Less theoretical understanding.

6 Literature review: nonconvex optimization

- Unconstrained. Goal: find a point with $\|\nabla f(x)\|_2 \le \epsilon$.
  - Gradient descent: $O(\epsilon^{-2})$.
  - Cubic regularization: $O(\epsilon^{-3/2})$ [Nesterov, Polyak, 2006].
  - These are the best achievable runtimes (of any method) when $\nabla f$ and $\nabla^2 f$ are Lipschitz, respectively [Carmon, Duchi, Hinder, Sidford, 2017].
- Nonconvex objective, linear inequality constraints. Affine scaling IPM [Ye (1998); Bian, Chen, and Ye (2015); Haeser, Liu, and Ye (2017)] has $O(\epsilon^{-3/2})$ runtime to find KKT points.
- Nonconvex objective and constraints.
  - Apply cubic regularization to a quadratic penalty function [Birgin et al., 2016]: $O(\epsilon^{-2})$ runtime.
  - Sequential linear programming [Cartis et al., 2015]: $O(\epsilon^{-2})$ runtime.
  - Both of these methods plug and play directly with unconstrained optimization methods, e.g., they need their penalty functions to have Lipschitz derivatives.
  - The log barrier does not have Lipschitz derivatives!

7 Our IPM for nonconvex optimization

[Figure: the feasible region with the barrier function $\psi_\mu(x)$ and the trajectory of trust-region iterates through the interior.]

Reminder: $\psi_\mu(x) := f(x) - \mu \sum_{i=1}^m \log(-a_i(x))$.

Each iteration of our IPM we:
1. Pick the trust region size $r \propto \mu^{3/4} (1 + \|y\|_1)^{-1/2}$.
2. Compute the direction $d_x$ by solving the trust region problem
   $$d_x \in \operatorname*{argmin}_{u :\, \|u\|_2 \le r} \ \frac{1}{2} u^T \nabla^2 \psi_\mu(x) u + u^T \nabla \psi_\mu(x).$$
3. Pick a step size $\alpha \in (0, 1]$ to ensure $x_{\mathrm{new}} = x + \alpha d_x$ satisfies $a(x_{\mathrm{new}}) < 0$.
4. Is the new point a Fritz John point? If not, return to step one.

Theoretical properties? (A sketch of one iteration follows below, then the theorem.)
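Before the theory, a hedged Python sketch of one iteration as just described: a dense trust-region subproblem solver (bisection on the secular equation; the so-called hard case is not handled) plus a backtracking step size that keeps the iterate strictly feasible. The constant c in the radius rule, the backtracking factor, and the helper names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def solve_trust_region(H, g, r, tol=1e-10):
    """Approximately solve  min_{||u||_2 <= r}  0.5 u^T H u + g^T u.

    Uses the characterization u(lam) = -(H + lam*I)^{-1} g with
    lam >= max(0, -lambda_min(H)); bisection finds ||u(lam)|| ~= r.
    Dense and illustrative only; the hard case is ignored.
    """
    eye = np.eye(len(g))
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    u = np.linalg.solve(H + lam_lo * eye, -g)
    if np.linalg.norm(u) <= r:
        return u  # the (regularized) Newton step already fits in the region
    lam_hi = lam_lo + np.linalg.norm(g) / r  # roughly enough regularization...
    while np.linalg.norm(np.linalg.solve(H + lam_hi * eye, -g)) > r:
        lam_hi *= 2.0                        # ...double until the step is inside
    for _ in range(200):  # bisection: ||u(lam)|| is decreasing in lam
        lam = 0.5 * (lam_lo + lam_hi)
        if np.linalg.norm(np.linalg.solve(H + lam * eye, -g)) > r:
            lam_lo = lam
        else:
            lam_hi = lam
        if lam_hi - lam_lo <= tol * lam_hi:
            break
    return np.linalg.solve(H + lam_hi * eye, -g)

def ipm_step(x, mu, psi_grad, psi_hess, a_val, y_estimate, c=1.0):
    """One trust-region step on the barrier psi_mu (illustrative constants)."""
    y = y_estimate(x)                      # e.g., lambda x: mu / -a_val(x)
    r = c * mu ** 0.75 / np.sqrt(1.0 + np.sum(np.abs(y)))  # radius rule from the slide
    d = solve_trust_region(psi_hess(x), psi_grad(x), r)
    alpha = 1.0
    while not np.all(a_val(x + alpha * d) < 0):  # backtrack to stay strictly feasible
        alpha *= 0.5
    return x + alpha * d
```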

8 Nonconvex result

Theorem. Assume:
- a strictly feasible initial point $x^{(0)}$;
- $\nabla f$, $\nabla a$, $\nabla^2 f$, $\nabla^2 a$ are Lipschitz.

Then our IPM takes at most
$$O\!\left( \frac{\psi_\mu(x^{(0)}) - \inf_z \psi_\mu(z)}{\mu^{7/4}} \right)$$
trust region steps to terminate with a µ-approximate second-order Fritz John point $(x^+, y^+)$.

algorithm            | # iterations     | primitive        | evaluates
---------------------|------------------|------------------|-------------------
Birgin et al., 2016  | $O(\mu^{-3})$    | vector operation | $\nabla$
Cartis et al., 2011  | $O(\mu^{-2})$    | linear program   | $\nabla$
Birgin et al., 2016  | $O(\mu^{-2})$    | KKT of NCQP      | $\nabla, \nabla^2$
IPM (this paper)     | $O(\mu^{-7/4})$  | linear system    | $\nabla, \nabla^2$

9 Convex result

Modify our IPM to use a decreasing sequence of barrier parameters, i.e., $\mu^{(j)}$ with $\mu^{(j)} \to 0$.

Theorem. Assume:
- a strictly feasible initial point $x^{(0)}$, and the feasible region is bounded;
- $f$, $\nabla f$, $\nabla a$, $\nabla^2 f$, $\nabla^2 a$ are Lipschitz;
- Slater's condition holds.

Then our IPM takes at most $\tilde{O}(m^{1/3} \epsilon^{-2/3})$ trust region steps to terminate with an $\epsilon$-optimal solution.

10 Questions?

11 One-phase results
