Lecture 14: Primal-Dual Interior-Point Method


CSE 599: Interplay between Convex Optimization and Geometry        Winter 2018

Lecturer: Yin Tat Lee

Disclaimer: Please tell me about any mistakes you notice.

This week and next week, we study interior point methods. In practice, this framework reduces the problem of optimizing a convex function to solving fewer than 50 linear systems; in theory, it reduces the problem to solving $\tilde{O}(\sqrt{n})$ linear systems. Although it has many numerical stability problems in practice and rarely yields nearly-linear-time algorithms, this is currently the best framework for optimizing general convex functions, in both theory and practice, in the high-accuracy regime.

For simplicity, we first describe the interior point method for linear programs. The first half of this lecture is about the polynomial iteration bound; the second half is about the implementation details.

14.1 Short-step interior point method

14.1.1 Basic properties of the central path

We first consider the primal problem

    (P): \min_x c^\top x \quad \text{subject to} \quad Ax = b, \; x \geq 0,

where $A \in \mathbb{R}^{m \times n}$. The difficulty of linear programs is the constraint $x \geq 0$; without this constraint, we could simply solve the problem by a linear system. One natural idea for solving linear programs is to replace the hard constraint $x \geq 0$ by some other function. So, let us consider the regularized version of the linear program

    (P_t): \min_x c^\top x - t \sum_{i=1}^n \ln x_i \quad \text{subject to} \quad Ax = b.

We will explain the reason for choosing $\ln x$ in more detail next lecture. For now, you can think of it as a nice function that blows up at $x = 0$. One can think of $-\ln x_i$ as giving a force from every constraint $x_i \geq 0$ to make sure $x_i > 0$ holds. Since the gradient of $\ln x$ blows up as $x \to 0$, when $x$ is close enough to $0$ the force is large enough to counter the cost $c$. As $t \to 0$, the problem $(P_t)$ gets closer to the original problem $(P)$, and hence the minimizer of $(P_t)$ gets closer to a minimizer of $(P)$.

First, we give a formula for the minimizer of $(P_t)$.

Lemma (Existence and uniqueness of the central path). If the polytope $\{Ax = b, x \geq 0\}$ has an interior, then the optimum of $(P_t)$ is uniquely given by

    xs = t, \quad Ax = b, \quad A^\top y + s = c, \quad (x, s) > 0,

where $xs = t$ is shorthand for $x_i s_i = t$ for all $i$.
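To make the barrier problem $(P_t)$ concrete, here is a minimal sketch (my own illustration, not part of the lecture) that computes the central-path point $x_t$ of a toy LP by equality-constrained Newton steps; the toy data, the damping rule, and all variable names are my choices.

% Sketch: compute x_t for a toy LP by Newton's method on (P_t).
A = [1 1 1]; b = 3; c = [1; 2; 3]; t = 0.1;
x = [1; 1; 1];                          % interior feasible start: A*x = b, x > 0
m = size(A, 1); n = numel(x);
for k = 1:100
    g = c - t ./ x;                     % gradient of c'*x - t*sum(log(x))
    H = diag(t ./ x.^2);                % Hessian of the barrier term
    % Equality-constrained Newton step: [H A'; A 0][dx; nu] = [-g; 0]
    sol = [H A'; A zeros(m)] \ [-g; zeros(m, 1)];
    dx = sol(1:n);
    alpha = 1; neg = dx < 0;            % damp the step so x stays positive
    if any(neg), alpha = min(1, 0.9 * min(-x(neg) ./ dx(neg))); end
    x = x + alpha * dx;
    if norm(dx) < 1e-12, break; end
end
y = -sol(n+1:end); s = t ./ x;          % at convergence: x.*s = t, A'*y + s = c

At the end, $(x, y, s)$ satisfies the central path equations of the lemma above up to numerical error.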

Proof. The optimality condition of $(P_t)$ is

    c - t x^{-1} = A^\top y,

where $y$ is the dual variable and $x^{-1}$ denotes the entrywise inverse. Writing $s_i = \frac{t}{x_i}$, we get the formula. The solution is unique because $-\ln x$ is strictly convex.

Definition. We define the central path $(x_t, y_t, s_t)$ of the linear program by

    x_t s_t = t, \quad Ax_t = b, \quad A^\top y_t + s_t = c, \quad (x_t, s_t) > 0.

To give another interpretation of the central path, we note that the dual problem is

    (D): \max_{y,s} b^\top y \quad \text{subject to} \quad A^\top y + s = c, \; s \geq 0.

Note that for any feasible $x$ and $(y, s)$, we have

    c^\top x - b^\top y = c^\top x - x^\top A^\top y = x^\top s \geq 0.

Hence, $(x, y, s)$ solves the linear program if it satisfies the central path equation with $t = 0$. Therefore, the central path is a balanced way to decrease all the $x_i s_i$ uniformly to $0$.

Now, we formalize the intuition that for small $t$, $x_t$ is a good approximation of the primal solution.

Lemma (Duality gap). We have

    \text{Duality Gap} = c^\top x_t - b^\top y_t = c^\top x_t - x_t^\top A^\top y_t = x_t^\top s_t = tn.

The interior point method follows this framework, where $\mathcal{C}_t$ denotes a point close to the central path at parameter $t$:

1. Find $\mathcal{C}_1$.
2. Until $t < \frac{\varepsilon}{n}$ (so that the duality gap $tn$ is at most $\varepsilon$):
   (a) Use $\mathcal{C}_t$ to find $\mathcal{C}_{(1-h)t}$ for $h = \frac{1}{10\sqrt{n}}$.

Note that this algorithm only finds a solution with $\varepsilon$ error. If the linear program is integral, we can simply stop at a small enough $\varepsilon$ and round the iterate off to the closest integral point.

14.1.2 Finding the initial point

The first question is how to find $\mathcal{C}_1$. This can be handled by extending the problem into a higher dimension. By rescaling the problem and adding a new variable, we can assume the solution satisfies $\sum_i x_i = n$ and that $\|c\|_\infty \leq \frac{1}{2}$. Now, we consider the following linear program for some very large $M$:

    \min_{x, \bar{x}} \; c^\top x + M \bar{x} \quad \text{subject to} \quad
    \begin{bmatrix} A & b - A\mathbf{1} \\ \mathbf{1}^\top & 0 \end{bmatrix}
    \begin{bmatrix} x \\ \bar{x} \end{bmatrix}
    = \begin{bmatrix} b \\ n \end{bmatrix}, \quad (x, \bar{x}) \geq 0.

This linear program has the following properties:

- The all-ones vector is a feasible interior point of this linear program.
- Any vector $x$ such that $Ax = b$ and $x \geq 0$ gives a feasible vector $(x, 0)$ in this linear program (recall the normalization $\sum_i x_i = n$). Therefore, if $M$ is large enough, this linear program has the same solution as the old linear program.
- The dual polytope

    \left\{ (y, \eta) : \begin{bmatrix} A^\top & \mathbf{1} \\ (b - A\mathbf{1})^\top & 0 \end{bmatrix} \begin{bmatrix} y \\ \eta \end{bmatrix} \leq \begin{bmatrix} c \\ M \end{bmatrix} \right\}

  has a trivial interior point $(y, \eta) = (0, -1)$ (here we used $\|c\|_\infty \leq \frac{1}{2}$ and that $M$ is large enough).
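As a small illustration, here is a sketch (mine, not from the lecture; the function name is hypothetical) that builds this extended program from the original data, assuming the normalization $\sum_i x_i = n$ and the scaling of $c$ have already been arranged:

% Sketch: build the extended LP used for initialization.
function [AA, bb, cc] = init_extend(A, b, c, M)
    [m, n] = size(A);
    AA = [A, b - A*ones(n,1); ones(1,n), 0];   % [A  b - A*1; 1'  0]
    bb = [b; n];
    cc = [c; M];                               % big-M cost on the extra variable
    % AA * ones(n+1,1) = bb, so the all-ones vector is interior feasible.
end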

Therefore, we have an initial point $(x, y, s)$ with $(x, s) > 0$. To make $xs = 1$, we can rescale the constraint matrix $A$.

14.1.3 Following the central path

During the algorithm, we maintain a point $(x, y, s)$ such that $Ax = b$, $A^\top y + s = c$, and $xs$ is close to $t$. We show how to find a feasible $(x + \Delta x, y + \Delta y, s + \Delta s)$ that is even closer to $t$. We can write the equations as follows:

    (x + \Delta x)(s + \Delta s) = t, \quad A(x + \Delta x) = b, \quad A^\top (y + \Delta y) + (s + \Delta s) = c

(omitting the non-negativity conditions). Using our assumption on $(x, y, s)$ and noting that $\Delta x \Delta s$ is small, the equations can be simplified as follows:

    \begin{bmatrix} 0 & A^\top & I \\ A & 0 & 0 \\ S & 0 & X \end{bmatrix}
    \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta s \end{bmatrix}
    = \begin{bmatrix} 0 \\ 0 \\ t - xs \end{bmatrix},

where $X = \operatorname{diag}(x)$ and $S = \operatorname{diag}(s)$. This is a linear system, and hence we can solve it exactly.

Exercise. Let $r = t - xs$. Prove that $S \Delta x = (I - P) r$ and $X \Delta s = P r$, where $P = X A^\top (A S^{-1} X A^\top)^{-1} A S^{-1}$.

Lemma. Let $\bar{x} = x + \Delta x$ and $\bar{s} = s + \Delta s$. If $\sum_i (x_i s_i - t)^2 \leq \varepsilon^2 t^2$, then

    \sum_i (\bar{x}_i \bar{s}_i - t)^2 \leq (\varepsilon^4 + O(\varepsilon^5)) t^2.

Proof. Since $S \Delta x + X \Delta s = r = t - xs$, we have $\bar{x}_i \bar{s}_i - t = x_i s_i + s_i \Delta x_i + x_i \Delta s_i + \Delta x_i \Delta s_i - t = \Delta x_i \Delta s_i$. Also, the assumption implies $|x_i s_i - t| \leq \varepsilon t$, and hence $x_i s_i \geq (1 - \varepsilon) t$ for every $i$. Hence

    \mathrm{LHS} = \sum_i (\bar{x}_i \bar{s}_i - t)^2 = \sum_i (\Delta x)_i^2 (\Delta s)_i^2
    = \sum_i \frac{(s_i \Delta x_i)^2 (x_i \Delta s_i)^2}{x_i^2 s_i^2}
    \leq \frac{1}{(1 - \varepsilon)^2 t^2} \sum_i (S \Delta x)_i^2 (X \Delta s)_i^2.

Using the formula in the exercise above, this can be bounded by

    \frac{1}{(1 - \varepsilon)^2 t^2} \sum_i ((I - P) r)_i^2 (P r)_i^2
    \leq \frac{\|(I - P) r\|_4^2 \, \|P r\|_4^2}{(1 - \varepsilon)^2 t^2}
    \leq \frac{\|(I - P) r\|_2^2 \, \|P r\|_2^2}{(1 - \varepsilon)^2 t^2}.

Note that $P$ is similar to a projection matrix: with $D = X^{1/2} S^{-1/2}$,

    P = (XS)^{1/2} \left( D A^\top (A D^2 A^\top)^{-1} A D \right) (XS)^{-1/2},

where the middle matrix is an orthogonal projection. Hence

    \|P\|_{\mathrm{op}} \leq \|(XS)^{1/2}\|_{\mathrm{op}} \cdot 1 \cdot \|(XS)^{-1/2}\|_{\mathrm{op}} \leq \sqrt{\frac{(1 + \varepsilon) t}{(1 - \varepsilon) t}} \leq \frac{1}{1 - \varepsilon}.

Similarly, we have $\|I - P\|_{\mathrm{op}} \leq \frac{1}{1 - \varepsilon}$. Hence, we have

    \mathrm{LHS} \leq \frac{1}{(1 - \varepsilon)^6 t^2} \|r\|_2^4 \leq \frac{\varepsilon^4 t^4}{(1 - \varepsilon)^6 t^2} = (\varepsilon^4 + O(\varepsilon^5)) t^2.
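Here is a quick numerical check of the exercise (my own, with random toy data): we solve the Newton system directly and compare against the projection formula.

% Verify S*dx = (I - P)*r and X*ds = P*r for the Newton system.
rng(1); m = 3; n = 6;
A = randn(m, n); x = rand(n,1) + 0.5; s = rand(n,1) + 0.5; t = 1;
r = t - x.*s; X = diag(x); S = diag(s);
K = [zeros(n) A' eye(n); A zeros(m, m+n); S zeros(n, m) X];
sol = K \ [zeros(n+m, 1); r];
dx = sol(1:n); ds = sol(n+m+1:end);
P = X*A' * ((A*(X/S)*A') \ (A/S));   % P = X A'(A S^-1 X A')^-1 A S^-1
norm(S*dx - (eye(n) - P)*r)          % ~ 1e-14
norm(X*ds - P*r)                     % ~ 1e-14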

Using this, we have the following theorem:

Theorem. We can solve a linear program with $\varepsilon$ error in $O(\sqrt{n} \log(\frac{n}{\varepsilon}))$ iterations, where each iteration only needs to solve one linear system.

Proof. Let $E = \sum_i (x_i s_i - t)^2$ be the error of the current iterate. We always maintain $E \leq \frac{t^2}{25}$ for the current $(x, y, s)$ and the current $t$. Whenever $E \geq \frac{t^2}{200}$, we decrease $E$ by the lemma above: since $E \leq \frac{t^2}{25}$ means $\varepsilon \leq \frac{1}{5}$, one Newton step gives $E \leq (\varepsilon^4 + O(\varepsilon^5)) t^2 < \frac{t^2}{200}$. If not, we decrease $t$ to $t(1 - h)$ with $h = \frac{1}{10\sqrt{n}}$. Note that

    \sum_i (x_i s_i - t(1 - h))^2 \leq 2 \sum_i (x_i s_i - t)^2 + 2 t^2 h^2 n \leq \frac{t^2}{100} + \frac{t^2}{50} \leq \frac{(t(1 - h))^2}{25}.

Therefore, the invariant is preserved after this decrease of $t$. Hence, in each iteration we simply solve one linear system, and at least every other iteration decreases $t$ by a $(1 - \frac{1}{10\sqrt{n}})$ factor. So, it takes $O(\sqrt{n} \log(\frac{n}{\varepsilon}))$ iterations to decrease $t$ from $1$ to $\frac{\varepsilon}{n}$.

Remark. Due to our reduction to get the initial point, we are paying huge factors inside the logarithm. For many problems, this factor is polynomial in $n$, and hence the total iteration count is still $O(\sqrt{n} \log(\frac{n}{\varepsilon}))$. However, for some problems such as circuit evaluation, the factor can be exponentially large, and hence we only get an $n^{1.5}$-iteration algorithm. This is natural, because otherwise we would prove that any program can be evaluated in roughly $\sqrt{n}$ depth!

Where does $\sqrt{n}$ come from?

The central path satisfies the following ODE (the first equation comes from differentiating $x_t s_t = t$):

    S_t \frac{d}{dt} x_t + X_t \frac{d}{dt} s_t = \mathbf{1}, \quad A \frac{d}{dt} x_t = 0, \quad A^\top \frac{d}{dt} y_t + \frac{d}{dt} s_t = 0.

Solving this linear system, we have

    S_t \frac{d x_t}{dt} = (I - P_t)\mathbf{1} \quad \text{and} \quad X_t \frac{d s_t}{dt} = P_t \mathbf{1}, \quad \text{where} \quad P_t = X_t A^\top (A S_t^{-1} X_t A^\top)^{-1} A S_t^{-1}.

Using $x_t s_t = t$, we have

    P_t = X_t A^\top (A X_t^2 A^\top)^{-1} A X_t = S_t^{-1} A^\top (A S_t^{-2} A^\top)^{-1} A S_t^{-1}

and

    X_t^{-1} \frac{d x_t}{dt} = \frac{1}{t} (I - P_t)\mathbf{1} \quad \text{and} \quad S_t^{-1} \frac{d s_t}{dt} = \frac{1}{t} P_t \mathbf{1}.

Equivalently, we have

    \frac{d \ln x_t}{d \ln t} = (I - P_t)\mathbf{1} \quad \text{and} \quad \frac{d \ln s_t}{d \ln t} = P_t \mathbf{1}.

Note that $P_t$ is an orthogonal projection, so $\|P_t \mathbf{1}\|_\infty \leq \|P_t \mathbf{1}\|_2 \leq \|\mathbf{1}\|_2 = \sqrt{n}$. Hence, $x_t$ and $s_t$ change by at most a constant factor when we change $t$ by a $1 \pm \frac{1}{\sqrt{n}}$ factor.

Exercise. Show that if we are given $x$ such that $\|\ln x - \ln x_t\|_\infty \leq O(1)$, then we can find $x_t$ by solving $\tilde{O}(1)$ linear systems.

On the other hand, if $v$ is a random vector with $\|v\|_2 = \sqrt{n}$, then $\|P_t v\|_\infty = O(\sqrt{\log n})$ with high probability. Maybe this is the reason we see interior point methods take a nearly constant number of iterations in practice, instead of a polynomial number.
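The following check (my own toy experiment) confirms that on the central path, where $xs = t$, the matrix $P_t = X A^\top (A X^2 A^\top)^{-1} A X$ is indeed an orthogonal projection, so projecting the all-ones vector cannot increase its length beyond $\sqrt{n}$:

% P_t on the central path is a symmetric, idempotent projection.
rng(2); m = 4; n = 10;
A = randn(m, n); x = rand(n,1) + 0.1; X = diag(x);
Pt = X*A' * ((A*X^2*A') \ (A*X));
norm(Pt*Pt - Pt)                        % ~ 1e-13: idempotent
norm(Pt - Pt')                          % ~ 1e-13: symmetric
norm(Pt*ones(n,1)) <= sqrt(n)           % true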

14.2 Mehrotra predictor-corrector method

In practice, we implement the interior point method slightly differently. First, we do not need to find an initial point that is feasible; hence, we do not need to do the reduction above. Second, our step does not aim at $xs = t$. Instead, in every step we first aim at $xs = 0$ (this is called the affine direction). Then, we check how large a step we can take along this affine direction; this is used to estimate how large a step size we can afford. Finally, we compute a second-order update step. Formally, the algorithm is as follows (here $e$ denotes the all-ones vector):

Compute the initial point: Set $x$, $y$ and $s$ to be the minimizers of the problems $\min_{Ax = b} \|x\|_2$ and $\min_{A^\top y + s = c} \|s\|_2$. Set $\delta_x = \max(-1.5 \min_i x_i, 0)$ and $\delta_s = \max(-1.5 \min_i s_i, 0)$. Set

    y^{(0)} = y, \quad
    x^{(0)} = x + \delta_x e + 0.5 \, \frac{(x + \delta_x e)^\top (s + \delta_s e)}{\sum_i (s_i + \delta_s)} \, e, \quad
    s^{(0)} = s + \delta_s e + 0.5 \, \frac{(x + \delta_x e)^\top (s + \delta_s e)}{\sum_i (x_i + \delta_x)} \, e.

For $k = 0, 1, 2, \ldots$:

1. Solve the equation

    \begin{bmatrix} 0 & A^\top & I \\ A & 0 & 0 \\ S^{(k)} & 0 & X^{(k)} \end{bmatrix}
    \begin{bmatrix} \Delta x^{\mathrm{aff}} \\ \Delta y^{\mathrm{aff}} \\ \Delta s^{\mathrm{aff}} \end{bmatrix}
    = \begin{bmatrix} -r_c \\ -r_b \\ -X^{(k)} S^{(k)} e \end{bmatrix},

   where $r_b = A x^{(k)} - b$ and $r_c = A^\top y^{(k)} + s^{(k)} - c$.

2. Calculate

    \alpha^{\mathrm{pri}}_{\mathrm{aff}} = \max\{\alpha \in [0, 1] : x^{(k)} + \alpha \Delta x^{\mathrm{aff}} \geq 0\};
    \alpha^{\mathrm{dual}}_{\mathrm{aff}} = \max\{\alpha \in [0, 1] : s^{(k)} + \alpha \Delta s^{\mathrm{aff}} \geq 0\};
    \mu = x^{(k)\top} s^{(k)} / n;
    \mu_{\mathrm{aff}} = (x^{(k)} + \alpha^{\mathrm{pri}}_{\mathrm{aff}} \Delta x^{\mathrm{aff}})^\top (s^{(k)} + \alpha^{\mathrm{dual}}_{\mathrm{aff}} \Delta s^{\mathrm{aff}}) / n.

   Set the centering parameter to $\sigma = \min\left(\left(\frac{\mu_{\mathrm{aff}}}{\mu}\right)^3, 1\right)$.

3. Solve the equation

    \begin{bmatrix} 0 & A^\top & I \\ A & 0 & 0 \\ S^{(k)} & 0 & X^{(k)} \end{bmatrix}
    \begin{bmatrix} \Delta x^{\mathrm{cc}} \\ \Delta y^{\mathrm{cc}} \\ \Delta s^{\mathrm{cc}} \end{bmatrix}
    = \begin{bmatrix} 0 \\ 0 \\ \sigma \mu e - \Delta X^{\mathrm{aff}} \Delta S^{\mathrm{aff}} e \end{bmatrix}.

4. Calculate $(\Delta x^{(k)}, \Delta y^{(k)}, \Delta s^{(k)}) = (\Delta x^{\mathrm{aff}} + \Delta x^{\mathrm{cc}}, \Delta y^{\mathrm{aff}} + \Delta y^{\mathrm{cc}}, \Delta s^{\mathrm{aff}} + \Delta s^{\mathrm{cc}})$. Set

    \alpha^{\mathrm{pri}}_{\max} = \max\{\alpha \geq 0 : x^{(k)} + \alpha \Delta x^{(k)} \geq 0\};
    \alpha^{\mathrm{dual}}_{\max} = \max\{\alpha \geq 0 : s^{(k)} + \alpha \Delta s^{(k)} \geq 0\};

   set $\alpha^{\mathrm{pri}}_k = \min(0.99 \, \alpha^{\mathrm{pri}}_{\max}, 1)$ and $\alpha^{\mathrm{dual}}_k = \min(0.99 \, \alpha^{\mathrm{dual}}_{\max}, 1)$; and update

    x^{(k+1)} = x^{(k)} + \alpha^{\mathrm{pri}}_k \Delta x^{(k)}; \quad
    (y^{(k+1)}, s^{(k+1)}) = (y^{(k)}, s^{(k)}) + \alpha^{\mathrm{dual}}_k (\Delta y^{(k)}, \Delta s^{(k)}).
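To see where the corrector right-hand side comes from, expand the complementarity product exactly (a short derivation of my own, in the spirit of the presentation in [84]):

    (x_i + \Delta x_i)(s_i + \Delta s_i) = x_i s_i + \underbrace{s_i \Delta x_i + x_i \Delta s_i}_{\text{first order}} + \underbrace{\Delta x_i \Delta s_i}_{\text{second order}}.

The predictor step drops the second-order term and aims at the target $0$. The corrector step then re-targets $\sigma \mu$ and uses $\Delta x^{\mathrm{aff}}_i \Delta s^{\mathrm{aff}}_i$ as an estimate of the second-order term, which gives exactly the right-hand side $\sigma \mu e - \Delta X^{\mathrm{aff}} \Delta S^{\mathrm{aff}} e$ in step 3.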

Listing 2: Mehrotra predictor-corrector method

% This is an implementation of Mehrotra's predictor-corrector algorithm.
% We solve the linear program min <c,x> subject to Ax = b, x >= 0.
function [x, iter, mu] = solvelp(A, b, c)
m = size(A,1); n = size(A,2); maxiter = 100;
p = colamd(A');              % reorder the rows of A to keep the factors sparse
A = A(p,:); b = b(p);

% Find an initial point
R = builtin('cholinf', A*A');
y = R\(R'\(A*c)); s = c - A'*y; x = A'*(R\(R'\b));
dx = max(-1.5*min(x), 0); ds = max(-1.5*min(s), 0);
x = x + dx + 0.5*((x+dx)'*(s+ds))/sum(s+ds);
s = s + ds + 0.5*((x+dx)'*(s+ds))/sum(x+dx);
nb = mean(abs(b));
mu = [];
for iter = 1:maxiter
    mu(end+1) = sum(x.*s)/n;
    rb = A*x - b; rc = A'*y + s - c; rxs = x.*s;
    if mu(end) < 1e-6 && mean(abs(rb)) < 1e-6*nb, break; end

    % Predictor (affine) direction
    R = builtin('cholinf', A*diag(sparse(x./s))*A');
    [dx_a, dy_a, ds_a] = step_dir(A, R, x, s, rc, rb, rxs);

    xa_step = dist(x, dx_a); sa_step = dist(s, ds_a);
    mu_a = sum((x + xa_step*dx_a).*(s + sa_step*ds_a))/n;
    sigma = min((mu_a/mu(end))^3, 1);

    % Corrector direction
    rxs = -(sigma*mu(end) - dx_a.*ds_a);
    [dx_c, dy_c, ds_c] = step_dir(A, R, x, s, zeros(n,1), zeros(m,1), rxs);

    dx = dx_a + dx_c; dy = dy_a + dy_c; ds = ds_a + ds_c;
    x_step = min(0.99*dist(x, dx), 1);
    s_step = min(0.99*dist(s, ds), 1);

    x = x + x_step*dx; y = y + s_step*dy; s = s + s_step*ds;
end
if iter == maxiter, x = ones(n,1)*NaN; end % x = NaN if it doesn't converge.
end

% This outputs the solution of the linear system
% [0 A' I; A 0 0; S 0 X][dx; dy; ds] = -[rc; rb; rxs].
% The Cholesky decomposition of A*S^-1*X*A' = R'*R is given.
function [dx, dy, ds] = step_dir(A, R, x, s, rc, rb, rxs)
r = -rb + A*((rxs - x.*rc)./s);
dy = R\(R'\r); ds = -rc - A'*dy; dx = -(rxs + x.*ds)./s;
end

% Largest step along dx that keeps x nonnegative
function [step] = dist(x, dx)
step = -x./dx;
step(dx >= 0) = Inf;
step = min(step);
end
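For reference, here is how one might call the listing on a toy problem (my own data; note the listing relies on MATLAB's undocumented cholinf builtin, so chol may need to be substituted on installations where it is unavailable):

% Example usage of solvelp on a tiny feasible LP.
A = sparse([1 1 1 0; 0 1 2 1]); b = [4; 5]; c = [2; 3; 1; 4];
[x, iter, mu] = solvelp(A, b, c);
fprintf('value %.6f after %d iterations\n', c'*x, iter);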

We note the following:

1. Usually, linear programs are given in the form $\min_{Ax = b, \, l \leq x \leq u} c^\top x$. The best way is to handle the bounds directly. But for the simplicity of the code, we convert the linear program to the form $\min_{Ax = b, \, x \geq 0} c^\top x$ as follows:
   (a) Any unconstrained variable $x$ is replaced by $x = x^+ - x^-$ with new constraints $x^+ \geq 0$ and $x^- \geq 0$.
   (b) Any two-sided constraint $l \leq x \leq u$ is replaced by introducing $\bar{x} = u - x$ with the constraints $x \geq l$ and $\bar{x} \geq 0$.
   (c) Any one-sided constraint $x \geq l$ is shifted to $x \geq 0$.

2. This code is barely stable enough to pass all test cases among the netlib problems if we perturb $c$ to $c + \varepsilon \mathbf{1}$, where $\varepsilon$ is some tiny positive number. This avoids the issue that the set of optimal points can be unbounded. All examples are solved within 70 iterations, and most of them within 40 iterations (for $10^{-6}$ error).

3. I have not implemented the test for infeasibility (see [85]) or the procedure of rounding (see [83]).

4. I have not implemented methods to simplify the linear program or to rescale it. However, to make the algorithm faster for problems with dense columns in $A$, we split the corresponding columns into many copies and duplicate the corresponding variables (not shown in the MATLAB code above, but in the zip). Usually this is not done this way; instead, one modifies the linear system solver to handle those columns separately.

5. One may be able to slightly improve the running time by using a higher-order approximation of the central path. For more implementation details, see [84].

References

[83] Erling D. Andersen. On exploiting problem structure in a basis identification procedure for linear programming. INFORMS Journal on Computing, 11(1):95-103, 1999.
[84] Stephen J. Wright. Primal-Dual Interior-Point Methods. SIAM, 1997.
[85] Xiaojie Xu, Pi-Fang Hung, and Yinyu Ye. A simplified homogeneous and self-dual linear programming algorithm and its implementation. Annals of Operations Research, 62(1):151-171, 1996.
