Primal-Dual Interior-Point Methods for Linear Programming based on Newton's Method
Robert M. Freund, Massachusetts Institute of Technology.
1 The Problem

The logarithmic barrier approach to solving a linear program dates back to the work of Fiacco and McCormick in 1967 in their book Sequential Unconstrained Minimization Techniques, also known simply as SUMT. The method was not believed then to be either practically or theoretically interesting, when in fact today it is both! The method was re-born as a consequence of Karmarkar's interior-point method, and has been the subject of an enormous amount of research and computation, even to this day. In these notes we present the basic algorithm and a basic analysis of its performance.

Consider the linear programming problem in standard form:

    P :  minimize  c^T x
         s.t.      Ax = b
                   x ≥ 0,

where x is a vector of n variables, whose standard linear programming dual problem is:

    D :  maximize  b^T π
         s.t.      A^T π + s = c
                   s ≥ 0.

Given a feasible solution x of P and a feasible solution (π, s) of D, the duality gap is simply

    c^T x − b^T π = x^T s ≥ 0.
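As a quick numerical sanity check of the identity c^T x − b^T π = x^T s (an illustrative snippet using numpy; the construction below is my own, not part of the notes), we can build a primal-dual feasible pair by choosing the data so that feasibility holds by construction:

    import numpy as np

    # Build a primal-dual feasible pair by construction and verify the gap identity.
    rng = np.random.default_rng(0)
    m, n = 3, 6
    A = rng.standard_normal((m, n))
    x = rng.uniform(0.5, 2.0, n)      # x > 0
    pi = rng.standard_normal(m)
    s = rng.uniform(0.5, 2.0, n)      # s > 0
    b = A @ x                         # makes x primal feasible
    c = A.T @ pi + s                  # makes (pi, s) dual feasible

    gap = c @ x - b @ pi
    print(np.isclose(gap, x @ s))     # True: the duality gap equals x^T s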
We introduce the following notation, which will be very convenient for manipulating equations, etc. Suppose that x > 0. Define the matrix X to be the n × n diagonal matrix whose diagonal entries are precisely the components of x:

    X = diag(x_1, x_2, ..., x_n).

Notice that X is positive definite, and so is X^2, which is the diagonal matrix with diagonal entries (x_1)^2, (x_2)^2, ..., (x_n)^2. Similarly, X^{-1} and X^{-2} are the diagonal matrices with diagonal entries 1/x_j and 1/(x_j)^2, respectively.
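In code this notation is nothing more than a diagonal matrix built from the vector x; a tiny illustration (assumed numpy usage, not from the notes):

    import numpy as np

    x = np.array([2.0, 0.5, 4.0])
    X = np.diag(x)                    # the matrix X
    X2 = np.diag(x**2)                # X^2
    X_inv = np.diag(1.0 / x)          # X^{-1}
    print(np.allclose(X @ X_inv, np.eye(3)))   # True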
Let us introduce a logarithmic barrier term for P. We obtain:

    P(μ) :  minimize  c^T x − μ Σ_{j=1}^n ln(x_j)
            s.t.      Ax = b
                      x > 0.

Because the gradient of the objective function of P(μ) is simply c − μX^{-1}e (where e is the vector of ones, i.e., e = (1, 1, 1, ..., 1)^T), the Karush-Kuhn-Tucker conditions for P(μ) are:

    Ax = b,  x > 0
    c − μX^{-1}e = A^T π.        (1)

If we define s = μX^{-1}e, then Xs = μe,
or equivalently XSe = μe (where S denotes the diagonal matrix whose diagonal entries are the components of s), and we can rewrite the Karush-Kuhn-Tucker conditions as:

    Ax = b,  x > 0
    A^T π + s = c                (2)
    XSe − μe = 0.

From the equations of (2) it follows that if (x, π, s) is a solution of (2), then x is feasible for P, (π, s) is feasible for D, and the resulting duality gap is:

    x^T s = e^T XSe = μ e^T e = μn.

This suggests that we try solving P(μ) for a variety of values of μ as μ → 0. However, we cannot usually solve (2) exactly, because the third equation group is not linear in the variables. We will instead define a β-approximate solution of the Karush-Kuhn-Tucker conditions (2). A β-approximate solution of P(μ) is defined as any solution (x, π, s) of

    Ax = b,  x > 0
    A^T π + s = c                (3)
    ‖Xs/μ − e‖ ≤ β.

Here the norm ‖·‖ is the Euclidean norm.
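Definition (3) translates directly into a small check routine; the following sketch (the function name and tolerance are my own, not from the notes) tests all three conditions:

    import numpy as np

    def is_beta_approximate(A, b, c, x, pi, s, mu, beta, tol=1e-9):
        """Check whether (x, pi, s) is a beta-approximate solution of P(mu),
        i.e., Ax = b, x > 0, A^T pi + s = c, and ||Xs/mu - e|| <= beta."""
        primal_feasible = np.allclose(A @ x, b, atol=tol) and np.all(x > 0)
        dual_feasible = np.allclose(A.T @ pi + s, c, atol=tol)
        centrality = np.linalg.norm(x * s / mu - 1.0) <= beta
        return primal_feasible and dual_feasible and centrality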
6 Lemma. If ( x, π, s ) is a β-approximate solution of P () and β <, then x is feasible for P, ( π, s) is feasible for D, and the duality gap satisfies: n( β) c T x b T π = x T s n( + β). (4) Proof: Primal feasibility is obvious. To prove dual feasibility, we need to show that s 0. To see this, note that the third equation system of (3) implies that β x j s j β which we can rearrange as: ( β) x j s j ( + β). (5) Therefore x j s j ( β) > 0, which implies x j s j > 0, and so s j > 0. From (5) we have n n n n( β) = ( β) x j s j = x T s ( + β) = n( + β). j= j= j= 2 The Primal-Dual Algorithm Based on the analysis just presented, we are motivated to develop the following algorithm: Step 0. Initialization. Data is (x 0,π 0,s 0, 0 ). k = 0. Assume that (x 0,π 0,s 0 )isa β-approximate solution of P ( 0 ) for some known value of β that satisfies β < 2. 6
7 Step. Set current values. ( x, π, s ) =(x k,π k,s k ), = k. Step 2. Shrink. Set = α for some α (0, ). Step 3. Compute the primal-dual Newton direction. Compute the Newton step ( x, π, s) for the equation system (2) at (x, π, s) =( x, π, s ) for =, by solving the following system of equations in the variables ( x, π, s): A x = 0 A T π + s = 0 (6) S x + X s = XSe e. Denote the solution to this system by ( x, π, s). Step 4. Update All Values. (x,π,s)=( x, π, s )+( x, π, s) Step 5. Reset Counter and Continue. (x k+,π k+,s k+ )=(x,π,s). =. k k +. Go to Step. k+ Figure shows a picture of the algorithm. Some of the issues regarding this algorithm include: how to set the approximation constant β and the fractional decrease 3 parameter α. We will see that it will be convenient to set β = 40 and 8 α =. + n the derivation of the primal-dual Newton equation system (6) 5 7
Figure 1: A conceptual picture of the interior-point algorithm.
- whether or not successive iterative values (x^k, π^k, s^k) are β-approximate solutions to P(μ^k)

- how to get the method started

3 The Primal-Dual Newton Step

We introduce a logarithmic barrier term for P to obtain P(μ):

    P(μ) :  minimize  c^T x − μ Σ_{j=1}^n ln(x_j)
            s.t.      Ax = b
                      x > 0.

Because the gradient of the objective function of P(μ) is simply c − μX^{-1}e (where e is the vector of ones, i.e., e = (1, 1, 1, ..., 1)^T), the Karush-Kuhn-Tucker conditions for P(μ) are:

    Ax = b,  x > 0
    c − μX^{-1}e = A^T π.        (7)

We define s = μX^{-1}e, whereby Xs = μe, or equivalently XSe = μe,
and we can rewrite the Karush-Kuhn-Tucker conditions as:

    Ax = b,  x > 0
    A^T π + s = c                (8)
    XSe − μe = 0.

Let (x̄, π̄, s̄) be our current iterate, which we assume is primal and dual feasible, namely:

    A x̄ = b,  x̄ > 0,  A^T π̄ + s̄ = c,  s̄ > 0.        (9)

Introducing a direction (Δx, Δπ, Δs), the next iterate will be (x̄, π̄, s̄) + (Δx, Δπ, Δs), and we want to solve:

    A(x̄ + Δx) = b,  x̄ + Δx > 0
    A^T(π̄ + Δπ) + (s̄ + Δs) = c
    (X̄ + ΔX)(S̄ + ΔS)e − μe = 0,

where ΔX and ΔS denote the diagonal matrices corresponding to Δx and Δs. Keeping in mind that (x̄, π̄, s̄) is primal-dual feasible and so satisfies (9), we can rearrange the above to be:

    A Δx = 0
    A^T Δπ + Δs = 0
    S̄ Δx + X̄ Δs = μe − X̄S̄e − ΔXΔSe.

Notice that the only nonlinear term in the above system of equations in (Δx, Δπ, Δs) is the term ΔXΔSe in the last system. If we erase this term, which is the same as taking the linearized version of the equations, we obtain the following primal-dual Newton equation system:
    A Δx = 0
    A^T Δπ + Δs = 0                      (10)
    S̄ Δx + X̄ Δs = μe − X̄S̄e.

The solution (Δx, Δπ, Δs) of the system (10) is called the primal-dual Newton step. We can manipulate these equations to yield the following formulas for the solution:

    Δx = [ I − X̄S̄^{-1}A^T (A X̄S̄^{-1} A^T)^{-1} A ] ( −x̄ + μ S̄^{-1}e ),
    Δπ = − (A X̄S̄^{-1} A^T)^{-1} A ( −x̄ + μ S̄^{-1}e ),          (11)
    Δs = A^T (A X̄S̄^{-1} A^T)^{-1} A ( −x̄ + μ S̄^{-1}e ).

Notice, by the way, that the computational effort in these equations lies primarily in solving a single equation:

    (A X̄S̄^{-1} A^T) Δπ = A ( x̄ − μ S̄^{-1}e ).

Once this system is solved, we can easily substitute:

    Δs = −A^T Δπ
    Δx = −x̄ + μ S̄^{-1}e − X̄S̄^{-1} Δs.        (12)

However, let us instead simply work with the primal-dual Newton system (10).
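As an illustration of (11)-(12), the Newton step can be computed by solving the single m × m system and back-substituting. The sketch below assumes a dense numpy factorization and uses my own function name; it is not code from the notes:

    import numpy as np

    def newton_step(A, x, s, mu):
        """Primal-dual Newton step at a feasible (x, pi, s) for target mu,
        via the normal equations (11)-(12): solve for dpi, then back-substitute."""
        d = x / s                                   # diagonal of X S^{-1}
        rhs = A @ (x - mu / s)                      # A (xbar - mu S^{-1} e)
        M = (A * d) @ A.T                           # A X S^{-1} A^T
        dpi = np.linalg.solve(M, rhs)
        ds = -A.T @ dpi
        dx = -x + mu / s - d * ds
        return dx, dpi, ds

The only factorization required is of the m × m matrix A X̄S̄^{-1}A^T, which is exactly the observation that formula (12) exploits.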
Suppose that (Δx, Δπ, Δs) is the (unique) solution of the primal-dual Newton system (10). We obtain the new value of the variables (x′, π′, s′) by taking the Newton step:

    (x′, π′, s′) = (x̄, π̄, s̄) + (Δx, Δπ, Δs).

We have the following very powerful convergence theorem which demonstrates the quadratic convergence of Newton's method for this problem, with an explicit guarantee of the range in which quadratic convergence takes place.

Theorem 3.1 (Explicit Quadratic Convergence of Newton's Method). Suppose that (x̄, π̄, s̄) is a β-approximate solution of P(μ) and β < 1/2. Let (Δx, Δπ, Δs) be the solution to the primal-dual Newton equations (10), and let:

    (x′, π′, s′) = (x̄, π̄, s̄) + (Δx, Δπ, Δs).

Then (x′, π′, s′) is a ((1 + β)/(1 − β)^2) β^2 -approximate solution of P(μ).

Proof: Our current point (x̄, π̄, s̄) satisfies:

    A x̄ = b,  x̄ > 0
    A^T π̄ + s̄ = c                        (13)
    ‖X̄s̄/μ − e‖ ≤ β.

Furthermore the primal-dual Newton step (Δx, Δπ, Δs) satisfies:

    A Δx = 0
    A^T Δπ + Δs = 0                      (14)
    S̄ Δx + X̄ Δs = μe − X̄S̄e.

Note from the first two equations of (14) that Δx^T Δs = 0. From the third equation of (13) we have

    1 − β ≤ x̄_j s̄_j/μ ≤ 1 + β,   j = 1, ..., n,        (15)
which implies:

    x̄_j ≥ (1 − β)μ/s̄_j  and  s̄_j ≥ (1 − β)μ/x̄_j,   j = 1, ..., n.        (16)

As a result of this we obtain:

    (1 − β)μ ‖X̄^{-1}Δx‖^2 = (1 − β)μ Δx^T X̄^{-1}X̄^{-1}Δx
      ≤ Δx^T X̄^{-1}S̄ Δx
      = Δx^T X̄^{-1}( μe − X̄S̄e − X̄Δs )
      = Δx^T X̄^{-1}( μe − X̄S̄e ) − Δx^T Δs
      = μ (X̄^{-1}Δx)^T ( e − X̄S̄e/μ )
      ≤ μ ‖X̄^{-1}Δx‖ ‖e − X̄S̄e/μ‖
      ≤ μ ‖X̄^{-1}Δx‖ β.

From this it follows that

    ‖X̄^{-1}Δx‖ ≤ β/(1 − β) < 1.

Therefore x′ = x̄ + Δx = X̄(e + X̄^{-1}Δx) > 0.

We have the exact same chain of inequalities for the dual variables:

    (1 − β)μ ‖S̄^{-1}Δs‖^2 = (1 − β)μ Δs^T S̄^{-1}S̄^{-1}Δs
      ≤ Δs^T S̄^{-1}X̄ Δs
      = Δs^T S̄^{-1}( μe − X̄S̄e − S̄Δx )
      = Δs^T S̄^{-1}( μe − X̄S̄e ) − Δs^T Δx
      = μ (S̄^{-1}Δs)^T ( e − X̄S̄e/μ )
      ≤ μ ‖S̄^{-1}Δs‖ ‖e − X̄S̄e/μ‖
      ≤ μ ‖S̄^{-1}Δs‖ β.

From this it follows that

    ‖S̄^{-1}Δs‖ ≤ β/(1 − β) < 1.

Therefore s′ = s̄ + Δs = S̄(e + S̄^{-1}Δs) > 0.

Next note from (14) that for j = 1, ..., n we have:

    x′_j s′_j = (x̄_j + Δx_j)(s̄_j + Δs_j) = x̄_j s̄_j + x̄_j Δs_j + s̄_j Δx_j + Δx_j Δs_j = μ + Δx_j Δs_j,

where the last equality uses the third equation of (14).
Therefore

    ( X′S′e/μ − e )_j = Δx_j Δs_j / μ.

From this we obtain:

    ‖X′S′e/μ − e‖ ≤ Σ_{j=1}^n |Δx_j Δs_j| / μ
      = Σ_{j=1}^n (|Δx_j|/x̄_j)(|Δs_j|/s̄_j)(x̄_j s̄_j/μ)
      ≤ (1 + β) Σ_{j=1}^n (|Δx_j|/x̄_j)(|Δs_j|/s̄_j)
      ≤ (1 + β) ‖X̄^{-1}Δx‖ ‖S̄^{-1}Δs‖
      ≤ (1 + β) ( β/(1 − β) )^2 = ((1 + β)/(1 − β)^2) β^2.

Together with the primal and dual feasibility of (x′, π′, s′) established above, this shows that (x′, π′, s′) is a ((1 + β)/(1 − β)^2) β^2 -approximate solution of P(μ), which proves the theorem.

4 Complexity Analysis of the Algorithm

Theorem 4.1 (Relaxation Theorem). Suppose that (x̄, π̄, s̄) is a β = 3/40-approximate solution of P(μ̄). Let α = 1 − 1/(8/5 + 8√n) and let μ′ = αμ̄. Then (x̄, π̄, s̄) is a β = 1/5-approximate solution of P(μ′).

Proof: The triplet (x̄, π̄, s̄) satisfies A x̄ = b, x̄ > 0, and A^T π̄ + s̄ = c, and so it remains to show that ‖X̄s̄/μ′ − e‖ ≤ 1/5. We have

    ‖X̄s̄/μ′ − e‖ = ‖X̄s̄/(αμ̄) − e‖ = ‖(1/α)( X̄s̄/μ̄ − e ) + ((1 − α)/α) e‖
      ≤ (1/α) ‖X̄s̄/μ̄ − e‖ + ((1 − α)/α) ‖e‖
      ≤ (1/α)(3/40) + ((1 − α)/α)√n = 1/5,

where the last equality follows from the choice of α.

Theorem 4.2 (Convergence Theorem). Suppose that (x^0, π^0, s^0) is a β = 3/40-approximate solution of P(μ^0). Then for all k = 1, 2, 3, ..., (x^k, π^k, s^k) is a β = 3/40-approximate solution of P(μ^k).

Proof: By induction, suppose that the theorem is true for iterates 0, 1, 2, ..., k. Then (x^k, π^k, s^k) is a β = 3/40-approximate solution of P(μ^k). From the Relaxation Theorem, (x^k, π^k, s^k) is a 1/5-approximate solution of P(μ^{k+1}), where μ^{k+1} = αμ^k. From the Quadratic Convergence Theorem, (x^{k+1}, π^{k+1}, s^{k+1}) is a β-approximate solution of P(μ^{k+1}) for

    β = ((1 + 1/5)/(1 − 1/5)^2) (1/5)^2 = 3/40.

Therefore, by induction, the theorem is true for all values of k.

Figure 2 shows a better picture of the algorithm.

Theorem 4.3 (Complexity Theorem). Suppose that (x^0, π^0, s^0) is a β = 3/40-approximate solution of P(μ^0). In order to obtain primal and dual feasible solutions (x^k, π^k, s^k) with a duality gap of at most ε, one needs to run the algorithm for at most

    k = ⌈ 10√n ln( (43/37) (x^0)^T s^0 / ε ) ⌉

iterations.

Proof: Let k be as defined above. Note that

    α = 1 − 1/(8/5 + 8√n) ≤ 1 − 1/(10√n).
Figure 2: Another picture of the interior-point algorithm.
Therefore

    μ^k ≤ ( 1 − 1/(10√n) )^k μ^0.

This implies that

    c^T x^k − b^T π^k = (x^k)^T s^k ≤ μ^k n(1 + β)
      ≤ ( 1 − 1/(10√n) )^k μ^0 n (43/40)
      ≤ ( 1 − 1/(10√n) )^k (43/37) (x^0)^T s^0,

where the last inequality uses μ^0 ≤ (40/37)(x^0)^T s^0 / n, which follows from (4). Taking logarithms, we obtain

    ln( c^T x^k − b^T π^k ) ≤ k ln( 1 − 1/(10√n) ) + ln( (43/37)(x^0)^T s^0 )
      ≤ −k/(10√n) + ln( (43/37)(x^0)^T s^0 )
      ≤ −ln( (43/37)(x^0)^T s^0 / ε ) + ln( (43/37)(x^0)^T s^0 ) = ln(ε).

Therefore c^T x^k − b^T π^k ≤ ε.

5 An Implementable Primal-Dual Interior-Point Algorithm

Herein we describe a more implementable primal-dual interior-point algorithm. This algorithm differs from the previous method in the following respects:

- We do not assume that the current point is near the central path. In fact, we do not assume that the current point is even feasible.

- The fractional decrease parameter α is set to α = 1/10 rather than the conservative value of α = 1 − 1/(8/5 + 8√n).
- We do not necessarily take the full Newton step at each iteration, and we take different step-sizes in the primal and the dual.

Our current point is (x̄, π̄, s̄), for which x̄ > 0 and s̄ > 0, but quite possibly A x̄ ≠ b and/or A^T π̄ + s̄ ≠ c. We are given a value of the central path barrier parameter μ > 0. We want to compute a direction (Δx, Δπ, Δs) so that (x̄ + Δx, π̄ + Δπ, s̄ + Δs) approximately solves the central path equations. We set up the system:

    (1)  A(x̄ + Δx) = b
    (2)  A^T(π̄ + Δπ) + (s̄ + Δs) = c
    (3)  (X̄ + ΔX)(S̄ + ΔS)e = μe.

We linearize this system of equations and rearrange the terms to obtain the Newton equations for the current point (x̄, π̄, s̄):

    (1)  A Δx = b − A x̄ =: r_1
    (2)  A^T Δπ + Δs = c − A^T π̄ − s̄ =: r_2
    (3)  S̄ Δx + X̄ Δs = μe − X̄S̄e =: r_3

We refer to the solution (Δx, Δπ, Δs) of the above system as the primal-dual Newton direction at the point (x̄, π̄, s̄). It differs from that derived earlier only in that earlier it was assumed that r_1 = 0 and r_2 = 0.

Given our current point (x̄, π̄, s̄) and a given value of μ, we compute the Newton direction (Δx, Δπ, Δs) and we update our variables by choosing primal and dual step-sizes α_P and α_D to obtain new values:

    (x̃, π̃, s̃) ← ( x̄ + α_P Δx,  π̄ + α_D Δπ,  s̄ + α_D Δs ).
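The same normal-equations reduction used for (11)-(12) applies here, with the residuals r_1, r_2, r_3 carried along on the right-hand side. A sketch (my own variable names, assuming numpy; not code from the notes):

    import numpy as np

    def infeasible_newton_direction(A, b, c, x, pi, s, mu):
        """Newton direction at a possibly infeasible point (x, pi, s) with x, s > 0,
        obtained by eliminating ds and dx and solving a single m x m system."""
        r1 = b - A @ x                       # primal residual
        r2 = c - A.T @ pi - s                # dual residual
        r3 = mu - x * s                      # centering residual, mu*e - XSe
        d = x / s                            # diagonal of X S^{-1}
        M = (A * d) @ A.T                    # A X S^{-1} A^T
        rhs = r1 - A @ ((r3 - x * r2) / s)
        dpi = np.linalg.solve(M, rhs)
        ds = r2 - A.T @ dpi
        dx = (r3 - x * ds) / s               # from S dx + X ds = r3
        return dx, dpi, ds

When r_1 = r_2 = 0 this reduces to the feasible Newton step computed earlier.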
In order to ensure that x̃ > 0 and s̃ > 0, we choose a value of r satisfying 0 < r < 1 (r = 0.99 is a common value in practice), and determine α_P and α_D as follows:

    α_P = min{ 1,  r · min_{Δx_j < 0} { −x̄_j / Δx_j } }

    α_D = min{ 1,  r · min_{Δs_j < 0} { −s̄_j / Δs_j } }.

These step-sizes ensure that the next iterate (x̃, π̃, s̃) satisfies x̃ > 0 and s̃ > 0.

5.1 Decreasing the Path Parameter μ

We also want to shrink μ at each iteration, in order to (hopefully) shrink the duality gap. The current iterate is (x̄, π̄, s̄), and the current values satisfy μ ≈ x̄^T s̄ / n. We then re-set μ to

    μ ← (1/10) ( x̄^T s̄ / n ),

where the fractional decrease 1/10 is user-specified.
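The ratio test and the re-setting of μ above can be written compactly; a sketch with assumed numpy calls and my own function names:

    import numpy as np

    def step_sizes(x, s, dx, ds, r=0.99):
        """Primal and dual step-sizes: go at most a fraction r of the way to the
        nearest boundary of x > 0 and s > 0, and never more than 1."""
        aP = min(1.0, r * np.min(-x[dx < 0] / dx[dx < 0])) if np.any(dx < 0) else 1.0
        aD = min(1.0, r * np.min(-s[ds < 0] / ds[ds < 0])) if np.any(ds < 0) else 1.0
        return aP, aD

    def new_mu(x, s, shrink=0.1):
        """Re-set mu to a fraction (here 1/10) of the average complementarity."""
        return shrink * (x @ s) / len(x)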
5.2 The Stopping Criterion

We typically declare the problem solved when it is almost primal feasible, almost dual feasible, and there is almost no duality gap. We set our tolerance ε to be a small positive number, typically ε = 10^{-8}, for example, and we stop when:

    (1)  ‖A x̄ − b‖ ≤ ε
    (2)  ‖A^T π̄ + s̄ − c‖ ≤ ε
    (3)  s̄^T x̄ ≤ ε.

5.3 The Full Interior-Point Algorithm

1. Given (x^0, π^0, s^0) satisfying x^0 > 0, s^0 > 0, and μ^0 > 0, and r satisfying 0 < r < 1, and ε > 0. Set k ← 0.

2. Test stopping criterion. Check if:

    (1)  ‖A x^k − b‖ ≤ ε
    (2)  ‖A^T π^k + s^k − c‖ ≤ ε
    (3)  (s^k)^T x^k ≤ ε.

If so, STOP. If not, proceed.

3. Set

    μ ← (1/10) ( (x^k)^T s^k / n ).

4. Solve the Newton equation system:

    (1)  A Δx = b − A x^k =: r_1
    (2)  A^T Δπ + Δs = c − A^T π^k − s^k =: r_2
    (3)  S^k Δx + X^k Δs = μe − X^k S^k e =: r_3
5. Determine the step-sizes:

    α_P = min{ 1,  r · min_{Δx_j < 0} { −x^k_j / Δx_j } }

    α_D = min{ 1,  r · min_{Δs_j < 0} { −s^k_j / Δs_j } }.

6. Update values:

    (x^{k+1}, π^{k+1}, s^{k+1}) ← ( x^k + α_P Δx,  π^k + α_D Δπ,  s^k + α_D Δs ).

Re-set k ← k + 1 and return to step 2.

5.4 Remarks on Interior-Point Methods

The algorithm just described is almost exactly what is used in commercial interior-point method software. A typical interior-point code will solve a linear or quadratic optimization problem in a few dozen iterations, regardless of the dimension of the problem. These days, interior-point methods have been extended to allow for the solution of a very large class of convex nonlinear optimization problems.
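Putting the pieces of Section 5.3 together, a compact sketch of the full algorithm might look as follows. It is an illustrative implementation with assumed numpy calls and my own function name, not production code and not code from the notes:

    import numpy as np

    def solve_lp_interior_point(A, b, c, x, pi, s, r=0.99, eps=1e-8, max_iter=200):
        """Implementable primal-dual interior-point method of Section 5.3 (a sketch).
        (x, pi, s) need not be feasible; only x > 0 and s > 0 are required."""
        m, n = A.shape
        for _ in range(max_iter):
            # Step 2: stopping criterion.
            if (np.linalg.norm(A @ x - b) <= eps
                    and np.linalg.norm(A.T @ pi + s - c) <= eps
                    and s @ x <= eps):
                break
            # Step 3: shrink the path parameter.
            mu = 0.1 * (x @ s) / n
            # Step 4: Newton direction with residual right-hand sides.
            r1 = b - A @ x
            r2 = c - A.T @ pi - s
            r3 = mu - x * s
            d = x / s
            M = (A * d) @ A.T
            dpi = np.linalg.solve(M, r1 - A @ ((r3 - x * r2) / s))
            ds = r2 - A.T @ dpi
            dx = (r3 - x * ds) / s
            # Step 5: ratio test for the step-sizes.
            aP = min(1.0, r * np.min(-x[dx < 0] / dx[dx < 0])) if np.any(dx < 0) else 1.0
            aD = min(1.0, r * np.min(-s[ds < 0] / ds[ds < 0])) if np.any(ds < 0) else 1.0
            # Step 6: update.
            x, pi, s = x + aP * dx, pi + aD * dpi, s + aD * ds
        return x, pi, s

Step 4 reuses the normal-equations reduction shown earlier, so each iteration costs essentially one factorization of the m × m matrix A X S^{-1} A^T.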
6 Exercises on Interior-Point Methods for LP

1. (Interior Point Method) Verify that the formulas given in (11) indeed solve the equation system (10).

2. (Interior Point Method) Prove Proposition 6.1 of the notes on Newton's method for a logarithmic barrier algorithm for linear programming.

3. (Interior Point Method) Create and implement your own interior-point algorithm to solve the following linear program:

    LP :  minimize  c^T x
          s.t.      Ax = b
                    x ≥ 0,

where A is a given 2 × 4 matrix, b = (100, 50)^T, and c is a given cost vector in R^4. In running your algorithm, you will need to specify the starting point (x^0, π^0, s^0), the starting value of the barrier parameter μ^0, and the value of the fractional decrease parameter α. Use the following values and make eight runs of the algorithm as follows:

Starting points:
    (x^0, π^0, s^0) = ((1, 1, 1, 1), (0, 0), (1, 1, 1, 1))
    (x^0, π^0, s^0) = ((100, 100, 100, 100), (0, 0), (100, 100, 100, 100))

Values of μ^0:
    μ^0 = 1
    μ^0 = 0.01

Values of α:
    α = 1 − 1/(10√n)
    α = 1/10

4. (Interior Point Method) Suppose that x̄ > 0 is a feasible solution of the following logarithmic barrier problem:
    P(μ) :  minimize  c^T x − μ Σ_{j=1}^n ln(x_j)
            s.t.      Ax = b
                      x > 0,

and that we wish to test if x̄ is a β-approximate solution to P(μ) for a given value of μ. The conditions for x̄ to be a β-approximate solution to P(μ) are that there exist values (π, s) for which:

    A x̄ = b,  x̄ > 0
    A^T π + s = c                (17)
    ‖X̄s/μ − e‖ ≤ β.

Construct an analytic test for whether or not such (π, s) exist by solving an associated equality-constrained quadratic program (which has a closed-form solution). HINT: Think of trying to minimize ‖X̄s/μ − e‖ over appropriate values of (π, s).

5. Consider the logarithmic barrier problem:

    P(μ) :  minimize  c^T x − μ Σ_{j=1}^n ln x_j
            s.t.      Ax = b
                      x > 0.

Suppose x̄ is a feasible point, i.e., x̄ > 0 and A x̄ = b. Let X̄ = diag(x̄).

a. Construct the (primal) Newton direction d at x̄. Show that d can be written as:

    d = X̄ ( I − X̄A^T (A X̄^2 A^T)^{-1} A X̄ ) ( e − (1/μ) X̄c ).

b. Show in the above that P = ( I − X̄A^T (A X̄^2 A^T)^{-1} A X̄ ) is a projection matrix.
c. Suppose A^T π̄ + s̄ = c and s̄ > 0 for some (π̄, s̄). Show that d can also be written as:

    d = X̄ ( I − X̄A^T (A X̄^2 A^T)^{-1} A X̄ ) ( e − (1/μ) X̄s̄ ).

d. Show that if there exists (π̄, s̄) with A^T π̄ + s̄ = c and s̄ > 0, and ‖e − (1/μ) X̄s̄‖ ≤ 1/4, then ‖X̄^{-1} d‖ ≤ 1/4, and then the Newton iterate satisfies x̄ + d > 0.