Primal-Dual Interior-Point Methods, by Stephen Wright
List of errors and typos, last updated December 12, 1999.

1. page xviii, lines 1 and 3: (x, λ, x) should be (x, λ, s) (on both lines).

2. page 6, line 4: s and λ should be swapped. The corrected equation is

   J(x, λ, s) (Δx, Δλ, Δs) = −F(x, λ, s).

3. page 13, lines 12-13: insert a phrase to stress that we consider only the monotone LCP in this book, though the qualifier "monotone" is often omitted. Replace the sentence preceding formula (1.21) by the following: The monotone LCP (the qualifier "monotone" is implicit throughout this book) is the problem of finding vectors x and s in R^n that satisfy the following conditions:

4. page 13, line 12: delete "of". The corrected line is "because it is such a versatile tool for formulating and solving a wide range".

5. page 16, equation (1.25a): S⁻¹Xr_c should be −S⁻¹Xr_c. The corrected equation is

   AD²Aᵀ Δλ = r_b + A(−S⁻¹X r_c + x − σµ S⁻¹ e),

   (a small numerical sketch of this corrected step appears after item 10 below).

6. page 69, line 1: replace (4.2) by (4.4). The corrected line is "From (4.4) and Lemma 4.1(i)".

7. page 74, line 10: the relation sign should be =. The corrected line is

   ‖X⁻¹Δx‖² + ‖S⁻¹Δs‖² = ‖V⁻¹D⁻¹Δx‖² + ‖V⁻¹DΔs‖².

8. page 76, line 2: q(0) − q(ᾱ) should be q(ᾱ) − q(0). The corrected equation is

   q(ᾱ) − q(0) ≤ −0.15,

9. page 79, line 6: (ρ − n) xᵀs should be (ρ − n) log xᵀs.

10. page 104, line 7: replace the inequality sign by =. The corrected formula reads

   = 2^{-3/2} ‖u + v‖²,
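As a numerical illustration of the corrected equation (1.25a) in item 5, here is a minimal NumPy sketch of the normal-equations step. It is not code from the book; the function name, the variable names, and the residual conventions r_b = b − Ax and r_c = Aᵀλ + s − c are assumptions made for this sketch only.

    # Minimal sketch (not from the book) of the corrected normal-equations step (1.25a),
    # assuming the residual conventions r_b = b - A x and r_c = A^T lam + s - c.
    import numpy as np

    def normal_equations_step(A, b, c, x, lam, s, sigma):
        """One primal-dual step via A D^2 A^T dlam = r_b + A(-S^{-1}X r_c + x - sigma*mu*S^{-1}e)."""
        mu = x @ s / x.size                  # duality measure mu = x^T s / n
        r_b = b - A @ x                      # primal residual (assumed sign convention)
        r_c = A.T @ lam + s - c              # dual residual (assumed sign convention)
        d2 = x / s                           # diagonal of D^2 = S^{-1} X
        M = (A * d2) @ A.T                   # A D^2 A^T
        rhs = r_b + A @ (-d2 * r_c + x - sigma * mu / s)
        dlam = np.linalg.solve(M, rhs)
        ds = -r_c - A.T @ dlam               # from A^T dlam + ds = -r_c
        dx = -x + sigma * mu / s - d2 * ds   # from S dx + X ds = -X S e + sigma*mu*e
        return dx, dlam, ds

On random data with x > 0 and s > 0, the step returned by this sketch satisfies A dx = r_b and Aᵀ dlam + ds = −r_c up to roundoff, which is the sign pattern of the corrected equation.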

11. page 119, line 7: the argument (α) should appear after r_b^k and r_c^k. The corrected line is

   r_b^k(α) = b − Ax^k(α),   r_c^k(α) = Aᵀλ^k(α) + s^k(α) − c,

12. page 120, lines 14, 15, 17, 21 are missing an n. For example, the corrected bound reads

   −0.99αµ_k + 0.5αµ_k + α²C₂²µ_k/n = −0.49αµ_k + α²C₂²µ_k/n,

   and the n must be restored in the same way in the expression for ᾱ (the minimum of terms involving nσ_min, σ_min(1 − γ), C₂², 0.49n, and 1).

13. page 122, proof of Theorem 6.8: the use of ɛ(A, b, c), and consequently the definition of C₃ in (6.52), is incorrect. The proof can be fixed by making the following specific amendments to the text on page 122. On line 2, change "any solution" to "any strictly complementary solution". Delete the material starting with "Since we are free..." to the end of the paragraph, and replace it with the following: A similar bound can be found for s_i, i ∈ B. Therefore, the result (6.49) holds when we define

   C₃ = C̄₃ / min( min_{i∈N} s_i*, min_{i∈B} x_i* ).    (6.52)

14. page 137, line 1: replace γ by α. The corrected line is "return α."

15. page 159, Algorithm PC (for LCP): the predictor and corrector steps are reversed in the specification of the algorithm. The correct specification is as follows (a schematic code sketch of this even/odd alternation appears after item 22 below):

   Algorithm PC (for LCP)
   Given (x^0, s^0) ∈ N₂(0.25);
   for k = 0, 1, 2, ...
      if k is even
         solve (8.2) with σ_k = 0 to obtain (Δx^k, Δs^k);
         choose α_k as the largest value of α in [0, 1] such that (x^k(α), s^k(α)) ∈ N₂(0.5);
         set (x^{k+1}, s^{k+1}) = (x^k, s^k) + α_k (Δx^k, Δs^k);
      else
         solve (8.2) with σ_k = 1 to obtain (Δx^k, Δs^k);
         set (x^{k+1}, s^{k+1}) = (x^k, s^k) + (Δx^k, Δs^k);
      end (if)
   end (for).

16. page 162, line 9: replace "that" by "have". The corrected text reads "Some LCPs have solutions but no strictly complementary solution."

17. page 166, formula (8.20): (x, s) ≥ 0 should be (λ, y) ≥ 0.

18. page 178, line 15: replace "all" by "most of". The corrected sentence should read "The homogeneous self-dual (HSD) formulation of Ye, Todd, and Mizuno [159] possesses most of these desirable properties and is quite efficient as well." (There is an example in the YTM paper [159, page 62] of a problem that is both primal and dual infeasible, for which a solution of the HSD formulation reveals only the primal infeasibility.)

19. page 182, line 9: replace z by s. The corrected line is "(τ*, x*, λ*, κ*, s*) for which κ* > 0."

20. pages 182-183, proof of Theorem 9.3: this proof is inadequate, since it assumes feasibility of the primal problem. It should be replaced by the following argument:

   Proof. Since κ* > 0, its complementary variable τ* must be zero. It follows from (9.5a) that at least one of cᵀx* and −bᵀλ* is negative. Suppose that cᵀx* < 0. Since τ* = 0, we have from (9.5) that x* ≥ 0 and Ax* = bτ* = 0. If the dual problem (9.3b) were feasible, we would have a vector (λ̄, s̄) with s̄ ≥ 0 and Aᵀλ̄ + s̄ = c. By multiplying the latter equation by (x*)ᵀ, we obtain

      0 > cᵀx* = λ̄ᵀAx* + s̄ᵀx* = s̄ᵀx* ≥ 0,

   which is a contradiction. The proof of the second claim is similar.

21. page 197, line 10: replace "that" by "than". The corrected line is "smaller in norm than the affine-scaling step; in fact, it is often much larger. Even"

22. page 198, equation (10.8b): replace α^pri_aff by α^dual_aff.
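To make the corrected ordering in item 15 concrete (predictor on even k, corrector on odd k), here is a small runnable sketch. It is schematic, not from the book: the standard Newton system for the monotone LCP with s = Mx + q stands in for the book's system (8.2), the largest admissible α is found by a simple grid search, and all function and variable names are hypothetical.

    # Schematic sketch of the corrected even/odd alternation in Algorithm PC for a monotone LCP.
    # Assumption: s = M x + q at the starting point, so that the Newton step below (a stand-in
    # for the book's system (8.2)) keeps the pair (x, s) feasible for the LCP equalities.
    import numpy as np

    def in_N2(x, s, theta):
        """N_2(theta) neighborhood test: x, s > 0 and ||X S e - mu e|| <= theta * mu."""
        mu = x @ s / x.size
        return np.all(x > 0) and np.all(s > 0) and np.linalg.norm(x * s - mu) <= theta * mu

    def newton_step(M, x, s, sigma):
        """Solve (S + X M) dx = sigma*mu*e - X S e and set ds = M dx."""
        mu = x @ s / x.size
        dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, sigma * mu - x * s)
        return dx, M @ dx

    def algorithm_pc(M, x, s, iters=20):
        """Predictor (sigma = 0) on even k, corrector (sigma = 1, full step) on odd k."""
        for k in range(iters):
            if k % 2 == 0:   # predictor: longest step that keeps the iterate in N_2(0.5)
                dx, ds = newton_step(M, x, s, sigma=0.0)
                alpha = max(a for a in np.linspace(0.0, 1.0, 101)
                            if in_N2(x + a * dx, s + a * ds, 0.5))
                x, s = x + alpha * dx, s + alpha * ds
            else:            # corrector: full centering step
                dx, ds = newton_step(M, x, s, sigma=1.0)
                x, s = x + dx, s + ds
        return x, s

In this arrangement the even (predictor) iterations drive the duality measure µ toward zero while staying in the wider neighborhood N₂(0.5), and the odd (corrector) iterations recenter the iterate, matching the roles the two step types play in the corrected specification.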

23. page 210, equation (11.3a): S⁻¹X should be −S⁻¹X. The corrected formula is

   AD²Aᵀ Δλ = r_b + A(−S⁻¹X r_c + S⁻¹ r_xs),

24. page 212, line 13: replace 10^{-14} by 10^{-16}. The corrected phrase is "(typically, u ≈ 10^{-16} for double-precision arithmetic)". Correspondingly, on page xx, line 2, replace 10^{-14} by 10^{-16}.

25. page 212, equation (11.6): rather than use the undefined symbol, we replace (11.6) and the remainder of the sentence in which it lies by the following passage:

   ‖ẑ − z‖ / ‖z‖ ≤ β κ(M) u,

   where β is a positive value that is independent of M and u.

26. page 240, line 13: replace U ∈ R^N by U ⊂ R^N.

27. page 246, line 4: replace f′(t) by f(t) on the right-hand side.

28. page 251, equation (A.24): the formula is incorrect. It should read

   Ḡ⁻¹ = G⁻¹ − G⁻¹U(I + VᵀG⁻¹U)⁻¹VᵀG⁻¹.

   (A small numerical check of this identity appears after item 34 below.)

29. page 257: the second and third URLs on this page should be replaced by the following:

   http://www.mcs.anl.gov/otc/interiorpoint/
   http://www.mcs.anl.gov/otc/guide/faq/

30. page 257, line 2: replace "URLs" by "URL".

31. page 258, lines 1 and 2: these two URLs should be replaced by the following single URL:

   http://www.mcs.anl.gov/neos/server/

32. page 258, last line: replace this URL by the following:

   http://www.sztaki.hu/~meszaros/bpmpd/

33. page 259, line 10: replace the URL on this line by the following:

   http://www.cplex.com/products/barsolv.html

34. page 259, line 2: replace the URL on this line by the following:

   http://www.maths.ed.ac.uk/~gondzio/software/hopdm.html
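The corrected equation (A.24) in item 28 is the Sherman-Morrison-Woodbury identity. The following small check, which is not from the book, assumes Ḡ = G + UVᵀ (the standard setting for that identity); the dimensions and random data are arbitrary.

    # Numerical check of the corrected (A.24), assuming Gbar = G + U V^T
    # (the Sherman-Morrison-Woodbury identity).
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 6, 2
    G = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a nonsingular n-by-n matrix
    U = rng.standard_normal((n, p))
    V = rng.standard_normal((n, p))

    Ginv = np.linalg.inv(G)
    lhs = np.linalg.inv(G + U @ V.T)                     # Gbar^{-1} computed directly
    rhs = Ginv - Ginv @ U @ np.linalg.inv(np.eye(p) + V.T @ Ginv @ U) @ V.T @ Ginv
    print(np.allclose(lhs, rhs))                         # prints True, up to roundoff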

35. page 260, line 12: replace the URL on this line by the following:

   http://www.caam.rice.edu/~zhang/lipsol/

36. page 261, line 2: replace the URL on this line by the following:

   http://www.orfe.princeton.edu/~loqo/

37. page 261, line 2: in addition to the URL already on this line, add the following URL:

   http://www.dashopt.com/

38. page 263, line 6: replace the URL on this line by the following:

   http://www.mcs.anl.gov/otc/tools/pcx/

39. page 263, line 11: replace this paragraph with the following: "Input: AMPL, MPL, MPS, callable. Java and Windows-95 interfaces are available; see the web site."

40. page 263, line 15: add the following sentence to the end of this paragraph: "A version for IBM RS6000 machines that calls IBM's solver WSSMP is also available. See the web page for details."

41. page 263, lines −3 to −1: replace this paragraph by the following: "Availability: Freely available for research and evaluation. A license must be obtained for production use in a commercial environment."

42. page 271, reference [72]: this book, Solving Least Squares Problems, was reprinted by SIAM Publications in 1995.

43. page 273: references [92] and [93] are identical.

Thanks to: Reha Tutuncu, Nick Higham, Bob Daniel, Trevor Kearney, Sergio Lucero, Piet Groeneboom, Luis Mauricio Graña Drummond, Hung Kuo Ting, Anders Forsgren.