Interior Point Methods for Linear Programming: Motivation & Theory


School of Mathematics, The University of Edinburgh

Interior Point Methods for Linear Programming: Motivation & Theory

Jacek Gondzio
Email: J.Gondzio@ed.ac.uk
URL: http://www.maths.ed.ac.uk/~gondzio

Paris, January 2018

Outline

IPM for LP: Motivation
- complementarity conditions
- first order optimality conditions
- central trajectory
- primal-dual framework

Polynomial Complexity of IPM
- Newton method
- short-step path-following method
- polynomial complexity proof

Building Blocks of the IPM

What do we need to derive the Interior Point Method?
- duality theory: Lagrangian function; first order optimality conditions;
- logarithmic barriers;
- Newton method.

Primal-Dual Pair of Linear Programs

Primal:  min c^T x   s.t. Ax = b, x ≥ 0.
Dual:    max b^T y   s.t. A^T y + s = c, s ≥ 0.

Lagrangian:

L(x, y) = c^T x − y^T (Ax − b) − s^T x.

Optimality Conditions:

Ax = b,
A^T y + s = c,
XSe = 0 (i.e., x_j s_j = 0 ∀j),
(x, s) ≥ 0,

where X = diag{x_1, ..., x_n}, S = diag{s_1, ..., s_n} and e = (1, ..., 1) ∈ R^n.
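
To make these conditions concrete, here is a minimal numpy sketch that checks primal feasibility, dual feasibility and complementarity at a known optimal pair; the tiny LP instance is my own illustration, not from the slides.

    import numpy as np

    # Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0])

    # Known optimal primal-dual pair: x* = (1, 0), y* = 1, s* = c - A^T y*.
    x = np.array([1.0, 0.0])
    y = np.array([1.0])
    s = c - A.T @ y                       # s* = (0, 1)

    assert np.allclose(A @ x, b)          # primal feasibility: Ax = b
    assert np.allclose(A.T @ y + s, c)    # dual feasibility: A^T y + s = c
    assert np.allclose(x * s, 0.0)        # complementarity: XSe = 0
    assert (x >= 0).all() and (s >= 0).all()
    print("duality gap c^T x - b^T y =", c @ x - b @ y)   # prints 0.0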

Logarithmic barrier

−ln x_j replaces the inequality x_j ≥ 0.

[Figure: graph of −ln x; the barrier blows up to +∞ as x → 0+.]

Observe that

min Σ_{j=1}^n (−ln x_j)  ⟺  max Π_{j=1}^n x_j.

The minimization of −Σ_{j=1}^n ln x_j is equivalent to the maximization of the product of distances from all hyperplanes defining the positive orthant: it prevents all x_j from approaching zero.

Logarithmic barrier

Replace the primal LP

min c^T x   s.t. Ax = b, x ≥ 0

with the primal barrier program

min c^T x − µ Σ_{j=1}^n ln x_j   s.t. Ax = b.

Lagrangian:

L(x, y, µ) = c^T x − y^T (Ax − b) − µ Σ_{j=1}^n ln x_j.

Conditions for a stationary point of the Lagrangian

∇_x L(x, y, µ) = c − A^T y − µ X^{−1} e = 0,
∇_y L(x, y, µ) = Ax − b = 0,

where X^{−1} = diag{x_1^{−1}, x_2^{−1}, ..., x_n^{−1}}.

Let us denote s = µ X^{−1} e, i.e. XSe = µe.

The First Order Optimality Conditions are:

Ax = b,
A^T y + s = c,
XSe = µe,
(x, s) > 0.

The pronunciation of the Greek letter µ: [mi]

[Photo: Robert De Niro, Taxi Driver (1976).]

Central Trajectory

The first order optimality conditions for the barrier problem

Ax = b, A^T y + s = c, XSe = µe, (x, s) > 0

approximate the first order optimality conditions for the LP

Ax = b, A^T y + s = c, XSe = 0, (x, s) ≥ 0

more and more closely as µ goes to zero.

Central Trajectory

The parameter µ controls the distance to optimality:

c^T x − b^T y = c^T x − x^T A^T y = x^T (c − A^T y) = x^T s = nµ.

The analytic centre (µ-centre) is the (unique) point

(x(µ), y(µ), s(µ)), x(µ) > 0, s(µ) > 0,

that satisfies the FOC. The path {(x(µ), y(µ), s(µ)) : µ > 0} is called the primal-dual central trajectory.
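
For the tiny LP used above, the µ-centre can be computed by hand: with a single equality constraint, s_j = c_j − y and x_j = µ/s_j, so primal feasibility becomes a scalar equation in y that bisection solves. This sketch is my own illustration (not from the slides); it also confirms x(µ)^T s(µ) = nµ and shows x(µ) approaching the optimal vertex (1, 0) as µ → 0.

    import numpy as np

    c = np.array([1.0, 2.0])   # tiny LP: min c^T x  s.t.  x1 + x2 = 1, x >= 0

    def mu_centre(mu, lo=-50.0, hi=1.0 - 1e-12):
        # Solve mu/(c1 - y) + mu/(c2 - y) = 1 for y < min(c) by bisection;
        # the left-hand side is increasing in y.
        g = lambda y: mu / (c[0] - y) + mu / (c[1] - y) - 1.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
        y = 0.5 * (lo + hi)
        s = c - y                  # dual slacks
        x = mu / s                 # complementarity products all equal mu
        return x, y, s

    for mu in [1.0, 0.1, 0.01, 1e-4]:
        x, y, s = mu_centre(mu)
        print(f"mu={mu:8.4f}  x={x}  x^T s={x @ s:.6f}  n*mu={2 * mu:.6f}")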

Newton Method

Newton Method is used to find a stationary point of the barrier problem.

Recall how to use the Newton Method to find a root of a nonlinear equation

f(x) = 0.

A tangent line

z − f(x^k) = ∇f(x^k) (x − x^k)

is a local approximation of the graph of the function f(x). Substituting z = 0 defines a new point

x^{k+1} = x^k − (∇f(x^k))^{−1} f(x^k).

Newton Method

[Figure: tangent-line iterates x^k, x^{k+1}, x^{k+2} sliding down the graph of f towards the root of f(x) = 0.]

x^{k+1} = x^k − (∇f(x^k))^{−1} f(x^k).
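
A minimal sketch of this scalar iteration, applied to f(x) = x² − 2 (my own example): each step solves the tangent-line model z = 0.

    def newton_root(f, fprime, x, iters=8):
        """Newton's method for f(x) = 0."""
        for k in range(iters):
            x = x - f(x) / fprime(x)   # x_{k+1} = x_k - f(x_k)/f'(x_k)
            print(f"k={k + 1}: x = {x:.12f}")
        return x

    newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, x=2.0)  # -> sqrt(2)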

Apply Newton Method to the FOC

The first order optimality conditions for the barrier problem form a large system of nonlinear equations

f(x, y, s) = 0,

where f : R^{2n+m} → R^{2n+m} is the mapping

f(x, y, s) = [ Ax − b ; A^T y + s − c ; XSe − µe ].

Actually, the first two blocks are linear; only the last one, corresponding to the complementarity condition, is nonlinear.

Newton Method (cont'd)

Note that

∇f(x, y, s) = [ A  0  0 ; 0  A^T  I ; S  0  X ].

Thus, for a given point (x, y, s), we find the Newton direction (Δx, Δy, Δs) by solving the system of linear equations:

[ A  0  0 ; 0  A^T  I ; S  0  X ] [ Δx ; Δy ; Δs ] = [ b − Ax ; c − A^T y − s ; µe − XSe ].
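
A direct numpy sketch of this solve: it builds the (2n+m)×(2n+m) Jacobian blockwise and solves for (Δx, Δy, Δs). The dense solve is for illustration only; production IPM codes eliminate Δs and Δx and solve the much smaller normal-equations system instead.

    import numpy as np

    def newton_direction(A, b, c, x, y, s, mu):
        """Newton direction for the barrier FOC at (x, y, s) with target mu."""
        m, n = A.shape
        J = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)       ],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
        ])
        rhs = np.concatenate([b - A @ x,                  # primal residual
                              c - A.T @ y - s,            # dual residual
                              mu * np.ones(n) - x * s])   # centrality residual
        d = np.linalg.solve(J, rhs)
        return d[:n], d[n:n + m], d[n + m:]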

Interior-Point Framework

The logarithmic barrier −ln x_j replaces the inequality x_j ≥ 0.

We derive the first order optimality conditions for the primal barrier problem:

Ax = b,
A^T y + s = c,
XSe = µe,

and apply the Newton method to solve this system of (nonlinear) equations.

Actually, we fix the barrier parameter µ and make only one (damped) Newton step towards the solution of the FOC. We do not solve the current FOC exactly. Instead, we immediately reduce the barrier parameter µ (to ensure progress towards optimality) and repeat the process.

Interior Point Algorithm

Initialize:
  k = 0,
  (x^0, y^0, s^0) ∈ F^0,
  µ^0 = (1/n) (x^0)^T s^0,
  α_0 = 0.9995.

Repeat until optimality:
  k = k + 1;
  µ^k = σ µ^{k−1}, where σ ∈ (0, 1);
  (Δx, Δy, Δs) = Newton direction towards the µ-centre;
  ratio test:
    α_P := max{α > 0 : x + αΔx ≥ 0},
    α_D := max{α > 0 : s + αΔs ≥ 0};
  make step (see the sketch below):
    x^{k+1} = x^k + α_0 α_P Δx,
    y^{k+1} = y^k + α_0 α_D Δy,
    s^{k+1} = s^k + α_0 α_D Δs.
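
Putting the pieces together, here is a sketch of this loop, reusing newton_direction from the previous block. The starting point and data are my own tiny example; as a practical simplification, µ is re-derived from the current gap x^T s at every iteration rather than carried exactly.

    import numpy as np

    def step_len(v, dv, alpha0):
        # Ratio test: largest alpha with v + alpha*dv >= 0, damped by alpha0.
        neg = dv < 0
        a = np.min(-v[neg] / dv[neg]) if neg.any() else np.inf
        return min(1.0, alpha0 * a)

    def ipm_lp(A, b, c, x, y, s, sigma=0.1, alpha0=0.9995, tol=1e-8, max_iter=100):
        """Primal-dual IPM sketch; assumes a strictly feasible starting point."""
        n = A.shape[1]
        for _ in range(max_iter):
            mu = sigma * (x @ s) / n             # reduce the barrier parameter
            dx, dy, ds = newton_direction(A, b, c, x, y, s, mu)
            aP, aD = step_len(x, dx, alpha0), step_len(s, ds, alpha0)
            x, y, s = x + aP * dx, y + aD * dy, s + aD * ds
            if x @ s / n < tol:                  # duality gap small enough
                break
        return x, y, s

    A, b, c = np.array([[1.0, 1.0]]), np.array([1.0]), np.array([1.0, 2.0])
    x, y, s = ipm_lp(A, b, c, x=np.array([0.5, 0.5]), y=np.zeros(1), s=c.copy())
    print("x ~", x, " y ~", y)                   # approaches x* = (1, 0), y* = 1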

Notations

X = diag{x_1, x_2, ..., x_n} (the diagonal matrix with the entries of x on its diagonal),
e = (1, 1, ..., 1) ∈ R^n,
X^{−1} = diag{x_1^{−1}, x_2^{−1}, ..., x_n^{−1}}.

The equation XSe = µe is equivalent to

x_j s_j = µ, j = 1, 2, ..., n.

Notations (cont'd)

Primal feasible set: P = {x ∈ R^n : Ax = b, x ≥ 0}.
Primal strictly feasible set: P^0 = {x ∈ R^n : Ax = b, x > 0}.
Dual feasible set: D = {(y, s) ∈ R^m × R^n : A^T y + s = c, s ≥ 0}.
Dual strictly feasible set: D^0 = {(y, s) ∈ R^m × R^n : A^T y + s = c, s > 0}.
Primal-dual feasible set: F = {(x, y, s) : Ax = b, A^T y + s = c, (x, s) ≥ 0}.
Primal-dual strictly feasible set: F^0 = {(x, y, s) : Ax = b, A^T y + s = c, (x, s) > 0}.

Path-Following Algorithm

The analysis given in this lecture comes from the book of Steve Wright: Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.

We analyze a feasible interior-point algorithm with the following properties:
- all its iterates are feasible and stay in a close neighbourhood of the central path;
- the iterates follow the central path towards optimality;
- a systematic (though slow) reduction of the duality gap is ensured.

This algorithm is called the short-step path-following method. Indeed, it makes very slow progress (short steps) to optimality.

Central Path Neighbourhood

Assume a primal-dual strictly feasible solution (x, y, s) ∈ F^0 lying in a neighbourhood of the central path is given; namely, (x, y, s) satisfies:

Ax = b, A^T y + s = c, ‖XSe − µe‖ ≤ θµ.

We define a θ-neighbourhood of the central path, N_2(θ): the set of primal-dual strictly feasible solutions (x, y, s) ∈ F^0 that satisfy

‖XSe − µe‖ ≤ θµ,

where θ ∈ (0, 1) and the barrier parameter µ satisfies x^T s = nµ. Hence

N_2(θ) = {(x, y, s) ∈ F^0 : ‖XSe − µe‖ ≤ θµ}.
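
This definition translates directly into a small checker (a sketch; the feasibility tolerance is my own choice):

    import numpy as np

    def in_N2(A, b, c, x, y, s, theta, tol=1e-10):
        """Is (x, y, s) in N_2(theta)? Strict feasibility plus ||XSe - mu e|| <= theta*mu."""
        mu = x @ s / len(x)
        feasible = (np.allclose(A @ x, b, atol=tol) and
                    np.allclose(A.T @ y + s, c, atol=tol) and
                    (x > 0).all() and (s > 0).all())
        return feasible and np.linalg.norm(x * s - mu) <= theta * mu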

Central Path Neighbourhood

[Figure: the N_2(θ) neighbourhood of the central path.]

Progress towards optimality

Assume a primal-dual strictly feasible solution (x, y, s) ∈ N_2(θ) for some θ ∈ (0, 1) is given.

The interior point algorithm tries to move from this point to another one that also belongs to a θ-neighbourhood of the central path but corresponds to a smaller µ. The required reduction of µ is small:

µ^{k+1} = σ µ^k, where σ = 1 − β/√n,

for some β ∈ (0, 1).

This is a short-step method: it makes short steps to optimality.

Progress towards optimality

Given a new µ-centre, the interior point algorithm computes the Newton direction:

[ A  0  0 ; 0  A^T  I ; S  0  X ] [ Δx ; Δy ; Δs ] = [ 0 ; 0 ; σµe − XSe ],

and makes a step in this direction.

Magic numbers (to be explained later): θ = 0.1 and β = 0.1.

θ controls the proximity to the central path; β controls the progress to optimality.

How to prove the O(√n) complexity result

We will prove the following:
- the full step in the Newton direction is feasible;
- the new iterate (x^{k+1}, y^{k+1}, s^{k+1}) = (x^k, y^k, s^k) + (Δx^k, Δy^k, Δs^k) belongs to the θ-neighbourhood of the new µ-centre (with µ^{k+1} = σ µ^k);
- the duality gap is reduced by the factor 1 − β/√n.

O(√n) complexity result

Note that since at one iteration the duality gap is reduced by the factor 1 − β/√n, after √n iterations the reduction achieves

(1 − β/√n)^{√n} ≤ e^{−β}.

After C√n iterations, the reduction is e^{−Cβ}. For a sufficiently large constant C the reduction can thus be arbitrarily large (i.e. the duality gap can become arbitrarily small).

Hence this algorithm has complexity O(√n). This should be understood as follows: after a number of iterations proportional to √n, the algorithm solves the problem.
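
A quick numeric sanity check of this bound (my own check; it holds because ln(1 − t) ≤ −t, cf. Lemma 9 below):

    import numpy as np

    beta = 0.1
    for n in [10, 100, 10_000, 1_000_000]:
        lhs = (1 - beta / np.sqrt(n)) ** np.sqrt(n)
        print(f"n={n:>9}: (1 - beta/sqrt(n))^sqrt(n) = {lhs:.6f}"
              f"  <=  e^(-beta) = {np.exp(-beta):.6f}")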

Worst-Case Complexity Result

Technical Results

Lemma 1. The Newton direction (Δx, Δy, Δs) defined by the equation system

[ A  0  0 ; 0  A^T  I ; S  0  X ] [ Δx ; Δy ; Δs ] = [ 0 ; 0 ; σµe − XSe ]      (1)

satisfies Δx^T Δs = 0.

Proof: From the first two equations in (1) we get

A Δx = 0 and Δs = −A^T Δy.

Hence

Δx^T Δs = Δx^T (−A^T Δy) = −Δy^T (A Δx) = 0. □

Technical Results (cont'd)

Lemma 2. Let (Δx, Δy, Δs) be the Newton direction that solves the system (1). The new iterate

(x̄, ȳ, s̄) = (x, y, s) + (Δx, Δy, Δs)

satisfies

x̄^T s̄ = n µ̄, where µ̄ = σµ.

Proof: From the third equation in (1) we get

S Δx + X Δs = −XSe + σµe.

By summing the n components of this equation we obtain

e^T (S Δx + X Δs) = s^T Δx + x^T Δs = e^T (−XSe + σµe) = −x^T s + nσµ = −x^T s (1 − σ).

Thus

x̄^T s̄ = (x + Δx)^T (s + Δs) = x^T s + (s^T Δx + x^T Δs) + Δx^T Δs = x^T s + (σ − 1) x^T s + 0 = σ x^T s,

which is equivalent to n µ̄ = σnµ. □

Reminder: norms of a vector x ∈ R^n

For any x ∈ R^n:

‖x‖_2 = (Σ_{j=1}^n x_j^2)^{1/2},
‖x‖_∞ = max_{j∈{1..n}} |x_j|,
‖x‖_1 = Σ_{j=1}^n |x_j|,

and these norms satisfy

‖x‖_2 ≤ ‖x‖_1 ≤ √n ‖x‖_2,
‖x‖_∞ ≤ ‖x‖_2 ≤ √n ‖x‖_∞,
‖x‖_∞ ≤ ‖x‖_1 ≤ n ‖x‖_∞.

Reminder: triangle inequality

For any vectors p, q, r and any norm ‖·‖:

‖p − q‖ ≤ ‖p − r‖ + ‖r − q‖.

Relation between algebraic and geometric means: for any scalars a and b such that ab ≥ 0,

√(ab) ≤ (1/2) |a + b|.

Technical Result (algebra)

Lemma 3. Let u and v be any two vectors in R^n such that u^T v ≥ 0. Then

‖UVe‖ ≤ 2^{−3/2} ‖u + v‖^2,

where U = diag{u_1, ..., u_n}, V = diag{v_1, ..., v_n}.

Proof: Let us partition all products u_j v_j into positive and negative ones:

P = {j : u_j v_j ≥ 0} and M = {j : u_j v_j < 0}.

Then

0 ≤ u^T v = Σ_{j∈P} u_j v_j + Σ_{j∈M} u_j v_j = Σ_{j∈P} |u_j v_j| − Σ_{j∈M} |u_j v_j|.

Proof (cont'd): We can now write

‖UVe‖ = ( ‖[u_j v_j]_{j∈P}‖^2 + ‖[u_j v_j]_{j∈M}‖^2 )^{1/2}
      ≤ ( ‖[u_j v_j]_{j∈P}‖_1^2 + ‖[u_j v_j]_{j∈M}‖_1^2 )^{1/2}
      ≤ ( 2 ‖[u_j v_j]_{j∈P}‖_1^2 )^{1/2}
      ≤ √2 ‖[ (1/4)(u_j + v_j)^2 ]_{j∈P}‖_1
      = 2^{−3/2} Σ_{j∈P} (u_j + v_j)^2
      ≤ 2^{−3/2} Σ_{j=1}^n (u_j + v_j)^2
      = 2^{−3/2} ‖u + v‖^2,

as requested. □

(The third line uses ‖[u_j v_j]_{j∈M}‖_1 ≤ ‖[u_j v_j]_{j∈P}‖_1, which follows from u^T v ≥ 0; the fourth uses |u_j v_j| ≤ (1/4)(u_j + v_j)^2 for j ∈ P.)
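
The inequality is easy to spot-check numerically on random vectors with u^T v ≥ 0 (my own sanity check, not part of the proof):

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(10_000):
        u, v = rng.standard_normal(8), rng.standard_normal(8)
        if u @ v < 0:
            v = -v                        # enforce u^T v >= 0
        lhs = np.linalg.norm(u * v)       # ||UVe||: norm of elementwise products
        rhs = 2 ** (-1.5) * np.linalg.norm(u + v) ** 2
        assert lhs <= rhs + 1e-12
    print("Lemma 3 verified on 10,000 random pairs")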

IPM Technical Results (cont'd)

Lemma 4. If (x, y, s) ∈ N_2(θ) for some θ ∈ (0, 1), then

(1 − θ)µ ≤ x_j s_j ≤ (1 + θ)µ  ∀j.

In other words,

min_{j∈{1..n}} x_j s_j ≥ (1 − θ)µ  and  max_{j∈{1..n}} x_j s_j ≤ (1 + θ)µ.

Proof: Since ‖x‖_∞ ≤ ‖x‖_2, from the definition of N_2(θ),

N_2(θ) = {(x, y, s) ∈ F^0 : ‖XSe − µe‖ ≤ θµ},

we conclude

‖XSe − µe‖_∞ ≤ ‖XSe − µe‖ ≤ θµ.

Hence |x_j s_j − µ| ≤ θµ ∀j, which is equivalent to

−θµ ≤ x_j s_j − µ ≤ θµ  ∀j.

Thus

(1 − θ)µ ≤ x_j s_j ≤ (1 + θ)µ  ∀j. □

IPM Technical Results (cont'd)

Lemma 5. If (x, y, s) ∈ N_2(θ) for some θ ∈ (0, 1), then

‖XSe − σµe‖^2 ≤ θ^2 µ^2 + (1 − σ)^2 µ^2 n.

Proof: Note first that

e^T (XSe − µe) = x^T s − µ e^T e = nµ − nµ = 0.

Therefore

‖XSe − σµe‖^2 = ‖(XSe − µe) + (1 − σ)µe‖^2
             = ‖XSe − µe‖^2 + 2(1 − σ)µ e^T (XSe − µe) + (1 − σ)^2 µ^2 e^T e
             ≤ θ^2 µ^2 + (1 − σ)^2 µ^2 n. □

IPM Technical Results (cont'd)

Lemma 6. If (x, y, s) ∈ N_2(θ) for some θ ∈ (0, 1), then

‖ΔX ΔS e‖ ≤ (θ^2 + n(1 − σ)^2) / (2^{3/2} (1 − θ)) µ.

Proof: The third equation in the Newton system gives

S Δx + X Δs = −XSe + σµe.

Having multiplied it with (XS)^{−1/2}, we obtain

X^{−1/2} S^{1/2} Δx + X^{1/2} S^{−1/2} Δs = (XS)^{−1/2} (−XSe + σµe).

Proof (cont'd): Define u = X^{−1/2} S^{1/2} Δx and v = X^{1/2} S^{−1/2} Δs, and observe that (by Lemma 1)

u^T v = Δx^T Δs = 0.

Now apply Lemma 3:

‖ΔX ΔS e‖ = ‖(X^{−1/2} S^{1/2} ΔX)(X^{1/2} S^{−1/2} ΔS) e‖
         ≤ 2^{−3/2} ‖X^{−1/2} S^{1/2} Δx + X^{1/2} S^{−1/2} Δs‖^2
         = 2^{−3/2} ‖(XS)^{−1/2} (−XSe + σµe)‖^2
         = 2^{−3/2} Σ_{j=1}^n (−x_j s_j + σµ)^2 / (x_j s_j)
         ≤ 2^{−3/2} ‖XSe − σµe‖^2 / min_j x_j s_j
         ≤ (θ^2 + n(1 − σ)^2) / (2^{3/2} (1 − θ)) µ   (by Lemmas 4 and 5). □

Magic Numbers

We have previously set two parameters for the short-step path-following method: θ ∈ [0.05, 0.1] and β ∈ [0.05, 0.1]. Now it's time to justify this particular choice.

Both θ and β have to be small to make sure that a full step in the Newton direction does not take the new iterate outside the neighbourhood N_2(θ).

θ controls the proximity to the central path; β controls the progress to optimality.

Magic numbers choice lemma

Lemma 7. If θ ∈ [0.05, 0.1] and β ∈ [0.05, 0.1], then

(θ^2 + n(1 − σ)^2) / (2^{3/2} (1 − θ)) ≤ σθ.

Proof: Recall that σ = 1 − β/√n. Hence n(1 − σ)^2 = β^2 and, for any β ∈ [0.05, 0.1] (and any n ≥ 1), σ ≥ 0.9. Substituting θ ∈ [0.05, 0.1] and β ∈ [0.05, 0.1], we obtain

(θ^2 + n(1 − σ)^2) / (2^{3/2} (1 − θ)) ≤ (0.1^2 + 0.1^2) / (2^{3/2} · 0.9) = 0.02 / (2^{3/2} · 0.9) ≤ 0.9 · 0.05 ≤ σθ. □
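
The inequality is also easy to verify numerically over a grid of the admissible parameters (my own check):

    import numpy as np

    for theta in np.linspace(0.05, 0.1, 6):
        for beta in np.linspace(0.05, 0.1, 6):
            for n in [1, 2, 10, 100, 10_000]:
                sigma = 1 - beta / np.sqrt(n)
                lhs = (theta**2 + n * (1 - sigma)**2) / (2**1.5 * (1 - theta))
                assert lhs <= sigma * theta
    print("Lemma 7 inequality holds on the whole grid")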

Magic Numbers

Set θ = 0.07 and β = 0.07. 0.07 is a Super Number: "The name is Bond, James Bond."

Full Newton step in N_2(θ)

Lemma 8. Suppose (x, y, s) ∈ N_2(θ) and (Δx, Δy, Δs) is the Newton direction computed from the system (1). Then the new iterate (x̄, ȳ, s̄) = (x, y, s) + (Δx, Δy, Δs) satisfies (x̄, ȳ, s̄) ∈ N_2(θ), i.e.

‖X̄ S̄ e − µ̄ e‖ ≤ θ µ̄.

Proof: From Lemma 2, the new iterate satisfies x̄^T s̄ = n µ̄ = nσµ, so we have to prove that ‖X̄ S̄ e − µ̄ e‖ ≤ θ µ̄. For a given component j ∈ {1..n}, we have

x̄_j s̄_j − µ̄ = (x_j + Δx_j)(s_j + Δs_j) − µ̄
            = x_j s_j + (s_j Δx_j + x_j Δs_j) + Δx_j Δs_j − µ̄
            = x_j s_j + (−x_j s_j + σµ) + Δx_j Δs_j − σµ
            = Δx_j Δs_j.

Proof (cont'd): Thus, from Lemmas 6 and 7, we get

‖X̄ S̄ e − µ̄ e‖ = ‖ΔX ΔS e‖ ≤ (θ^2 + n(1 − σ)^2) / (2^{3/2} (1 − θ)) µ ≤ σθµ = θ µ̄. □

A property of the log function

Lemma 9. For all δ > −1:

ln(1 + δ) ≤ δ.

Proof: Consider the function

f(δ) = δ − ln(1 + δ)

and its derivative

f′(δ) = 1 − 1/(1 + δ) = δ/(1 + δ).

Obviously f′(δ) < 0 for δ ∈ (−1, 0) and f′(δ) > 0 for δ ∈ (0, ∞). Hence f has a minimum at δ = 0, and f(0) = 0. Consequently, for any δ ∈ (−1, ∞), f(δ) ≥ 0, i.e. δ − ln(1 + δ) ≥ 0. □

O(√n) Complexity Result

Theorem 10. Given ε > 0, suppose that a feasible starting point (x^0, y^0, s^0) ∈ N_2(0.1) satisfies

(x^0)^T s^0 = nµ^0, where µ^0 ≤ 1/ε^κ,

for some positive constant κ. Then there exists an index K with

K = O(√n ln(1/ε))

such that

µ^k ≤ ε  ∀k ≥ K.

O(√n) Complexity Result

Proof: From Lemma 2, µ^{k+1} = σµ^k. Taking logarithms of both sides of this equality, we obtain

ln µ^{k+1} = ln σ + ln µ^k.

By repeatedly applying this formula and using µ^0 ≤ 1/ε^κ, we get

ln µ^k = k ln σ + ln µ^0 ≤ k ln(1 − β/√n) + κ ln(1/ε).

From Lemma 9 we have ln(1 − β/√n) ≤ −β/√n. Thus

ln µ^k ≤ k(−β/√n) + κ ln(1/ε).

To satisfy µ^k ≤ ε, we need

k(−β/√n) + κ ln(1/ε) ≤ ln ε = −ln(1/ε).

This inequality holds for any k ≥ K, where

K = ((κ + 1)/β) √n ln(1/ε). □
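
The bound K = ((κ + 1)/β) √n ln(1/ε) is easy to tabulate; the parameter values below are my own illustration of the O(√n ln(1/ε)) growth:

    import numpy as np

    kappa, beta, eps = 1.0, 0.1, 1e-6
    for n in [100, 10_000, 1_000_000]:
        K = (kappa + 1) / beta * np.sqrt(n) * np.log(1 / eps)
        print(f"n={n:>9}:  K ~ {K:13,.0f} iterations")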

Polynomial Complexity Result

Main ingredients of the polynomial complexity result for the short-step path-following algorithm:

- Stay close to the central path: all iterates stay in the N_2(θ) neighbourhood of the central path.
- Make (slow) progress towards optimality: systematically reduce the duality gap,

µ^{k+1} = σµ^k, where σ = 1 − β/√n,

for some β ∈ (0, 1).

Reading about IPMs

S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.

J. Gondzio, Interior point methods 25 years later, European Journal of Operational Research 218 (2012), pp. 587–601. http://www.maths.ed.ac.uk/~gondzio/reports/ipmxxv.html

J. Gondzio and A. Grothey, Direct solution of linear systems of size 10^9 arising in optimization with interior point methods, in: Parallel Processing and Applied Mathematics PPAM 2005, R. Wyrzykowski, J. Dongarra, N. Meyer and J. Wasniewski (eds.), Lecture Notes in Computer Science 3911, Springer-Verlag, Berlin, 2006, pp. 513–525.

OOPS: Object-Oriented Parallel Solver, http://www.maths.ed.ac.uk/~gondzio/parallel/solver.html