A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes


A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

Murat Mut, Tamás Terlaky
Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA

August, 2014

Abstract

It is an open question whether there is an interior-point algorithm for linear optimization problems with a lower iteration-complexity than the classical bound O(√n log(µ⁰/µ¹)). This paper provides a negative answer to that question for a variant of the Mizuno-Todd-Ye predictor-corrector algorithm. In fact, we prove that for any ɛ > 0, there is a redundant Klee-Minty cube for which the aforementioned algorithm requires Ω(n^(1/2−ɛ)) iterations to reduce the barrier parameter by at least a constant factor. This is provably the first case of an adaptive-step interior-point algorithm for which the classical iteration-complexity upper bound is shown to be tight.

1 Introduction

The paper of Karmarkar [5] in 1984 launched the field of interior-point methods (IPMs). Since then, IPMs have changed the landscape of optimization theory and have been extended successfully to linear, nonlinear, and conic linear optimization [10]. For linear optimization (LO) problems, to reduce the barrier parameter from µ⁰ to µ¹, the best known iteration-complexity upper bound is O(√n log(µ⁰/µ¹)). In practice, however, IPMs require far fewer iterations than predicted by the theory. It has been conjectured that the required number of iterations grows logarithmically in the number of variables [4]. Sonnevend et al. [13] showed that for two distinct special classes of LO problems, we have the

complexity upper bounds O(n^(1/4) log(µ⁰/µ¹)) and O(n^(3/8) log(µ⁰/µ¹)). Using an anticipated iteration-complexity analysis, [6] gives an O(n^(1/4) log(µ⁰/µ¹)) iteration-complexity bound for the Mizuno-Todd-Ye predictor-corrector (MTY P-C) algorithm. Huhn and Borgwardt [3] present a thorough probabilistic analysis of the iteration-complexity of IPMs and establish that, under the rotation-symmetry model with certain probabilistic assumptions, the average iteration-complexity is strongly polynomial. Another direction of research regarding the iteration-complexity of IPMs is to construct worst-case examples. Sonnevend et al. [13] showed that a variant of the MTY predictor-corrector algorithm requires Ω(n^(1/3)) iterations to reduce the duality gap by log n for certain LO problems. A similar result was obtained by Todd et al. [15] for the primal-dual affine-scaling algorithm and was later extended by Todd and Ye [16] to long-step primal-dual IPMs; they showed that these algorithms take Ω(n^(1/3)) iterations to reduce the duality gap by a constant. In a series of papers [1, 2, 8, 9], LO problems have been constructed whose central paths make a large number of sharp turns, with the intuitive idea that for a path-following algorithm each turn should lead to an extra Newton step. These constructions share a common feature: the (dual) feasible set is a perturbed Klee-Minty (KM) cube, and the central path visits all the vertices of the KM cube. In [9], for instance, the authors show that the central path makes Ω(√n/log n) sharp turns. A curvature integral developed in [13, 14] accurately estimates the number of iterations of a variant of the MTY predictor-corrector algorithm; see Section 2. This curvature integral is one of the main tools of our paper, and we will refer to this curvature as Sonnevend's curvature. In this paper, we build our work upon the KM construction in [9].
The main argument of the paper can be summarized as follows. We first prove that a KM construction [9], with a carefully chosen neighborhood of the central path which depends on the dimension of the cube, visits every vertex of the cube in such a way that following the central path within that neighborhood requires an exponential number of steps. By Theorem 2.1, this yields a large lower bound for the Sonnevend curvature. Then, by using a modified hybrid version of that construction as well as Theorem 2.1 once again, we are able to conclude that for any ɛ > 0, there is a redundant hybrid version of the KM cube for which the MTY predictor-corrector algorithm requires Ω(n^(1/2−ɛ) log(µ⁰/µ¹)) iterations, where log(µ⁰/µ¹) = O(log n). Hence, by a rigorous analysis, our modified KM construction provides the first case of an IPM, the MTY predictor-corrector algorithm, for which the classical iteration-complexity upper bound is essentially tight.
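The growth rate behind the n^(1/2−ɛ) exponent can be checked numerically. In the constructions used below, the cube dimension m and the problem size n are related by n = Θ(m·2^(2m)), so 2^m = n^α with α → 1/2 as m grows. A minimal sketch (the constant in n = m·2^(2m) is set to 1 purely for illustration):

```python
import math

def exponent(m):
    """With n = m * 2**(2*m), return alpha such that 2**m = n**alpha."""
    n = m * 2 ** (2 * m)
    return (m * math.log(2)) / math.log(n)

# alpha increases toward 1/2, so 2**m = n**(1/2 - eps) for every fixed
# eps > 0 once m is large enough.
for m in (10, 20, 40, 80):
    print(m, round(exponent(m), 4))
```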

In the rest of this section, the basic terminology used in this paper is presented. Let A be an m × n matrix of full rank. For c ∈ R^n and b ∈ R^m, we consider the standard-form primal and dual linear optimization problems

    min cᵀx            max bᵀy
    s.t. Ax = b        s.t. Aᵀy + s = c        (1)
         x ≥ 0,             s ≥ 0,

where x, s ∈ R^n and y ∈ R^m are vectors of variables. Denote the sets of primal and dual feasible solutions by P = {x ∈ R^n : Ax = b, x ≥ 0} and D = {(y, s) ∈ R^m × R^n : Aᵀy + s = c, s ≥ 0}, and the sets of strictly feasible primal and dual solutions by P⁺ and D⁺, respectively. Without loss of generality, see e.g. [12], we may assume that P⁺ ≠ ∅ and D⁺ ≠ ∅. For a parameter µ > 0 and a vector w > 0, the w-weighted path equations are given by

    Ax = b, x ≥ 0
    Aᵀy + s = c, s ≥ 0        (2)
    xs = µw,

where uv denotes [u₁v₁, …, u_nv_n]ᵀ for u, v ∈ R^n. For w = e, with e being the all-one vector, equation (2) gives the central path equations.

2 IPMs and Sonnevend's curvature of the central path

First we briefly review the algorithms relevant to this paper. Roughly speaking, path-following IPMs differ in the way the barrier parameter µ⁺ := (1−θ)µ is chosen and for which values of µ the Newton steps are calculated. While for short-step IPMs we have θ = Θ(1/√n), predictor-corrector type algorithms allow a larger θ, hence a larger reduction in µ. Given µ > 0 and β > 0, we define the β-neighborhood of the point on the central path corresponding to µ as

    N(β, µ) := {(x, s) ∈ P⁺ × D⁺ : ‖xs/µ − e‖ ≤ β}.        (3)

The β-neighborhood of the central path is defined as N(β) := ∪_{µ>0} N(β, µ). Both the algorithm of [14] and the MTY predictor-corrector algorithm use two nested neighborhoods N(β₀) and N(β₁) for 0 < β₀ < β₁ < 1. The MTY predictor-corrector algorithm alternates between two search directions. The predictor search direction is used within the smaller neighborhood N(β₀), and it aims to reduce µ to zero. Let (x, s) be the current iterate, (Δx, Δs) the predictor search direction, and (x⁺, s⁺) := (x + θΔx, s + θΔs). The MTY

predictor-corrector algorithm and the algorithm in [14] differ in the way the value of θ is determined. In the MTY predictor-corrector algorithm, θ is chosen as the largest step for which (x⁺, s⁺) stays within the larger neighborhood N(β₁). In the algorithm of [14], the value of θ is determined as the largest number for which ‖x⁺s⁺/µ⁺ − ξ‖ ≤ β₁, where ξ = xs/µ. Then a pure centering step is taken, which takes the iterate back to the smaller neighborhood N(β₀) in such a way that the normalized duality gap µ = xᵀs/n does not change. Both algorithms can take long steps; in fact, it is known [11, 14] that θ_k → 1 as k → ∞, where θ_k is the step length of the predictor direction at iteration k. For the rest of the paper, we will refer to both algorithms as the MTY predictor-corrector algorithm.

Sonnevend's curvature, introduced in [13], is closely related to the iteration-complexity of a variant of the MTY predictor-corrector algorithm. Let κ(µ) := ‖µ ẋ(µ) ṡ(µ)‖^(1/2). Stoer et al. [14] proved that their predictor-corrector algorithm has a complexity bound which can be expressed in terms of κ(µ).

Theorem 2.1. [14] Let the nested neighborhood parameters β₀, β₁ of the MTY predictor-corrector algorithm satisfy β₀ + β₁ < 1/2. Let N be the number of iterations of the MTY predictor-corrector algorithm to reduce the barrier parameter from µ⁰ to µ¹. Suppose κ(µ) ≥ ν for some constant ν > 0 on µ ∈ [µ¹, µ⁰]. Then for some universal constants C₁ and C₂ that depend only on the neighborhood of the central path, we have

    C₃ ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ ≤ N ≤ C₁ ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ + C₂ log(µ⁰/µ¹).

The constant C₃ depends on ν as well as on the neighborhood of the central path.

The following proposition states the basic properties of Sonnevend's curvature.

Proposition 2.2. [13] The following hold.

1. We have

    κ(µ)² = ‖ µṡ(µ)/s(µ) − (µṡ(µ)/s(µ))² ‖,        (4)

where the vector operations are taken componentwise.

2. We have ‖µṡ(µ)/s(µ)‖ ≤ √n and κ(µ)² ≤ 2n, implying that ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ = O(√n log(µ⁰/µ¹)).
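As an illustration of the predictor-corrector scheme just described, the sketch below runs a few MTY-type iterations on a tiny LP whose central path has an easily computed starting point. It is a minimal numerical illustration, not the algorithm analyzed in this paper: the helper names (`newton_direction`, `proximity`, `mty_step`), the LP instance, the crude grid search for the step length θ, and the neighborhood radius β₁ = 0.5 are all our own illustrative choices.

```python
import numpy as np

def newton_direction(A, x, s, rhs):
    """Solve A dx = 0, A^T dy + ds = 0, s*dx + x*ds = rhs (componentwise)."""
    M = (A * (x / s)) @ A.T                 # normal-equations matrix A diag(x/s) A^T
    dy = np.linalg.solve(M, -A @ (rhs / s))
    ds = -A.T @ dy
    dx = (rhs - x * ds) / s
    return dx, ds

def proximity(x, s):
    """Distance ||xs/mu - e|| to the central path, with mu = x.s/n."""
    mu = x @ s / x.size
    return float(np.linalg.norm(x * s / mu - 1.0))

def mty_step(A, x, s, beta1):
    # Predictor: aim at mu = 0 (rhs = -xs); take the largest grid step that
    # keeps the iterate positive and inside the wide neighborhood N(beta1).
    dx, ds = newton_direction(A, x, s, -x * s)
    for t in np.linspace(0.99, 0.01, 99):
        xt, st = x + t * dx, s + t * ds
        if xt.min() > 0 and st.min() > 0 and proximity(xt, st) <= beta1:
            x, s = xt, st
            break
    # Corrector: one pure centering Newton step at the new mu (rhs = mu*e - xs);
    # this step leaves the normalized duality gap mu unchanged.
    mu = x @ s / x.size
    dx, ds = newton_direction(A, x, s, mu - x * s)
    return x + dx, s + ds

# Tiny LP: min c.x  s.t.  x1 + x2 + x3 = 1, x >= 0.  For mu0 = 1 an exactly
# centered start is x_i = mu0/(c_i - y), s_i = c_i - y, with y solving
# sum_i mu0/(c_i - y) = 1 (bisection on y < min(c)).
A = np.ones((1, 3))
c = np.array([1.0, 2.0, 3.0])
mu0, lo, hi = 1.0, -10.0, 1.0 - 1e-9
for _ in range(100):
    y = (lo + hi) / 2
    if np.sum(mu0 / (c - y)) > 1.0:
        hi = y
    else:
        lo = y
s = c - y
x = mu0 / s
for _ in range(5):
    x, s = mty_step(A, x, s, beta1=0.5)
print(x @ s / 3)   # the barrier parameter, reduced from mu0 = 1
```

The direction solver eliminates dx and ds to reach the usual normal-equations system A diag(x/s) Aᵀ dy = −A(rhs/s), which is the standard way such Newton systems are reduced in practice.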

3 KM cube construction

First we recall the KM construction in [9] and review its fundamental properties:

    max −y_m
    s.t. 0 ≤ y₁ ≤ 1
         ρ y_{k−1} ≤ y_k ≤ 1 − ρ y_{k−1}   for k = 2, …, m
         0 ≤ d₁ + y₁        repeated h₁ times        (5)
         0 ≤ d₂ + y₂        repeated h₂ times
         ⋮
         0 ≤ d_m + y_m      repeated h_m times.

Certain variants of the simplex method take 2^m − 1 pivot steps to solve this problem. The simplex path for these variants starts from (0, …, 0, 1)ᵀ and visits all the vertices, ordered by the decreasing value of the last coordinate y_m, until reaching the optimal point, which is the origin. As in [9], we fix ρ(m) := 1/(2(m+1)) and d := (1/ρ^(m−1), 1/ρ^(m−2), …, 1/ρ, 0)ᵀ. We denote the m-dimensional KM cube by KM(m, ρ(m)); see Figure 1 for KM(m, ρ(m)) with m = 2. Let the slack variables be s_k = y_k − ρ y_{k−1} and s̄_k = 1 − ρ y_{k−1} − y_k for k = 2, …, m, with the convention s₁ = y₁ and s̄₁ = 1 − y₁. There is a one-to-one correspondence between the vertices of KM(m, ρ(m)) and the m-tuples v_i ∈ {0, 1}^m, i = 1, …, 2^m, as follows. Each vertex of KM(m, ρ(m)) is determined by whether exactly one of s_i = 0 or s̄_i = 0 holds for each i = 1, …, m in (5). If s_i = 0, the i-th coordinate of the corresponding m-tuple in {0, 1}^m is 0; if s̄_i = 0, it is 1. For our purpose, we describe the relevant features of KM(m, ρ(m)) inductively as follows. First we describe the order of the set of vertices V(m) of KM(m, ρ(m)) which the simplex path visits. Note that V(m) is an encoding of the vertices of KM(m, ρ(m)); its elements are not the actual vertex points in R^m. For m = 2, let

    V(2) = {v₁, v₂, v₃, v₄} = {(0, 1), (1, 1), (1, 0), (0, 0)}.        (6)

Figure 1 shows the vertices of KM(m, ρ(m)). Then let

    V(m + 1) = {(v_{2^m}, 1), (v_{2^m −1}, 1), …, (v₁, 1), (v₁, 0), (v₂, 0), …, (v_{2^m}, 0)}.        (7)

It can be shown [9] that there exists a redundant KM(m, ρ(m)) whose central path, denoted by CP(m), visits the vertices in the order given in the set V(m). Figures 2 and 3 show the central path for m = 2 and m = 3.
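The recursions (6)-(7) and the vertex encoding are easy to state programmatically. The sketch below (the helper names are ours) generates V(m), maps each 0-1 tuple to its vertex of KM(m, ρ(m)) via "coordinate 0 means s_i = 0, coordinate 1 means s̄_i = 0", and checks the claim that the path visits the vertices in decreasing order of y_m:

```python
def vertex_order(m):
    """V(m) from equations (6)-(7): the vertex sequence along the simplex path."""
    if m == 2:
        return [(0, 1), (1, 1), (1, 0), (0, 0)]
    prev = vertex_order(m - 1)
    # V(m) = reversed V(m-1) with last coordinate 1, then V(m-1) with 0.
    return [v + (1,) for v in reversed(prev)] + [v + (0,) for v in prev]

def vertex_point(t, rho):
    """Vertex of KM(m, rho) encoded by tuple t: coordinate 0 means the lower
    slack s_k = y_k - rho*y_{k-1} is zero, coordinate 1 means the upper slack
    sbar_k = 1 - rho*y_{k-1} - y_k is zero (with y_0 taken as 0)."""
    y, prev = [], 0.0
    for bit in t:
        yk = rho * prev if bit == 0 else 1.0 - rho * prev
        y.append(yk)
        prev = yk
    return y

m = 4
rho = 1.0 / (2 * (m + 1))
order = vertex_order(m)
last = [vertex_point(t, rho)[-1] for t in order]
# 2^m distinct vertices, consecutive ones adjacent on the cube,
# and y_m strictly decreasing along the path.
print(len(order), all(a > b for a, b in zip(last, last[1:])))
```

Note that (7) produces a reflected-Gray-code ordering: consecutive vertices differ in exactly one coordinate, which is exactly the "one sharp turn per edge" structure exploited later.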

Figure 1: V(2) = {v₁, v₂, v₃, v₄} = {(0, 1), (1, 1), (1, 0), (0, 0)} shows the vertices of the KM(m, ρ(m)) cube for m = 2. Figure 2: The central path visits the vertices V(2) = {v₁, v₂, v₃, v₄} of the KM(m, ρ(m)) cube for m = 2 in the given order as µ decreases.

Next we define inductively a tube along the edges of the simplex path in KM(m, ρ(m)) as follows. Let δ ≤ 1/(4(m+1)). Let T_δ^U(2) = {y ∈ R² : s̄₂ ≤ δ}, T_δ^L(2) = {y ∈ R² : s₂ ≤ δ}, and C_δ(m) = {y ∈ R^m : s_m ≥ δ, s̄_m ≥ δ} for m ≥ 2. Note that T_δ^U(2) and T_δ^L(2) correspond to tubes for the upper and lower facets of KM(2, ρ(2)), respectively, while C_δ(2) corresponds to the central part of KM(2, ρ(2)); see Figure 1. By T_δ(m), denote the union T_δ^L(m) ∪ T_δ^U(m) ∪ C_δ(m). Then for m ≥ 2, define T_δ^U(m + 1) = {y ∈ R^(m+1) : s̄_{m+1} ≤ δ, (y₁, …, y_m) ∈ T_δ(m)} and T_δ^L(m + 1) = {y ∈ R^(m+1) : s_{m+1} ≤ δ, (y₁, …, y_m) ∈ T_δ(m)}. Notice that T_δ^U(3) is a tube that corresponds to the upper facet of KM(3, ρ(3)), where y₃ = 1 − ρ y₂. Similarly, T_δ^L(3) is a tube that corresponds to the lower facet of KM(3, ρ(3)), where y₃ = ρ y₂. Also, these upper and lower facets are KM(2, ρ(3)) cubes themselves; see Figure 3. Hence by

Figure 3: Central path in the redundant cube KM(m, ρ(m)) for m = 3. Figure 4: Illustration of the tube T_δ(m) for m = 3.

identifying the first m coordinates of (y₁, …, y_m, y_{m+1}) inside KM(m + 1, ρ(m + 1)) with (y₁, …, y_m) ∈ KM(m, ρ(m + 1)), and considering the assumption that δ is decreasing in m, we can write T_δ^U(m + 1) ⊆ T_δ(m) and T_δ^L(m + 1) ⊆ T_δ(m); see Figure 4. We also define a δ-neighborhood of a vertex of KM(m, ρ(m)) by whether exactly one of s_i ≤ δ or s̄_i ≤ δ holds for each i = 1, …, m in (5). Figure 1 displays the δ-neighborhoods of the vertices V(2) = {v₁, v₂, v₃, v₄} of the KM(m, ρ(m)) cube for m = 2. The following proposition is essentially Proposition 2.2 in [9].

Proposition 3.1. In (5), one can choose the parameters in such a way that the central path CP(m) in KM(m, ρ(m)) stays inside the tube T_δ(m). In particular, one can choose ρ = 1/(2(m+1)) and δ ≤ 1/(4(m+1)) so that n = O(m 2^(2m)). As µ decreases, the central path visits the δ-neighborhoods of the vertices in the order given by (7). Moreover, the number of inequalities n is linear in 1/δ.

Proof. See Proposition 2.2 in [9].
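The slack computations and the δ-neighborhood test of a vertex are straightforward to express in code. A small sketch (the helper names are ours; ρ and δ follow Proposition 3.1):

```python
def slacks(y, rho):
    """Lower/upper slacks of KM(m, rho): s_k = y_k - rho*y_{k-1} and
    sbar_k = 1 - rho*y_{k-1} - y_k, with y_0 taken as 0."""
    s, sbar, prev = [], [], 0.0
    for yk in y:
        s.append(yk - rho * prev)
        sbar.append(1.0 - rho * prev - yk)
        prev = yk
    return s, sbar

def vertex_neighborhood(y, rho, delta):
    """Return the 0-1 tuple of the vertex whose delta-neighborhood contains y
    (coordinate 0 if s_i <= delta, 1 if sbar_i <= delta), or None if, for some
    coordinate, neither or both slacks are within delta."""
    s, sbar = slacks(y, rho)
    t = []
    for si, sbi in zip(s, sbar):
        if (si <= delta) == (sbi <= delta):   # not "exactly one of the two"
            return None
        t.append(0 if si <= delta else 1)
    return tuple(t)

m = 2
rho, delta = 1.0 / (2 * (m + 1)), 1.0 / (4 * (m + 1))
print(vertex_neighborhood([0.01, 0.99], rho, delta))   # near vertex (0, 1)
print(vertex_neighborhood([0.5, 0.5], rho, delta))     # central, near no vertex
```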

Now for KM(m, ρ(m)), we identify two regions R_δ^U and R_δ^L within the tube T_δ(m) in such a way that going from R_δ^U to R_δ^L (and vice versa) with line segments staying inside the tube T_δ(m) requires Ω(2^m) iterations. Let

    R_δ^U := {y ∈ KM(m, ρ(m)) : s₁ ≤ δ, s₂ ≤ δ, …, s_{m−1} ≤ δ, s̄_m ≤ δ}        (8)

and

    R_δ^L := {y ∈ KM(m, ρ(m)) : s₁ ≤ δ, s₂ ≤ δ, …, s_{m−1} ≤ δ, s_m ≤ δ}.        (9)

We have the following.

Proposition 3.2. For KM(m, ρ(m)), let y^U ∈ R_δ^U and y^L ∈ R_δ^L. Then, staying inside the tube T_δ(m), one requires at least 2^(m−1) line segments to reach y^U from y^L, and vice versa.

Proof. With the parameters chosen as in Proposition 3.1, we first show that T_δ^U(m) and T_δ^L(m) do not intersect for any m. Suppose by contradiction that there is a y ∈ T_δ^U(m) ∩ T_δ^L(m). From the definitions of T_δ^U(m) and T_δ^L(m), we have s̄_m = 1 − ρ y_{m−1} − y_m ≤ δ and s_m = y_m − ρ y_{m−1} ≤ δ. Adding these two inequalities, we get 1 − 2ρ y_{m−1} ≤ 2δ. By the choice of ρ and δ, it is easy to see that this leads to the contradiction y_{m−1} > 1. Hence T_δ^U(m) ∩ T_δ^L(m) = ∅. The rest of the proof is by induction on m. For m = 2, let y^U ∈ R_δ^U and y^L ∈ R_δ^L with δ ≤ 1/(4(m+1)). Then for y^U we have s₁ = y₁ ≤ δ and s̄₂ ≤ δ, which implies that y₂ ≥ 1 − δ − ρδ ≥ 1 − 2δ ≥ 5/6. Analogously, for y^L we have s₁ = y₁ ≤ δ and s₂ ≤ δ, which implies y₂ ≤ δ + ρy₁ ≤ 2δ ≤ 1/6. Clearly, staying inside the tube T_δ(2), it takes at least 2 line segments to reach a point with y₂ ≤ 1/6 from a point with y₂ ≥ 5/6; see Figure 1. As inductive step, suppose that to reach any point in R_δ^L from a point in R_δ^U, with R_δ^L ⊆ KM(m−1, ρ(m−1)) and R_δ^U ⊆ KM(m−1, ρ(m−1)), one requires at least 2^(m−2) line segments staying inside T_δ(m−1). Let y^U ∈ R_δ^U and y^L ∈ R_δ^L inside T_δ(m) ⊆ KM(m, ρ(m)). We distinguish two points p₁ and p₂ such that p₁ ∈ {y ∈ KM(m, ρ(m)) : s₁ ≤ δ, s₂ ≤ δ, …, s_{m−2} ≤ δ, s̄_{m−1} ≤ δ, s̄_m ≤ δ} and p₂ ∈ {y ∈ KM(m, ρ(m)) : s₁ ≤ δ, s₂ ≤ δ, …, s_{m−2} ≤ δ, s̄_{m−1} ≤ δ, s_m ≤ δ}. Note that the point p₁ belongs to the δ-neighborhood of the vertex v_{2^(m−1)} = (0, 0, …, 0, 1, 1), and the point p₂ belongs to the δ-neighborhood of the vertex v_{2^(m−1)+1} = (0, 0, …, 0, 1, 0).

Then, using the inductive definitions of T_δ^U(m) and T_δ^L(m), it is easy to see that y^U, p₁ ∈ T_δ^U(m) and p₂, y^L ∈ T_δ^L(m). By the inductive hypothesis, one needs at least 2^(m−2) line segments to reach p₁ from y^U staying inside the tube T_δ^U(m) ⊆ T_δ(m−1). Similarly, one needs at least 2^(m−2) line segments to reach y^L from p₂ staying inside the tube T_δ^L(m) ⊆ T_δ(m−1). Moreover, since by the first part of the proof we have T_δ^U(m) ∩ T_δ^L(m) = ∅, it follows that to reach y^L from y^U one needs to traverse within T_δ(m−1) twice, each time requiring at least 2^(m−2) steps. This proves that one requires at least 2^(m−1) line segments to reach y^U from y^L, hence the proof is complete.

4 Neighborhood of the KM cube central path

In Section 3, we showed that with n = O(m 2^(2m)) redundant constraints, the central path CP(m) stays inside a tube T_δ(m). Moreover, we proved that it takes at least 2^(m−1) line segments to reach a point in R_δ^L, close to the optimal solution of (5), from a point in R_δ^U, close to the analytic center of KM(m, ρ(m)). However, path-following IPMs, including the MTY predictor-corrector algorithm, use the neighborhood N(β) as opposed to the tube neighborhood T_δ(m) we used in Section 3. In this section we analyze the N(β) neighborhood for the cube KM(m, ρ(m)) and prove that for β = Ω(1/(m+1)), we have N(β) ⊆ T_δ(m). In other words, with appropriately chosen neighborhood parameters for KM(m, ρ(m)), all the iterates of the MTY predictor-corrector algorithm stay inside the tube T_δ(m). Hence, we can draw the conclusion that for KM(m, ρ(m)), the MTY predictor-corrector algorithm requires Ω(2^m) iterations with the neighborhood N(β), where β = Ω(1/(m+1)). In order to find the largest β for which N(β) ⊆ T_δ(m), we will use weighted paths. The following lemma is essentially Lemma 4.1 in [14].

Lemma 4.1. Fix µ and let w > 0 be such that ‖w − e‖ ≤ ɛ. Let (x(w), y(w), s(w)) denote the point of the w-weighted path, which is the solution set of (2).
Let Δs_i = s_i(w) − s_i, where the s_i are the coordinates of the central path point, for i = 1, …, n. Then we have |Δs_i| ≤ 2ɛ s_i for i = 1, …, n.

When we apply Lemma 4.1 to KM(m, ρ(m)), we obtain the following result.

Lemma 4.2. There exists a KM(m, ρ(m)) with n = O(m 2^(2m)) such that all the w-weighted paths with ‖w − e‖ ≤ β := δ/4 stay inside the tube T_δ(m) with δ ≤ 1/(4(m+1)).

Proof. Let δ ≤ 1/(4(m+1)). Then, from Proposition 3.1, we know that there exists a KM(m, ρ(m)) with n = O(m 2^(2m)) so that the central path stays inside the tube T_{δ/2}(m). Choose β = δ/4 for

KM(m, ρ(m)) so that ‖w − e‖ ≤ β. Since for all the slacks we have s_i ≤ 1 and s̄_i ≤ 1, Lemma 4.1 implies that s_i(w) ≤ s_i + δ/2 and s̄_i(w) ≤ s̄_i + δ/2. Then, whenever s_i ≤ δ/2 or s̄_i ≤ δ/2, we have s_i(w) ≤ δ and s̄_i(w) ≤ δ, respectively. Since a tube T_δ(m) with a general δ inside KM(m, ρ(m)) is determined by these slacks, it follows that all w-weighted paths stay inside the tube T_δ(m) with δ ≤ 1/(4(m+1)). This concludes the proof.

The next lemma proves a result analogous to Lemma 4.2, tailored for R_δ^U and R_δ^L.

Lemma 4.3. Let δ ≤ 1/(4(m+1)) and fix β := δ/4. Suppose that y(µ⁰) ∈ R_{δ/2}^U for some µ⁰. Then N(β, µ⁰) ⊆ R_δ^U. Similarly, if y(µ¹) ∈ R_{δ/2}^L for some µ¹, then N(β, µ¹) ⊆ R_δ^L.

Proof. Suppose that for some µ⁰, y(µ⁰) ∈ R_{δ/2}^U, i.e., s₁ ≤ δ/2, s₂ ≤ δ/2, …, s_{m−1} ≤ δ/2, s̄_m ≤ δ/2. Let y ∈ N(β, µ⁰). Then, for w := xs/µ⁰, we have ‖w − e‖ ≤ β. Since for all the slacks in KM(m, ρ(m)) we have s_i ≤ 1 and s̄_i ≤ 1, Lemma 4.1 implies that s_i(w) ≤ s_i + δ/2 and s̄_i(w) ≤ s̄_i + δ/2. Then, whenever s_i ≤ δ/2 or s̄_i ≤ δ/2, we have s_i(w) ≤ δ and s̄_i(w) ≤ δ, respectively. This proves y ∈ R_δ^U, which implies N(β, µ⁰) ⊆ R_δ^U. The proof of the rest of the claim is similar.

In the rest of this section, we aim to find an interval [µ¹, µ⁰] and an upper bound for log(µ⁰/µ¹) such that the neighborhoods satisfy N(β, µ⁰) ⊆ R_δ^U and N(β, µ¹) ⊆ R_δ^L for some δ and β. Let δ ≤ 1/(4(m+1)) and let (y₁(µ⁰), …, y_m(µ⁰)) be a central path CP(m) point such that s₁ = δ/2, s₂ ≤ δ/2, …, s_{m−1} ≤ δ/2, s̄_m ≤ δ/2. Note that any point satisfying these conditions is inside the δ/2-neighborhood of the vertex point (0, 0, …, 0, 1); hence Proposition 3.1 guarantees the existence of a central path point (y₁(µ⁰), …, y_m(µ⁰)). Then, by using Theorem 3.7 in [9], one can show that µ⁰ ≥ ρ^(m−1) δ/2. Let us fix µ⁰ = ρ^(m−1) δ/2 and let β := δ/4. Then Lemma 4.3 implies that the neighborhood N(β, µ⁰) stays inside the region R_δ^U; hence any point inside the neighborhood N(β, µ⁰) also stays inside the region R_δ^U. Next, we will find a µ¹ such that the neighborhood N(β, µ¹) is within the region R_δ^L. Let (y₁(µ¹), …, y_m(µ¹)) be the central path point such that y_m(µ¹) = ρ^(m−1) δ/2.
Note that since the objective function in (5) is −y_m, a central path point satisfying y_m(µ) = ρ^(m−1) δ/2 exists and is unique. Since from (5) we have ρ y_i ≤ y_{i+1} for i = 1, …, m−1, we obtain y₁(µ¹) ≤ δ/2, y₂(µ¹) ≤ δ/2, …, y_m(µ¹) ≤ δ/2, which in turn implies that s₁(µ¹) ≤ δ/2, s₂(µ¹) ≤ δ/2, …, s_m(µ¹) ≤ δ/2. Then, using Lemma 4.3 once again, we conclude that the neighborhood N(β, µ¹) stays inside the region R_δ^L for β = δ/4. For the central path (2), the duality gap is cᵀx(µ) − bᵀy(µ) = nµ. It is well known (see e.g. [12]) that bᵀy(µ) is monotonically increasing and cᵀx(µ) is monotonically decreasing along the central path. In our case, bᵀy(µ) = −y_m(µ) is increasing to 0 and cᵀx(µ) is monotonically decreasing to 0, i.e., cᵀx(µ) > 0 for all µ > 0. Then nµ = cᵀx(µ) − bᵀy(µ) >

y_m implies that µ > y_m(µ)/n for any point on the central path. Hence, for the central path point for which y_m(µ¹) = ρ^(m−1) δ/2, it follows that µ¹ > ρ^(m−1) δ/(2n). Then, using the fact that n = O(m 2^(2m)), we have log(µ⁰/µ¹) = O(m). The following corollary summarizes our findings.

Corollary 4.4. Let the neighborhood parameters be given as β₀ < β₁ = 1/(16(m+1)) for the MTY predictor-corrector algorithm. Then there exists a KM(m, ρ(m)) with n = O(m 2^(2m)) for which the MTY predictor-corrector algorithm requires at least Ω(2^m) predictor steps to reduce the barrier parameter from µ⁰ to µ¹, where log(µ⁰/µ¹) = O(m).

Proof. Let δ := 1/(4(m+1)) and β₁ = δ/4 = 1/(16(m+1)). We know from Lemma 4.2 that there exists a KM(m, ρ(m)) with n = O(m 2^(2m)) such that N(β₁) ⊆ T_δ(m). Lemma 4.3 shows that there is an interval [µ¹, µ⁰] such that the neighborhoods satisfy N(β₁, µ⁰) ⊆ R_δ^U and N(β₁, µ¹) ⊆ R_δ^L. Hence, starting from an iterate (x⁰, y⁰, s⁰) and µ⁰ such that (x⁰, y⁰, s⁰) ∈ N(β₁, µ⁰) ⊆ R_δ^U, in order to reach an iterate (x¹, y¹, s¹) and µ¹ such that (x¹, y¹, s¹) ∈ N(β₁, µ¹) ⊆ R_δ^L, Proposition 3.2 and Lemma 4.2 imply that one needs Ω(2^m) steps. Since each predictor step is followed by a constant number of corrector steps, it follows that the number of predictor steps is Ω(2^m). Moreover, the discussion after Lemma 4.3 proves that we can choose the interval [µ¹, µ⁰] so that log(µ⁰/µ¹) = O(m). This completes the proof.

5 A worst-case iteration-complexity lower bound for the Sonnevend curvature

In Section 4, we proved that the MTY predictor-corrector algorithm requires Ω(2^m) iterations using the larger neighborhood N(β₁) with β₁ = Ω(1/(m+1)). Our goal in this section is to derive a lower bound for the Sonnevend curvature using the tools from the previous section. To this end, we need to examine the constants in Theorem 2.1 more closely.

Lemma 5.1. Let β₁ be the large neighborhood constant with β₁ ≤ 1/400, and let N be the number of iterations of the MTY predictor-corrector algorithm to reduce the barrier parameter from µ⁰ to µ¹. Then

    N ≤ (4√2/√β₁) ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ + (2/log(1 + β₁/4)) log(µ⁰/µ¹).        (10)

Proof. See Theorem 2.4 and its proof in [14].
The next theorem shows that on the interval [µ¹, µ⁰], the total Sonnevend curvature is of an order comparable to the number of sharp turns of the central path.

Theorem 5.2. There is an integer m₀ > 0 such that for any m ≥ m₀, there exist a KM(m, ρ(m)) and an interval [µ¹, µ⁰] such that the Sonnevend curvature satisfies

    ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ ≥ √n/(8√(log n)) − (log n · log(n + 1))/log 2.        (11)

Proof. Let β₁ = 1/(16(m+1)) and choose the parameters of KM(m, ρ(m)) as ρ = 1/(2(m+1)) and δ = 1/(8(m+1)), so that n = O(m 2^(2m)). Write n = τ m 2^(2m) for some constant τ > 0; then log n = log τ + log m + 2m log 2. This shows that for large enough m, log(µ⁰/µ¹) = O(m). Since we can extend the interval [µ¹, µ⁰] so that it still includes all the sharp turns, we will assume that log(µ⁰/µ¹) = Θ(m). Then Corollary 4.4 applies and we have N ≥ 2^(m−1). Now, using the bound log(1 + ω) ≥ (log 2)ω for 0 ≤ ω ≤ 1, we get from (10)

    2/log(1 + β₁/4) ≤ 8/(β₁ log 2) = 128(m + 1)/log 2.

Using the fact that m ≤ log n, a straightforward calculation shows that (11) holds. The proof is complete.

Corollary 5.3. For any ɛ > 0, there is an integer m₀ > 0 such that for any m ≥ m₀, there exist a KM(m, ρ(m)) and an interval [µ¹, µ⁰] such that ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ ≥ n^(1/2−ɛ) log(µ⁰/µ¹), where log(µ⁰/µ¹) = O(m).

Proof. The claim follows from Theorem 5.2 for large m.

Remark 5.4. Corollary 5.3 yields a negative answer to the question raised by [7], i.e., whether there exists an α < 1/2 such that ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ = O(n^α log(µ⁰/µ¹)) for the class of LO problems.

6 An iteration-complexity lower bound for the MTY predictor-corrector algorithm with constant neighborhood opening

In practice, the MTY predictor-corrector algorithm operates in a larger neighborhood, where β is a constant. In order to conclude an iteration-complexity lower bound for the MTY predictor-corrector algorithm with constant neighborhood opening β by using Theorem 2.1, we need

to show that there is a constant ν > 0 with κ(µ) ≥ ν for µ ∈ [µ¹, µ⁰] for KM(m, ρ(m)). While this appears to hold numerically, proving it is much more difficult. To get around this difficulty, we exploit a trick introduced by [13]. The idea is to use one-dimensional LO problems, for which it is easier to calculate the central path and its corresponding κ(µ), and to use LO problems with scaled objectives and block-diagonal constraints. For the details, we refer the reader to the Appendix, Section 8. Recall that by Corollary 5.3, we know there exist a KM(m, ρ(m)) and an interval [µ¹, µ⁰] such that ∫_{µ¹}^{µ⁰} κ(µ)/µ dµ ≥ n^(1/2−ɛ) log(µ⁰/µ¹). Here n = O(m 2^(2m)) and log(µ⁰/µ¹) = O(log n). Now, by using Lemma 8.4 and Proposition 8.2, we can embed KM(m, ρ(m)) in a block-diagonal LO problem at the expense of increasing the size of the problem to at most n̂ := n + O(m + log m). Denote by KM̂(m) this hybrid construction with KM(m, ρ(m)) embedded in it. Since n̂ = O(n), we have the following:

Theorem 6.1. For any ɛ > 0, there exists a positive integer m₀ such that for any m ≥ m₀, there exist an LO problem KM̂(m) and an interval [µ¹, µ⁰] with the following properties: µ⁰/µ¹ = O(m 2^(2m)). Let β₀ < β₁ ≤ 1/400 be the constant neighborhood N(β) parameters. Then the MTY predictor-corrector algorithm on this neighborhood requires Ω(n^(1/2−ɛ) log(µ⁰/µ¹)) predictor steps.

Proof. Consider the KM(m, ρ(m)) cube from Corollary 5.3. Then, by using Lemma 8.4 and Proposition 8.2, we can embed KM(m, ρ(m)) in a block-diagonal LO problem with size n̂ := n + O(m + log m) and m̂ = O(m). Note that since the interval [µ¹, µ⁰] comes from KM(m, ρ(m)), the first claim in the theorem follows from Corollary 5.3. Also, since for KM̂(m) there exists a constant ν > 0 with κ(µ) ≥ ν for all µ ∈ [µ¹, µ⁰], Theorem 2.1 implies the second claim. This completes the proof.

7 Conclusion and future work

It is an open question whether there is an interior-point algorithm for LO problems with an O(n^α log(µ⁰/µ¹)) iteration-complexity upper bound for α < 1/2 to reduce the barrier parameter from µ⁰ to µ¹.
In this regard, a related open question, raised by Stoer and Zhao [14], was whether there is an $\alpha < \frac{1}{2}$ with $\int_{\mu_0}^{\mu^0} \frac{\kappa(\mu)}{\mu}\, d\mu = O\big(n^{\alpha} \log\frac{\mu^0}{\mu_0}\big)$ for all LO problems. This paper provides a negative answer to the latter question. We also show that for the MTY
predictor-corrector algorithm, the classical iteration-complexity upper bound is tight. Future work would be to investigate whether an analogous result can be derived for long-step IPMs.

In this paper we establish that, for the central path of the carefully constructed redundant Klee-Minty cubes, both the geometric curvature and the Sonnevend curvature of the central path are essentially of order $\Omega(\sqrt{n})$. In a recent work, Mut and Terlaky [7] show the existence of another class of LO problems where a large geometric curvature of the central path implies a large Sonnevend curvature. These two important cases suggest that it might be possible to prove this implication in a more general setting.

8 Appendix

Lemma 8.1. For large enough $r$, there is a $1$-dimensional LO problem with $(r+1)$ constraints for which $\tau_1 \sqrt{r} \le \kappa(\mu) \le \tau_2 \sqrt{r}$ for any $\mu \in [\alpha_1, \alpha_2]$, where $\alpha_1 = r^{-r}$ and $\alpha_2 = r^{-r/4}$, for some constants $\tau_1, \tau_2 > 0$.

Proof. Consider the problem $\min\{\, y : y \le 1 \text{ and } y \ge 0 \text{ counted } r \text{ times}\,\}$. The construction is given in [13], p. 551. Consider the interval $[\alpha_1, \alpha_2]$, where $\alpha_1 = r^{-r}$ and $\alpha_2 = r^{-r/4}$. Let $s_0(\mu) = y(\mu)$. Then it is shown in [13], p. 551, that $\frac{\dot s_0(\mu)}{s_0(\mu)} \ge \frac{\sqrt{r}}{3\mu}$ on $[\alpha_1, \alpha_2]$. This implies $\frac{\mu\,\dot s_0(\mu)}{s_0(\mu)} = \Omega(\sqrt{r})$ on $[\alpha_1, \alpha_2]$. Then, from Proposition 2.2, part 1, we have $\kappa(\mu) = \Omega(\sqrt{r})$ for all $\mu \in [\alpha_1, \alpha_2]$. The proof is complete.

Proposition 8.2. Consider the LO problems
$$\min\ (c^1)^T x^1 \quad \text{s.t.}\quad A^1 x^1 = b^1,\ x^1 \ge 0 \qquad\text{and}\qquad \min\ (c^2)^T x^2 \quad \text{s.t.}\quad A^2 x^2 = b^2,\ x^2 \ge 0, \tag{12}$$
with the corresponding curvatures $\kappa_1(\mu)$ and $\kappa_2(\mu)$ on the interval $[\mu_0, \mu^0]$. Then for the problem
$$\min\ c^T x \quad \text{s.t.}\quad Ax = b,\ x \ge 0, \qquad \text{where } c = \begin{bmatrix} c^1 \\ c^2 \end{bmatrix},\ b = \begin{bmatrix} b^1 \\ b^2 \end{bmatrix},\ A = \begin{bmatrix} A^1 & 0 \\ 0 & A^2 \end{bmatrix}, \tag{13}$$
with the corresponding curvature $\kappa(\mu)$, we have $\kappa(\mu) \ge \kappa_i(\mu)$ for $i = 1, 2$ on $[\mu_0, \mu^0]$.
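The inequality $\kappa(\mu) \ge \kappa_i(\mu)$ in Proposition 8.2 ultimately rests on an elementary fact used in its proof: the Euclidean norm of a concatenated vector dominates the norm of each of its blocks. A quick numerical sanity check (illustration only; the vectors below are arbitrary stand-ins for the two blocks, not data from the paper):

```python
import math

# Sanity check of the mechanism behind Proposition 8.2: the Euclidean
# norm of a concatenation dominates the norm of each block, so the
# curvature of the block-diagonal problem dominates that of each subproblem.
def norm(v):
    return math.sqrt(sum(x * x for x in v))

u = [0.3, -1.2, 0.7]   # stand-in for the first block of the combined vector
w = [2.0, 0.1]         # stand-in for the second block

assert norm(u + w) >= norm(u)   # u + w is list concatenation
assert norm(u + w) >= norm(w)
```

Since taking the square root (or any monotone power) preserves this ordering, the same domination carries over to the curvature $\kappa(\mu)$ as defined for the combined problem.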

Proof. Let $\big(x^1(\mu), y^1(\mu), s^1(\mu)\big)$ and $\big(x^2(\mu), y^2(\mu), s^2(\mu)\big)$ be the central paths of the problems in (12). Then the curvature $\kappa(\mu)$ of the combined problem (13) becomes
$$\kappa(\mu) = \big\| \big(\mu\,\dot x^1 \dot s^1,\ \mu\,\dot x^2 \dot s^2\big) \big\|^{1/2} \;\ge\; \kappa_i(\mu) \qquad \text{for } i = 1, 2.$$

Proposition 8.3. Let $\eta > 0$ and consider the central path of (12) with its $\kappa(\mu)$. Let $(\hat A, \hat b, \hat c)$ be another problem instance, where $(\hat A, \hat b, \hat c) = \big(A, b, \frac{1}{\eta} c\big)$, with its corresponding $\hat\kappa(\mu)$. Then we have
$$\hat\kappa(\mu) = \kappa(\eta\mu), \qquad \mu \in \Big[\frac{\mu_0}{\eta}, \frac{\mu^0}{\eta}\Big]. \tag{14}$$

Proof. Using (12), it is straightforward to verify that the central path $\big(\hat x(\mu), \hat y(\mu), \hat s(\mu)\big)$ of the new problem satisfies $\hat x(\mu) = x(\eta\mu)$, $\hat y(\mu) = \frac{1}{\eta}\, y(\eta\mu)$ and $\hat s(\mu) = \frac{1}{\eta}\, s(\eta\mu)$. Using the definition of $\kappa(\mu)$, we get $\hat\kappa(\mu) = \kappa(\eta\mu)$. Hence the claim follows.

Lemma 8.4. Given an interval $[\mu_0, \mu^0]$ and a constant $\nu > 0$, there exists an LO problem of size $n = \Theta\big(\log\frac{\mu^0}{\mu_0}\big)$ such that $\kappa(\mu) \ge \nu$ for all $\mu \in [\mu_0, \mu^0]$. The hidden constant in $n = \Theta\big(\log\frac{\mu^0}{\mu_0}\big)$ depends on $\nu$.

Proof. Let a constant $\nu > 0$ and an interval $[\mu_0, \mu^0]$ be given. For the given $\nu > 0$, by Lemma 8.1, there exists an LO problem with $\kappa(\mu) \ge \nu$ on an interval $\mu \in [\alpha_1, \alpha_2]$. By applying Proposition 8.3 with $\eta := \frac{\alpha_1}{(\alpha_2/\alpha_1)^i \mu_0}$ for $i = 0, 1, \ldots, k$, we find $(k+1)$ scaled LO problems with corresponding curvatures $\kappa_i(\mu)$, $i = 0, 1, \ldots, k$, such that $\kappa_i(\mu) = \kappa(\eta\mu)$ on $\big[\mu_0 \big(\frac{\alpha_2}{\alpha_1}\big)^i, \mu_0 \big(\frac{\alpha_2}{\alpha_1}\big)^{i+1}\big]$, for $i = 0, 1, \ldots, k$. Then by using Proposition 8.2, we can obtain a block-diagonal LO problem with $\kappa(\mu) \ge \kappa_i(\mu) \ge \nu$ for $i = 0, 1, \ldots, k$, for any $\mu \in \big[\mu_0, \big(\frac{\alpha_2}{\alpha_1}\big)^k \mu_0\big]$. In order to have $\kappa(\mu) \ge \nu$ for any $\mu \in [\mu_0, \mu^0]$, it is then enough to have $\big(\frac{\alpha_2}{\alpha_1}\big)^k \mu_0 \ge \mu^0$. This is true if and only if $k \ge \log\big(\frac{\mu^0}{\mu_0}\big) \big/ \log\big(\frac{\alpha_2}{\alpha_1}\big)$. Since by Lemma 8.1 the ratio $\frac{\alpha_2}{\alpha_1}$ is a constant depending only on the given $\nu$, the number of blocks $k$ needed is $\Theta\big(\log\frac{\mu^0}{\mu_0} \big/ \log\frac{\alpha_2}{\alpha_1}\big)$. Also, since the size of each LO problem with its $\kappa(\mu)$ is a constant determined only by $\nu$, the size of the overall problem is $n = \Theta\big(\log\frac{\mu^0}{\mu_0}\big)$ to achieve $\kappa(\mu) \ge \nu$ for all $\mu \in [\mu_0, \mu^0]$. This completes the proof.

References

[1] Antoine Deza, Eissa Nematollahi, Reza Peyghami, and Tamás Terlaky. The central path visits all the vertices of the Klee-Minty cube.
Optimization Methods and Software, 21(5):851-865, 2006.

[2] Antoine Deza, Eissa Nematollahi, and Tamás Terlaky. How good are interior point methods? Klee-Minty cubes tighten iteration-complexity bounds. Mathematical Programming, 113(1):1-14, 2008.

[3] Petra Huhn and Karl Heinz Borgwardt. Interior-point methods: worst case and average case analysis of a phase-I algorithm and a termination procedure. Journal of Complexity, 18(3):833-910, 2002.

[4] Benjamin Jansen, Cornelis Roos, and Tamás Terlaky. A short survey on ten years interior point methods. Technical Report 95-45, Delft University of Technology, Delft, The Netherlands, 1995.

[5] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373-395, 1984.

[6] Shinji Mizuno, Michael J. Todd, and Yinyu Ye. Anticipated behavior of path-following algorithms for linear programming. Technical Report 878, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York, 1989.

[7] Murat Mut and Tamás Terlaky. An analogue of the Klee-Walkup result for Sonnevend's curvature of the central path. Technical report, Department of Industrial and Systems Engineering, Lehigh University, 2013. Also available at http://www.lehigh.edu/ise/documents/13t_006.pdf.

[8] Eissa Nematollahi and Tamás Terlaky. A redundant Klee-Minty construction with all the redundant constraints touching the feasible region. Operations Research Letters, 36(4):414-418, 2008.

[9] Eissa Nematollahi and Tamás Terlaky. A simpler and tighter redundant Klee-Minty construction. Optimization Letters, 2(3):403-414, 2008.

[10] Yurii Nesterov and Arkadii Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming, volume 13. SIAM, 1994.

[11] Florian A. Potra. A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points. Mathematical Programming, 67(1-3):383-406, 1994.

[12] Cornelis Roos, Tamás Terlaky, and Jean-Philippe Vial. Interior Point Methods for Linear Optimization. Springer, New York, 2006.

[13] György Sonnevend, Josef Stoer, and Gongyun Zhao. On the complexity of following the central path of linear programs by linear extrapolation II. Mathematical Programming, 52:527-553, 1991.

[14] Josef Stoer and Gongyun Zhao. Estimating the complexity of a class of path-following methods for solving linear programs by curvature integrals. Applied Mathematics and Optimization, 27(1):85-103, 1993.

[15] Michael J. Todd.
A lower bound on the number of iterations of primal-dual interior-point methods for linear programming. Technical report, School of Operations Research and Industrial Engineering, Cornell University, 1993.

[16] Michael J. Todd and Yinyu Ye. A lower bound on the number of iterations of long-step primal-dual linear programming algorithms. Annals of Operations Research, 62(1):233-252, 1996.

[17] Gongyun Zhao. On the relationship between the curvature integral and the complexity of path-following methods in linear programming. SIAM Journal on Optimization, 6(1):57-73, 1996.