Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Y. B. Zhao

Abstract. It is well known that wide-neighborhood interior-point algorithms for linear programming perform much better in implementation than their small-neighborhood counterparts. In this paper, we provide a unified way to enlarge the neighborhoods of predictor-corrector interior-point algorithms for linear programming. We prove that our methods not only enlarge the neighborhoods but also retain the best known iteration complexity and the superlinear or quadratic convergence of the original interior-point algorithms. The idea of our methods is to use the global minimizers of proximity measure functions.

Key words. Linear programming, interior-point algorithms, iteration complexity, neighborhoods.

This work was partially supported by the National Natural Science Foundation of China under Grant No. 100103 and Grant No. 701001. Part of the work was done while the author was a research staff member of The Fields Institute for Research in Mathematical Sciences, University of Toronto, Ontario, Canada.

Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing 100080, China (ybzhao@amss.ac.cn).

1. Introduction

Consider the canonical linear programming problem $\min\{c^Tx : Ax = b,\ x \ge 0\}$, where $c \in R^n$, $b \in R^m$, $A \in R^{m\times n}$ and $\operatorname{rank}(A) = m$. We assume that the problem has an interior point, i.e., there exists $(x^0, y^0, s^0)$ such that $Ax^0 = b$, $A^Ty^0 + s^0 = c$, $x^0 > 0$, $s^0 > 0$. This assumption guarantees the existence of the central path, on which most interior-point algorithms are based.

There are two key factors that are closely related to the practical behavior of an interior-point algorithm, namely the search direction and the neighborhood of the central path. A large body of computational experience shows that interior-point algorithms using wide neighborhoods perform much better than their counterparts based on small neighborhoods. Thus, the enlargement of neighborhoods is an interesting and important aspect of improving the efficiency of interior-point algorithms. This is why many authors have studied wide-neighborhood interior-point algorithms for optimization problems; see, for example, [1, 5, 6, 11, 15, 18, 22, 23].

A neighborhood of the central path is determined by a proximity measure function that measures the distance between a point $(x, s) > 0$ and the central path. The following proximity measure functions are widely used in the literature on interior-point methods for linear programming [8, 16, 19, 20]:
\[
\delta_2(x,s,\mu) = \left\|\frac{xs}{\mu} - e\right\|_2, \qquad
\delta_\infty(x,s,\mu) = \left\|\frac{xs}{\mu} - e\right\|_\infty,
\]
\[
\delta_{\log}(x,s,\mu) = \frac{x^Ts}{\mu} - n + n\log\mu - \sum_{i=1}^n \log(x_is_i), \qquad
\delta_\Phi(x,s,\mu) = \frac12\left\|\sqrt{\frac{xs}{\mu}} - \sqrt{\frac{\mu}{xs}}\right\|.
\]
In general, the parameter $\mu$ can be calculated from $(x,s)$, and thus we may write $\mu$ as $\mu(x,s)$. For any given proximity measure function $\delta(x,s,\mu(x,s))$, a neighborhood of the central path can be defined as
\[
N(\tau) = \{(x,s) > 0 : \delta(x,s,\mu(x,s)) \le \tau\},
\]
where $\tau$ is a given positive number. For most interior-point algorithms, the parameter $\mu > 0$ is set to be the duality gap $x^Ts/n$. However, we note that most proximity measure functions used in interior-point algorithms have global minimizers with respect to $\mu$. For any given $(x,s) > 0$, let $\mu^*(x,s)$ denote the global minimizer of $\delta(x,s,\cdot)$, i.e., $\delta(x,s,\mu^*(x,s)) \le \delta(x,s,\mu)$ for all $\mu > 0$. Then the corresponding neighborhood is given by
\[
N^*(\tau) = \{(x,s) > 0 : \delta(x,s,\mu^*(x,s)) \le \tau\}.
\]
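As a small numerical illustration (a sketch of ours, not part of the paper), the four proximity measures and the membership test for $N(\tau)$ can be written in a few lines of Python/NumPy; the function names are ours and purely illustrative.

```python
import numpy as np

# Illustrative sketch: the four proximity measures above, evaluated at a
# strictly positive primal-dual pair (x, s) and a centering parameter mu.
def delta_2(x, s, mu):
    return np.linalg.norm(x * s / mu - 1.0)            # ||xs/mu - e||_2

def delta_inf(x, s, mu):
    return np.linalg.norm(x * s / mu - 1.0, np.inf)    # ||xs/mu - e||_inf

def delta_log(x, s, mu):
    v = x * s
    return v.sum() / mu - len(x) + len(x) * np.log(mu) - np.sum(np.log(v))

def delta_phi(x, s, mu):
    v = x * s / mu
    return 0.5 * np.linalg.norm(np.sqrt(v) - 1.0 / np.sqrt(v))

def in_neighborhood(x, s, mu, tau, delta=delta_2):
    # Membership test for N(tau) = {(x,s) > 0 : delta(x,s,mu(x,s)) <= tau}.
    return bool(np.all(x > 0) and np.all(s > 0) and delta(x, s, mu) <= tau)

x = np.array([1.0, 2.0, 0.5])
s = np.array([0.9, 0.4, 2.2])
mu_gap = x @ s / len(x)                 # the usual choice mu = x^T s / n
print(delta_2(x, s, mu_gap), in_neighborhood(x, s, mu_gap, tau=0.5))
```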

Clearly, the neighborhood $N^*(\tau)$ is larger than $N(\tau)$ for any choice $\mu(x,s) \ne \mu^*(x,s)$; i.e., for a given proximity measure function $\delta$, the neighborhood $N^*(\tau)$ is the largest such neighborhood. This suggests that we may use the neighborhood $N^*(\tau)$ to replace the original neighborhood $N(\tau)$ in interior-point algorithms. The purpose of this paper is to adapt predictor-corrector-type interior-point algorithms to the new neighborhood $N^*(\tau)$ so that the algorithms can work in wider neighborhoods.

We first give a brief review of predictor-corrector methods. The primal-dual predictor-corrector algorithm for linear programming was proposed by Mizuno, Todd and Ye [10]; see also Barnes et al. [2]. This method was later extended to complementarity problems and other optimization problems; see, for example, [5, 7, 14, 15, 17, 18, 21]. Some authors also proposed infeasible versions of this method so that the algorithm can start from an infeasible point; see, for instance, [12, 13]. A more practical predictor-corrector algorithm was proposed by Mehrotra [9], which was based on the power series algorithm in [3]. Other power series and high-order predictor-corrector algorithms can be found in [4, 15, 17]. Two neighborhoods are used in predictor-corrector algorithms: one is used in the so-called predictor steps and the other in the corrector steps. Since the original Mizuno-Todd-Ye method actually works in small neighborhoods, several authors have tried to modify this algorithm to work in wider neighborhoods in order to achieve faster convergence; see, for example, [5, 6, 11, 14, 15].

Let $\mu_{\mathrm{gap}}$ denote the duality gap $x^Ts/n$ throughout this paper. We note that $\mu_{\mathrm{gap}} = x^Ts/n$ is the global minimizer of the proximity measure function $\delta_{\log}(x,s,\mu)$ with respect to $\mu$. Therefore, it is natural for $\delta_{\log}$-based interior-point algorithms to use the duality gap as the updating rule for $\mu$ during the course of the iteration. However, for other proximity measure functions, $\mu_{\mathrm{gap}}$ is not necessarily a global minimizer. For example, the global minimizer (see [11]) of the function $\delta_\Phi(x,s,\mu)$ is
\[
\mu_\Phi^*(x,s) = \sqrt{\frac{x^Ts}{\sum_{i=1}^n (x_is_i)^{-1}}}.
\]
Predictor-corrector algorithms that use $\delta_\Phi(x,s,\mu)$ as the proximity measure function have been studied by several authors [11, 14]. In particular, Peng, Terlaky and Zhao [11] have already studied how to design an interior-point algorithm by using the above-mentioned minimizer $\mu_\Phi^*(x,s)$. In this paper, we focus our attention on the cases of 2-norm and $\infty$-norm neighborhoods, which are commonly used in the literature on interior-point algorithms. We note that the global minimizer of the 2-norm proximity measure function $\delta_2(x,s,\mu)$ is
\[
\mu_2^*(x,s) = \frac{\sum_{i=1}^n (x_is_i)^2}{x^Ts} = \frac{\|xs\|_2^2}{x^Ts},
\]
and the global minimizer (see Lemma 3.1 in this paper) of the $\infty$-norm proximity measure function $\delta_\infty(x,s,\mu)$ is
\[
\mu_\infty^*(x,s) = \frac{\max_{1\le i\le n} x_is_i + \min_{1\le i\le n} x_is_i}{2}.
\]
By using the global minimizers of the proximity measure functions $\delta_2$ and $\delta_\infty$, we provide in this paper a unified method to enlarge the neighborhood of the central path, and hence a new design for predictor-corrector algorithms.
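These closed-form minimizers are easy to evaluate; the following sketch (ours, for illustration only) computes them and checks numerically that replacing $\mu_{\mathrm{gap}}$ by $\mu^*$ never increases the proximity value, which is exactly why $N^*(\tau)$ contains $N(\tau)$.

```python
import numpy as np

# Sketch: closed-form global minimizers of the proximity measures w.r.t. mu.
def mu_star_2(x, s):
    v = x * s
    return np.dot(v, v) / v.sum()          # ||xs||_2^2 / (x^T s)

def mu_star_inf(x, s):
    v = x * s
    return (v.max() + v.min()) / 2.0       # (max(xs) + min(xs)) / 2

def mu_star_phi(x, s):
    v = x * s
    return np.sqrt(v.sum() / np.sum(1.0 / v))

x = np.array([1.0, 2.0, 0.5])
s = np.array([0.9, 0.4, 2.2])
mu_gap = x @ s / len(x)

v = x * s
d2 = lambda mu: np.linalg.norm(v / mu - 1.0)
dinf = lambda mu: np.linalg.norm(v / mu - 1.0, np.inf)
# Using mu* never increases the proximity value, so N*(tau) contains N(tau).
assert d2(mu_star_2(x, s)) <= d2(mu_gap) + 1e-12
assert dinf(mu_star_inf(x, s)) <= dinf(mu_gap) + 1e-12
```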

Also, we prove that our algorithms retain both the best known iteration complexity and the quadratic convergence of the original algorithms.

This paper is organized as follows. In Section 2, we describe the algorithm based on 2-norm neighborhoods and prove that it retains the best known $O(\sqrt{n}\,\log((x^0)^Ts^0/\varepsilon))$ iteration complexity of small-neighborhood algorithms. In Section 3, we study the case of $\infty$-norm neighborhoods and prove that the algorithm retains the best known $O(n\log((x^0)^Ts^0/\varepsilon))$ iteration complexity of $\infty$-norm-neighborhood algorithms. Finally, we point out that both algorithms in this paper are quadratically convergent in the sense that the duality gap sequences converge to zero quadratically.

We use the following notation throughout the paper. $e$ denotes the vector with all components equal to 1. For any positive vectors $x, s \in R^n$ and a real number $q$, the symbols $xs$ and $x^q$ denote the vectors whose components are $x_is_i$ and $x_i^q$ ($i = 1,\dots,n$), respectively. For any vector $x \in R^n$, $\min(x)$ and $\max(x)$ denote the smallest and the largest components of $x$, respectively, i.e., $\min(x) = \min_{1\le i\le n} x_i$ and $\max(x) = \max_{1\le i\le n} x_i$.

2. Algorithms based on 2-norm neighborhoods

In this section, we consider the predictor-corrector algorithm based on the small neighborhood determined by the 2-norm proximity measure function
\[
\delta_2(x,s,\mu) = \left\|\frac{xs}{\mu} - e\right\|_2,
\]
which was used in the original Mizuno-Todd-Ye method. As pointed out in the last section, for a given vector $(x,s) > 0$ it is easy to verify that the global minimizer of $\delta_2(x,s,\mu)$ with respect to $\mu$ is
\[
\mu_2^*(x,s) = \frac{\|xs\|_2^2}{x^Ts}.
\]
In what follows, we denote $\mu_2^*(x,s)$ by $\mu_2^*$ for simplicity. We first choose two constants $\tau$ and $\bar\tau$ in $(0,1)$ satisfying
\[
r\,\bar\tau \le \tau < \bar\tau, \qquad \text{where } r := \frac{1+3\bar\tau}{(1+\bar\tau)^3}. \qquad (1)
\]
Since $\bar\tau \in (0,1)$, it is easy to verify that $r < 1$.

The idea of predictor-corrector methods is very natural. Roughly speaking, a predictor-corrector method follows the central path of linear programming by alternating predictor steps and corrector steps. Starting from a point in a certain neighborhood $N^*(\tau)$ and moving along a predictor search direction, the predictor step produces a predicted point in $N^*(\bar\tau)$, which is larger than $N^*(\tau)$. At the predicted point, a corrector direction is calculated, and a suitable stepsize along the corrector direction yields the next iterate, which is back inside the neighborhood $N^*(\tau)$. In our algorithm, the predictor search direction is the same as the one in the Mizuno-Todd-Ye method [10], i.e., the solution of the following system:
\[
A\Delta x = 0, \qquad A^T\Delta y + \Delta s = 0, \qquad s\Delta x + x\Delta s = -xs. \qquad (2)
\]
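For reference later in this section, the Newton-type systems such as (2) can be solved directly; the following Python/NumPy sketch (ours, not part of the paper, and written as one dense block system rather than the reduced normal equations used in practice) returns the direction for an arbitrary right-hand side, so that taking the right-hand side $-xs$ gives the predictor direction of (2).

```python
import numpy as np

# Illustrative sketch: solve A dx = 0, A^T dy + ds = 0, s*dx + x*ds = rhs.
# With rhs = -x*s this is the predictor (affine-scaling) direction of system (2).
def newton_step(A, x, s, rhs):
    m, n = A.shape
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                        # A dx = 0
    K[m:m + n, n:n + m] = A.T            # A^T dy + ds = 0
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)           # s*dx + x*ds = rhs
    K[m + n:, n + m:] = np.diag(x)
    sol = np.linalg.solve(K, np.concatenate([np.zeros(m + n), rhs]))
    dx, dy, ds = sol[:n], sol[n:n + m], sol[n + m:]
    return dx, dy, ds
```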

We denote $x(\theta) = x + \theta\Delta x$ and $s(\theta) = s + \theta\Delta s$. The steplength $\bar\theta$ is determined by the rule
\[
\bar\theta = \max\{t \in (0,1] : (x(\theta), s(\theta)) > 0,\ \delta_2(x(\theta), s(\theta), \mu_2^*(\theta)) \le \bar\tau \ \ \forall\,\theta \in (0, t]\}, \qquad (3)
\]
where
\[
\mu_2^*(\theta) := \mu_2^*(x(\theta), s(\theta)) = \frac{\|x(\theta)s(\theta)\|_2^2}{x(\theta)^Ts(\theta)}.
\]
Different from traditional methods, our corrector search direction is the solution of the system
\[
A\Delta\bar x = 0, \qquad A^T\Delta\bar y + \Delta\bar s = 0, \qquad \bar s\Delta\bar x + \bar x\Delta\bar s = \bar x\bar s - \frac{(\bar x\bar s)^2}{\bar\mu_2^*}. \qquad (4)
\]
A characteristic of system (4) is that the duality gap remains invariant along the direction $(\Delta\bar x, \Delta\bar s)$. Actually, since $\Delta\bar x^T\Delta\bar s = 0$, we have
\[
(\bar x + \theta\Delta\bar x)^T(\bar s + \theta\Delta\bar s)
= \bar x^T\bar s + \theta(\bar x^T\Delta\bar s + \bar s^T\Delta\bar x)
= \bar x^T\bar s + \theta\left(\bar x^T\bar s - \frac{\|\bar x\bar s\|_2^2}{\bar\mu_2^*}\right)
= \bar x^T\bar s.
\]
The last equality above follows from the fact that $\bar\mu_2^* = \|\bar x\bar s\|_2^2/(\bar x^T\bar s)$. We now describe the algorithm as follows.

Algorithm 2.1. Given two positive numbers $\tau$ and $\bar\tau$ in $(0,1)$ satisfying (1), and an initial feasible point $(x^0, s^0) > 0$ such that $\delta_2(x^0, s^0, \mu_0^*) \le \tau$, where $\mu_0^* = \|x^0s^0\|_2^2/((x^0)^Ts^0)$. Set $k := 0$, and do the following steps.

Step 1 (Predictor). Solve system (2) with $(x,s) := (x^k, s^k)$ for the search direction $(\Delta x^k, \Delta s^k)$. Compute the steplength $\bar\theta_k$ according to (3), and set
\[
(\bar x^k, \bar s^k) = (x^k + \bar\theta_k\Delta x^k,\ s^k + \bar\theta_k\Delta s^k), \qquad
\bar\mu_k^* = \frac{\|\bar x^k\bar s^k\|_2^2}{(\bar x^k)^T\bar s^k}.
\]

Step 2 (Corrector). Solve system (4) with $(\bar x, \bar s, \bar\mu_2^*) = (\bar x^k, \bar s^k, \bar\mu_k^*)$ for the corrector direction $(\Delta\bar x^k, \Delta\bar s^k)$. Set
\[
(x^{k+1}, s^{k+1}) = (\bar x^k + \alpha_k\Delta\bar x^k,\ \bar s^k + \alpha_k\Delta\bar s^k),
\]
where $\alpha_k$ is the smallest positive solution of the quadratic equation (in $t$)
\[
1 - t\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*} + \frac{\sqrt 2\,\bar\tau}{4}\,t^2\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k^*} = r.
\]

Set $k := k + 1$, and repeat the above steps until a stopping criterion, for instance $\mu_{\mathrm{gap}}^k = (x^k)^Ts^k/n \le \varepsilon$, is satisfied.
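To make the loop concrete, here is a rough numerical sketch of one iteration of Algorithm 2.1 (an illustration of ours, not the paper's implementation). It reuses `newton_step` from the previous sketch, and it replaces the closed-form steplengths by crude grid searches over the same acceptance conditions, so the default values of $\tau$ and $\bar\tau$ below are merely illustrative.

```python
import numpy as np  # assumes newton_step(A, x, s, rhs) from the previous sketch is in scope

def mu2_star(x, s):
    v = x * s
    return np.dot(v, v) / v.sum()

def delta2(x, s, mu):
    return np.linalg.norm(x * s / mu - 1.0)

def pc_iteration(A, x, s, tau=0.4, tau_bar=0.5, grid=2000):
    # Predictor: affine-scaling direction; largest grid step keeping delta2 <= tau_bar (cf. rule (3)).
    dx, _, ds = newton_step(A, x, s, -x * s)
    theta = next(t for t in np.linspace(1.0, 0.0, grid, endpoint=False)
                 if np.all(x + t * dx > 0) and np.all(s + t * ds > 0)
                 and delta2(x + t * dx, s + t * ds,
                            mu2_star(x + t * dx, s + t * ds)) <= tau_bar)
    xb, sb = x + theta * dx, s + theta * ds
    # Corrector: gap-preserving direction of system (4); smallest step with delta2 back below tau.
    mub = mu2_star(xb, sb)
    dxb, _, dsb = newton_step(A, xb, sb, xb * sb - (xb * sb) ** 2 / mub)
    alpha = next(a for a in np.linspace(0.0, 1.0, grid)
                 if a > 0 and np.all(xb + a * dxb > 0) and np.all(sb + a * dsb > 0)
                 and delta2(xb + a * dxb, sb + a * dsb,
                            mu2_star(xb + a * dxb, sb + a * dsb)) <= tau)
    return xb + alpha * dxb, sb + alpha * dsb
```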

At each step, if $(x^k, s^k) > 0$, the positivity of $(\bar x^k, \bar s^k)$ follows directly from the choice of $\bar\theta_k$. Later, we will see that the positivity of $(x^{k+1}, s^{k+1})$ follows from the choice of $\alpha_k$ (see the proof of Theorem 2.1). Thus, Algorithm 2.1 is well defined. We now aim at proving that the algorithm has an $O(\sqrt n\,\log((x^0)^Ts^0/\varepsilon))$ iteration complexity. First, we establish several technical results.

Lemma 2.1 ([10]). Let $(x,s) > 0$ be given. For any given $u \in R^n$, if $(\Delta x, \Delta s)$ is the solution of the system
\[
A\Delta x = 0, \qquad A^T\Delta y + \Delta s = 0, \qquad s\Delta x + x\Delta s = u,
\]
then $(\Delta x, \Delta s)$ satisfies the inequalities
\[
\sum_{i\in I_+}\Delta x_i\Delta s_i = \sum_{i\in I_-}|\Delta x_i\Delta s_i| \le \frac14\,\|(xs)^{-1/2}u\|_2^2,
\]
where $I_+ = \{i : \Delta x_i\Delta s_i > 0\}$ and $I_- = \{i : \Delta x_i\Delta s_i < 0\}$. Hence,
\[
\|\Delta x\Delta s\| \le \frac{\sqrt 2}{4}\,\|(xs)^{-1/2}u\|_2^2.
\]

For the sake of convenience, we suppress the iteration index $k$ and denote $(x^k, s^k)$ and $(\Delta x^k, \Delta s^k)$ by $(x,s)$ and $(\Delta x,\Delta s)$, respectively, if no confusion arises. Similarly, we denote $(\bar x^k, \bar s^k)$ and $(\Delta\bar x^k, \Delta\bar s^k)$ by $(\bar x,\bar s)$ and $(\Delta\bar x,\Delta\bar s)$, respectively. We also denote $p = xs/\mu_2^*$ and $w = \Delta x\Delta s/\mu_2^*$. Consider the interval $[0,\hat\eta)$, where $\hat\eta = \min_{w_i<0} p_i/|w_i|$. Clearly, there exists a unique $\hat\theta \in (0,1)$ such that the transformation $\eta = \theta^2/(1-\theta)$ is a bijection from $[0,\hat\theta)$ to $[0,\hat\eta)$. For any given $(x,s) > 0$, since $\mu_2^* = \|xs\|_2^2/(x^Ts)$, it is easy to verify that
\[
[\delta_2(x,s,\mu_2^*)]^2 = n - \frac{x^Ts}{\mu_2^*} = n - \frac{(x^Ts)^2}{\|xs\|_2^2}, \qquad (5)
\]
and
\[
\sum_{i=1}^n p_i^2 = \sum_{i=1}^n p_i, \qquad (6)
\]
where $p_i = x_is_i/\mu_2^*$ for $i = 1,\dots,n$. We recall that $x(\theta) = x + \theta\Delta x$, $s(\theta) = s + \theta\Delta s$, and $\mu_2^*(\theta) = \|x(\theta)s(\theta)\|_2^2/(x(\theta)^Ts(\theta))$. From system (2), it is evident that
\[
x(\theta)^Ts(\theta) = (1-\theta)\,x^Ts \quad\text{and}\quad x_i(\theta)s_i(\theta) = (1-\theta)x_is_i + \theta^2\Delta x_i\Delta s_i. \qquad (7)
\]

We now prove the next result.

Lemma 2.2. Let $\eta = \theta^2/(1-\theta)$. Then for any $\eta \in [0,\hat\eta)$, we have
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2 \le f(\eta),
\qquad\text{where } f(\eta) = \frac{n^2\,\eta\,(1+\eta\|w\|_\infty)}{2\sum_{i=1}^n (p_i+\eta w_i)^2}.
\]

Proof. By (5), (6) and (7), we have
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2
= \frac{x^Ts}{\mu_2^*} - \frac{[x(\theta)^Ts(\theta)]^2}{\|x(\theta)s(\theta)\|_2^2}
= \sum_i p_i - \frac{(\sum_i p_i)^2}{\sum_i(p_i+\eta w_i)^2}
= \frac{\sum_i p_i\,\bigl(2\eta\sum_i p_iw_i + \eta^2\sum_i w_i^2\bigr)}{\sum_i(p_i+\eta w_i)^2}. \qquad (8)
\]
Since $s_i\Delta x_i + x_i\Delta s_i = -x_is_i$ for all $i = 1,\dots,n$, we have $(s_i\Delta x_i)^2 + (x_i\Delta s_i)^2 + 2x_is_i\Delta x_i\Delta s_i = (x_is_i)^2$ for all $i$. Thus $4x_is_i\Delta x_i\Delta s_i \le (x_is_i)^2$, which implies that
\[
\sum_{i=1}^n x_is_i\,\Delta x_i\Delta s_i \le \frac14\,\|xs\|_2^2.
\]
Note that equality (5) implies that $x^Ts/\mu_2^* \le n$, i.e., $\sum_i p_i \le n$. Dividing both sides of the inequality above by $(\mu_2^*)^2$, we have
\[
\sum_{i=1}^n p_iw_i \le \frac{x^Ts}{4\mu_2^*} \le \frac n4. \qquad (9)
\]
Since $\Delta x^T\Delta s = 0$, we have $\sum_i w_i = 0$, which implies that $\sum_{i\in I_+}w_i = \sum_{i\in I_-}|w_i|$,

where $I_+ = \{i : w_i > 0\}$ and $I_- = \{i : w_i < 0\}$. Setting $u = -xs$ in Lemma 2.1 and using (9), we can easily see that
\[
\sum_{i\in I_+} w_i \le \frac{x^Ts}{4\mu_2^*} \le \frac n4 \qquad\text{and}\qquad \|w\|_\infty \le \frac{x^Ts}{4\mu_2^*} \le \frac n4. \qquad (10)
\]
Therefore, by (9) and (10), we have
\[
2\sum_i p_iw_i + \eta\sum_i w_i^2
\le 2\sum_i p_iw_i + 2\eta\,\|w\|_\infty\sum_{i\in I_+}w_i
\le \frac n2 + \eta\,\|w\|_\infty\,\frac n2 = \frac n2\,(1+\eta\|w\|_\infty). \qquad (11)
\]
By the harmonic (Cauchy-Schwarz) inequality, we have
\[
\sum_{i=1}^n (p_i+\eta w_i)^2 \ge \frac1n\Bigl[\sum_{i=1}^n (p_i+\eta w_i)\Bigr]^2 = \frac1n\Bigl(\sum_{i=1}^n p_i\Bigr)^2. \qquad (12)
\]
Thus, by (8), (11) and (12), we have
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2
\le \frac{\eta\,\sum_i p_i\,\frac n2(1+\eta\|w\|_\infty)}{\sum_i(p_i+\eta w_i)^2}
\le \frac{n^2\,\eta\,(1+\eta\|w\|_\infty)}{2\sum_i(p_i+\eta w_i)^2} =: f(\eta),
\]
where the second inequality follows from the fact $\sum_i p_i \le n$. The proof is complete.

Let $\bar\theta$ be the steplength given by (3), and let $\bar\eta = \bar\theta^2/(1-\bar\theta)$. Therefore,
\[
\bar\theta = \frac{2\bar\eta}{\bar\eta + \sqrt{\bar\eta^2 + 4\bar\eta}}. \qquad (13)
\]
To prove the iteration complexity of Algorithm 2.1, we need a lower bound on $\bar\theta$. By (13), it suffices to estimate a lower bound on $\bar\eta$, since the function $\zeta(t) = 2t/(t+\sqrt{t^2+4t})$ is increasing on $(0,\infty)$.

Lemma 2.3. Let $\bar\theta$ be the steplength in the predictor step, determined by (3), and let $\bar\eta = \bar\theta^2/(1-\bar\theta)$. Then we have
\[
\bar\eta \ge \min\left\{\frac{\beta\min(p)}{\|w\|_\infty},\ \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))}\right\},
\]

where $\beta \in (0,1)$ is a given number independent of $n$, for instance, $\beta = 0.9$.

Proof. We first note that $f(\eta)$ satisfies $f(0) = 0$ and $f(\eta) \to \infty$ as $\eta \to \hat\eta$. Thus, there exists an $\eta$ such that $f(\eta) = \bar\tau^2 - \tau^2 > 0$. Let $\tilde\eta$ be the smallest solution of this equation. If $\tilde\eta \le \beta\min(p)/\|w\|_\infty$, then $p_i + \tilde\eta w_i \ge \min(p) - \tilde\eta\|w\|_\infty \ge (1-\beta)\min(p)$ for every $i$, and hence
\[
\bar\tau^2 - \tau^2 = f(\tilde\eta) \le \frac{n^2\,\tilde\eta\,(1+\beta\min(p))}{2n(1-\beta)^2[\min(p)]^2},
\qquad\text{i.e.,}\qquad
\tilde\eta \ge \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))}.
\]
Since either $\tilde\eta \ge \beta\min(p)/\|w\|_\infty$ or $\tilde\eta \le \beta\min(p)/\|w\|_\infty$, we conclude that
\[
\tilde\eta \ge \min\left\{\frac{\beta\min(p)}{\|w\|_\infty},\ \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))}\right\}.
\]
To prove the desired result, it suffices to show that $\bar\eta \ge \tilde\eta$. Indeed, since $f(0) = 0$ and $\tilde\eta$ is the smallest solution of $f(t) = \bar\tau^2-\tau^2 > 0$ in the interval $(0,\hat\eta)$, we deduce that $f(\eta) \le \bar\tau^2-\tau^2$ for all $\eta \in [0,\tilde\eta]$. Thus, for any $\theta \in [0,\tilde\theta]$, where $\tilde\theta$ satisfies $\tilde\theta^2/(1-\tilde\theta) = \tilde\eta$, it follows from Lemma 2.2 that
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 \le (\bar\tau^2-\tau^2) + [\delta_2(x,s,\mu_2^*)]^2 \le \bar\tau^2.
\]
The last inequality follows from $\delta_2(x,s,\mu_2^*) \le \tau$. Thus, for any $\theta \in [0,\tilde\theta]$, we have $\delta_2(x(\theta),s(\theta),\mu_2^*(\theta)) \le \bar\tau < 1$, which also implies that $(x(\theta),s(\theta)) > 0$. By the definition of $\bar\theta$, we conclude that $\bar\theta \ge \tilde\theta$, and hence $\bar\eta \ge \tilde\eta$. The proof is complete.

We are ready to prove the following result.

Theorem 2.1. Let $(x^k, s^k, \mu_k^*)$ be generated by Algorithm 2.1. Then the following properties hold.
(i) $\delta_2(x^k, s^k, \mu_k^*) \le \tau$ for all $k \ge 0$, and $\delta_2(\bar x^k, \bar s^k, \bar\mu_k^*) \le \bar\tau$ for all $k \ge 0$.
(ii) The duality gap satisfies $\mu^{k+1}_{\mathrm{gap}} = (1-\bar\theta_k)\,\mu^k_{\mathrm{gap}}$ for all $k \ge 0$.

(iii) There exists a constant $\sigma \in (0,1)$, independent of $n$, such that $\bar\theta_k \ge \sigma/\sqrt n$ for all $k \ge 0$, which implies that Algorithm 2.1 has an $O(\sqrt n\,\log((x^0)^Ts^0/\varepsilon))$ iteration complexity.

Proof. Result (i) can be proved by induction. Assume that $\delta_2(x^k,s^k,\mu_k^*) \le \tau$. By the choice of $\bar\theta_k$, the inequality $\delta_2(\bar x^k,\bar s^k,\bar\mu_k^*) \le \bar\tau$ holds trivially. It suffices to prove that $\delta_2(x^{k+1},s^{k+1},\mu_{k+1}^*) \le \tau$. We first note that for any constant $\bar\tau \in (0,1)$, we have $r := (1+3\bar\tau)/(1+\bar\tau)^3 < 1$. Since $\delta_2(\bar x^k,\bar s^k,\bar\mu_k^*) \le \bar\tau$, we have $(1-\bar\tau)\bar\mu_k^* \le \bar x_i^k\bar s_i^k \le (1+\bar\tau)\bar\mu_k^*$. Thus
\[
\frac{[\min(\bar x^k\bar s^k)]^2}{\max(\bar x^k\bar s^k)\,\bar\mu_k^*} \ge \frac{(1-\bar\tau)^2}{1+\bar\tau},
\]
which, together with (1), guarantees that the quadratic equation defining $\alpha_k$ in Step 2 has a positive solution with $\alpha_k \le \bar\mu_k^*/\max(\bar x^k\bar s^k)$, so that $\alpha_k$ is well defined. Noting that for any $0 < t \le \bar\mu_k^*/\max(\bar x^k\bar s^k)$,
\[
\Bigl\|e - t\,\frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr\|_\infty
= \max_{1\le i\le n}\Bigl(1 - t\,\frac{\bar x_i^k\bar s_i^k}{\bar\mu_k^*}\Bigr)
= 1 - t\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*},
\]
and, in particular, setting $t = \alpha_k$, we have
\[
\Bigl\|e - \alpha_k\,\frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr\|_\infty = 1 - \alpha_k\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*}. \qquad (14)
\]

On the other hand, by Lemma 2.1, we have
\[
\|\bar w^k\| = \frac{\|\Delta\bar x^k\Delta\bar s^k\|}{\bar\mu_k^*}
\le \frac{\sqrt 2}{4\bar\mu_k^*}\Bigl\|(\bar x^k\bar s^k)^{-1/2}\Bigl(\bar x^k\bar s^k - \frac{(\bar x^k\bar s^k)^2}{\bar\mu_k^*}\Bigr)\Bigr\|_2^2
= \frac{\sqrt 2}{4\bar\mu_k^*}\Bigl\|(\bar x^k\bar s^k)^{1/2}\Bigl(e - \frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr)\Bigr\|_2^2
\le \frac{\sqrt 2}{4}\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k^*}\,\Bigl\|e - \frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr\|_2^2. \qquad (15)
\]
We also note that $\alpha_k$ satisfies the quadratic equation
\[
1 - t\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*} + \frac{\sqrt 2\,\bar\tau}{4}\,t^2\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k^*} = r.
\]
Therefore, by (14) and (15), and noting that $\|\bar x^k\bar s^k/\bar\mu_k^* - e\| \le \bar\tau$, we have
\[
\Bigl\|\frac{x^{k+1}s^{k+1}}{\bar\mu_k^*} - e\Bigr\|
= \Bigl\|\Bigl(e - \alpha_k\frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr)\Bigl(\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr) + \alpha_k^2\bar w^k\Bigr\|
\le \Bigl(1 - \alpha_k\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*}\Bigr)\Bigl\|\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr\| + \alpha_k^2\|\bar w^k\|
\]
\[
\le \Bigl[1 - \alpha_k\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k^*} + \alpha_k^2\,\frac{\sqrt 2\,\bar\tau}{4}\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k^*}\Bigr]\,\Bigl\|\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr\|
\le r\,\bar\tau \le \tau.
\]
Since $\tau < 1$, the inequality above also implies that $(x^{k+1}, s^{k+1}) > 0$. Since $\mu_{k+1}^*$ is the global minimizer of the function $\delta_2(x^{k+1},s^{k+1},\mu)$ with respect to $\mu > 0$, we conclude that
\[
\delta_2(x^{k+1},s^{k+1},\mu_{k+1}^*) = \Bigl\|\frac{x^{k+1}s^{k+1}}{\mu_{k+1}^*} - e\Bigr\| \le \Bigl\|\frac{x^{k+1}s^{k+1}}{\bar\mu_k^*} - e\Bigr\| \le \tau.
\]
Result (i) is proved.

Note that $\bar\mu^k_{\mathrm{gap}} = (1-\bar\theta_k)\mu^k_{\mathrm{gap}}$ and
\[
(x^{k+1})^Ts^{k+1}
= (\bar x^k + \alpha_k\Delta\bar x^k)^T(\bar s^k + \alpha_k\Delta\bar s^k)
= (\bar x^k)^T\bar s^k + \alpha_k\Bigl((\bar x^k)^T\bar s^k - \frac{\|\bar x^k\bar s^k\|_2^2}{\bar\mu_k^*}\Bigr)
= (\bar x^k)^T\bar s^k.
\]
It follows that $\mu^{k+1}_{\mathrm{gap}} = \bar\mu^k_{\mathrm{gap}} = (1-\bar\theta_k)\mu^k_{\mathrm{gap}}$. Result (ii) follows.

We now prove (iii). Since $\delta_2(x^k,s^k,\mu_k^*) \le \tau$, it follows that $\min(p) \ge 1-\tau$. We also note that $\|w\|_\infty \le n/4$. From Lemma 2.3, we conclude that $\bar\eta_k \ge \rho/n$, where
\[
\rho = \min\left\{4\beta(1-\tau),\ \frac{2(1-\beta)^2(1-\tau)^2(\bar\tau^2-\tau^2)}{1+\beta(1+\tau)}\right\}.
\]
Since the function $\xi(t) := 2t/(t+\sqrt{t^2+4t})$ is increasing on $(0,\infty)$, we obtain
\[
\bar\theta_k = \frac{2\bar\eta_k}{\bar\eta_k + \sqrt{\bar\eta_k^2 + 4\bar\eta_k}}
\ge \frac{2(\rho/n)}{(\rho/n) + \sqrt{(\rho/n)^2 + 4\rho/n}}
\ge \frac{\sigma}{\sqrt n},
\]
where $0 < \sigma = 2/(1+\sqrt{1+4/\rho}) < 1$. Therefore, the iteration complexity of the algorithm is $O(\sqrt n\,\log((x^0)^Ts^0/\varepsilon))$.

3. Algorithms based on $\infty$-norm neighborhoods

Since the original Mizuno-Todd-Ye method works in small neighborhoods of the central path, many investigators have tried to adapt it to wider neighborhoods, such as the $\infty$-norm neighborhood, in order to achieve faster convergence; see, for example, [5, 11, 14, 17]. In this section, we consider the predictor-corrector algorithm working in $\infty$-norm neighborhoods. We discuss how to further enlarge the $\infty$-norm neighborhood by using the global minimizer of the $\infty$-norm proximity measure function, and we prove that the proposed algorithm retains the best known iteration complexity for $\infty$-norm-neighborhood interior-point algorithms. First, we have the following result.

Lemma 3.1. For any given $(x,s) > 0$, the unique global minimizer of the function $\delta_\infty(x,s,\mu) = \|xs/\mu - e\|_\infty$ with respect to $\mu$ is given by
\[
\mu_\infty^* = \frac{\max(xs) + \min(xs)}{2},
\]
and the least value of $\delta_\infty(x,s,\cdot)$ is given by
\[
\delta_\infty(x,s,\mu_\infty^*) = \frac{\max(xs) - \min(xs)}{\max(xs) + \min(xs)}.
\]

Proof. For any given vector $(x,s) > 0$ and scalar $t > 0$, it is easy to verify that
\[
\delta_\infty(x,s,t) = \max_{1\le i\le n}\Bigl|\frac{x_is_i}{t} - 1\Bigr|
= \max\Bigl\{\frac{\max(xs)}{t} - 1,\ 1 - \frac{\min(xs)}{t}\Bigr\}.
\]
When $0 < t \le (\max(xs)+\min(xs))/2$, it follows that $\delta_\infty(x,s,t) = \max(xs)/t - 1$, which is decreasing on $(0, (\max(xs)+\min(xs))/2]$. When $t \ge (\max(xs)+\min(xs))/2$, it follows that $\delta_\infty(x,s,t) = 1 - \min(xs)/t$, which is increasing on $[(\max(xs)+\min(xs))/2, \infty)$. Thus, for any given $(x,s) > 0$, the global minimizer of $\delta_\infty(x,s,\cdot)$ on $(0,\infty)$ is $t^* = (\max(xs)+\min(xs))/2$, at which the least value of the proximity measure function is
\[
\delta_\infty(x,s,t^*) = \frac{\max(xs)}{t^*} - 1 = 1 - \frac{\min(xs)}{t^*} = \frac{\max(xs)-\min(xs)}{\max(xs)+\min(xs)}.
\]
The proof is complete.

In this section, the notation is the same as in Section 2, except that $\mu_2^*$ is replaced by $\mu_\infty^*$. For instance, $\theta^*$ denotes the steplength in predictor steps, and $(\Delta x,\Delta s)$ is the search direction in the predictor step. The steplength $\theta^*$ is given by
\[
\theta^* = \max\{t \in (0,1] : (x(\theta),s(\theta)) > 0,\ \delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) \le \tau \ \ \forall\,\theta\in(0,t]\}, \qquad (16)
\]
where $x(\theta) = x + \theta\Delta x$, $s(\theta) = s + \theta\Delta s$, and
\[
\mu_\infty^*(\theta) = \frac{\max(x(\theta)s(\theta)) + \min(x(\theta)s(\theta))}{2}.
\]
Denote
\[
\theta_{\max} = \max\{t \in (0,1] : (x(\theta),s(\theta)) > 0 \ \ \forall\,\theta\in(0,t]\}. \qquad (17)
\]
Clearly, $\theta^* \le \theta_{\max}$. We now specify the algorithm as follows.

Algorithm 3.1. Given a constant $\tau$ such that $0 < \tau \le 1/2$, a constant $\bar\tau$ such that $\tau - (1-\tau)^2/n =: \bar\tau < \tau$, and an initial feasible point $(x^0,s^0) > 0$ such that $\delta_\infty(x^0,s^0,\mu_0^*) \le \bar\tau$, where $\mu_0^* = (\max(x^0s^0)+\min(x^0s^0))/2$. Set $k := 0$, and do the following steps.

Step 1 (Predictor). Solve system (2) with $(x,s) := (x^k,s^k)$ for the search direction $(\Delta x^k,\Delta s^k)$. Compute the damping parameter $\theta^*_k$ according to (16), and set
\[
(\bar x^k,\bar s^k) = (x^k + \theta^*_k\Delta x^k,\ s^k + \theta^*_k\Delta s^k), \qquad
\bar\mu_k^* = \frac{\max(\bar x^k\bar s^k) + \min(\bar x^k\bar s^k)}{2}.
\]

Step 2 (Corrector). Solve the following system for the corrector direction $(\Delta\bar x^k,\Delta\bar s^k)$:
\[
A\Delta\bar x^k = 0, \qquad A^T\Delta\bar y^k + \Delta\bar s^k = 0, \qquad \bar s^k\Delta\bar x^k + \bar x^k\Delta\bar s^k = \bar\mu_k^*e - \bar x^k\bar s^k.
\]
Set $(x^{k+1},s^{k+1}) = (\bar x^k + \alpha_k\Delta\bar x^k,\ \bar s^k + \alpha_k\Delta\bar s^k)$, where
\[
\alpha_k = \frac{2(\tau-\bar\tau)}{\tau\Bigl[1 + \sqrt{1 - n(\tau-\bar\tau)\,\bar\mu_k^*/\min(\bar x^k\bar s^k)}\Bigr]}.
\]

Set $k := k + 1$, and repeat the above steps until a stopping criterion, for instance $\mu^k_{\mathrm{gap}} \le \varepsilon$, is satisfied.
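For concreteness, the following Python sketch (ours, purely illustrative) runs one iteration of this $\infty$-norm variant. It reuses the `newton_step` helper from the sketch after system (2), and the corrector steplength is found by a simple search over the acceptance condition $\delta_\infty \le \bar\tau$ rather than by the closed-form formula above.

```python
import numpy as np  # assumes newton_step(A, x, s, rhs) from the earlier sketch is in scope

def mu_inf(x, s):
    v = x * s
    return (v.max() + v.min()) / 2.0

def delta_inf(x, s, mu):
    return np.linalg.norm(x * s / mu - 1.0, np.inf)

def pc_iteration_inf(A, x, s, tau=0.5, tau_bar=None, grid=2000):
    n = len(x)
    tau_bar = tau - (1.0 - tau) ** 2 / n if tau_bar is None else tau_bar
    # Predictor: affine-scaling direction; largest grid step respecting rule (16).
    dx, _, ds = newton_step(A, x, s, -x * s)
    theta = next(t for t in np.linspace(1.0, 0.0, grid, endpoint=False)
                 if np.all(x + t * dx > 0) and np.all(s + t * ds > 0)
                 and delta_inf(x + t * dx, s + t * ds,
                               mu_inf(x + t * dx, s + t * ds)) <= tau)
    xb, sb = x + theta * dx, s + theta * ds
    # Corrector: centering direction toward mu* e; smallest step with delta_inf back below tau_bar.
    mub = mu_inf(xb, sb)
    dxb, _, dsb = newton_step(A, xb, sb, mub - xb * sb)
    alpha = next(a for a in np.linspace(0.0, 1.0, grid)
                 if a > 0 and np.all(xb + a * dxb > 0) and np.all(sb + a * dsb > 0)
                 and delta_inf(xb + a * dxb, sb + a * dsb,
                               mu_inf(xb + a * dxb, sb + a * dsb)) <= tau_bar)
    return xb + alpha * dxb, sb + alpha * dsb
```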

Later, we will see that the steplength $\alpha_k$ is well defined. We now start to analyze the algorithm and prove that it has an $O(n\log((x^0)^Ts^0/\varepsilon))$ iteration complexity. For simplicity, we suppress the iteration index $k$ when no confusion arises.

Lemma 3.2. Let $(\Delta x,\Delta s)$ be the predictor search direction, let $\theta_{\max}$ be defined by (17), and let
\[
\hat\eta = \min_{\Delta x_i\Delta s_i < 0}\frac{x_is_i}{|\Delta x_i\Delta s_i|} = \min_{w_i<0}\frac{p_i}{|w_i|},
\]
where $p := xs/\mu_\infty^*$ and $w := \Delta x\Delta s/\mu_\infty^*$. If $\theta_{\max} < 1$, then $\hat\eta \le \eta_{\max} := \theta_{\max}^2/(1-\theta_{\max})$.

Proof. By the definition of $\theta_{\max}$, it follows that
\[
\theta_{\max} = \min\Bigl\{\min_{\Delta x_i<0}\Bigl(-\frac{x_i}{\Delta x_i}\Bigr),\ \min_{\Delta s_i<0}\Bigl(-\frac{s_i}{\Delta s_i}\Bigr)\Bigr\}.
\]
Thus, there exists some $i_0$ such that either $x_{i_0} + \theta_{\max}\Delta x_{i_0} = 0$ or $s_{i_0} + \theta_{\max}\Delta s_{i_0} = 0$. Hence,
\[
0 = (x_{i_0} + \theta_{\max}\Delta x_{i_0})(s_{i_0} + \theta_{\max}\Delta s_{i_0}) = (1-\theta_{\max})x_{i_0}s_{i_0} + \theta_{\max}^2\Delta x_{i_0}\Delta s_{i_0}. \qquad (18)
\]
Since $\theta_{\max} < 1$ and $x_{i_0}s_{i_0} > 0$, equality (18) implies that $\Delta x_{i_0}\Delta s_{i_0} < 0$. Dividing both sides of (18) by $1-\theta_{\max}$, we have $x_{i_0}s_{i_0} + \eta_{\max}\Delta x_{i_0}\Delta s_{i_0} = 0$, where $\eta_{\max} = \theta_{\max}^2/(1-\theta_{\max})$. By the definition of $\hat\eta$, we conclude that $\hat\eta \le \eta_{\max}$.

We now prove that $\eta^* = (\theta^*)^2/(1-\theta^*)$ has a lower bound.

Lemma 3.3. Let $\eta^* = (\theta^*)^2/(1-\theta^*)$, where $\theta^*$ is the steplength in the predictor step. Then
\[
\eta^* \ge \min\Bigl\{\frac{3.6(1-\bar\tau)}{n(1+\bar\tau)},\ \frac{4(\tau-\bar\tau)}{n(1+\bar\tau)}\Bigr\}.
\]

Proof. Note that the predictor step starts from a point $(x,s)$ satisfying $\delta_\infty(x,s,\mu_\infty^*) \le \bar\tau$. There are only two cases.

Case 1: $\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) - \delta_\infty(x,s,\mu_\infty^*) < \tau - \bar\tau$ for all $0 < \theta < \theta_{\max}$. In this case, by the definition of $\theta^*$, we have $\theta^* = \theta_{\max}$. There are two sub-cases.

Sub-case 1: $\theta_{\max} = 1$. In this case $\eta^* = (\theta^*)^2/(1-\theta^*) \to \infty$ as $\theta^* \to \theta_{\max} = 1$. Thus
\[
\eta^* \ge 0.9\,\frac{\min(p)}{\|w\|_\infty} \qquad (19)
\]
holds trivially.

Sub-case 2: $\theta_{\max} < 1$. Since $\theta^* = \theta_{\max} < 1$, it follows that $\eta^* = \eta_{\max} := \theta_{\max}^2/(1-\theta_{\max})$. By Lemma 3.2, $\eta_{\max} \ge \hat\eta \ge \min(p)/\|w\|_\infty \ge 0.9\min(p)/\|w\|_\infty$. Therefore, for this sub-case, inequality (19) remains valid.

Case 2: There exists a point $\theta \in (0,\theta_{\max})$ such that $\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) - \delta_\infty(x,s,\mu_\infty^*) \ge \tau - \bar\tau > 0$. Since this difference is zero at $\theta = 0$, by continuity there must exist a point $\tilde\theta \in (0,\theta_{\max})$ such that
\[
\delta_\infty(x(\tilde\theta),s(\tilde\theta),\mu_\infty^*(\tilde\theta)) - \delta_\infty(x,s,\mu_\infty^*) = \tau - \bar\tau.
\]
Let $\tilde\theta$ be the smallest such solution and let $\tilde\eta = \tilde\theta^2/(1-\tilde\theta)$. We now prove that $\tilde\eta$ is bounded from below. Since either $\tilde\eta \ge 0.9\min(p)/\|w\|_\infty$ or $\tilde\eta \le 0.9\min(p)/\|w\|_\infty$, it suffices to bound $\tilde\eta$ from below when $\tilde\eta \le 0.9\min(p)/\|w\|_\infty$. We first note that
\[
\max(xs + \eta\Delta x\Delta s) \le \max(xs) + \eta\|\Delta x\Delta s\|_\infty, \qquad (20)
\]
and that for all $\eta \le 0.9\min(p)/\|w\|_\infty$,
\[
\min(xs + \eta\Delta x\Delta s) \ge \min(xs) - \eta\|\Delta x\Delta s\|_\infty > 0. \qquad (21)
\]
It is easy to see that for any given number $c > 0$, the function $\varphi(t) = (t-c)/(t+c)$ is increasing in $t$, and the function $\chi(t) = (c-t)/(c+t)$ is decreasing in $t$. Thus, by (20), (21) and Lemma 3.1, for any $\eta \le 0.9\min(p)/\|w\|_\infty$ we have
\[
\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta))
= \frac{\max(x(\theta)s(\theta)) - \min(x(\theta)s(\theta))}{\max(x(\theta)s(\theta)) + \min(x(\theta)s(\theta))}
= \frac{\max(xs+\eta\Delta x\Delta s) - \min(xs+\eta\Delta x\Delta s)}{\max(xs+\eta\Delta x\Delta s) + \min(xs+\eta\Delta x\Delta s)}
\]

\[
\le \frac{\max(xs)+\eta\|\Delta x\Delta s\|_\infty - \min(xs+\eta\Delta x\Delta s)}{\max(xs)+\eta\|\Delta x\Delta s\|_\infty + \min(xs+\eta\Delta x\Delta s)}
\le \frac{\max(xs) - \min(xs) + 2\eta\|\Delta x\Delta s\|_\infty}{\max(xs) + \min(xs)}
= \delta_\infty(x,s,\mu_\infty^*) + \eta\|w\|_\infty, \qquad (22)
\]
where $w = \Delta x\Delta s/\mu_\infty^*$. The first inequality above follows from (20) and the increasing property of $\varphi(t)$, and the second from (21) and the decreasing property of $\chi(t)$. The last equality follows from the fact $\max(xs)+\min(xs) = 2\mu_\infty^*$. Since $\delta_\infty(x,s,\mu_\infty^*) \le \bar\tau$, we have $1-\bar\tau \le p_i = x_is_i/\mu_\infty^* \le 1+\bar\tau$. Thus, by Lemma 2.1,
\[
\|w\|_\infty \le \frac{1}{4\mu_\infty^*}\|(xs)^{-1/2}\,xs\|_2^2 = \frac{x^Ts}{4\mu_\infty^*} \le \frac{n(1+\bar\tau)}{4}. \qquad (23)
\]
Note that (22) holds for any $\eta \le 0.9\min(p)/\|w\|_\infty$. Thus, if $\tilde\eta \le 0.9\min(p)/\|w\|_\infty$, it follows from (22) and (23) that
\[
\tau - \bar\tau = \delta_\infty(x(\tilde\theta),s(\tilde\theta),\mu_\infty^*(\tilde\theta)) - \delta_\infty(x,s,\mu_\infty^*) \le \tilde\eta\,\|w\|_\infty \le \tilde\eta\,\frac{n(1+\bar\tau)}{4},
\]
i.e., $\tilde\eta \ge 4(\tau-\bar\tau)/(n(1+\bar\tau))$. Therefore, we have
\[
\tilde\eta \ge \min\Bigl\{\frac{0.9\min(p)}{\|w\|_\infty},\ \frac{4(\tau-\bar\tau)}{n(1+\bar\tau)}\Bigr\}.
\]
Since $\tilde\eta$ is the smallest such solution, for any $\theta \in [0,\tilde\theta]$ we have $\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) \le (\tau-\bar\tau) + \delta_\infty(x,s,\mu_\infty^*) \le \tau$, which also implies that $(x(\theta),s(\theta)) > 0$. By the definition of $\theta^*$, we conclude that $\theta^* \ge \tilde\theta$, which in turn implies $\eta^* \ge \tilde\eta$. Therefore, both Case 1 and Case 2 imply that
\[
\eta^* \ge \min\Bigl\{\frac{0.9\min(p)}{\|w\|_\infty},\ \frac{4(\tau-\bar\tau)}{n(1+\bar\tau)}\Bigr\}
\ge \min\Bigl\{\frac{3.6(1-\bar\tau)}{n(1+\bar\tau)},\ \frac{4(\tau-\bar\tau)}{n(1+\bar\tau)}\Bigr\}.
\]
The last inequality follows from (23) and $1-\bar\tau \le \min(p) \le 1+\bar\tau$.

In what follows, we only consider the case $n \ge 2$.

Lemma 3.4. Suppose that $n \ge 2$. There exists a constant $\gamma \in (0, 0.5]$, independent of $n$, such that
\[
(1-\theta^*)\Bigl(1 - \alpha + \alpha\,\frac{\bar\mu_\infty^*}{\bar\mu_{\mathrm{gap}}}\Bigr) \le 1 - \gamma\theta^*,
\]
where $\theta^*$ and $\alpha$ are the steplengths in the predictor and corrector steps of Algorithm 3.1, respectively.

Proof. By the choice of $\tau$ and $\bar\tau$, we have $\tau - \bar\tau = (1-\tau)^2/n < (1-\tau)/n \le \min(\bar x\bar s)/(n\bar\mu_\infty^*)$, i.e., $n(\tau-\bar\tau)\bar\mu_\infty^* \le \min(\bar x\bar s)$. This implies that the steplength $\alpha$ used in the corrector step is well defined. First, we note that
\[
\alpha = \frac{2(\tau-\bar\tau)}{\tau\Bigl[1 + \sqrt{1 - n(\tau-\bar\tau)\bar\mu_\infty^*/\min(\bar x\bar s)}\Bigr]} \le \frac{2(\tau-\bar\tau)}{\tau}.
\]
On the other hand, since $n \ge 2$, we have $\tau - \bar\tau = (1-\tau)^2/n \le (1-\tau)/2 \le 0.9(1-\bar\tau)$, so the minimum in Lemma 3.3 is attained by its second term and hence
\[
\eta^* \ge \frac{4(\tau-\bar\tau)}{n(1+\bar\tau)}.
\]
Since $0 < \tau \le 1/2$, we have $\tau/(1-\tau) \le 1$, and combining the two bounds above with (13), a direct calculation shows that there exists a constant $\gamma \in (0, 0.5]$, independent of $n$, such that
\[
(1-\theta^*)\,\frac{\alpha\tau}{1-\tau} \le (1-\gamma)\theta^*.
\]
Adding $1-\theta^*$ to both sides of this inequality yields
\[
(1-\theta^*)\Bigl(1 + \frac{\alpha\tau}{1-\tau}\Bigr) \le 1 - \gamma\theta^*.
\]

Since $\|\bar x\bar s/\bar\mu_\infty^* - e\|_\infty \le \tau$, which implies $(1-\tau)\bar\mu_\infty^* \le \bar x_i\bar s_i \le (1+\tau)\bar\mu_\infty^*$, we have
\[
\frac{\bar\mu_{\mathrm{gap}}}{\bar\mu_\infty^*} = \frac{\bar x^T\bar s}{n\bar\mu_\infty^*} \ge \frac{\min(\bar x\bar s)}{\bar\mu_\infty^*} \ge 1-\tau,
\]
i.e., $\bar\mu_\infty^*/\bar\mu_{\mathrm{gap}} \le 1/(1-\tau)$. Therefore, we have
\[
(1-\theta^*)\Bigl(1 - \alpha + \alpha\,\frac{\bar\mu_\infty^*}{\bar\mu_{\mathrm{gap}}}\Bigr)
\le (1-\theta^*)\Bigl(1 + \frac{\alpha\tau}{1-\tau}\Bigr) \le 1 - \gamma\theta^*.
\]
The proof is complete.

We now prove the main result of this section.

Theorem 3.1. Suppose that $n \ge 2$. Let $\tau$ and $\bar\tau$ be given as in Algorithm 3.1, and let $(x^k,s^k,\mu_k^*)$ and $(\bar x^k,\bar s^k,\bar\mu_k^*)$ be generated by Algorithm 3.1. Then the following properties hold.
(i) $\delta_\infty(x^k,s^k,\mu_k^*) \le \bar\tau$ and $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k^*) \le \tau$ for all $k$.
(ii) $\mu^{k+1}_{\mathrm{gap}} \le (1-\gamma\theta^*_k)\,\mu^k_{\mathrm{gap}}$ for all $k$, where $\gamma$ is given as in Lemma 3.4.
(iii) There exists a constant $\sigma \in (0,1)$, independent of $n$, such that $\theta^*_k \ge \sigma/n$, and hence the algorithm has an $O(n\log((x^0)^Ts^0/\varepsilon))$ iteration complexity.

Proof. Item (i) can be proved by induction. It is sufficient to show that if $\delta_\infty(x^k,s^k,\mu_k^*) \le \bar\tau$, then $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k^*) \le \tau$ and $\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}^*) \le \bar\tau$. Assume that $\delta_\infty(x^k,s^k,\mu_k^*) \le \bar\tau$. The fact $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k^*) \le \tau$ follows immediately from the choice of the steplength $\theta^*_k$. Since $\tau < 1$, this also implies that $(\bar x^k,\bar s^k)$ remains positive. We now prove that the next iterate $(x^{k+1},s^{k+1})$ is back inside the original neighborhood, i.e., $\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}^*) \le \bar\tau$. We first note that, by Lemma 2.1,
\[
\frac{\|\Delta\bar x^k\Delta\bar s^k\|_\infty}{\bar\mu_k^*}
\le \frac{1}{4\bar\mu_k^*}\bigl\|(\bar x^k\bar s^k)^{-1/2}(\bar\mu_k^*e - \bar x^k\bar s^k)\bigr\|_2^2
\le \frac{n\,\bar\mu_k^*}{4\min(\bar x^k\bar s^k)}\Bigl\|e - \frac{\bar x^k\bar s^k}{\bar\mu_k^*}\Bigr\|_\infty^2.
\]
Therefore,
\[
\Bigl\|\frac{x^{k+1}s^{k+1}}{\bar\mu_k^*} - e\Bigr\|_\infty
= \Bigl\|(1-\alpha_k)\Bigl(\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr) + \alpha_k^2\,\frac{\Delta\bar x^k\Delta\bar s^k}{\bar\mu_k^*}\Bigr\|_\infty
\]

\[
\le (1-\alpha_k)\Bigl\|\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr\|_\infty + \alpha_k^2\,\frac{\|\Delta\bar x^k\Delta\bar s^k\|_\infty}{\bar\mu_k^*}
\le \Bigl[1 - \alpha_k + \alpha_k^2\,\frac{n\,\bar\mu_k^*}{4\min(\bar x^k\bar s^k)}\,\Bigl\|\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr\|_\infty\Bigr]\Bigl\|\frac{\bar x^k\bar s^k}{\bar\mu_k^*} - e\Bigr\|_\infty
\le \Bigl[1 - \alpha_k + \alpha_k^2\,\frac{n\,\tau\,\bar\mu_k^*}{4\min(\bar x^k\bar s^k)}\Bigr]\,\tau. \qquad (24)
\]
From the beginning of the proof of Lemma 3.4, we have $n(\tau-\bar\tau)\bar\mu_k^*/\min(\bar x^k\bar s^k) \le 1$. Thus, the following quadratic equation in $t$ has at least one solution:
\[
1 - t + t^2\,\frac{n\,\tau\,\bar\mu_k^*}{4\min(\bar x^k\bar s^k)} = \frac{\bar\tau}{\tau}. \qquad (25)
\]
It is easy to see that
\[
\alpha_k = \frac{2(\tau-\bar\tau)}{\tau\Bigl[1 + \sqrt{1 - n(\tau-\bar\tau)\bar\mu_k^*/\min(\bar x^k\bar s^k)}\Bigr]}
\]
is the least solution of equation (25). Thus, it follows from (24) that
\[
\Bigl\|\frac{x^{k+1}s^{k+1}}{\bar\mu_k^*} - e\Bigr\|_\infty \le \frac{\bar\tau}{\tau}\,\tau = \bar\tau.
\]
This also implies that $(x^{k+1},s^{k+1}) > 0$. Since $\mu_{k+1}^* = (\max(x^{k+1}s^{k+1})+\min(x^{k+1}s^{k+1}))/2$ is the global minimizer of the proximity measure function $\delta_\infty(x^{k+1},s^{k+1},\cdot)$, we conclude that
\[
\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}^*) = \Bigl\|\frac{x^{k+1}s^{k+1}}{\mu_{k+1}^*} - e\Bigr\|_\infty \le \bar\tau,
\]
as desired. We now prove (ii). Notice that $(\Delta\bar x^k)^T\Delta\bar s^k = 0$. Using the fact that $\bar\mu^k_{\mathrm{gap}} = (1-\theta^*_k)\mu^k_{\mathrm{gap}}$ and Lemma 3.4, we have
\[
\mu^{k+1}_{\mathrm{gap}} = \frac{(x^{k+1})^Ts^{k+1}}{n}
= \frac{(\bar x^k)^T\bar s^k + \alpha_k\,e^T(\bar\mu_k^*e - \bar x^k\bar s^k)}{n}
= (1-\alpha_k)\bar\mu^k_{\mathrm{gap}} + \alpha_k\bar\mu_k^*
\]

\[
= \mu^k_{\mathrm{gap}}\,(1-\theta^*_k)\Bigl(1 - \alpha_k + \alpha_k\,\frac{\bar\mu_k^*}{\bar\mu^k_{\mathrm{gap}}}\Bigr)
\le (1-\gamma\theta^*_k)\,\mu^k_{\mathrm{gap}}.
\]
Finally, we prove that $\theta^*_k \ge \sigma/n$, where $\sigma \in (0,1)$ is a constant independent of $n$. Indeed, in the proof of Lemma 3.4 we pointed out that $\eta^*_k \ge 4(\tau-\bar\tau)/(n(1+\bar\tau))$. Noticing that $\tau-\bar\tau = (1-\tau)^2/n$, we conclude that $\eta^*_k \ge \nu/n^2$ for some constant $\nu$ independent of $n$. Observe that the function $\xi(t) := 2t/(t+\sqrt{t^2+4t})$ is increasing on $(0,\infty)$. Thus, we obtain
\[
\theta^*_k = \frac{2\eta^*_k}{\eta^*_k + \sqrt{(\eta^*_k)^2 + 4\eta^*_k}}
\ge \frac{2(\nu/n^2)}{(\nu/n^2) + \sqrt{(\nu/n^2)^2 + 4\nu/n^2}}
\ge \frac{\sigma}{n},
\]
where $0 < \sigma = 2\sqrt\nu/(\sqrt\nu + \sqrt{\nu+4}) < 1$. Therefore, the algorithm has an $O(n\log((x^0)^Ts^0/\varepsilon))$ iteration complexity.

Before closing this section, we point out that both Algorithms 2.1 and 3.1 are quadratically convergent in the sense that the duality gap sequences generated by these algorithms converge to zero quadratically. Indeed, by a proof similar to that of Theorem 4.1 in [11], we can easily obtain the following result.

Theorem 3.2. Let $(x^k,s^k)$ be generated by Algorithm 2.1 or Algorithm 3.1. Then the algorithm is quadratically convergent in the sense that $\mu^{k+1}_{\mathrm{gap}} = O((\mu^k_{\mathrm{gap}})^2)$. Moreover, every accumulation point of the sequence $(x^k,s^k)$ is a strictly complementary solution of the problem.

4. Conclusions and future work

Most numerical experiments demonstrate that interior-point algorithms working in wider neighborhoods perform better than their counterparts using smaller neighborhoods. Inspired by this fact, we have presented in this paper a unified method to enlarge the neighborhoods of interior-point algorithms. As an example, we considered the so-called predictor-corrector methods and showed how to use the least value of a proximity measure function to enlarge the neighborhoods of the original methods. We also proved that the proposed algorithms retain the best known iteration complexity and the local superlinear convergence of the original algorithms. Our method can be viewed as a new design for interior-point methods.

It is worth mentioning that the following proximity function is also widely used in interior-point algorithms:
\[
\delta^-(x,s,\mu) = \Bigl\|\Bigl(\frac{xs}{\mu} - e\Bigr)^-\Bigr\|,
\]
where the operation $(\cdot)^-$ acts componentwise, i.e., for any vector $y \in R^n$, the $i$th component of $y^-$ is $\min(0, y_i)$, $i = 1,\dots,n$. In $R^n_{++}$, we note that $\delta^-(x,s,\mu) = 0$ for all $\mu \in (0, \min(xs)]$, and $\delta^-(x,s,\mu) > 0$ for all $\mu > \min(xs)$. This implies that the global minimum point of $\delta^-(x,s,\mu)$ with respect to $\mu$ is not unique. In fact, any $\mu \in (0,\min(xs)]$ is a global minimum point, and the least value of the proximity function is zero. Therefore, if we take $\mu^*$ to be one of these minimum points, the neighborhood is enlarged to the whole positive orthant $R^n_{++}$. In this case, we may say that the interior-point algorithm does not need a particular neighborhood of the central path, and thus the algorithm does not require any proximity measure function. The analysis of such an interior-point algorithm is a worthwhile and interesting topic for future work.
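As a quick numerical check of this observation (a sketch of ours, not part of the paper), the one-sided measure $\delta^-$ indeed vanishes for every $\mu \le \min(xs)$ and becomes positive as soon as $\mu$ exceeds $\min(xs)$:

```python
import numpy as np

# Sketch: the one-sided proximity measure delta^-(x, s, mu) = ||min(0, xs/mu - e)||.
def delta_minus(x, s, mu):
    return np.linalg.norm(np.minimum(x * s / mu - 1.0, 0.0))

x = np.array([1.0, 2.0, 0.5])
s = np.array([0.9, 0.4, 2.2])
v = x * s
print(delta_minus(x, s, v.min()))        # 0.0: any mu in (0, min(xs)] is a global minimizer
print(delta_minus(x, s, v.min() * 1.1))  # > 0: the measure activates once mu > min(xs)
```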

References

[1] K.M. Anstreicher and R.A. Bosch, A new infinity-norm path following algorithm for linear programming, SIAM J. Optim., 5 (1995), pp. 236-246.

[2] E.R. Barnes, S. Chopra and D.J. Jensen, The affine scaling method with centering, Technical Report, Department of Mathematical Sciences, IBM T.J. Watson Research Center, Yorktown Heights, NY, 1988.

[3] D.A. Bayer and J.C. Lagarias, Karmarkar's linear programming algorithm and Newton's method, Math. Programming, 50 (1991), pp. 291-330.

[4] T.J. Carpenter, I.J. Lustig, J.M. Mulvey and D.F. Shanno, Higher-order predictor-corrector interior point methods with application to quadratic objectives, SIAM J. Optim., 3 (1993), pp. 696-725.

[5] C.C. Gonzaga, Complexity of predictor-corrector algorithms for LCP based on a large neighborhood of the central path, SIAM J. Optim., 10 (2000), pp. 183-194.

[6] P.F. Huang and Y. Ye, An asymptotical O(√nL)-iteration path-following linear programming algorithm that uses wide neighborhoods, SIAM J. Optim., 6 (1996), pp. 570-586.

[7] J. Ji, F.A. Potra and R. Sheng, On the local convergence of a predictor-corrector method for semidefinite programming, SIAM J. Optim., 10 (2000), pp. 195-210.

[8] M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science 538, Springer-Verlag, Berlin, 1991.

[9] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM J. Optim., 2 (1992), pp. 575-601.

[10] S. Mizuno, M.J. Todd and Y. Ye, On adaptive-step primal-dual interior-point algorithms for linear programming, Math. Oper. Res., 18 (1993), pp. 964-981.

[11] J. Peng, T. Terlaky and Y.B. Zhao, A self-regularity based predictor-corrector algorithm for linear optimization, Technical Report, Department of Computing and Software, McMaster University, Canada, 2002.

[12] F.A. Potra, A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points, Math. Programming, 67 (1994), pp. 383-406.

[13] F.A. Potra, An infeasible-interior-point predictor-corrector algorithm for linear programming, SIAM J. Optim., 6 (1996), pp. 19-32.

[14] F.A. Potra, The Mizuno-Todd-Ye algorithm in a larger neighborhood of the central path, European J. Oper. Res., 143 (2002), pp. 257-267.

[15] F.A. Potra, A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(√nL) iteration complexity, Math. Program., 100 (2004), pp. 317-337.

[16] C. Roos, T. Terlaky and J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, Chichester, 1997.

[17] J. Stoer and M. Wechs, The complexity of high-order predictor-corrector methods for solving sufficient linear complementarity problems, Optim. Methods Softw., 10 (1998), pp. 393-417.

[18] J. Sun and G. Zhao, Global linear and local quadratic convergence of a long-step adaptive-mode interior point method for some monotone variational inequality problems, SIAM J. Optim., 8 (1998), pp. 123-139.

[19] S.J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, PA, 1997.

[20] Y. Ye, Interior Point Algorithms: Theory and Analysis, John Wiley & Sons, New York, NY, 1997.

[21] Y. Ye and K. Anstreicher, On quadratic and O(√nL) convergence of a predictor-corrector algorithm for LCP, Math. Programming, 62 (1993), pp. 537-551.

[22] G. Zhao, Interior-point algorithms for linear complementarity problems based on large neighborhoods of the central path, SIAM J. Optim., 8 (1998), pp. 397-413.

[23] Y.B. Zhao and J. Han, Two interior-point methods for nonlinear P*-complementarity problems, J. Optim. Theory Appl., 102 (1999), pp. 659-679.