An Infeasible Interior-Point Algorithm with Full-Newton Step for Linear Optimization

H. Mansouri, M. Zangiabadi, Y. Bai, C. Roos

Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord, Iran
Department of Mathematics, Shanghai University, Shanghai 200436, China; e-mail: yqbai@staff.shu.edu.cn
Department of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands; e-mail: [H.Mansouri, M.Zangiabadi, C.Roos]@tudelft.nl

Abstract. In this paper we present an infeasible interior-point algorithm for solving linear optimization problems. The algorithm is obtained by modifying the search direction in the algorithm of [8]. At some places the analysis of our algorithm is much simpler than that of the algorithm in [8]. The iteration bound of the algorithm matches the best known iteration bound $O(n \log \frac{1}{\varepsilon})$ for IIPMs.

Keywords: linear optimization, infeasible interior-point method, primal-dual method, polynomial complexity.

AMS Subject Classification: 90C05, 90C51

1 Introduction

Interior-point methods (IPMs) are now among the most effective methods for solving linear optimization (LO) problems. For a survey we refer to recent books on the subject [9, 11, 13]. One may distinguish between IPMs according to whether they are feasible IPMs or infeasible IPMs (IIPMs). Feasible IPMs start with a strictly feasible interior point and maintain feasibility during the solution process. Finding an initial feasible interior point is not trivial. One way to overcome this difficulty is to use a homogeneous embedding model with artificial variables; such a homogeneous self-dual model was first presented by Ye et al. [14] for LO, and was further developed by Andersen, Ye, and others in [1, 9, 12].

(The research of the third author is supported by the National Natural Science Foundation of China (No. 10117733) and the Shanghai Leading Academic Discipline Project (No. J50101).)

IIPMs start with an arbitrary positive point, and feasibility is reached as optimality is approached. The choice of the starting point in IIPMs is crucial for the performance. Lustig [3] and Tanabe [10] were the first to present IIPMs for LO. The first theoretical result on primal-dual IIPMs was obtained by Kojima, Megiddo and Mizuno [2]. They showed that an infeasible-interior-point variant of the primal-dual feasible IPM studied in [6] is globally convergent. The first polynomial-complexity result was obtained by Zhang [15], who proved that, with proper initialization, an IIPM has $O(n^2 \log \frac{1}{\varepsilon})$-iteration complexity. Shortly after that, Mizuno [5] proved that the Kojima-Megiddo-Mizuno algorithm also has $O(n^2 \log \frac{1}{\varepsilon})$-iteration complexity. Mizuno [5] and Potra [7] presented two primal-dual IIPMs with $O(n \log \frac{1}{\varepsilon})$-iteration complexity, which is the best known iteration bound for IIPMs. Roos [8] presented the first primal-dual IIPM that uses full Newton steps for solving the LO problem. He also proved that the complexity of his algorithm coincides with the best known iteration bound for IIPMs.

In this paper we consider primal-dual LO problems in the following standard form:

\[
(P) \qquad \min \{\, c^T x : Ax = b,\ x \ge 0 \,\},
\]

and the dual problem, given by

\[
(D) \qquad \max \{\, b^T y : A^T y + s = c,\ s \ge 0 \,\},
\]

where $A \in \mathbb{R}^{m \times n}$, $b, y \in \mathbb{R}^m$, $c, x, s \in \mathbb{R}^n$, and, w.l.o.g., $\operatorname{rank}(A) = m$. The vectors $x$, $y$ and $s$ are the vectors of variables.

As usual for IIPMs, we assume that the initial iterates $(x^0, y^0, s^0)$ are given by

\[
x^0 = s^0 = \zeta e, \qquad y^0 = 0, \qquad \mu^0 = \zeta^2, \tag{1}
\]

where $\mu^0$ is the initial barrier parameter and $\zeta > 0$ is such that

\[
\| x^* + s^* \|_\infty \le \zeta \tag{2}
\]

for some optimal solution $(x^*, y^*, s^*)$ of $(P)$ and $(D)$. In the rest of this paper we use the notations $r_b^0$ and $r_c^0$, defined as in [4, 8], for the initial residual vectors:

\[
r_b^0 = b - A x^0 = b - \zeta A e, \tag{3}
\]
\[
r_c^0 = c - A^T y^0 - s^0 = c - \zeta e. \tag{4}
\]

Using $(x^0)^T s^0 = n \zeta^2$, the total number of iterations in the algorithm of [8] is bounded above by

\[
4 n \log \frac{\max\{ n\zeta^2, \|r_b^0\|, \|r_c^0\| \}}{\varepsilon}. \tag{5}
\]

Up to a constant factor, the iteration bound (5) was first obtained by Mizuno [5], and it is still the best known iteration bound for IIPMs.

To describe the motivation and the contribution of this paper we need to recall the main ideas underlying the algorithm in [8].
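To make the initialization concrete, the following NumPy fragment (our illustration, not part of [8]; all function names are ours) builds the initial triple (1) and the residual vectors (3)-(4). The later fragments in this paper reuse this import and these names.

    import numpy as np

    def initial_point(A, b, c, zeta):
        # Initial iterates (1): x^0 = s^0 = zeta*e, y^0 = 0, mu^0 = zeta^2.
        m, n = A.shape
        x0 = zeta * np.ones(n)
        y0 = np.zeros(m)
        s0 = zeta * np.ones(n)
        mu0 = zeta ** 2
        rb0 = b - A @ x0           # r_b^0 = b - zeta*A*e, cf. (3)
        rc0 = c - A.T @ y0 - s0    # r_c^0 = c - zeta*e,   cf. (4)
        return x0, y0, s0, mu0, rb0, rc0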

For any $\nu$ with $0 < \nu \le 1$ we consider the perturbed problem $(P_\nu)$, defined by

\[
(P_\nu) \qquad \min \{\, (c - \nu r_c^0)^T x : Ax = b - \nu r_b^0,\ x \ge 0 \,\},
\]

and its dual problem $(D_\nu)$, given by

\[
(D_\nu) \qquad \max \{\, (b - \nu r_b^0)^T y : A^T y + s = c - \nu r_c^0,\ s \ge 0 \,\}.
\]

Note that if $\nu = 1$ then $x = x^0$ yields a strictly feasible solution of $(P_\nu)$, and $(y, s) = (y^0, s^0)$ a strictly feasible solution of $(D_\nu)$. Due to the choice of the initial iterates we may conclude that if $\nu = 1$ then $(P_\nu)$ and $(D_\nu)$ each have a strictly feasible solution, which means that both perturbed problems then satisfy the well-known interior-point condition (IPC). More generally, one has the following lemma (see also [8, Lemma 3.1]).

Lemma 1.1 (Theorem 5.13 in [13]) The perturbed problems $(P_\nu)$ and $(D_\nu)$ satisfy the IPC for each $\nu \in (0, 1]$ if and only if the original problems $(P)$ and $(D)$ are feasible.

We assume that the problems $(P)$ and $(D)$ are feasible. By this assumption, Lemma 1.1 implies that the perturbed problem pair $(P_\nu)$ and $(D_\nu)$ satisfies the IPC for each $\nu \in (0, 1]$. This guarantees that the system

\[
b - Ax = \nu r_b^0, \qquad x \ge 0, \tag{6}
\]
\[
c - A^T y - s = \nu r_c^0, \qquad s \ge 0, \tag{7}
\]
\[
xs = \mu e \tag{8}
\]

has a unique solution for every $\mu > 0$. If $\nu \in (0, 1]$ and $\mu = \nu \zeta^2$, we denote this unique solution in the sequel as $(x(\nu), y(\nu), s(\nu))$. As a consequence, $x(\nu)$ is the $\mu$-center of $(P_\nu)$ and $(y(\nu), s(\nu))$ the $\mu$-center of $(D_\nu)$. With this notation we have, by taking $\nu = 1$,

\[
(x(1), y(1), s(1)) = (x^0, y^0, s^0) = (\zeta e, 0, \zeta e).
\]

As in [4, 8] we need to measure the proximity of the iterates $(x, y, s)$ to the $\mu$-centers of the perturbed problems $(P_\nu)$ and $(D_\nu)$. To this end we use the quantity $\delta(x, s; \mu)$, defined as follows:

\[
\delta(x, s; \mu) := \delta(v) := \tfrac{1}{2} \| v - v^{-1} \|, \qquad \text{where } v := \sqrt{\frac{xs}{\mu}}. \tag{9}
\]

Initially we have $x = s = \zeta e$ and $\mu = \zeta^2$, whence $\delta(x, s; \mu) = 0$. In the sequel we assume that at the start of each iteration $\delta(x, s; \mu)$ is smaller than or equal to a (small) threshold value $\tau > 0$; this is certainly true at the start of the first iteration.

Now we describe one iteration of our algorithm. Suppose that for some $\nu \in (0, 1]$ we have $x$, $y$ and $s$ satisfying the feasibility conditions (6) and (7), and such that

\[
x^T s = n\mu \quad \text{and} \quad \delta(x, s; \mu) \le \tau, \tag{10}
\]

where $\mu = \nu \zeta^2$. First we reduce $\nu$ to $\nu^+ = (1 - \theta)\nu$, with $\theta \in (0, 1)$, and find new iterates $x^f$, $y^f$ and $s^f$ that satisfy (6) and (7) with $\nu$ replaced by $\nu^+$. As we will see, by taking $\theta$ small enough this can be realized by one so-called feasibility step, to be described below. So, as a result of the feasibility step, we obtain iterates that are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$. Then we apply a limited number of centering steps with respect to the $\mu^+$-centers of $(P_{\nu^+})$ and $(D_{\nu^+})$. The centering steps keep the iterates feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$; their purpose is to obtain iterates $x^+$, $y^+$ and $s^+$ such that $(x^+)^T s^+ = n\mu^+$, where $\mu^+ = \nu^+ \zeta^2$, and $\delta(x^+, s^+; \mu^+) \le \tau$. This process is repeated until the duality gap and the norms of the residual vectors are less than some prescribed accuracy parameter $\varepsilon$.
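In code, the proximity measure (9) is a two-liner. The following illustrative helper returns both $\delta(x, s; \mu)$ and the vector $v$:

    def proximity(x, s, mu):
        # delta(x, s; mu) = 0.5 * ||v - v^{-1}|| with v = sqrt(xs/mu), cf. (9).
        v = np.sqrt(x * s / mu)
        return 0.5 * np.linalg.norm(v - 1.0 / v), v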

Before describing the search directions used in the feasibility step and the centering steps, we give a more formal description of the algorithm in Figure 1.

Primal-Dual Infeasible IPM

Input:
  accuracy parameter ε > 0;
  barrier update parameter θ, 0 < θ < 1;
  threshold parameter τ > 0;
  parameter ζ > 0.
begin
  x := ζe; y := 0; s := ζe; ν := 1;
  while max( xᵀs, ‖b − Ax‖, ‖c − Aᵀy − s‖ ) ≥ ε do
  begin
    feasibility step:
      (x, y, s) := (x, y, s) + (Δᶠx, Δᶠy, Δᶠs);
    µ-update:
      µ := (1 − θ)µ;
    centering steps:
      while δ(x, s; µ) > τ do
        (x, y, s) := (x, y, s) + (Δx, Δy, Δs)
      endwhile
  end
end

Figure 1: Algorithm

For the feasibility step in [8], the search directions $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$ are (uniquely) defined by the system

\[
A \Delta^f x = \theta \nu r_b^0, \tag{11}
\]
\[
A^T \Delta^f y + \Delta^f s = \theta \nu r_c^0, \tag{12}
\]
\[
s \Delta^f x + x \Delta^f s = \mu e - xs. \tag{13}
\]

It can easily be understood that if $(x, y, s)$ is feasible for the perturbed problems $(P_\nu)$ and $(D_\nu)$, then after the feasibility step the iterates satisfy the feasibility conditions for $(P_{\nu^+})$ and $(D_{\nu^+})$, provided that they satisfy the nonnegativity conditions. Assuming that $\delta(x, s; \mu) \le \tau$ holds before the step, and by taking $\theta$ small enough, it can be guaranteed that after the feasibility step the iterates $x^f$, $y^f$ and $s^f$ are nonnegative and, moreover, $\delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, where $\mu^+ = (1 - \theta)\mu$. So, after the $\mu$-update the iterates are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$, and $\mu$ is such that $\delta(x, s; \mu) \le 1/\sqrt{2}$.
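The algorithm of Figure 1 translates almost verbatim into the driver below, a sketch under our naming conventions (the two step routines are defined after the centering system below, and θ and τ are set to the values derived in Sections 2 and 3):

    def iipm(A, b, c, zeta, eps):
        n = A.shape[1]
        theta = 1.0 / (5.0 * np.sqrt(2.0) * n)   # cf. (48)
        tau = 1.0 / 8.0                          # cf. (46)
        x, y, s, mu, rb0, rc0 = initial_point(A, b, c, zeta)
        nu = 1.0
        while max(x @ s,
                  np.linalg.norm(b - A @ x),
                  np.linalg.norm(c - A.T @ y - s)) >= eps:
            # feasibility step, targeting the (1 - theta)*mu-center, cf. (18)
            x, y, s = feasibility_step(A, x, y, s, mu, theta, nu, rb0, rc0)
            nu *= 1.0 - theta
            mu *= 1.0 - theta                    # mu-update
            while proximity(x, s, mu)[0] > tau:  # centering steps
                x, y, s = centering_step(A, x, y, s, mu)
        return x, y, s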

In a centering step the search directions $\Delta x$, $\Delta y$ and $\Delta s$ are the usual primal-dual Newton directions, (uniquely) defined by

\[
A \Delta x = 0, \tag{14}
\]
\[
A^T \Delta y + \Delta s = 0, \tag{15}
\]
\[
s \Delta x + x \Delta s = \mu e - xs. \tag{16}
\]

Denoting the iterates after a centering step as $x^+$, $y^+$ and $s^+$, we recall from [9] the following result.

Lemma 1.2 If $\delta := \delta(x, s; \mu) \le 1$, then the primal-dual Newton step is feasible, i.e., $x^+$ and $s^+$ are nonnegative, and $(x^+)^T s^+ = n\mu$. Moreover, if $\delta \le 1/\sqrt{2}$, then $\delta(x^+, s^+; \mu) \le \delta^2$.

As discussed in [4, 8], by using centering steps we obtain iterates that satisfy $x^T s = n\mu$ and $\delta(x, s; \mu) \le \tau$, where $\tau$ is (much) smaller than $1/\sqrt{2}$. By using Lemma 1.2, the required number of centering steps can easily be obtained: after the $\mu$-update we have $\delta = \delta(x, s; \mu) \le 1/\sqrt{2}$, and hence after $k$ centering steps the iterates $(x, y, s)$ satisfy

\[
\delta(x, s; \mu) \le \left( \frac{1}{\sqrt{2}} \right)^{2^k}.
\]

This implies that at most

\[
\left\lceil \log_2 \left( \log_2 \frac{1}{\tau^2} \right) \right\rceil = \lceil \log_2 (\log_2 64) \rceil = 3 \tag{17}
\]

centering steps are needed (for the value $\tau = 1/8$ chosen in (46)).

In this paper we modify the feasibility step by replacing equation (13) by

\[
s \Delta^f x + x \Delta^f s = (1 - \theta)\mu e - xs. \tag{18}
\]

This modification makes the analysis new and much simpler than the analysis of the algorithm in [4, 8]. The iteration bound is as good as that in [4, 8], which is essentially the best known iteration bound for IIPMs.

To conclude this section, we briefly describe how the paper is organized. Section 2 is devoted to the analysis of the feasibility step, which is the main part of the paper; the analysis presented there differs from the analysis in [4, 8]. The final iteration bound is derived in Section 3. Some concluding remarks can be found in Section 4.

Some notations used throughout the paper are as follows: $\|\cdot\|$ denotes the 2-norm of a vector. For any $x = (x_1; x_2; \ldots; x_n) \in \mathbb{R}^n$, $x_{\min}$ denotes the smallest and $x_{\max}$ the largest value of the components of $x$. Furthermore, $e$ denotes the all-one vector of length $n$. We write $f(x) = O(g(x))$ if $|f(x)| \le \gamma |g(x)|$ for some positive constant $\gamma$.
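The centering system (14)-(16) and the modified feasibility system (11), (12), (18) share the same coefficient matrix, so one linear solver serves both steps. The following dense sketch (ours, not from [8]; it assumes A has full row rank and eliminates the steps via the normal equations with the scaling $D^2 = \operatorname{diag}(x/s)$) completes the driver above:

    def newton_solve(A, x, s, r1, r2, r3):
        # Solve  A dx = r1,  A^T dy + ds = r2,  s dx + x ds = r3
        # by eliminating ds and dx and solving (A D^2 A^T) dy = rhs.
        d2 = x / s                                  # D^2 = diag(x/s)
        M = (A * d2) @ A.T                          # A D^2 A^T
        dy = np.linalg.solve(M, r1 + A @ (d2 * r2 - r3 / s))
        dx = d2 * (A.T @ dy - r2) + r3 / s
        ds = (r3 - s * dx) / x
        return dx, dy, ds

    def feasibility_step(A, x, y, s, mu, theta, nu, rb0, rc0):
        # Feasibility step (11), (12) with the modified right-hand side (18).
        r3 = (1.0 - theta) * mu * np.ones_like(x) - x * s
        dx, dy, ds = newton_solve(A, x, s, theta * nu * rb0, theta * nu * rc0, r3)
        return x + dx, y + dy, s + ds

    def centering_step(A, x, y, s, mu):
        # Centering step (14)-(16): zero residual right-hand sides.
        r3 = mu * np.ones_like(x) - x * s
        dx, dy, ds = newton_solve(A, x, s, np.zeros(A.shape[0]),
                                  np.zeros_like(x), r3)
        return x + dx, y + dy, s + ds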

2 Analysis of the feasibility step

Let $x$, $y$ and $s$ denote the iterates at the start of an iteration, and assume $\delta(x, s; \mu) \le \tau$. Recall that in the first iteration we have $\delta(x, s; \mu) = 0$.

2.1 Effect of the feasibility step; choice of θ

As established in Section 1, the feasibility step generates new iterates $x^f$, $y^f$ and $s^f$ that are feasible for the new perturbed problem pair $(P_{\nu^+})$ and $(D_{\nu^+})$. A crucial element in the analysis is to show that after the feasibility step $\delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, i.e., that the new iterates lie within the region where the Newton process targeting the $\mu^+$-centers of $(P_{\nu^+})$ and $(D_{\nu^+})$ is quadratically convergent.

We define the scaled search directions

\[
d_x^f := \frac{v \Delta^f x}{x}, \qquad d_s^f := \frac{v \Delta^f s}{s}, \tag{19}
\]

with $v$ as defined in (9). Using (18) and $xs = \mu v^2$, we may write

\[
x^f s^f = xs + \left( s \Delta^f x + x \Delta^f s \right) + \Delta^f x\, \Delta^f s = \mu^+ e + \Delta^f x\, \Delta^f s = \mu \left( (1 - \theta)e + d_x^f d_s^f \right). \tag{20}
\]

Lemma 2.1 The new iterates are certainly strictly feasible if $(1 - \theta)e + d_x^f d_s^f > 0$.

Proof: Note that if $x^f$ and $s^f$ are positive, then (20) makes clear that $(1 - \theta)e + d_x^f d_s^f > 0$. In the same way as in Lemma 4.1 in [8], the converse can be proved. Thus $x^f$ and $s^f$ are positive if and only if $(1 - \theta)e + d_x^f d_s^f > 0$, and the lemma follows. □

Corollary 2.2 The iterates $(x^f, y^f, s^f)$ are certainly strictly feasible if $\| d_x^f d_s^f \|_\infty < 1 - \theta$.

Using (19) we may also write

\[
x^f = x + \Delta^f x = x + \frac{x d_x^f}{v} = \frac{x}{v}\left( v + d_x^f \right), \tag{21}
\]
\[
s^f = s + \Delta^f s = s + \frac{s d_s^f}{v} = \frac{s}{v}\left( v + d_s^f \right). \tag{22}
\]

To simplify the presentation, we denote $\delta(x, s; \mu)$ below simply as $\delta$. Recall that we assume that before the feasibility step one has $\delta \le \tau$. In the sequel we denote

\[
\omega(v) := \tfrac{1}{2} \sqrt{ \| d_x^f \|^2 + \| d_s^f \|^2 }. \tag{23}
\]

This implies $\| d_x^f \| \le 2\omega(v)$ and $\| d_s^f \| \le 2\omega(v)$, and moreover

\[
\left| (d_x^f)^T d_s^f \right| \le \| d_x^f \| \, \| d_s^f \| \le \tfrac{1}{2} \left( \| d_x^f \|^2 + \| d_s^f \|^2 \right) = 2\omega(v)^2, \tag{24}
\]
\[
\| d_x^f d_s^f \|_\infty \le 2\omega(v)^2. \tag{25}
\]

Lemma 2.3 Let $\theta = \alpha/\sqrt{2n}$ with $0 < \alpha \le 1$ and $n \ge 3$. Then the iterates $(x^f, y^f, s^f)$ are strictly feasible if $\omega(v) \le \tfrac{1}{2}$.

Proof: Let $\omega(v) \le \tfrac{1}{2}$. Then (25) implies that $\| d_x^f d_s^f \|_\infty \le 2\omega(v)^2 \le \tfrac{1}{2} < 1 - \theta$, where the last inequality holds because $\theta = \alpha/\sqrt{2n} < \tfrac{1}{2}$ for $n \ge 3$. By Corollary 2.2 this implies that the iterates $(x^f, y^f, s^f)$ are strictly feasible. □
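The identity (20) is easy to check numerically. In the illustrative fragment below, $\Delta^f s$ is chosen so that (18) holds exactly; the two sides of (20) then agree up to rounding:

    rng = np.random.default_rng(0)
    n, theta = 4, 0.1
    x = rng.uniform(0.5, 2.0, n)
    s = rng.uniform(0.5, 2.0, n)
    mu = (x @ s) / n
    v = np.sqrt(x * s / mu)
    dx = 0.01 * rng.normal(size=n)
    ds = ((1 - theta) * mu - x * s - s * dx) / x   # enforce (18)
    dxf, dsf = v * dx / x, v * ds / s              # scaled directions (19)
    lhs = (x + dx) * (s + ds)                      # x^f s^f
    rhs = mu * ((1 - theta) + dxf * dsf)
    assert np.allclose(lhs, rhs)                   # identity (20)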

Lemma.4 One has δv f ) ωv) 4 1 θ)1 θ ωv) ) 6) Proof: By definition 9), δx f,s f ;µ + ) = δv f ) = 1 v f e, where v f x v f = f s f µ +. After division of both sides in 0) by µ + we get ) v f) µ 1 θ)e + d f xd f s = µ + = e + df xd f s 1 θ. 7) By using the definition of the δ v f) we have δv f ) = 1 4 vf v f) 1 = 1 4 v f) 1 e v f) ) 1 v f ) 1 4 e v f ). We proceed by deriving bounds for the last two norms. First we consider the second norm: v e f) d f xd f s = 1 θ 1 d f x d f s 1 θ ωv) 1 θ, where we used 7) for equality and 4) for the second inequality. For estimate of we may write, ) ) ) d f v f x d f s i i i = 1 + 1 θ 1 ωv) 1 θ, where we used 5) for inequality. We therefore have, using the last inequality, ) v f 1 θ i 1 θ ω v). v f) 1 Hence, which completes the proof. v f i ) 1 1 θ 1 θ ω v) 7

Since we need $\delta(v^f) \le 1/\sqrt{2}$, it follows from Lemma 2.4 that it suffices if

\[
\frac{\omega(v)^4}{(1 - \theta)\left( 1 - \theta - 2\omega(v)^2 \right)} \le \tfrac{1}{2}.
\]

Due to Lemma 2.3 we decide to choose

\[
\theta = \frac{\alpha}{\sqrt{2n}}, \qquad \alpha \le 1. \tag{28}
\]

Then, for $n \ge 5$, one may easily verify that

\[
\omega(v) \le \tfrac{1}{2} \quad \Longrightarrow \quad \delta(v^f) \le \frac{1}{\sqrt{2}}. \tag{29}
\]

We proceed by considering the vectors $d_x^f$ and $d_s^f$ in more detail.

2.2 An upper bound for ω(v)

One may easily check that the system (11), (12) and (18), which defines the search directions $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$, can be expressed in terms of the scaled search directions $d_x^f$ and $d_s^f$ as follows:

\[
\bar{A} d_x^f = \theta \nu r_b^0, \tag{30}
\]
\[
\bar{A}^T \frac{\Delta^f y}{\mu} + d_s^f = \theta \nu v s^{-1} r_c^0, \tag{31}
\]
\[
d_x^f + d_s^f = (1 - \theta)v^{-1} - v, \tag{32}
\]

where

\[
\bar{A} = A V^{-1} X, \qquad V = \operatorname{diag}(v), \qquad X = \operatorname{diag}(x). \tag{33}
\]

Let us denote the null space of the matrix $\bar{A}$ by $\mathcal{L}$:

\[
\mathcal{L} := \{ \xi \in \mathbb{R}^n : \bar{A}\xi = 0 \}.
\]

Obviously, the affine space $\{ \xi \in \mathbb{R}^n : \bar{A}\xi = \theta \nu r_b^0 \}$ equals $d_x^f + \mathcal{L}$. Due to a well-known result from linear algebra, the row space of $\bar{A}$ equals the orthogonal complement $\mathcal{L}^\perp$ of $\mathcal{L}$; therefore, (31) shows that the affine space $\{ \theta \nu v s^{-1} r_c^0 + \bar{A}^T \xi : \xi \in \mathbb{R}^m \}$ equals $d_s^f + \mathcal{L}^\perp$. Since $\mathcal{L} \cap \mathcal{L}^\perp = \{0\}$, the affine spaces $d_x^f + \mathcal{L}$ and $d_s^f + \mathcal{L}^\perp$ meet in a unique point. This point is denoted below by $q$. We now recall a lemma from [8] which gives an upper bound for $\omega(v)$.

Lemma 2.5 (Lemma 4.4 in [8]) Let $q$ be the (unique) point in the intersection of the affine spaces $d_x^f + \mathcal{L}$ and $d_s^f + \mathcal{L}^\perp$. Then

\[
\omega(v) \le \tfrac{1}{2} \sqrt{ \| q \|^2 + \left( \| q \| + 2\delta(v) \right)^2 }.
\]

From (29) we know that, in order to have $\delta(v^f) \le 1/\sqrt{2}$, it suffices to have $\omega(v) \le \tfrac{1}{2}$. Due to Lemma 2.5 this will certainly hold if $q$ satisfies

\[
\tfrac{1}{4} \left( \| q \|^2 + \left( \| q \| + 2\delta(v) \right)^2 \right) \le \tfrac{1}{4}. \tag{34}
\]
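Numerically, $q$ can be obtained by orthogonal decomposition: since $\mathbb{R}^n = \mathcal{L} \oplus \mathcal{L}^\perp$, the $\mathcal{L}^\perp$-component of $d_x^f$ plus the $\mathcal{L}$-component of $d_s^f$ lies in both affine spaces. A dense sketch (ours; it assumes $\bar{A}$ has full row rank):

    def intersection_point(Abar, dxf, dsf):
        # Projector onto the row space L^perp of Abar.
        P = Abar.T @ np.linalg.solve(Abar @ Abar.T, Abar)
        # q - dxf lies in L and q - dsf lies in L^perp, as required.
        return P @ dxf + (dsf - P @ dsf)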

2.3 An upper bound for ‖q‖

By its definition, $q$ is the (unique) solution of the system

\[
\bar{A} q = \theta \nu r_b^0, \qquad \bar{A}^T \xi + q = \theta \nu v s^{-1} r_c^0.
\]

We proceed to derive an upper bound for $\| q \|$. Before doing so, we choose the initial point in the usual way, as defined in (1) and (2).

Lemma 2.6 Let $(x^0, y^0, s^0)$ be the initial point as defined in (1) and (2). Then

\[
\| q \| \le \frac{\theta}{\zeta v_{\min}} \left( \| x \|_1 + \| s \|_1 \right). \tag{35}
\]

Proof: Using similar arguments as in Lemma 4.7 in [8], we obtain

\[
\sqrt{\mu}\, \| q \| \le \theta \nu \left( \| D(\bar{s} - s^0) \| + \| D^{-1}(\bar{x} - x^0) \| \right), \tag{36}
\]

where $\bar{x}$, $\bar{y}$ and $\bar{s}$ satisfy

\[
A\bar{x} = b, \qquad A^T \bar{y} + \bar{s} = c, \tag{37}
\]

and

\[
D = \operatorname{diag}\left( \frac{x v^{-1}}{\sqrt{\mu}} \right). \tag{38}
\]

We are still free to choose $\bar{x}$ and $\bar{s}$ such that they satisfy system (37). We take $\bar{x} = x^*$ and $\bar{s} = s^*$, with $(x^*, y^*, s^*)$ as in (2). Then, since $x^0 = s^0 = \zeta e$ and $0 \le x^* + s^* \le \zeta e$,

\[
0 \le x^0 - x^* \le \zeta e, \qquad 0 \le s^0 - s^* \le \zeta e.
\]

It follows that

\[
\| D(s^* - s^0) \| \le \zeta \| D e \| = \frac{\zeta \| x v^{-1} \|}{\sqrt{\mu}} \le \frac{\zeta \| x \|}{\sqrt{\mu}\, v_{\min}}, \tag{39}
\]

where $D$ is the matrix defined in (38). In the same way it follows that

\[
\| D^{-1}(x^* - x^0) \| \le \frac{\zeta \| s \|}{\sqrt{\mu}\, v_{\min}}. \tag{40}
\]

Substituting (39), (40) and $\mu = \nu \mu^0 = \nu \zeta^2$ into (36) yields

\[
\| q \| \le \frac{\theta}{\zeta v_{\min}} \left( \| x \| + \| s \| \right).
\]

Using $\| x \| + \| s \| \le \| x \|_1 + \| s \|_1$ in the last inequality, we obtain

\[
\| q \| \le \frac{\theta}{\zeta v_{\min}} \left( \| x \|_1 + \| s \|_1 \right),
\]

proving the lemma. □
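Note that $D = \operatorname{diag}(x v^{-1}/\sqrt{\mu}) = \operatorname{diag}(\sqrt{x/s})$, the usual primal-dual scaling matrix; its square $D^2 = \operatorname{diag}(x/s)$ is exactly the scaling used in the normal-equations solver sketched in Section 1. A quick illustrative check:

    x = np.array([1.0, 2.0, 3.0])
    s = np.array([0.5, 1.0, 4.0])
    mu = 0.7
    v = np.sqrt(x * s / mu)
    assert np.allclose(x / v / np.sqrt(mu), np.sqrt(x / s))  # D = diag(sqrt(x/s))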

2.4 Bounds for ‖x‖₁, ‖s‖₁ and v_min; choice of α and τ

Let $x$ and $(y, s)$ be feasible for $(P_\nu)$ and $(D_\nu)$, respectively. We need an upper bound for $\| x \|_1 + \| s \|_1$ and a lower bound for the smallest component $v_{\min}$ of the vector $v$ defined in (9). For the lower bound on $v_{\min}$ we recall Lemma II.60 from [9] without further proof.

Lemma 2.7 (cf. Lemma II.60 in [9]) Let $\delta = \delta(v)$ be given by (9). Then

\[
\frac{1}{\rho(\delta)} \le v_i \le \rho(\delta), \tag{41}
\]

where

\[
\rho(\delta) := \delta + \sqrt{1 + \delta^2}. \tag{42}
\]

Lemma 2.8 Let $x$ and $(y, s)$ be feasible for the perturbed problems $(P_\nu)$ and $(D_\nu)$, respectively, and let $(x^0, y^0, s^0)$ be as defined in (1). Then for any primal-dual optimal solution $(x^*, y^*, s^*)$ we have

\[
\nu \left( x^T s^0 + s^T x^0 \right) = s^T x + \nu^2 (s^0)^T x^0 + \nu(1 - \nu)\left( (s^0)^T x^* + (x^0)^T s^* \right) - (1 - \nu)\left( s^T x^* + x^T s^* \right). \tag{43}
\]

Proof: Let

\[
\bar{x} = x - \nu x^0 - (1 - \nu)x^*, \qquad \bar{y} = y - \nu y^0 - (1 - \nu)y^*, \qquad \bar{s} = s - \nu s^0 - (1 - \nu)s^*.
\]

From (3), (4) and the definition of the perturbed problems $(P_\nu)$ and $(D_\nu)$, one easily sees that

\[
A\bar{x} = 0, \qquad A^T \bar{y} + \bar{s} = 0,
\]

which shows that $\bar{x}$ belongs to the null space and $\bar{s}$ to the row space of the matrix $A$. Hence $\bar{x}$ and $\bar{s}$ are orthogonal, i.e.,

\[
\bar{x}^T \bar{s} = \left( x - \nu x^0 - (1 - \nu)x^* \right)^T \left( s - \nu s^0 - (1 - \nu)s^* \right) = 0.
\]

By expanding this equality and using the fact that $(x^*)^T s^* = 0$, we obtain the desired result. □

Lemma 2.9 Let $x$ and $(y, s)$ be feasible for the perturbed problems $(P_\nu)$ and $(D_\nu)$, respectively, let $\delta(v)$ be given by (9), and let $x^0 = s^0 = \zeta e$, where $\zeta > 0$ is a constant such that $\| x^* + s^* \|_\infty \le \zeta$ for some primal-dual optimal solution $(x^*, y^*, s^*)$. Then

\[
\| x \|_1 + \| s \|_1 \le \left( \rho(\delta)^2 + 1 \right) n \zeta, \tag{44}
\]

where $\rho(\delta)$ is as defined in (42).

Proof: Since $x^*$, $s^*$, $x$ and $s$ are nonnegative, Lemma 2.8 implies that

\[
x^T s^0 + s^T x^0 \le \frac{s^T x}{\nu} + \nu (s^0)^T x^0 + (1 - \nu)\left( (s^0)^T x^* + (x^0)^T s^* \right). \tag{45}
\]

Since $x^0 = s^0 = \zeta e$ and $x^* + s^* \le \zeta e$, we have

\[
(x^0)^T s^* + (s^0)^T x^* = \zeta e^T (x^* + s^*) \le \zeta e^T (\zeta e) = n\zeta^2.
\]

Also using $(x^0)^T s^0 = n\zeta^2$ in (45), we get

\[
x^T s^0 + s^T x^0 \le \frac{s^T x}{\nu} + n\zeta^2 = \frac{\mu \left( e^T v^2 \right)}{\nu} + n\zeta^2 = \zeta^2 \left( e^T v^2 \right) + n\zeta^2,
\]

where for the last equality we used $\nu = \mu/\mu^0$ and $\mu^0 = \zeta^2$. By using Lemma 2.7 in the last expression, we obtain

\[
x^T s^0 + s^T x^0 \le \left( \rho(\delta)^2 + 1 \right) n \zeta^2.
\]

Since $x^0 = s^0 = \zeta e$, we have $x^T s^0 + s^T x^0 = \zeta \left( e^T x + e^T s \right) = \zeta \left( \| x \|_1 + \| s \|_1 \right)$. Hence it follows that

\[
\| x \|_1 + \| s \|_1 \le \left( \rho(\delta)^2 + 1 \right) n \zeta,
\]

which proves the lemma. □

Substituting (41) and (44) into (35), we obtain

\[
\| q \| \le n \theta\, \rho(\delta) \left( 1 + \rho(\delta)^2 \right).
\]

Now we choose

\[
\tau = \tfrac{1}{8}. \tag{46}
\]

Since $\delta \le \tau = \tfrac{1}{8}$ and $\rho(\delta)$ is monotonically increasing with respect to $\delta$, we have

\[
\| q \| \le n \theta\, \rho(\delta)\left( 1 + \rho(\delta)^2 \right) \le n \theta\, \rho\!\left(\tfrac{1}{8}\right) \left( 1 + \rho\!\left(\tfrac{1}{8}\right)^{2} \right) \le 2.586\, n\theta.
\]

Using $\theta = \alpha/\sqrt{2n}$ in the last inequality, we obtain

\[
\| q \| \le \frac{2.586\, n \alpha}{\sqrt{2n}} = \frac{2.586}{\sqrt{2}} \sqrt{n}\, \alpha = \frac{3.6571}{2} \sqrt{n}\, \alpha.
\]

In order to have $\delta(v^f) \le 1/\sqrt{2}$, by (34) we should have $\tfrac{1}{4} ( \| q \|^2 + ( \| q \| + 2\delta(v) )^2 ) \le \tfrac{1}{4}$. Since $\delta(v) \le \tau = \tfrac{1}{8}$, it suffices if $q$ satisfies

\[
\tfrac{1}{4} \left( \| q \|^2 + \left( \| q \| + \tfrac{1}{4} \right)^2 \right) \le \tfrac{1}{4}.
\]

So we certainly have $\delta(v^f) \le 1/\sqrt{2}$ if $\| q \| \le 0.455$. Since $\| q \| \le \tfrac{3.6571}{2} \sqrt{n}\, \alpha$, the latter inequality is satisfied if we take

\[
\alpha = \frac{1}{5\sqrt{n}}, \tag{47}
\]

because $0.91/3.6571 \ge 1/5$.
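Both Lemma 2.7 and the numerical constant $2.586 \approx \rho(\tfrac{1}{8})(1 + \rho(\tfrac{1}{8})^2)$ are easily verified in code (illustrative fragment):

    def rho(delta):
        # rho(delta) = delta + sqrt(1 + delta^2), cf. (42)
        return delta + np.sqrt(1.0 + delta ** 2)

    # Lemma 2.7: every component of v lies in [1/rho(delta), rho(delta)].
    v = np.sqrt(np.random.default_rng(1).uniform(0.5, 2.0, 6))
    delta = 0.5 * np.linalg.norm(v - 1.0 / v)
    assert np.all(1.0 / rho(delta) <= v) and np.all(v <= rho(delta))   # (41)

    # The constant in the bound ||q|| <= 2.586 * n * theta:
    d = rho(1.0 / 8.0)                       # tau = 1/8, cf. (46)
    assert abs(d * (1.0 + d ** 2) - 2.586) < 1e-3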

According to (28), this gives the following value for θ:

\[
\theta = \frac{\alpha}{\sqrt{2n}} = \frac{1}{5\sqrt{2}\, n}. \tag{48}
\]

3 Iteration bound

In the previous sections we have found that if at the start of an iteration the iterates satisfy $\delta(x, s; \mu) \le \tau$, with $\tau$ and $\theta$ as defined in (46) and (48), then after the feasibility step and the $\mu$-update the iterates satisfy $\delta(x, s; \mu^+) \le 1/\sqrt{2}$. According to (17), at most

\[
\left\lceil \log_2 \left( \log_2 \frac{1}{\tau^2} \right) \right\rceil = \lceil \log_2 (\log_2 64) \rceil = 3
\]

centering steps then suffice to obtain iterates that satisfy $\delta(x, s; \mu^+) \le \tau$. So each main iteration consists of one feasibility step and 3 centering steps. In each main iteration both the duality gap and the norms of the residual vectors are reduced by the factor $1 - \theta$. Hence, using $(x^0)^T s^0 = n\zeta^2$, the total number of main iterations is bounded above by

\[
\frac{1}{\theta} \log \frac{\max\{ n\zeta^2, \| r_b^0 \|, \| r_c^0 \| \}}{\varepsilon}.
\]

Since $\theta = \frac{1}{5\sqrt{2}\, n}$, the total number of inner iterations is bounded above by

\[
20\sqrt{2}\, n \log \frac{\max\{ n\zeta^2, \| r_b^0 \|, \| r_c^0 \| \}}{\varepsilon}.
\]

Note that the order of this bound is exactly the same as that of the bound in [4, 8]. We now state our main result, without further proof.

Theorem 3.1 If $(P)$ and $(D)$ have optimal solutions $x^*$ and $(y^*, s^*)$ such that $\| x^* + s^* \|_\infty \le \zeta$, then after at most

\[
20\sqrt{2}\, n \log \frac{\max\{ n\zeta^2, \| r_b^0 \|, \| r_c^0 \| \}}{\varepsilon}
\]

iterations the algorithm finds an $\varepsilon$-solution of $(P)$ and $(D)$.

Due to the theorem above, we know that if there exist $x^*$ and $(y^*, s^*)$ satisfying (2), then the algorithm finds an $\varepsilon$-solution. One might ask what happens if this condition is not satisfied. From Lemma 2.6 we have that, under the assumptions (1) and (2), during the course of the algorithm $\| q \| \le 0.455$. So, if during the execution of the algorithm $\| q \| > 0.455$, then we may conclude that there exist no optimal solutions $(x^*, y^*, s^*)$ such that $\| x^* + s^* \|_\infty \le \zeta$.
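For reference, the bound of Theorem 3.1 is easily evaluated for given problem data; the following illustrative helper computes it:

    def iteration_bound(n, zeta, rb0, rc0, eps):
        # 20*sqrt(2)*n*log(max{n*zeta^2, ||rb0||, ||rc0||}/eps), cf. Theorem 3.1.
        top = max(n * zeta ** 2, np.linalg.norm(rb0), np.linalg.norm(rc0))
        return 20.0 * np.sqrt(2.0) * n * np.log(top / eps)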

4 Concluding remarks

We analyzed an algorithm with full-Newton steps for LO which differs from the algorithms presented in [4, 8] in the definition of the feasibility step. In the system defining the feasibility step, equation (13) is replaced with

\[
s \Delta^f x + x \Delta^f s = (1 - \theta)\mu e - xs,
\]

whereas the feasibility step in [8] was determined by

\[
s \Delta^f x + x \Delta^f s = \mu e - xs,
\]

and in [4] by

\[
s \Delta^f x + x \Delta^f s = 0.
\]

The analysis of the feasibility step presented in Section 2 differs from the analysis in [4, 8]. The iteration bound of the algorithm is as good as the best known iteration bound for IIPMs. A topic for further research is the extension of the algorithm presented in this paper to symmetric cone optimization.

References

[1] E. D. Andersen and Y. Ye. A computational study of the homogeneous algorithm for large-scale convex optimization. Computational Optimization and Applications, 10 (1998) 243-269.

[2] M. Kojima, N. Megiddo, and S. Mizuno. A primal-dual infeasible-interior-point algorithm for linear programming. Mathematical Programming, 61 (1993) 263-280.

[3] I. J. Lustig. Feasibility issues in a primal-dual interior point method for linear programming. Mathematical Programming, 49 (1990/91) 145-162.

[4] H. Mansouri and C. Roos. Simplified O(nL) infeasible interior-point algorithm for linear optimization using full-Newton steps. Optimization Methods and Software, 22(3) (2007) 519-530.

[5] S. Mizuno. Polynomiality of infeasible-interior-point algorithms for linear programming. Mathematical Programming, 67 (1994) 109-119.

[6] S. Mizuno, M. J. Todd, and Y. Ye. On adaptive-step primal-dual interior-point algorithms for linear programming. Mathematics of Operations Research, 18 (1993) 964-981.

[7] F. A. Potra. An infeasible-interior-point predictor-corrector algorithm for linear programming. SIAM Journal on Optimization, 6(1) (1996) 19-32.

[8] C. Roos. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization, 16(4) (2006) 1110-1136.

[9] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior-Point Approach. John Wiley & Sons, Chichester, UK, 1997 (2nd edition: Springer, 2006).

[10] K. Tanabe. Centered Newton method for linear programming: Interior and exterior point method (in Japanese). In: K. Tone (Ed.), New Methods for Linear Programming, 3 (1990) 98-100.

[11] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, Philadelphia, 1996.

[12] F. Wu, S. Wu, and Y. Ye. On quadratic convergence of the $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm. Annals of Operations Research, 87 (1999) 393-406.

[13] Y. Ye. Interior Point Algorithms: Theory and Analysis. John Wiley & Sons, Chichester, UK, 1997.

[14] Y. Ye, M. J. Todd, and S. Mizuno. An $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19 (1994) 53-67.

[15] Y. Zhang. On the convergence of a class of infeasible-interior-point methods for the horizontal linear complementarity problem. SIAM Journal on Optimization, 4 (1994) 208-227.