A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming. February 1994 (revised December 1994)


A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming

Xiaojie Xu*   Yinyu Ye†

February 1994 (revised December 1994)

Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point algorithm for linear programming (LP) is proposed in this paper. The algorithm does not need to start from a big-M initial point, while achieving $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity by following a certain central path on a central surface in a neighborhood $\mathcal{N}(\beta)$, where $\beta$ can be any number between 0 and 1, $n$ is the number of variables, and $L$ is the data length of the LP problem. In particular, an algorithm is developed in which the search direction is obtained by solving a Newton equation system without infeasibility residual terms on its right-hand side.

Key words: Linear programming, homogeneous and self-dual linear feasibility model, interior-point algorithm

* Institute of Systems Science, Academia Sinica, Beijing 100080, China, and currently visiting the Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242, USA. Research supported in part by NSF Grant DDM-9207347.
† Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242, USA. Research supported in part by NSF Grant DDM-9207347.

1 Introduction

Consider a linear programming (LP) problem in the standard form:

(LP)  minimize $c^Tx$  subject to $Ax = b,\ x \ge 0$,

where $c \in R^n$, $A \in R^{m\times n}$ and $b \in R^m$ are given, $x \in R^n$, and $^T$ denotes transpose. (LP) is said to be feasible if and only if its constraints are consistent; it is called unbounded if there is a sequence $\{x^k\}$ such that $x^k$ is feasible for all $k$ but $c^Tx^k \to -\infty$. (LP) has a solution if and only if it is feasible and bounded. The dual problem of (LP) can be written as

(LD)  maximize $b^Ty$  subject to $A^Ty \le c$,

where $y \in R^m$. We call $z = c - A^Ty \in R^n$ the dual slacks. Denote by $\mathcal{F}$ the set of all $x$ and $(y,z)$ that are feasible for the primal and dual, respectively, and by $\mathcal{F}^0$ the set of points in $\mathcal{F}$ with $(x,z) > 0$. Assuming that the LP problem has a feasible interior point, Megiddo [8] and Bayer and Lagarias [1] defined the central path for a feasible LP problem as

$$\mathcal{C}(\mu) = \left\{ (y,x,z) \in \mathcal{F}^0 : Xz = \mu e,\ \mu = \frac{x^Tz}{n} \right\},$$

where $X = \mathrm{diag}(x)$. As $\mu \to 0$, this path leads to a strictly complementary solution of the LP. Based on following the central path, Kojima et al. [4] developed a primal-dual interior-point algorithm in which the search direction is generated by solving the following Newton equation system in iteration $k$:

$$A d_x = 0, \qquad -A^Td_y - d_z = 0, \qquad Z^k d_x + X^k d_z = \gamma\mu^k e - X^k z^k,$$

where $\mu^k = (x^k)^Tz^k/n$, $X^k = \mathrm{diag}(x^k)$, $Z^k = \mathrm{diag}(z^k)$, and $\gamma$ is a scalar parameter. Kojima et al. [4] proved that their algorithm is $O(nL)$-iteration bounded, where $L$ is the data length of (LP) with integer data. Later, Kojima et al. [5] and Monteiro and Adler [12] gave an $O(\sqrt{n}L)$-iteration bound for such a primal-dual interior-point algorithm by restricting all iterates to a 2-norm neighborhood of the central path, i.e.,

$$(y^k,x^k,z^k) \in \mathcal{N}(\beta) = \left\{ (y,x,z) \in \mathcal{F}^0 : \|Xz - \mu e\| \le \beta\mu \right\}$$

for some $\beta \in (0, 1/2]$. (Typically $\beta = 1/4$, or for a predictor-corrector algorithm $\beta = 1/2$ in the predictor step; see [11].) Throughout the paper, $\|\cdot\|$ represents the 2-norm.
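As a minimal numerical illustration of the feasible primal-dual step just described (the data, starting point, centering value, and damping factor below are made up for illustration, not taken from the paper):

```python
import numpy as np

# One Newton step of a feasible primal-dual path-following method of the
# kind described above, on a tiny illustrative LP:
#     minimize x1 + 2*x2   subject to  x1 + x2 = 2,  x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0])

x = np.array([1.0, 1.0])          # primal interior point: Ax = b, x > 0
y = np.array([0.0])
z = c - A.T @ y                   # dual slacks, z > 0

n = x.size
gamma = 0.25                      # centering parameter
mu = x @ z / n                    # duality measure

# Newton system:  A dx = 0,  -A^T dy - dz = 0,  Z dx + X dz = gamma*mu*e - Xz
X, Z = np.diag(x), np.diag(z)
K = np.block([
    [A,                np.zeros((1, 1)), np.zeros((1, n))],
    [np.zeros((n, n)), -A.T,             -np.eye(n)],
    [Z,                np.zeros((n, 1)), X],
])
rhs = np.concatenate([np.zeros(1), np.zeros(n), gamma * mu * np.ones(n) - x * z])
dx, dy, dz = np.split(np.linalg.solve(K, rhs), [n, n + 1])

# Damped step keeping (x, z) strictly positive.
alpha = 0.5
x, y, z = x + alpha * dx, y + alpha * dy, z + alpha * dz
```

The step preserves primal and dual feasibility exactly (the first two Newton equations have zero right-hand side) while reducing the duality measure $x^Tz/n$.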
Unless an LP problem has a feasible interior point and such a point is given, an interior-point algorithm has to start from an infeasible point, or from an interior point feasible for an artificial problem. In theory,

a big-M interior point suffices for establishing complexity results. However, such a big-M approach is not practical at all. Furthermore, a robust algorithm has to be able to correct possible errors accumulated during computation, even when starting from a feasible interior point. Algorithms that may start from a non-big-M initial point in both theory and practice are called infeasible interior-point algorithms, and they are reported to perform very well in practice (see [6], [7], [9], [10], [14], [16], and [19]). Unlike for feasible algorithms (in which a feasible interior point is given as the initial point), the best-to-date $O(\sqrt{n}L)$-iteration complexity for infeasible interior-point algorithms was not established until Ye et al. [18] proposed a homogeneous and self-dual (HSD) algorithm.

Recently Mizuno et al. [11] studied the trajectories followed by many primal-dual infeasible interior-point algorithms. For given $(y^0,\ x^0 > 0,\ z^0 > 0)$, they defined the two-dimensional central surface $\{Q(\theta,\mu) : \theta \ge 0,\ \mu \ge 0\}$ with

$$Q(\theta,\mu) = \left\{ (y,\ x>0,\ z>0) : Xz = \mu e,\ \mu = \frac{x^Tz}{n},\ \begin{pmatrix} r_P \\ r_D \end{pmatrix} = \theta \begin{pmatrix} r_P^0 \\ r_D^0 \end{pmatrix} \right\},$$

where $r_P^0 = b - Ax^0$ and $r_D^0 = c - A^Ty^0 - z^0$; $r_P = b - Ax$ and $r_D = c - A^Ty - z$ are the primal and dual residuals, respectively. If the LP problem possesses a solution, many primal-dual infeasible interior-point algorithms (e.g., Kojima et al. [6], Lustig et al. [7], Mehrotra [9]) follow some path on this central surface and approach optimality and feasibility simultaneously:

for $t \to 0$: $\theta(t) \to 0$, $\mu(t) \to 0$.

Mizuno et al. [11] also discussed in detail the boundary behavior of the central surface for primal-dual type infeasible interior-point algorithms.

Very recently, Xu et al. [17] proposed a simplified version of the HSD algorithm of Ye et al. [18]. The algorithm deals with a homogeneous and self-dual linear feasibility model

(HLF)
$$\begin{array}{rl} Ax - b\tau = 0, & \\ -A^Ty + c\tau \ge 0, & \\ b^Ty - c^Tx \ge 0, & \qquad (1) \\ y \text{ free},\ x \ge 0,\ \tau \ge 0. & \end{array}$$

Denote by $z$ the slack vector for the second (inequality) constraint and by $\kappa$ the slack scalar for the third (inequality) constraint.
Then the problem is to find a strictly complementary point such that $x^Tz = 0$ and $\tau\kappa = 0$.
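The self-dual structure of (HLF) is easy to check numerically: its constraint matrix is skew-symmetric, which forces the complementarity gap of any point satisfying the equality constraint to vanish identically. A small sketch with random illustrative data:

```python
import numpy as np

# (HLF) pairs u = (y; x; tau) with the constraint matrix
#   M = [[ 0,    A,  -b],
#        [-A^T,  0,   c],
#        [ b^T, -c^T, 0]].
# M is skew-symmetric, so u^T M u = 0 for every u; writing z and kappa for
# the slacks of the two inequality blocks, any point with Ax - b*tau = 0
# therefore has x^T z + tau*kappa = y^T (b*tau - Ax) = 0.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
b, c = rng.standard_normal(m), rng.standard_normal(n)

M = np.block([
    [np.zeros((m, m)), A,                -b[:, None]],
    [-A.T,             np.zeros((n, n)),  c[:, None]],
    [b[None, :],      -c[None, :],        np.zeros((1, 1))],
])

assert np.allclose(M, -M.T)             # skew-symmetric => self-dual

u = rng.standard_normal(m + n + 1)      # arbitrary (y; x; tau)
print(float(u @ M @ u))                 # zero up to rounding
```

This is the algebraic reason the model is its own dual: the objective/gap information is carried entirely by the slacks $z$ and $\kappa$.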

The $k$th iteration of the HSD algorithm solves the following system of linear equations for the direction $(d_y, d_x, d_\tau, d_z, d_\kappa)$:

$$\begin{array}{rl} A d_x - b d_\tau = \eta\, r_P^k, & \\ -A^T d_y + c d_\tau - d_z = -\eta\, r_D^k, & \qquad (2) \\ b^T d_y - c^T d_x - d_\kappa = \eta\, r_G^k, & \end{array}$$

$$\begin{array}{rl} X^k d_z + Z^k d_x = \gamma\mu^k e - X^k z^k, & \qquad (3) \\ \tau^k d_\kappa + \kappa^k d_\tau = \gamma\mu^k - \tau^k\kappa^k, & \end{array}$$

where $\eta \ge 0$ and $\gamma > 0$ are scalar parameters, and

$$\mu^k = \frac{(x^k)^Tz^k + \tau^k\kappa^k}{n+1}, \qquad (4)$$

$$r_P^k = b\tau^k - Ax^k, \qquad r_D^k = c\tau^k - A^Ty^k - z^k, \qquad r_G^k = c^Tx^k - b^Ty^k + \kappa^k. \qquad (5)$$

Xu et al. [17] showed that if we set $\eta = 1-\gamma$ in each iteration, then the algorithm becomes the HSD algorithm of Ye et al. [18], which follows a path $\{Q(\theta^0 t,\ \mu^0 t) : 0 < t \le 1\}$ on the central surface. More precisely, Xu et al. [17] keep $\theta^k/\mu^k = \theta^0/\mu^0$ at every iteration. The limit points of these paths are strictly complementary points for (HLF), according to Mizuno et al. [11]. If one sets $\eta < 1-\gamma$ or $\eta > 1-\gamma$ at each iteration, then the algorithm generates iterates converging to the all-zero solution or diverging, respectively.

In this paper, by introducing a simple update, we generalize the HSD algorithm of Xu et al. [17] so that a strictly complementary solution is obtained even when $\eta \ne 1-\gamma$. In Section 3, we prove that the generalized algorithm achieves $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity by following a certain central path on the central surface in a neighborhood $\mathcal{N}(\beta)$, where $\beta$ can be any number in $(0,1)$. By setting $\eta = 0$, we get an interesting algorithm in which the search direction is obtained by solving a Newton equation system without infeasibility residual terms on its right-hand side, as first proposed by de Ghellinck and Vial [2] and later by Nesterov [13]. This approach obviously saves the computation of these residual terms.

2 Generalized HSD algorithms

Generic HSD algorithm
Given an initial point $(y^0,\ x^0 > 0,\ \tau^0 > 0,\ z^0 > 0,\ \kappa^0 > 0)$, set $k := 0$.
While (stopping criteria not satisfied) do
1. Let $r_P^k = b\tau^k - Ax^k$, $r_D^k = c\tau^k - A^Ty^k - z^k$, $r_G^k = c^Tx^k - b^Ty^k + \kappa^k$.

2. Solve (2) and (3) for $(d_y, d_x, d_\tau, d_z, d_\kappa)$.
3. Let
$$\begin{array}{l} \Delta x = d_x + (1-\gamma-\eta)x^k, \quad \Delta y = d_y + (1-\gamma-\eta)y^k, \quad \Delta z = d_z + (1-\gamma-\eta)z^k, \\ \Delta\tau = d_\tau + (1-\gamma-\eta)\tau^k, \quad \Delta\kappa = d_\kappa + (1-\gamma-\eta)\kappa^k. \end{array} \qquad (6)$$
4. Choose a step size $\alpha^k > 0$ and update
$$\begin{array}{l} x^{k+1} = x^k + \alpha^k\Delta x > 0, \quad y^{k+1} = y^k + \alpha^k\Delta y, \quad z^{k+1} = z^k + \alpha^k\Delta z > 0, \\ \tau^{k+1} = \tau^k + \alpha^k\Delta\tau > 0, \quad \kappa^{k+1} = \kappa^k + \alpha^k\Delta\kappa > 0. \end{array} \qquad (7)$$
5. $k := k+1$.

Note that we have
$$X^k\Delta z + Z^k\Delta x = X^kd_z + Z^kd_x + (1-\gamma-\eta)(X^kz^k + Z^kx^k) = \gamma\mu^k e - X^kz^k + 2(1-\gamma-\eta)X^kz^k = \gamma\mu^k e - (2\gamma+2\eta-1)X^kz^k, \qquad (8)$$
$$\tau^k\Delta\kappa + \kappa^k\Delta\tau = \gamma\mu^k - (2\gamma+2\eta-1)\tau^k\kappa^k.$$

Similar to the proofs in Xu et al. [17], we first establish the following lemmas.

Lemma 1. The direction resulting from (6) satisfies
$$\Delta x^T\Delta z + \Delta\tau\Delta\kappa = 0. \qquad (9)$$

Proof. Xu et al. [17] established the following result for the solution of system (2) and (3):
$$d_x^Td_z + d_\tau d_\kappa = \eta(1-\gamma-\eta)(n+1)\mu^k.$$
Thus
$$\begin{array}{rl} \Delta x^T\Delta z + \Delta\tau\Delta\kappa &= d_x^Td_z + d_\tau d_\kappa + (1-\gamma-\eta)\left[(x^k)^Td_z + (z^k)^Td_x + \tau^kd_\kappa + \kappa^kd_\tau\right] + (1-\gamma-\eta)^2\left[(x^k)^Tz^k + \tau^k\kappa^k\right] \\ &= \left[\eta(1-\gamma-\eta) + (1-\gamma-\eta)(\gamma-1) + (1-\gamma-\eta)^2\right](n+1)\mu^k = 0. \end{array}$$
Q.E.D.
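One full pass through steps 1-4 can be sketched numerically. The data, starting point, and parameter values below are made up for illustration, and the symbol names ($\eta$, $\gamma$, $\alpha$) follow the notation used here for systems (2)-(3) and updates (6)-(7):

```python
import numpy as np

# One iteration of the generic HSD algorithm, assembled directly from
# (2)-(3) and updated via (6)-(7). Illustrative data only.
rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
b, c = rng.standard_normal(m), rng.standard_normal(n)

y = np.zeros(m)
x, z = np.ones(n), np.ones(n)           # x > 0, z > 0
tau, kappa = 1.0, 1.0
eta, gamma, alpha = 0.3, 0.5, 0.1       # gamma + 2*eta - 1 > 0, i.e. (12)

mu = (x @ z + tau * kappa) / (n + 1)    # (4)
rP = b * tau - A @ x                    # (5)
rD = c * tau - A.T @ y - z
rG = c @ x - b @ y + kappa

# Assemble (2)-(3) in the unknowns (dy, dx, dtau, dz, dkappa).
N = m + 2 * n + 2
K = np.zeros((N, N)); rhs = np.zeros(N)
iy, ix, it = slice(0, m), slice(m, m + n), m + n
iz, ik = slice(m + n + 1, m + 2 * n + 1), m + 2 * n + 1
K[0:m, ix] = A;            K[0:m, it] = -b;          rhs[0:m] = eta * rP
K[m:m+n, iy] = -A.T;       K[m:m+n, it] = c
K[m:m+n, iz] = -np.eye(n);                           rhs[m:m+n] = -eta * rD
K[m+n, iy] = b;  K[m+n, ix] = -c;  K[m+n, ik] = -1.0; rhs[m+n] = eta * rG
K[m+n+1:m+2*n+1, ix] = np.diag(z)
K[m+n+1:m+2*n+1, iz] = np.diag(x);   rhs[m+n+1:m+2*n+1] = gamma*mu - x*z
K[-1, it] = kappa; K[-1, ik] = tau;  rhs[-1] = gamma * mu - tau * kappa

d = np.linalg.solve(K, rhs)
dy, dx, dtau, dz, dkappa = d[iy], d[ix], d[it], d[iz], d[ik]

# Step 3: shifted directions (6), coefficient (1 - gamma - eta).
u = 1.0 - gamma - eta
Dy, Dx, Dtau = dy + u * y, dx + u * x, dtau + u * tau
Dz, Dkappa = dz + u * z, dkappa + u * kappa

# Step 4: update (7).
y1, x1, tau1 = y + alpha * Dy, x + alpha * Dx, tau + alpha * Dtau
z1, kappa1 = z + alpha * Dz, kappa + alpha * Dkappa
```

The computed directions satisfy Lemma 1 exactly ($\Delta x^T\Delta z + \Delta\tau\Delta\kappa = 0$), and both the duality measure and all three residuals scale by the common factor $1 + \alpha^k(1-\gamma-2\eta)$, which is the content of Lemma 2 below.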

Lemma 2. The generic algorithm generates $\{\mu^k\}$ and $\{\theta^k\}$ satisfying
$$\mu^0 = \frac{(x^0)^Tz^0 + \tau^0\kappa^0}{n+1}, \qquad \mu^{k+1} = \left(1 + \alpha^k(1-\gamma-2\eta)\right)\mu^k \qquad (10)$$
and
$$\theta^0 = 1, \qquad \theta^{k+1} = \left(1 + \alpha^k(1-\gamma-2\eta)\right)\theta^k \qquad (11)$$
such that
$$r_P^k = \theta^k r_P^0, \qquad r_D^k = \theta^k r_D^0, \qquad r_G^k = \theta^k r_G^0.$$

Proof. By (9) and (8), we have
$$\begin{array}{rl} \mu^{k+1} &= \frac{(x^{k+1})^Tz^{k+1} + \tau^{k+1}\kappa^{k+1}}{n+1} \\ &= \frac{(x^k)^Tz^k + \tau^k\kappa^k + \alpha^k\left[(x^k)^T\Delta z + (z^k)^T\Delta x + \tau^k\Delta\kappa + \kappa^k\Delta\tau\right]}{n+1} \\ &= \left[1 + \alpha^k(\gamma - 2\gamma - 2\eta + 1)\right]\mu^k = \left[1 + \alpha^k(1-\gamma-2\eta)\right]\mu^k. \end{array}$$
By (2), (6) and (7), we also have
$$r_P^{k+1} = b\tau^{k+1} - Ax^{k+1} = r_P^k + \alpha^k\left(b\Delta\tau - A\Delta x\right) = \left[1 + \alpha^k(1-\gamma-2\eta)\right]r_P^k.$$
Similarly, we have this relation for $r_D^{k+1}$ and $r_G^{k+1}$ as well. Q.E.D.

From Lemma 2, for any choice of $\eta$ and $\gamma$, our algorithm ensures that $\mu^k$ and $\theta^k$ keep a fixed ratio:
$$\theta^k/\mu^k = \theta^0/\mu^0.$$
The non-negativity of $(x^{k+1}, \tau^{k+1}, z^{k+1}, \kappa^{k+1})$ results in $\mu^{k+1} \ge 0$, which implies that the step size $\alpha^k$ must satisfy
$$1 + \alpha^k(1-\gamma-2\eta) \ge 0.$$
Therefore, letting
$$\gamma + 2\eta - 1 > 0, \qquad (12)$$
yields $0 \le 1 + \alpha^k(1-\gamma-2\eta) < 1$. According to Mizuno et al. [11], we have the following corollary.

Corollary 3. If the generic algorithm generates $\{(y^k,x^k,\tau^k,z^k,\kappa^k)\}$ satisfying $\mu^k \to 0$ and
$$\min\left[\min_i\left(x_i^kz_i^k\right),\ \tau^k\kappa^k\right] \ge \rho\mu^k$$
for a certain $\rho > 0$, then every limit point of the sequence is a strictly complementary solution of (HLF).

3 $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration HSD algorithms

For (HLF) the two-dimensional central surface and its neighborhood are defined as
$$Q(\theta,\mu) = \left\{ (y,\ x>0,\ \tau>0,\ z>0,\ \kappa>0) : \begin{pmatrix} Xz \\ \tau\kappa \end{pmatrix} = \mu e,\ \begin{pmatrix} r_P \\ r_D \\ r_G \end{pmatrix} = \theta\begin{pmatrix} r_P^0 \\ r_D^0 \\ r_G^0 \end{pmatrix} \right\},$$
$$\mathcal{N}(\beta) = \left\{ (y,\ x>0,\ \tau>0,\ z>0,\ \kappa>0) : \left\| \begin{pmatrix} Xz \\ \tau\kappa \end{pmatrix} - \mu e \right\| \le \beta\mu,\ \begin{pmatrix} r_P \\ r_D \\ r_G \end{pmatrix} = \theta\begin{pmatrix} r_P^0 \\ r_D^0 \\ r_G^0 \end{pmatrix} \right\}$$
for some $\beta \in (0,1)$, respectively.

Theorem 4. For a given $0 < \beta < 1$ and $(y^k,x^k,\tau^k,z^k,\kappa^k) \in \mathcal{N}(\beta)$, if
$$\alpha^k \le \min\left\{ \frac{1}{2\gamma+2\eta-1},\ \frac{2(1-\beta)\beta\gamma}{\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2} \right\} \qquad (13)$$
then
$$\left\| \begin{pmatrix} X^{k+1}z^{k+1} \\ \tau^{k+1}\kappa^{k+1} \end{pmatrix} - \mu^{k+1}e \right\| \le \beta\mu^{k+1}.$$

Proof. To simplify the notation, we use $\Delta x$ and $\Delta z$ to represent $(\Delta x; \Delta\tau)$ and $(\Delta z; \Delta\kappa)$, and $x^k$, $z^k$ to represent $(x^k; \tau^k)$, $(z^k; \kappa^k)$; therefore $\|e\| = \sqrt{n+1}$. Note that this notation is employed only in the proof of this theorem. As usual, the capital letter denotes the diagonal matrix of a vector; thus $\Delta X = \mathrm{diag}(\Delta x)$ and $\Delta Z = \mathrm{diag}(\Delta z)$. Consider
$$\begin{array}{rl} \|X^{k+1}z^{k+1} - \mu^{k+1}e\| &= \left\| X^kz^k + \alpha^k\left(X^k\Delta z + Z^k\Delta x\right) + (\alpha^k)^2\Delta X\Delta z - \left[1+\alpha^k(1-\gamma-2\eta)\right]\mu^ke \right\| \\ &\le \left\| X^kz^k + \alpha^k\left(\gamma\mu^ke - (2\gamma+2\eta-1)X^kz^k\right) - \left[1+\alpha^k(1-\gamma-2\eta)\right]\mu^ke \right\| + (\alpha^k)^2\|\Delta X\Delta z\| \\ &= \left|1 - \alpha^k(2\gamma+2\eta-1)\right|\, \left\|X^kz^k - \mu^ke\right\| + (\alpha^k)^2\|\Delta X\Delta z\|. \end{array}$$

Using $\Delta x^T\Delta z = 0$, we have
$$\begin{array}{rl} \|\Delta X\Delta z\| &= \left\| \left[(X^k)^{-1}Z^k\right]^{1/2}\Delta X\, \left[X^k(Z^k)^{-1}\right]^{1/2}\Delta Z\, e \right\| \\ &\le \frac{1}{2}\left\| \left[(X^k)^{-1}Z^k\right]^{1/2}\Delta X e + \left[X^k(Z^k)^{-1}\right]^{1/2}\Delta Z e \right\|^2 \\ &= \frac{1}{2}\left\| (X^kZ^k)^{-1/2}\left(Z^k\Delta x + X^k\Delta z\right) \right\|^2 \\ &\le \frac{1}{2\min_i x_i^kz_i^k}\left\| Z^k\Delta x + X^k\Delta z \right\|^2 \\ &= \frac{1}{2\min_i x_i^kz_i^k}\left\| \gamma\mu^ke - (2\gamma+2\eta-1)X^kz^k \right\|^2 \\ &\le \frac{1}{2\min_i x_i^kz_i^k}\left[ \left\|(\gamma+2\eta-1)\mu^ke\right\| + \left\|(2\gamma+2\eta-1)\left(X^kz^k - \mu^ke\right)\right\| \right]^2 \\ &\le \frac{(\mu^k)^2}{2\min_i x_i^kz_i^k}\left[ (\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta \right]^2. \end{array}$$
By $\min_i x_i^kz_i^k \ge (1-\beta)\mu^k$, we have
$$\|X^{k+1}z^{k+1} - \mu^{k+1}e\| \le \left\{ \left|1-\alpha^k(2\gamma+2\eta-1)\right|\beta + \frac{(\alpha^k)^2}{2(1-\beta)}\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2 \right\}\mu^k.$$
Again, Lemma 2 tells us
$$\mu^{k+1} = \left[1 - \alpha^k(\gamma+2\eta-1)\right]\mu^k. \qquad (14)$$
Therefore, $\|X^{k+1}z^{k+1} - \mu^{k+1}e\| \le \beta\mu^{k+1}$ if
$$\left|1-\alpha^k(2\gamma+2\eta-1)\right|\beta + \frac{(\alpha^k)^2}{2(1-\beta)}\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2 \le \beta\left[1-\alpha^k(\gamma+2\eta-1)\right],$$
or
$$\frac{(\alpha^k)^2}{2(1-\beta)}\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2 \le \beta\left[1-\alpha^k(\gamma+2\eta-1)\right] - \beta\left|1-\alpha^k(2\gamma+2\eta-1)\right|.$$
If we further assume
$$1 - \alpha^k(2\gamma+2\eta-1) \ge 0,$$
then it becomes
$$\alpha^k \le \frac{2(1-\beta)\beta\gamma}{\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2}.$$
Thus we have proved the theorem. Q.E.D.

Using a simple continuity argument ([11]), we see from Theorem 4 that, as long as the step size $\alpha^k$ satisfies (13), the resulting point stays in the neighborhood of the central path: $(y^{k+1},x^{k+1},\tau^{k+1},z^{k+1},\kappa^{k+1}) \in \mathcal{N}(\beta)$.

Let us now consider the following optimization problem for a given $0 < \beta < 1$:
$$\begin{array}{ll} \text{minimize} & \mu^{k+1}/\mu^k \\ \text{subject to} & \alpha^k,\ \eta,\ \gamma \text{ satisfy (12), (13)}. \end{array} \qquad (15)$$
Setting the step size according to (13), we obtain
$$\mu^{k+1}/\mu^k = 1 - \alpha^k(\gamma+2\eta-1) = 1 - \min\left\{ \frac{\gamma+2\eta-1}{2\gamma+2\eta-1},\ \frac{2(1-\beta)\beta\gamma(\gamma+2\eta-1)}{\left[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta\right]^2} \right\}.$$
Letting $\omega = \gamma+2\eta-1$, we can rewrite problem (15) as
$$\begin{array}{ll} \text{minimize} & 1 - \min\left\{ \frac{\omega}{\omega+\gamma},\ \frac{2(1-\beta)\beta\gamma\omega}{\left[\omega\sqrt{n+1} + (\omega+\gamma)\beta\right]^2} \right\} \\ \text{subject to} & \omega > 0,\ \gamma > 0. \end{array} \qquad (16)$$
It is easy to verify that, for fixed $\omega$, the problem
$$\text{minimize } 1 - \frac{2(1-\beta)\beta\gamma\omega}{\left[\omega\sqrt{n+1} + (\omega+\gamma)\beta\right]^2}$$
attains its optimal value at the optimizer satisfying $\beta\gamma = \omega\sqrt{n+1} + \omega\beta$. This implies that $\eta$ and $\gamma$ satisfy
$$\gamma = \left(\frac{\sqrt{n+1}}{\beta} + 1\right)(\gamma+2\eta-1). \qquad (17)$$
Under (17) the two terms of the min in (16) become $\frac{\beta}{\sqrt{n+1}+2\beta}$ and $\frac{1-\beta}{2(\sqrt{n+1}+\beta)}$, so the optimal value of (16) is clearly bounded by
$$\max\left\{ 1 - \frac{\beta}{\sqrt{n+1}+2\beta},\ 1 - \frac{1-\beta}{2(\sqrt{n+1}+\beta)} \right\},$$
and therefore
$$\mu^{k+1}/\mu^k \le 1 - \frac{(1-\beta)\beta}{2(\sqrt{n+1}+\beta)}. \qquad (18)$$
The above analysis points out that, for a given $0 < \beta < 1$, the best reduction rate for $\mu$ that the algorithm can achieve using (13) is $1 - O(1/\sqrt{n})$. This results in the $O(\sqrt{n}L)$-iteration complexity. It also implies that a better worst-case complexity is very hard to achieve if the 2-norm neighborhood is used. By setting $\eta$ and $\gamma$ according to (17), the reduction rate is $1 - \frac{\beta}{\sqrt{n+1}+2\beta}$ when $\beta$ is near 0, or $1 - \frac{1-\beta}{2(\sqrt{n+1}+\beta)}$ when $\beta$ is near 1, respectively. Therefore, the algorithm achieves $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity.

A simple choice $(y^0 = 0,\ x^0 = e,\ \tau^0 = 1,\ z^0 = e,\ \kappa^0 = 1)$ ensures that the initial point is $Q(1,1)$ on the central surface. In summary, we have the following theorem.
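The parameter choices above can be evaluated directly. The sketch below computes $\eta$, the step size $\alpha^k$, and the resulting reduction factor from the neighborhood radius $\beta$ and centering weight $\gamma$, following the formulas as reconstructed here (notation and constants are this reconstruction's reading of (12), (13), (17); the numeric inputs are illustrative):

```python
import math

def hsd_parameters(n: int, beta: float, gamma: float):
    """eta and alpha in the spirit of Theorem 5, given beta in (0,1), gamma > 0."""
    s = math.sqrt(n + 1)
    # (17): gamma = (sqrt(n+1)/beta + 1) * (gamma + 2*eta - 1)
    omega = gamma * beta / (s + beta)          # omega = gamma + 2*eta - 1
    eta = (omega + 1.0 - gamma) / 2.0
    # (13) specialized at (17): both terms of the min evaluated in closed form
    alpha = min((s + beta) / (gamma * (s + 2.0 * beta)),
                (1.0 - beta) / (2.0 * beta * gamma))
    return eta, alpha, omega

eta, alpha, omega = hsd_parameters(n=100, beta=0.25, gamma=1.0)
rate = 1.0 - alpha * omega                     # mu^{k+1}/mu^k, by Lemma 2
```

For moderate $n$ the rate behaves like $1 - O(1/\sqrt{n})$, which is where the $O(\sqrt{n}L)$ iteration count comes from.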

Theorem 5. Let (LP) have integer data with a total bit length $L$. Then (HLF) has integer data with a bit length $O(L)$. Furthermore, let $0 < \beta < 1$ and $(y^0,x^0,\tau^0,z^0,\kappa^0) \in \mathcal{N}(\beta)$ (for instance $(y^0,x^0,\tau^0,z^0,\kappa^0) = (0,e,1,e,1)$), choose $\eta$ and $\gamma$ such that
$$\gamma = \left(\frac{\sqrt{n+1}}{\beta} + 1\right)(\gamma+2\eta-1) > 0,$$
and set
$$\alpha^k = \min\left\{ \frac{\sqrt{n+1}+\beta}{\gamma(\sqrt{n+1}+2\beta)},\ \frac{1-\beta}{2\beta\gamma} \right\} > 0.$$
Then the generalized HSD algorithm generates a strictly complementary optimal solution of (HLF) in $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$ iterations.

As shown in Goldman and Tucker [3][15] and Ye et al. [18], we have the following corollary.

Corollary 6. The algorithm specified in Theorem 5 obtains a strictly complementary optimal solution of (LP) and (LD), or detects infeasibility of either (LP) or (LD), in $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$ iterations.

4 An HSD algorithm

In this section, we consider a special case of the generalized HSD algorithms. Let $\eta = 0$. The modified Newton equation system becomes
$$\begin{array}{rl} A d_x - b d_\tau = 0, & \\ -A^T d_y + c d_\tau - d_z = 0, & \\ b^T d_y - c^T d_x - d_\kappa = 0, & \qquad (19) \\ Z^k d_x + X^k d_z = \gamma\mu^k e - X^k z^k, & \\ \tau^k d_\kappa + \kappa^k d_\tau = \gamma\mu^k - \tau^k\kappa^k. & \end{array}$$
We observe that $r_P$, $r_D$, $r_G$ disappear in (19); thus we can avoid computing these residuals. From (12), we have to set $\gamma > 1$.

Designing algorithms based on the homogeneous and self-dual linear feasibility model (HLF) seems to exploit the special properties of linear programming better than working with the original model. We observe that the introduction of the homogeneous variable $\tau$ and the solution update (6) play an important role in this algorithm: they make the feasible and the infeasible interior-point algorithms identical. From Lemma 2, a large $\gamma$ is clearly desired for this new algorithm, since $\eta$ is now fixed at zero. In practice, we can make use of a predictor-corrector strategy to choose a very large $\gamma$ in each iteration, similar to Xu et al. [17], where the strategy was used to choose a very small $\gamma$.
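The role of condition (12) in this special case can be made concrete with a two-line check: by Lemma 2 with $\eta = 0$, the duality measure scales by $1 + \alpha(1-\gamma)$ per iteration, so $\mu$ decreases exactly when $\gamma > 1$ (values below are illustrative):

```python
# With eta = 0, Lemma 2 gives mu^{k+1} = (1 + alpha*(1 - gamma)) * mu^k,
# so condition (12), gamma + 2*eta - 1 > 0, reduces to gamma > 1.
alpha = 0.1
for gamma, should_shrink in [(0.5, False), (1.0, False), (1.5, True)]:
    factor = 1.0 + alpha * (1.0 - gamma)
    assert (factor < 1.0) == should_shrink
```

This is why, unlike the usual setting where the centering weight lies in $(0,1)$, the residual-free variant must over-center with $\gamma > 1$.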

References

[1] D. A. Bayer and J. C. Lagarias, "The nonlinear geometry of linear programming: I. Affine and projective scaling trajectories, II. Legendre transform coordinates and central trajectories," Transactions of the American Mathematical Society 314 (1989) 499-581.

[2] G. de Ghellinck and J.-P. Vial, "A polynomial Newton method for linear programming," Algorithmica 1 (1986) 425-454.

[3] A. J. Goldman and A. W. Tucker, "Polyhedral convex cones," in: H. W. Kuhn and A. W. Tucker, eds., Linear Inequalities and Related Systems (Princeton University Press, Princeton, NJ, 1956) 19-40.

[4] M. Kojima, S. Mizuno, and A. Yoshise, "A primal-dual interior point algorithm for linear programming," in: N. Megiddo, ed., Progress in Mathematical Programming, Interior Point and Related Methods (Springer-Verlag, New York, 1989) 29-47.

[5] M. Kojima, S. Mizuno, and A. Yoshise, "A polynomial-time algorithm for a class of linear complementarity problems," Mathematical Programming 44 (1989) 1-26.

[6] M. Kojima, N. Megiddo, and S. Mizuno, "A primal-dual infeasible-interior-point algorithm for linear programming," Mathematical Programming 61 (1993) 263-280.

[7] I. J. Lustig, R. E. Marsten, and D. F. Shanno, "Computational experience with a primal-dual interior point method for linear programming," Linear Algebra and Its Applications 152 (1991) 191-222.

[8] N. Megiddo, "Pathways to the optimal set in linear programming," in: N. Megiddo, ed., Progress in Mathematical Programming, Interior Point and Related Methods (Springer-Verlag, New York, 1988) 131-158.

[9] S. Mehrotra, "On the implementation of a (primal-dual) interior point method," SIAM Journal on Optimization 2 (1992) 575-601.

[10] S. Mizuno, "Polynomiality of infeasible interior point algorithms for linear programming," Mathematical Programming 67 (1994) 109-120.

[11] S. Mizuno, M. J. Todd, and Y. Ye, "A surface of analytic centers and infeasible-interior-point algorithms for linear programming," Technical Report, School of Operations Research and Industrial Engineering, Cornell University (Ithaca, New York, 1992), to appear in Mathematics of Operations Research.

[12] R. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms, part I: linear programming," Mathematical Programming 44 (1989) 27-42.

[13] Yu. Nesterov, "Long-step strategies in interior-point potential reduction methods," Central Economical and Mathematical Institute, Russian Academy of Science (Moscow, Russia, 1993).

[14] F. A. Potra, "An infeasible interior-point predictor-corrector algorithm for linear programming," Report No. 26, Department of Mathematics, University of Iowa (Iowa City, IA, 1992), to appear in SIAM Journal on Optimization.

[15] A. W. Tucker, "Dual systems of homogeneous linear relations," in: H. W. Kuhn and A. W. Tucker, eds., Linear Inequalities and Related Systems (Princeton University Press, Princeton, NJ, 1956) 3-18.

[16] S. J. Wright, "A path-following infeasible-interior-point algorithm for linear complementarity problems," Optimization Methods and Software 2 (1993) 79-106.

[17] X. Xu, P. F. Hung, and Y. Ye, "A simplified homogeneous and self-dual linear programming algorithm and its implementation," College of Business Administration, The University of Iowa (Iowa City, IA, 1993).

[18] Y. Ye, M. J. Todd, and S. Mizuno, "An $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm," Mathematics of Operations Research 19 (1994) 53-67.

[19] Y. Zhang, "On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem," SIAM Journal on Optimization 4 (1994) 208-227.