A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization


A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization

Jiming Peng, Cornelis Roos, Tamás Terlaky

August 8, 2000

Faculty of Information Technology and Systems, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands
Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada, L8S 4L7

Abstract

We propose a new class of primal-dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of $O\big(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\big)$ iterations, where $\rho \in [0,2]$ is a parameter used in the system defining the search direction. If $\rho = 0$, our results reproduce the well-known complexity of the standard primal-dual Newton method for LO. At each iteration, our algorithm needs only to solve a linear equation system. An extension of the algorithms to semidefinite optimization is also presented.

Keywords: Linear Optimization, Semidefinite Optimization, Interior Point Method, Primal-Dual Newton Method, Polynomial Complexity.

AMS Subject Classification: 90C05

1 Introduction

Interior point methods (IPMs) are among the most effective methods for solving wide classes of optimization problems. Since the seminal work of Karmarkar [7], many researchers have proposed and analyzed various IPMs for Linear and Semidefinite Optimization (LO and SDO), and a large amount of results have been reported. For a survey we refer to recent books on the subject [17], [22], [24]. An interesting fact is that almost all known polynomial-time variants of IPMs use the so-called central path [18] as a guideline to the optimal set, and some variant of Newton's method to follow the central path approximately. Therefore, the theoretical analysis of IPMs consists for a great deal of analyzing Newton's method.

At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called primal-dual large-update methods, which are the most efficient methods in practice (see, e.g., Andersen et al. [1]).

The aim of this paper is to present a new class of primal-dual Newton-type algorithms for LO and SDO. To be more concrete, we need to go into more detail at this stage. We consider first the following linear optimization problem:

(P)   $\min\{c^T x : Ax = b,\ x \ge 0\}$,

where $A \in \mathbf{R}^{m \times n}$ satisfies $\mathrm{rank}(A) = m$, $b \in \mathbf{R}^m$, $c \in \mathbf{R}^n$, and its dual problem

(D)   $\max\{b^T y : A^T y + s = c,\ s \ge 0\}$.

We assume that both (P) and (D) satisfy the interior point condition (IPC), i.e., there exists $(x^0, s^0, y^0)$ such that

$Ax^0 = b,\ x^0 > 0, \qquad A^T y^0 + s^0 = c,\ s^0 > 0.$

It is well known that the IPC can be assumed without loss of generality. For this and some other properties mentioned below, see, e.g., [17]. Finding an optimal solution of (P) and (D) is equivalent to solving the following system:

$Ax = b,\ x \ge 0,$
$A^T y + s = c,\ s \ge 0,$      (1)
$xs = 0.$

Here $xs$ denotes the coordinatewise product of the vectors $x$ and $s$. The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the parameterized equation $xs = \mu e$, where $e$ denotes the all-one vector and $\mu > 0$. Thus we consider the system

$Ax = b,\ x \ge 0,$
$A^T y + s = c,\ s \ge 0,$      (2)
$xs = \mu e.$

If the IPC holds, then for each $\mu > 0$ the parameterized system (2) has a unique solution. This solution is denoted as $(x(\mu), y(\mu), s(\mu))$, and we call $x(\mu)$ the $\mu$-center of (P) and $(y(\mu), s(\mu))$ the $\mu$-center of (D). The set of $\mu$-centers, with $\mu$ running through all positive real numbers, forms a homotopy path, which is called the central path of (P) and (D). The relevance of the central path for LO was recognized first by Sonnevend [18] and Megiddo [10]. If $\mu \to 0$, then the limit of the central path exists, and since the limit points satisfy the complementarity condition $xs = 0$, the limit yields optimal solutions for (P) and (D).

IPMs follow the central path approximately. Let us briefly indicate how this goes. Without loss of generality we assume that $(x(\mu), y(\mu), s(\mu))$ is known for some positive $\mu$. We first update $\mu$ to $\mu^+ := (1-\theta)\mu$ for some $\theta \in (0,1)$. Then we solve the following well-defined Newton system

$A\Delta x = 0,$
$A^T \Delta y + \Delta s = 0,$      (3)
$s\Delta x + x\Delta s = \mu e - xs,$

and get a unique search direction $(\Delta x, \Delta y, \Delta s)$. By taking a step along the search direction, where the step size is defined by some line search rule, one constructs a new triple $(x, y, s)$ that is close to $(x(\mu), y(\mu), s(\mu))$. This process is repeated until the point $(x, y, s)$ is in a certain neighborhood of the central path. Then $\mu$ is again reduced by the factor $1-\theta$ and we apply Newton's method targeting the new $\mu$-centers, and so on. We continue this process until $\mu$ is small enough. Most practical algorithms then construct a basic solution and produce an optimal basic solution by crossing over to the Simplex method. An alternative way is to apply a rounding procedure as described by Ye [23]; see also Mehrotra and Ye [12] and [17].

The choice of the parameter $\theta$ plays an important role both in the theory and practice of IPMs. Usually, if $\theta$ is a constant independent of $n$, for instance $\theta = \frac{1}{2}$, then we call the algorithm a large-update (or long-step) method. If $\theta$ depends on $n$, such as $\theta = \frac{1}{\sqrt{n}}$, then the algorithm is named a small-update (or short-step) method. It is now known that small-update methods have an $O(\sqrt{n}\log\frac{n}{\varepsilon})$ iteration bound, while the large-update ones have a worse iteration bound of $O(n\log\frac{n}{\varepsilon})$ [17, 22, 24]. The reason for the worse bound for large-update methods is that, in both the analysis and the implementation of large-update IPMs, we usually use some proximity or potential function to control the iterates, and up to now we can only prove that the proximity or potential function decreases by at least a constant after one step. For instance, considering the primal-dual Newton method, after one step the proximity $\delta$ used in this paper satisfies $\delta_+^2 \le \delta^2 - \beta$ for some constant $\beta$ [16] (here $\delta$ and $\delta_+$ denote the proximity before and after one step, respectively). On the other hand, contrary to the theoretical results, large-update IPMs work much more efficiently in practice than small-update methods [1]. Several authors have suggested to use so-called higher-order methods to improve the complexity of large-update IPMs [4, 6, 14, 25, 26]. Then, at each iteration, one solves some additional equations based on higher-order approximations to the system.

The motivation of this work is to improve the complexity of large-update IPMs. Different from the higher-order approach, we reconsider the Newton system for (2), keeping in mind that our target is to get closer to the $\mu$-center. Now let us focus on the Newton step. For notational convenience, we introduce the following notations:

$v := \sqrt{\dfrac{xs}{\mu}}, \quad v^{-1} := \sqrt{\dfrac{\mu}{xs}};$      (4)
$d_x := \dfrac{v\,\Delta x}{x}, \quad d_s := \dfrac{v\,\Delta s}{s};$      (5)
$\tilde d_x := \dfrac{\Delta x}{x}, \quad \tilde d_s := \dfrac{\Delta s}{s}.$      (6)

Using the above notations, one can state the centering condition in (2) as $v = v^{-1} = e$. Denoting $d_v := d_x + d_s$, the last equation in (3) is equivalent to $d_v = v^{-1} - v$. Observe that we can also decompose this right-hand side into two parts: the predictor direction, which is obtained by solving $d_v^{\mathrm{Pred}} = -v$, and the corrector direction, given by $d_v^{\mathrm{Corr}} = v^{-1}$.

The corrector direction serves the purpose of centering: it points towards the analytic center of the feasible set, while the predictor direction aims to decrease the duality gap. It is straightforward to verify that $(d_v)_i \le 0$ for all components with $v_i \ge 1$, and $(d_v)_i > 0$ for the components with $v_i < 1$. This means that the Newton step increases $v_i$ if $v_i < 1$ and decreases $v_i$ whenever $v_i > 1$, so as to get closer to the $\mu$-center. It is reasonable to expect that if we can increase the small components and decrease the large components of $v$ more, we might reach our target, the $\mu$-center, faster. Motivated by this observation, we reconsider the right-hand side of the equation defining the corrector direction according to the current point $v$. The new corrector direction is defined by $d_v^{\mathrm{Corr}} = v^{-1-\rho}$, $\rho \ge 0$, thus yielding the new system

$\bar A d_x = 0,$
$\bar A^T \Delta y + d_s = 0,$      (7)
$d_x + d_s = v^{-1-\rho} - v,$

where $\bar A = A V^{-1} X$, $V = \mathrm{diag}(v)$, $X = \mathrm{diag}(x)$, and $\rho \ge 0$ is a parameter. In this work we consider only the case where the parameter $\rho$ is restricted to the interval $[0,2]$. Note that if $\rho = 0$, the new system is identical to the standard Newton system.

It may be clear from the above description that in the analysis of IPMs we need to keep control of the distance from the current iterates to the current $\mu$-centers. In other words, we need to quantify the distance from the vector $xs$ to the vector $\mu e$ by some proximity measure. In fact, the choice of the proximity measure is crucial for both the quality and the elegance of the analysis. The proximity measure we use here is defined as follows:

$\delta(xs, \mu) := \|v - v^{-1}\|.$      (8)

Note that the measure vanishes if $xs = \mu e$ and is positive otherwise. An interesting observation is that in the special case $\rho = 2$, the right-hand side of the third equation in the system (7) represents, up to a constant factor, the negative gradient of the proximity measure in the $v$-space. When solving this system with $\rho = 2$, we thus get the steepest descent direction for the proximity measure, along which the proximity can be driven to zero. As we will see later, after one step along the new search direction the squared proximity decreases by at least $\beta\delta^{2/3}$, where $\beta$ is a constant. Consequently we obtain an improvement of the complexity of the algorithm. We also mention that in [16] the authors have shown that $\delta_+^2 \le \delta^2 - \beta\delta$ after one feasible standard Newton step if $v_{\min} \ge 1$, which is exactly what we will state in Lemma 3.8 of this work. However, we failed to prove a similar inequality for the case $v_{\min} < 1$ and hence could not improve the complexity of the large-update IPM in [16].

The measure $\delta$, up to a factor, was introduced by Jansen et al. [5], and thoroughly used in [17], by Zhao [26] and, more recently, in [16]. Its SDO analogue was also used in the analysis of interior point methods for semidefinite optimization [3]. We notice that variants of the proximity $\delta(xs, \mu)$ had been used by Kojima et al. in [9] and Mizuno et al. in [13].

The paper is organized as follows. First, in Section 2, we present some technical results which will be used in our analysis later. In Section 3 we analyze the method with damped steps and show that it has an $O\big(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\big)$ polynomial iteration bound. In Section 4 we discuss an extension of the new primal-dual algorithm to SDO and study its complexity. Finally, we close the paper with some concluding remarks in Section 5.
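To make the preceding description concrete: computing the direction defined by (7) amounts to solving a single linear system per iteration. The following is a minimal dense-linear-algebra sketch (Python/NumPy; not part of the paper) of one way this could be organized via the usual normal equations. The function name is illustrative, and the explicit right-hand side reflects our reading of the scaled system (7); setting $\rho = 0$ recovers the classical primal-dual Newton direction.

```python
import numpy as np

def modified_newton_direction(A, x, s, mu, rho):
    """Sketch: search direction (dx, dy, ds) for the modified system (7).

    One linear solve (the normal equations) per call; rho = 0 reproduces
    the classical primal-dual Newton step with target mu*e.
    """
    v = np.sqrt(x * s / mu)                 # scaled point v = sqrt(xs/mu)
    r = mu * v**(-rho) - x * s              # modified right-hand side (our reading); rho = 0 gives mu*e - xs
    d = x / s                               # primal-dual scaling x/s
    M = A @ (d[:, None] * A.T)              # normal equations matrix A diag(x/s) A^T
    dy = np.linalg.solve(M, -A @ (r / s))   # enforces A dx = 0
    ds = -A.T @ dy                          # dual feasibility: A^T dy + ds = 0
    dx = (r - x * ds) / s                   # complementarity row: s*dx + x*ds = r
    return dx, dy, ds
```

Any sparse factorization of the matrix M could of course replace the dense solve.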

A few words about the notations. Throughout the paper, denotes the -norm for vectors and the Frobenius norm for matrices while denotes the infinity norm. For any x = x, x,, x n T R n, x min = minx, x,, x n or x max is the component of x which takes the minimal or maximal value. For any symmetric matrix G, we also define λ min G or λ max G the minimal or maximal eigenvalue of G. Furthermore we also assume that the eigenvalues of G are listed according to the order of their absolute values such that λ G λ G λ n G. If G is positive semidefinite, then it holds 0 λ min G = λ n G, λ max G = λ G. For any symmetric matrix G, we also denote G = GG. For two symmetric matrices G, H, the relation G H means H G is positive semidefinite, or equivalently H G 0. Technical Results As we stated in the introduction, a key issue in the analysis of an interior point method, particularly for a large update IPM, is the decreasing property of a positive sequence of the proximity measure values. This is crucial for the complexity of the algorithm. In this section we consider a general positive decreasing sequence. First we give a technical result which will be used in our later analysis. Lemma. Suppose that α. Then αt t α, t [0, ]; 9 αt t α, t 0. 0 If α α > 0, then t t α t t α, t > 0. Proof: The inequality 9 follows since for any fixed α, the function t α + αt is increasing for t [0, ] and zero if t = 0. When α, the function t α α t is convex with respect to t 0 and has a global minimum zero at t =, which gives 0. One can easily verify. An equivalent statement of 9 is Now we can state our main result in this section. t α t α αt α, t, α. Proposition. Suppose that t 0 > is a constant. Suppose {t k, k = 0,, } is a sequence satisfying the following inequalities with γ [0, and β > 0. Then one has t k+ t k βt γ k, k = 0,,, 3 for all k t γ 0 β γ. t k+ 0, 5

Proof: First we note if β t γ 0, then step is sufficient. Hence we can assume without loss of generality that 0 < β < t γ 0. We begin the proof by considering the simple case that β = and τ = 0. It follows from 3 that Assume the current point t k, we get t γ k γ γ = γ γ γ γ t k+ t k t γ k. 4 t γ k = t k t γ k t k+, γ t γ k γ γ γ t γ γ k γ γ where the first inequality follows from. The above inequality further implies Hence, since β =, after at most t γ k+ t γ k t γ 0 γ γ. general case that β. Let us define a new sequence by Then the inequality 3 gives steps we have that t k+ 0. Now we turn to the t k = t k β γ, k = 0,, t k+ t k t γ k. t By our discussion about the special case that β =, we know that after at most γ 0 γ steps it holds t k+ 0, which is equivalent to say that, after at most steps we get t k+ 0. This completes the proof of the proposition. t γ 0 β γ 3 New Primal Dual Methods for LO and Their Complexity In the present section we discuss the new primal dual Newton methods for LO and study the complexity of the large update algorithm. The section consists of four parts. In the first subsection we describe the new algorithm. In the second subsection we estimate the magnitude of the search direction and the maximum value of a feasible step size. The third subsection is devoted to estimate the proximity measure after one step. The complexity of the algorithm is given in the last subsection. 3. The algorithm Let x, y, s denote the solution of the following modified Newton equation system for the parameterized system : A x = 0, A T y + s = 0, 5 s x + x s = µ + ρ e xs. xs ρ 6

This is the Newton system for the equation $xs = \mu\big(\frac{\mu e}{xs}\big)^{\rho/2}$. Note that on the central path it holds that $\frac{xs}{\mu} = e = \big(\frac{\mu e}{xs}\big)^{\rho/2}$. From the definitions of $v$, $d_x$, $d_s$ one can easily check that the above system is equivalent to (7). Recall (cf. Chapter 7 in [17]) that since $\mathrm{rank}(A) = m$, for any $\mu > 0$ the above equation system has a unique solution $(\Delta x, \Delta y, \Delta s)$. The result of a damped Newton step with damping factor $\alpha$ is denoted as

$x_+ = x + \alpha\Delta x, \quad y_+ = y + \alpha\Delta y, \quad s_+ = s + \alpha\Delta s.$      (16)

In the algorithm we use a threshold value $\tau$ for the proximity, and we assume that we are given a triple $(x^0, y^0, s^0)$ such that $\delta(x^0 s^0, \mu^0) \le \tau$ for $\mu^0 = 1$. This can be done without loss of generality (cf. [17]). If, for the current iterates $(x, y, s)$ and barrier parameter value $\mu$, the proximity $\delta(xs, \mu)$ exceeds $\tau$, then we use one or more damped Newton steps to recenter while keeping $\mu$ temporarily fixed; otherwise $\mu$ is reduced by the factor $1-\theta$. This is repeated until $n\mu < \varepsilon$. Thus the algorithm can be stated as follows.

Large Update Primal-Dual Algorithm for LO

Input:
  a proximity parameter $\tau$;
  an accuracy parameter $\varepsilon > 0$;
  a variable damping factor $\alpha$;
  a fixed barrier update parameter $\theta$, $0 < \theta < 1$;
  $(x^0, s^0)$ and $\mu^0 = 1$ such that $\delta(x^0 s^0; \mu^0) \le \tau$.
begin
  $x := x^0$; $s := s^0$; $\mu := \mu^0$;
  while $n\mu \ge \varepsilon$ do
  begin
    $\mu := (1-\theta)\mu$;
    while $\delta(xs; \mu) > \tau$ do
    begin
      solve the system (15);
      $x := x + \alpha\Delta x$; $s := s + \alpha\Delta s$; $y := y + \alpha\Delta y$
    end
  end
end

Remark 3.1 The damping parameter $\alpha$ has to be taken such that the proximity measure function $\delta$ decreases sufficiently. In the next section we determine a default value for $\alpha$.
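For readers who prefer the box above in executable form, here is a schematic transcription (Python; illustrative only, not part of the paper). It assumes a routine such as modified_newton_direction from the sketch in Section 1, a user-supplied damping factor alpha (a default value is derived later in this section), and it omits the safeguard that keeps x and s strictly positive.

```python
import numpy as np

def proximity(x, s, mu):
    """delta(xs, mu) = ||v - 1/v|| with v = sqrt(xs/mu), cf. (8)."""
    v = np.sqrt(x * s / mu)
    return np.linalg.norm(v - 1.0 / v)

def large_update_ipm(A, x, y, s, eps, tau, theta, rho, alpha):
    """Schematic Large Update Primal-Dual Algorithm for LO (illustrative sketch)."""
    n = x.size
    mu = 1.0                                   # mu^0 = 1, delta(x^0 s^0, mu^0) <= tau assumed
    while n * mu >= eps:
        mu *= (1.0 - theta)                    # barrier parameter update
        while proximity(x, s, mu) > tau:       # recentering with mu temporarily fixed
            dx, dy, ds = modified_newton_direction(A, x, s, mu, rho)
            # a practical implementation must also cap alpha so that x, s stay > 0
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s, mu
```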

3. Magnitude of the search direction and feasible step size First recall that the proximity δxs, µ before the step satisfies δxs, µ = We have, using 5 and 6, Defining v v = e T v v = e T v + v e. 7 x + s + = x + α x s + α s = xs + α x s + s x + α x s = xs + α µ + ρ e xs + α x s. xs ρ v + = x+ s + µ, and using that xs = µv and x s = µd x d s we obtain v + = x +s + µ = v + α v ρ v + α d x d s. 8 Since the displacements x and s are orthogonal, the scaled displacements d x and d s are orthogonal as well. Hence we have d T x d s = 0. 9 Thus we get the following expression for the proximity after the step. δx + s +, µ = e T v+ + e e = e T v + α v + v ρ v + α d x d s + = e v T + α v ρ v + e v + α v ρ v + α e d x d s. 0 e v + α v ρ v + α d x d s e In the sequel we will denote δxs, µ simply as δ and δx + s +, µ simply as δ +. Recall that the term d x d s represents the second order effect in the Newton step. It may be worthwhile to consider the case where this term is zero, i.e., when the Newton process is exact. In that case after a step of size α, x + s + /µ is given by wα := v + α v ρ v = αv ρ + αv. We have the following result. Lemma 3. If d x d s = 0 then δ+ αδ + α v ρ v ρ, α [0, ]. Particularly, if α = ρ+, then δ + ρδ ρ +. 8

Proof: Defining If d x d s = 0 then δ+ = e v T + α v ρ v e + v + α v ρ v e. χt := t +, t > 0, t one may easily verify that χt is strictly convex on its domain and minimal at t = where χ = 0. Moreover it holds χt = χ t. It follows δ = χ vi = Therefore, since w i α = v i + αv ρ i χ v i v i = αv ρ i, δ+ = χ w i α. 3 + αvi, we may write δ+ = χ w i α αχvi + αχ v ρ i. This proves the first statement of the lemma. To prove the second conclusion of the lemma, we observe first that, when the Newton process is exact, the step size α = ρ+ is feasible. We need only to consider the case ρ > 0. It follows from δ+ ρ = vi + vi + ρ + ρ + v ρ i ρ ρ + v i 4 ρ + + ρ + ρvi + v ρ i = ρ ρ + δ + ρ + v ρ i ρ ρ + v i 4 ρ + + ρ + ρvi + v ρ i ρ ρ + δ + ρ + v ρ i ρ ρ + ρ v ρ i + 4 ρ + + ρ + ρvi + v ρ i = ρ ρ + ρ + δ + ρ ρ + ρ + δ + ρ ρ + δ, ρv i + v ρ i ρ + v ρ i + v ρ i where the first and second inequalities are given by 0 in Lemma. where that α = ρ, and the last one implied by the fact that v ρ i + v ρ i for all i =,,, n. The proof of the lemma is finished. Now we consider the practical case where the Newton step is not exact. This is the case where the vector d x d s is nonzero. According to 8, after a step of size α we then have see 4-6 v + = x +s + µ = v e + αv d x e + αv d s = v e + α d x e + α d s. 4 Hence the maximal feasible step size is determined by the vector d x, d s. For notation convenience, we also define n σ := v i vi 3 v i v ρ i. 5 9

Lemma 3.3 Let δ and σ be defined by 8 and 5 respectively. It holds that σ δ for all ρ 0, and if ρ [0, ] then σ d x, d s δ. Proof: Combining 7 with 9 we obtain d x, d s = d T x d x + d T s d s = d x + d s T d x + d s = v v ρ. Since ρ [0, ], it follows from in Lemma. that σ = v i v 3 i v i v ρ i v i v i v i v ρ i = dx, d s = δ, which yields the second statement of the lemma. Skipping the first inequality and the second equality in the above proof gives the first conclusion of the lemma. Our next lemma estimates the norm v in terms of σ. We have Lemma 3.4 Let σ be defined by 5. Then v + σ 4+ρ. Proof: First note that v = v min. Hence it suffices to show that v min + σ 4+ρ. This is trivial if v min. Now we consider the case that v min <. From 5 we derive σ vi 3 v i v ρ i v i, i =,, n, which further implies, since ρ [0, ], σ vmin 3 v min v ρ min v min = v 4+ρ min. v 4+ρ min v 4 ρ min v 4+ρ min v ρ min The statement of the lemma follows directly from the above inequality. Now we are ready to state our main result in this section. Lemma 3.5 Let d x, d s be defined by 6. Then it holds d x, d s σ + σ 4+ρ. Furthermore, the maximal feasible step size α max satisfies α max σ + σ 4+ρ. 0

Proof: Since the current point x, y, s is strictly feasible, from 6 one can easily see that the step size α is feasible if and only if both the vectors e + α d x and e + α d s are strictly feasible. It follows d x, d s = dx v, d s d x, d s σ σ + σ 4+ρ, v v min v min where the last two inequalities follow from Lemma 3.3 and Lemma 3.4, which concludes the statements of the lemma. 3.3 Estimate of the proximity after a step We estimate the decreasing value of the proximity after one step in this section. Let d x = d x, d x,..., d n x T and similarly for d s. We define the difference between the proximity before and after one step as a function of α, i.e., fα = δ+ δ. From 9 and the definitions of δ 8 and δ + 0, we derive fα = αv ρ i vi + vi + αv ρ i vi + α d i xd i s vi = αv ρ i v i + v i + α d xi + α d si. 6 Obviously fα is a twice continuously differentiable function of α if the step size α is feasible. Our next result says that in the interval [0, α max the function fα is a convex function of α, where α max is the maximal feasible step size. Lemma 3.6 Let the function fα be defined by 6 and let the parameter α [0, α max. Then fα is convex. Furthermore, it holds v i d x i + α d xi 3 + α d si + ds i + α d si 3 + α d f α; 7 xi and that f α 3 v i d x i + α d xi 3 + α d si + ds i + α d si 3 + α d. 8 xi Proof: f α = By direct algebraic calculus, we have v i d x i + α d xi 3 + α d si + d xi dsi + α d xi + α d si + d s i + α d si 3 + α d xi By using the well-known inequality t t t + t, we get d xi dsi + α d xi + α d si dx i + α d xi 3 + α d si + ds i + α d si 3 + α d xi,.

which implies the statements of the lemma. Denote ω i = Lemma 3.5, dx i + d s i and ω = ω,..., ω n. Obviously it holds ω = d x, d s. From ω σ v min σ + σ 4+ρ. 9 Now recalling 7 and 8 we can conclude that for any α [0, α max, v i ωi + αω 4 f α 3 v i ωi αω 4 3ω vmin αω4. 30 A direct calculation gives f 0 = σ. It follows from 30 and the convexity of fα that where fα f 0α + 3ω vmin = f 0α + ω vmin α ξ 0 0 α 0 ζω 4 dζdξ ξω 3 = σ α + f α, 3 f α = σ α + ω v min α 0 ξω 3 It is easy to see that f α is also convex and twice differentiable in the interval [0, α max. We are interested in the point α > 0 at which the function f α has the value zero. By direct calculus, one has f α = σ α ωα + αω where = ωα v min This means the function f α = 0 if α = ω v min v min dξ dξ. αω αω η, 3 + 4η + 9 η + η = σ vmin. 33 ω = 3 + η 4η + 9 ωη +. 34 For this α we have fα 3 + η 4η + 9 4ωη + σ. Now we are in a position to state our main result in this section.

Theorem 3.7 Let the function fα be defined by 6 with δ. Then the step size α = 3+η 4η+9 ωη+ defined by 33 is feasible. Moreover it holds fα f 0α 30 δ ρ 4+ρ. Proof: The first part of the theorem follows directly from inequality 3 and the choice of α. The second conclusion of the theorem depends on several technical results which will be derived below. We first discuss the case that v min. Lemma 3.8 Let the function fα be defined by 6 with δ and let the step size defined by 34. If v min, then fα < 5 3 δ < 30 δ ρ 4+ρ. Proof: Since v min, it follows from 9 that ω σ. By the choice of η we have that η = σ v min ω Now recalling the fact that δ, it follows from Lemma 3.3 that which implies η σ δ, σ. 35 The above inequality gives α 5 3. 6ω fα f 0α 5 3 σ σ 5 3 σ 5 3 δ. 36 It is easy to verify the second inequality in the lemma. This completes the proof of the lemma. In what follows we consider the case that v min <. First we want to estimate the constant η and the step size α in Theorem 3.7. Lemma 3.9 Let the constants η be defined by 33 and α by 34. If δ then it holds and η + σ ρ 4+ρ, 37 α 5 σ + σ ρ 4 ρ+4. 38 3

Proof: From Lemma 3.4, 9 and 33 we obtain η = σ vmin ω σ + σ σv 3 min σ + σ 6 4+ρ ρ + σ 4+ρ ρ + σ 4+ρ, where the last two inequalities are implied by the fact σ δ and because the function gt = is increasing with respect to t for t 0. This proves the first inequality in the lemma. t +t Now we consider the second inequality. From 34 we derive that α = 3 + η 4η + 9 ωη + η = ω 3 + η + 4η + 9 = ω 3η + + 4η + 9η ω + σ ρ 4+ρ + 3 5 σ + σ ρ 4 ρ+4, 5ω + σ ρ 4+ρ ω 6η + 3 where the first inequality follows by direct calculus and the second one from 37, the third is true since + σ ρ 4+ρ when δ, and the last given by 9. The proof of the lemma is finished. Now we can prove the following lemma. Lemma 3.0 Let the function fα be defined by 6 with δ and let the step size defined by 34 and v min <. Then it holds fα f 0α 30 δ ρ 4+ρ. Proof: From 38 we obtain fα f 0α = ασ 5 = 5 σ ρ 4+ρ 30 δ ρ 4+ρ, σ + σ ρ 4 ρ+4 ρ 4 + σ ρ+4 30 σ ρ 4+ρ where the third inequality is true since + σ ρ 4 ρ+4 when δ, and the last by Lemma 3.3. This completes the proof for the lemma. Theorem 3.7 follows from Lemma 3.8 and Lemma 3.0. 4

3.4 Complexity of the algorithm In this section we derive an upper bound for the number of iterations of the algorithm if in each step the damping factor α is as in Theorem 3.7, namely α = 3+η 4η+9 ωη+. Then each damped Newton step reduces the squared proximity by at least 4+ρ which depends on the current proximity. We first recall a lemma that estimates the proximity after a barrier parameter update. 30 δ ρ Lemma 3. Let x, y, s be strictly feasible and µ > 0. If µ + = θµ then δxs, µ + δxs, µ + θ n θ. Proof: The lemma is a slight modification of Lemma IV.36 page 359 in [7]. The only difference exists in the definition of δ where δ = v v in this work while δ = v v in [7], hence the details are omitted here. Lemma 3. Let δxs, µ τ and τ. Then after an update of the barrier parameter no more than 8 54 + ρ τ + θ n 4+ρ θ iterations are needed to recenter, namely to reach δxs, µ + τ again. Proof: By Lemma 3., after the update, δxs, µ + τ + θ n. θ Each damped Newton step decreases δ by at least 4+ρ. It follows from Proposition. that after at most ρ 8 30 τ + θ n 4+ρ ρ 4+ρ θ = 54 + ρ τ + θ n 4+ρ θ 30 δ ρ inner iterations, the proximity will have passed the threshold value τ. This implies the lemma. Theorem 3.3 If τ, the total number of iterations required by the primal-dual Newton algorithm is no more than 54 + ρ 8 τ + θ n 4+ρ θ θ log n ε. 5

Proof: It can easily be shown (cf. [17]) that the number of barrier parameter updates is given by

$\dfrac{1}{\theta}\log\dfrac{n}{\varepsilon}.$

Multiplication of this number by the bound in Lemma 3.12 yields the theorem.

For large-update IPMs, omitting the round-off brackets in Theorem 3.13 does not change the order of magnitude of the iteration bound. Hence we may safely consider the following expression as an upper bound for the number of iterations in the case $\tau = O(\sqrt{n})$, $\theta \in (0,1)$ independent of $n$, and $\rho = 2$:

$O\big(n^{\frac{2}{3}}\log\dfrac{n}{\varepsilon}\big).$

This gives the best bound known for large-update methods with large neighborhoods. Note that if $\rho = 0$, we obtain the to date best known complexity bounds for both small- and large-update methods. Moreover, our analysis allows extremely aggressive updates of $\mu$ while preserving $O(n\log\frac{n}{\varepsilon})$ complexity. For instance, we may take $\theta = 1 - \frac{1}{\sqrt{n}}$, $\tau = O(\sqrt{n})$ and $\rho = 2$, resulting in an $O(n\log\frac{n}{\varepsilon})$ complexity bound.

4 New Primal-Dual Algorithms for Semidefinite Optimization

In this section we consider an extension of the algorithms proposed in the previous section to the case of SDO. We consider the SDO problem given in the following standard form:

(SDO)   $\min\ \mathrm{Tr}(CX)$   subject to   $\mathrm{Tr}(A_i X) = b_i \ (1 \le i \le m), \quad X \succeq 0,$

and its dual problem

(SDD)   $\max\ b^T y$   subject to   $\sum_{i=1}^m y_i A_i + S = C, \quad S \succeq 0.$

Here $C$ and $A_i$ $(1 \le i \le m)$ are symmetric $n \times n$ matrices, and $b, y \in \mathbf{R}^m$. Furthermore, $X \succeq 0$ means that $X$ is symmetric positive semidefinite. The matrices $A_i$ are assumed to be linearly independent. SDO is a generalization of LO: when all the matrices $A_i$ and $C$ are diagonal, $S$ is automatically diagonal and so $X$ might also be assumed to be diagonal.

The concept of the central path can also be extended to SDO. We assume that both SDO and its dual SDD are strictly feasible. The central path for SDO is defined by the solution set $\{(X(\mu), y(\mu), S(\mu)) : \mu > 0\}$ of the following system:

$\mathrm{Tr}(A_i X) = b_i, \quad i = 1, \dots, m,$
$\sum_{i=1}^m y_i A_i + S = C,$      (39)
$XS = \mu E, \quad X, S \succeq 0,$

where $E$ denotes the $n \times n$ identity matrix and $\mu > 0$. Suppose the point $(X, y, S)$ is strictly feasible, so $X \succ 0$ and $S \succ 0$.

Newton's method amounts to linearizing the system (39), thus yielding the following equations:

$\mathrm{Tr}(A_i \Delta X) = 0, \quad i = 1, \dots, m,$
$\sum_{i=1}^m \Delta y_i A_i + \Delta S = 0,$      (40)
$\Delta X\, S + X\, \Delta S = \mu E - XS.$

A crucial observation for SDO is that the above Newton system might have no symmetric solution $\Delta X$. Many researchers have proposed different ways of symmetrizing the third equation in the Newton system so that the new system has a unique symmetric solution [20, 21]. In this paper we consider the symmetrization scheme that yields the NT direction [15, 21]. Let us define the matrix

$P := X^{\frac12}\big(X^{\frac12} S X^{\frac12}\big)^{-\frac12} X^{\frac12} = S^{-\frac12}\big(S^{\frac12} X S^{\frac12}\big)^{\frac12} S^{-\frac12},$      (41)

and $D := P^{\frac12}$. The matrix $D$ can be used to rescale $X$ and $S$ to the same matrix $V$, defined by [3, 15, 19, 20]

$V := \dfrac{1}{\sqrt{\mu}}\, D^{-1} X D^{-1} = \dfrac{1}{\sqrt{\mu}}\, D S D.$      (42)

Obviously the matrices $D$ and $V$ are symmetric and positive definite. We also define

$\bar A_i := D A_i D, \ i = 1, \dots, m; \qquad D_X := \dfrac{1}{\sqrt{\mu}}\, D^{-1} \Delta X D^{-1}, \qquad D_S := \dfrac{1}{\sqrt{\mu}}\, D\, \Delta S\, D.$      (43)

Then the NT search direction can be written as the solution of the following system:

$\mathrm{Tr}(\bar A_i D_X) = 0, \quad i = 1, \dots, m,$
$\sum_{i=1}^m \Delta y_i \bar A_i + D_S = 0,$      (44)
$D_X + D_S = V^{-1} - V.$

Similarly to the case of LO, the new search direction we suggest for SDO is a slight modification of the NT direction; it is defined by the solution of the following system:

$\mathrm{Tr}(\bar A_i D_X) = 0, \quad i = 1, \dots, m,$
$\sum_{i=1}^m \Delta y_i \bar A_i + D_S = 0,$      (45)
$D_X + D_S = V^{-1-\rho} - V,$

where we choose $\rho \in [0,2]$. Then $\Delta X$ and $\Delta S$ can be calculated from (43). Due to the orthogonality of $\Delta X$ and $\Delta S$, it is trivial to see that

$\mathrm{Tr}(D_X D_S) = \mathrm{Tr}(D_S D_X) = 0.$      (46)

The proximity measure we use here is

$\delta(XS, \mu) := \|V - V^{-1}\|,$      (47)

where $\|\cdot\|$ is the Frobenius norm. The algorithm, stated below, parallels the LO case.
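Before the formal statement, the following is a minimal NumPy sketch (not from the paper) of how the NT scaling (41)-(42) could be computed for symmetric positive definite X and S; the helper names sym_power and nt_scaling are illustrative assumptions.

```python
import numpy as np

def sym_power(M, p):
    """M**p for a symmetric positive definite matrix M, via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return (Q * w**p) @ Q.T

def nt_scaling(X, S, mu):
    """Sketch of the NT scaling: P as in (41), D = P^{1/2}, V as in (42)."""
    Xh = sym_power(X, 0.5)
    P = Xh @ sym_power(Xh @ S @ Xh, -0.5) @ Xh     # (41)
    D = sym_power(P, 0.5)
    Dinv = sym_power(P, -0.5)
    V = Dinv @ X @ Dinv / np.sqrt(mu)              # (42); equals D @ S @ D / sqrt(mu)
    return D, V
```

With D in hand, the scaled data and directions of (43)-(45) follow by congruence with D and its inverse.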

Large Update Primal-Dual Algorithm for SDO Input: A proximity parameter τ; an accuracy parameter ε > 0; a variable damping factor α; a fixed barrier update parameter θ 0, ; a strictly feasible X 0, S 0 and µ 0 > 0 such that δx 0 S 0 ; µ 0 τ. begin X := X 0 ; S := S 0 ; µ := µ 0 ; while nµ ε do begin µ := θµ; while δx, S; µ τ do Solve the system 45, begin X := X + α X; S := S + α S; y := y + α y; end end end Now we begin to estimate the decrease of the proximity after one step. Let us define σ = Tr V V 3 V V ρ. 48 Note that the matrices V V 3 and V V ρ commute. Hence they admit a similarity transformation that simultaneously diagonalizes both matrices. Then by using similar arguments as in the LO case, one can easily derive the following result Lemma 4. Let δ and σ be defined by 47 and 48 respectively. It holds σ D X, D S δ, and V = λmax V = + σ 4+ρ. 49 λ min V Defining δ + as the proximity measure after a feasible step, we have δ + = Tr V + αd X V + αd S n + Tr V + αd X V + αd S = Tr V + αtr V D X + D S n + Tr V + αd X V + αd S, 50 8

where the equality follows from 46. Our goal is to estimate the decreasing value of fα = δ + δ. 5 for a feasible step size α. The main difficulty in the estimation of the function fα is to evaluate its first and second derivatives. For this we need some knowledge about matrix functions []page 490-49. Suppose the matrix functions Gt, Ht are differentiable and nonsingular at the point t. Then we have d dt G t = G t[ d dt Gt]G t; 5 d d dt Tr Gt = Tr dt Gt, 53 d dt GtHt = [ d dt Gt]Ht + Gt[ d Ht]. 54 dt The following inequalities about matrix eigenvalues are also necessary for our analysis [] 3.3.5, page 83, 3.3.46a,page 9. Suppose that the matrices G, H are symmetric, we have The inequality 56 also implies that Tr GH λ i Gλ i H, 55 λ i GH min λ Gλ i H, λ Hλ i G. 56 Tr GH min λ G λ i H, λ H λ i G. 57 Particularly, if G G and H 0, then it holds Tr G H Tr G H. 58 In fact, for symmetric matrices G, H, it is not difficult to prove the following result which is a refinement of 55. Lemma 4. Suppose both G and H are symmetric. Then it holds Tr GH Tr G H. 59 Proof: The proof is inductive. First we observe that for any orthogonal matrix Q, it holds Tr QGHQ T = Tr GH. Premultipling by an orthogonal matrix Q and postmultipling by its transpose Q T if necessary, we can assume without loss of generality that G is diagonal. Now recalling the definition of the operator for matrices, we can claim that H H which implies H i,i H i,i for all i =,,, n. It follows Tr GH = G i,i H i,i G i,i H i,i G i,i H i,i = Tr G H. This completes the proof of the lemma. 9

Using 5, 53 and 54 we obtain that f 0 = σ. Now we are going to estimate the second derivative f α of fα. For notation convenience we also define D x = V DX V, Ds = V DS V. 60 We insert here a technical result about the norms of the matrices D x and D s. Lemma 4.3 Let the matrices D x and D s are defined by 60. Then it holds Dx + Ds σ + σ 4 4+ρ. Proof: From 56 we conclude that λ i D x λ min V λ id X, and It follows Dx + Ds = n = λ i D s λ i D x + λ i D s λ min V λ id S. λ min V λ min V D X + D S = σ + σ 4 4+ρ, λ i D X + λ i D X λ min V D X + D S where the third equality given by 46, and the last inequality follows from the definition of σ and 49. Note that the last term in 50 can be written as Tr V E + α Dx V E + α D s V. A useful observation is that the matrices D x and E + α D x commute, and similarly D s and E + α D s. Now we are ready to state one of our main results in this section. Lemma 4.4 Suppose the step size α is strictly feasible. Then it holds f 3 λ i α D x λ min V αλ i Dx 3 αλ i Ds + λ i D s αλ i Ds 3 αλ i Dx. 0

Proof: By applying 5, 53 and 54 to the function fα, we obtain f α = Tr V E + α Dx 3 D x V E + α D s V +Tr V E + α Dx V E + α D s 3 D s V +Tr V E + α Dx Dx V E + α D s Ds V. 6 We proceed by considering the first term in the above formulae. By the definition of the operator for a matrix, we conclude that which equivalent to E + α D x E α Dx, E + α D s E α Ds, E + α D x E α Dx, E + α D s E α Ds. Hence E + α D x 3 D x = D x E + α D x 3 Dx D x E + α D x 3 Dx = Dx E + α Dx 3 Dx E α Dx 3. Since Dx = D x, one has V E + α Dx 3 D x V V E α Dx 3 D x V. It follows Tr V E + α Dx 3 D x V E + α D s V Tr V E α Dx 3 D x V E + α D s V Tr V E α Dx 3 D x V E α Ds V, where the inequalities given by 58. Now recalling 56 we can claim that and λ i [V E α Dx 3 D x V λ ] i D x λ min V [ αλ i Dx, i =,, n; ] 3 λ i [V E α Ds V ] λ min V αλ i Dx, i =,, n. The above two inequalities, combining with 55 yield Tr V E + α Dx 3 D x V E + α D s V Tr V E α Dx 3 D x V E α Ds V λ min V λ i D x αλ i Dx 3 αλ i Ds.

Similarly we have Tr V E + α Ds 3 D s V E + α D x V Tr V E α Ds 3 D s V E α Dx V λ min V λ i D s αλ i Ds 3 αλ i Dx. The proof of the Lemma will be finished if we can show that the last term in 6 satisfies the following inequality Tr V E + α Dx Dx V E + α D s Ds V λ i D x λ min V αλ i Dx 3 αλ i Ds + λ i D s αλ i Ds 3 αλ i Dx. Using the definition of the operator again, one can easily see that V E + α Dx Dx V V E + α Dx Dx V V E α Dx Dx V, V E + α Ds Ds V V E + α Ds Ds V V E α Ds Ds V. These two inequalities, together with 55 and Lemma 4. give Tr V E + α Dx Dx V E + α D s Ds V V Tr E + α Dx Dx V V E + α Ds Ds V Tr V E α Dx Dx V E α Ds Ds V λ i D x λ i D s λ min V αλ i Dx αλ i Ds λ min V λ i D x αλ i Dx 3 αλ i Ds + λ i D s αλ i Ds 3 αλ i Dx where the last inequality follows from Cauchy inequality. This completes the proof of Lemma 4.4. Similar to the LO case, we also define ω i = direct consequence of Lemma 4.4 is f α 6 λ i D x + λ i D x and ω = ω, ω,, ω n. A 3ω λ min V αω 4,, which is the same as in the case of LO, except that v min is replaced by λ min V. f 0 = σ, we can use the same arguments as in the LO case to get Because fα σ α + f α, 63

where f α = σ α + and f α has the value zero at the point where α = ω ω λ min V + 4η + 9 η + α 0 ξω 3 = 3 + η 4η + 9 ωη + dξ,, 64 For this α we have η = σ λ min V. 65 ω fα 3 + η 4η + 9 4ωη + σ. Our next result estimates the above-defined constant η and the step size α in terms of σ. Lemma 4.5 Let the constants η be defined by 65 and α by 64. If δ then it holds and η + σ ρ 4+ρ, 66 α 5 σ + σ ρ 4 ρ+4. 67 Proof: First note that from Lemma 4.3 and its proof we obtain ω σ σ + σ 4+ρ. 68 λ min V The above relation means η = σ λ min V ω σ ω σ if λ min V. Hence 66 is true whenever λ min V. Now we consider the case λ min V <. Using 68, together with Lemma 4., 64 and 65, and by following an analogous process as in the proof of 37 in Lemma 3.9, one gets the desirable inequality 66. By using 66 and 68, and following similar arguments as in the proof of Lemma 3.9, one can easily prove 67. This completes the proof of the lemma. Now we can state our another main result in this section, which is a direct consequence of 63 and Lemma 4.5. Theorem 4.6 Let the function fα be defined by 5 with δ. Then the step size α = 3+η 4η+9 ωη+ defined by 65 is feasible. Moreover it holds fα f 0α 30 δ ρ 4+ρ. 3

We proceed to estimate the complexity of the algorithm. If the damping factor $\alpha$ is defined as in Theorem 4.6, then each damped Newton step reduces the squared proximity by at least the same amount as in the LO case. From Lemma 3.11 we know how large the proximity $\delta(XS, \mu^+)$ can be after the update of $\mu$. It then follows from Lemma 3.12 that after an update of the barrier parameter no more iterations are needed to reach $\delta(XS, \mu^+) \le \tau$ again than in the LO case. This means that the algorithm enjoys the polynomial complexity bound of Theorem 3.13. For large-update IPMs, omitting the round-off brackets in this estimate, one can see that the complexity of our algorithm for SDO is of the order

$O\big(n^{\frac{4}{4+\rho}}\log\dfrac{n}{\varepsilon}\big).$

5 Concluding Remarks

A new class of search directions was proposed for solving LO and SDO problems. The new directions are a slight modification of the classical Newton direction. By using some new analysis tools, we proved that the large-update method based on the new direction has a complexity of order $O\big(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\big)$. It is worthwhile to note that a simple idea, changing the corrector direction slightly, improves the complexity of the algorithm. This gives rise to some interesting issues. The first issue is whether the new algorithm works well in practice, and how to incorporate the idea of this paper into the implementation of IPMs. For instance, in the Mehrotra-type predictor-corrector algorithm [11], the target used is $\mu e$ (or $\mu E$ for SDO); what will happen if we replace this by the new target used here? We also mention that we have implemented a simple version of our algorithm and tested it on a few problems. Our preliminary numerical results show that the algorithm is promising. Nevertheless, much more work is needed to test the new approach.

The second question is related to the proximity and the search direction. As we claimed in the introduction, the proximity is crucial for both the quality and the elegance of the analysis. In the preparation of this work, we tried to give a proof of the complexity of the algorithm based on the logarithmic barrier approach. However, the complexity we obtained is not as good as the one presented here. As we observed in the introduction, when $\rho = 2$, the right-hand side of the equation system defining the new search direction is the negative gradient of the proximity we used. It is also of interest to note that the right-hand side defining the classical Newton direction is the negative gradient of the proximity based on the logarithmic barrier approach. This indicates some interrelation between the search direction and the proximity used in the analysis. Will a new analysis based on a new proximity give a better complexity for the standard Newton method? Are there new IPMs with large updates whose complexity is equal to or less than the best known complexity for IPMs? The complexity of our algorithm depends on the parameter $\rho$. In this work we were forced to restrict ourselves to the case $\rho \in [0,2]$.

If we could remove this restriction, we might be able to approach the best known complexity bound as closely as we wish by letting $\rho$ go to infinity. This will be a topic for further research.

Our third question is about the algorithm for SDO. The new search direction in this paper is based on the NT symmetrization scheme: is it possible to design similar algorithms using other schemes? What is the complexity of these new algorithms? This is a topic deserving more research.

Lastly, we would like to mention that there are also other ways to extend the results presented here, for instance by studying the local convergence properties of these algorithms. This is particularly important since, usually, when the iterate is close to the solution set, the performance of an algorithm is determined by its local convergence properties. It may also be worthwhile to build similar algorithms for classes of linear complementarity problems and for convex programming.

References

[1] E.D. Andersen, J. Gondzio, Cs. Mészáros, and X. Xu. Implementation of interior point methods for large scale linear programming. In T. Terlaky, editor, Interior Point Methods of Mathematical Programming, pages 189-252. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.

[2] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.

[3] E. de Klerk. Interior Point Methods for Semidefinite Programming. Ph.D. Thesis, Faculty of ITS/TWI, Delft University of Technology, The Netherlands, 1997.

[4] P. Hung and Y. Ye. An asymptotically O(sqrt(n)L)-iteration path-following linear programming algorithm that uses long steps. SIAM Journal on Optimization, 6:570-586, 1996.

[5] B. Jansen, C. Roos, T. Terlaky, and J.-Ph. Vial. Primal-dual algorithms for linear programming based on the logarithmic barrier method. Journal of Optimization Theory and Applications, 83:1-26, 1994.

[6] B. Jansen, C. Roos, T. Terlaky, and Y. Ye. Improved complexity using higher-order correctors for primal-dual Dikin affine scaling. Mathematical Programming, Series B, 76:117-130, 1997.

[7] N.K. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373-395, 1984.

[8] M. Kojima, S. Mizuno, and A. Yoshise. A primal-dual interior point algorithm for linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages 29-47. Springer Verlag, New York, 1989.

[9] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, volume 538 of Lecture Notes in Computer Science. Springer Verlag, Berlin, Germany, 1991.

[10] N. Megiddo. Pathways to the optimal set in linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages 131-158. Springer Verlag, New York, 1989. Identical version in: Proceedings of the 6th Mathematical Programming Symposium of Japan, Nagoya, Japan, pages 1-35, 1986.

[11] S. Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 2:575-601, 1992.

[12] S. Mehrotra and Y. Ye. On finding the optimal facet of linear programs. Mathematical Programming, 62:497-515, 1993.

[13] S. Mizuno and A. Nagasawa. A primal-dual affine scaling potential reduction algorithm for linear programming. Mathematical Programming, 62:119-131, 1993.

[14] R.D.C. Monteiro, I. Adler, and M.G.C. Resende. A polynomial-time primal-dual affine scaling algorithm for linear and convex quadratic programming and its power series extension. Mathematics of Operations Research, 15:191-214, 1990.

[15] Y.E. Nesterov and M.J. Todd. Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research, 22:1-42, 1997.

[16] J. Peng, C. Roos, and T. Terlaky. New complexity analysis of the primal-dual Newton method for linear optimization. Technical Report No. 98-05, Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands, 1998. To appear in Annals of Operations Research.

[17] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization. An Interior Point Approach. John Wiley & Sons, Chichester, UK, 1997.

[18] G. Sonnevend. An "analytic centre" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In A. Prékopa, J. Szelezsán, and B. Strazicky, editors, System Modelling and Optimization: Proceedings of the 12th IFIP Conference held in Budapest, Hungary, September 1985, volume 84 of Lecture Notes in Control and Information Sciences, pages 866-876. Springer Verlag, Berlin, West Germany, 1986.

[19] J.F. Sturm and S. Zhang. Symmetric primal-dual path following algorithms for semidefinite programming. Technical Report 9554/A, Tinbergen Institute, Erasmus University Rotterdam, The Netherlands, 1995.

[20] M.J. Todd. A study of search directions in primal-dual interior-point methods for semidefinite programming. Technical Report 1205, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853, October 1997.

[21] M.J. Todd, K.C. Toh, and R.H. Tütüncü. On the Nesterov-Todd direction in semidefinite programming. SIAM Journal on Optimization, 8:769-796, 1998.

[22] S.J. Wright. Primal-Dual Interior-Point Methods. SIAM, Philadelphia, USA, 1997.

[23] Y. Ye. On the finite convergence of interior-point algorithms for linear programming. Mathematical Programming, 57:325-335, 1992.

[24] Y. Ye. Interior Point Algorithms, Theory and Analysis. John Wiley & Sons, Chichester, UK, 1997.

[25] Y. Zhang and D. Zhang. On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms. Mathematical Programming, 68:303-318, 1995.

[26] G.Y. Zhao. Interior point algorithms for linear complementarity problems based on large neighborhoods of the central path. SIAM Journal on Optimization, 8:397-413, 1998.