SUPERLINEAR CONVERGENCE OF A SYMMETRIC PRIMAL-DUAL PATH FOLLOWING ALGORITHM FOR SEMIDEFINITE PROGRAMMING

ZHI-QUAN LUO*, JOS F. STURM†, AND SHUZHONG ZHANG‡

Abstract. This paper establishes the superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero. The interior point algorithm considered here closely resembles the Mizuno-Todd-Ye predictor-corrector method for linear programming, where it is known to be quadratically convergent. It is shown that when the iterates are well centered, the duality gap is reduced superlinearly after each predictor step. Indeed, if each predictor step is succeeded by $r$ consecutive corrector steps, then the predictor reduces the duality gap superlinearly with order $\frac{2}{1+2^{-r}}$. The proof relies on a careful analysis of the central path for semidefinite programming. It is shown that under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the primal-dual optimal solution set, and the distance from any point on the central path to this analytic center is bounded by the duality gap.

Key words. Semidefinite programming, central path, path following, superlinear convergence.

AMS subject classifications. 90C25, 90C26, 90C60.

1. Introduction. Recently, there have been many interior point algorithms developed for semidefinite programming (SDP); see for example [1, 2, 4, 8, 12, 14, 15, 17, 21]. These algorithms differ in their choices of scaling matrix, the size of the central path neighborhoods, and stepsize rules, among others. In particular, the algorithms of Kojima-Shindoh-Hara [8] and Nesterov-Todd [14, 15] are based on the primal-dual scaling, and both can be viewed as extensions of the predictor-corrector method for linear programming [11].
It has been shown [7, 9, 14, 15, 17, 21] that these algorithms for SDP retain many important properties of the interior point algorithms for linear programming, including polynomial complexity. For an overview of SDP and its applications, we refer to Vandenberghe and Boyd [19]. However, there exists considerable difficulty in extending one key property of the predictor-corrector method for linear programming to the interior point algorithms for SDP, namely the quadratic convergence of the duality gap (see [20] for a proof of the LCP case). In some sense, the need for superlinear convergence in solving SDP is more pronounced than in the linear programming case, because for SDP there cannot exist any finite termination procedures as in the case of linear programming. Indeed, the recent papers of Kojima-Shida-Shindoh [7] and Potra-Sheng [16] are both focused on the issue of superlinear convergence for solving SDP. In particular, the latter reference provided a sufficient condition for the superlinear convergence of an infeasible path following algorithm, while the former reference [7] established the superlinear convergence of their algorithm [8] under certain key assumptions. These assumptions are: (1) the SDP is nondegenerate, in the sense that the Jacobian matrix of its KKT system is nonsingular; (2) the SDP has a strictly complementary optimal solution; (3) the iterates converge tangentially to the central path, in the sense that the size of the central path neighborhood in which the iterates reside must tend to zero.

* Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, L8S 4L7, Canada (luozq@mcmail.cis.mcmaster.ca). Supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG009039, during a research leave to the Econometric Institute, Erasmus University Rotterdam.
† Econometric Institute, Erasmus University Rotterdam, The Netherlands (sturm@few.eur.nl).
‡ Econometric Institute, Erasmus University Rotterdam, The Netherlands (zhang@few.eur.nl).

Among these three assumptions for superlinear convergence, (2) is inevitable, since it is needed even in the case of LCP (see [20]). Assumption (3) is needed to ensure that the duality gap is reduced superlinearly after each predictor step for all points in the central path neighborhood. In reference [7], an example was given which showed that, without the tangential convergence assumption, the duality gap is reduced only linearly after one predictor step for certain points in the central path neighborhood. Our goal in this paper is to establish the superlinear convergence of a symmetric path following algorithm for SDP under only assumptions (2) and (3), i.e., without the nondegeneracy assumption. In particular, we consider the primal-dual path following algorithm of Nesterov-Todd [14, 15] (later discovered independently by Sturm and Zhang [17] using a V-space notion). In this paper we adopt the framework of [17], since it greatly facilitates the subsequent analysis. We show that this symmetric primal-dual path following algorithm has an order of convergence that is asymptotically quadratic (i.e., sub-quadratic). Indeed, for any given constant positive integer $r$, the algorithm can be set so that the duality gap decreases superlinearly with order $\frac{2}{1+2^{-r}}$ after one predictor (affine scaling) step followed by (at most) $r$ corrector steps. The cornerstone in our bid to establish this superlinear convergence result is a bound on the distance from any point on the central path to the optimal solution set (see Section 3).
Specifically, it is shown that, under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the optimal solution set, and that the distance to this analytic center from any point on the central path can be bounded above by the duality gap. These properties of the central path are algorithm-independent and are likely to be useful in the analysis of other interior point algorithms for SDP.

The organization of this paper is as follows. At the end of this section, we describe some basic notation to be used in this paper. In Section 2, we will discuss some fundamental background notions, and we will make two assumptions concerning the solution set of the SDP. In Section 3 we will analyze the limiting behavior of the primal-dual central path. In Section 4, the notion of V-space for SDP is reviewed and a path following algorithm in the spirit of [17] is introduced. The superlinear convergence of this algorithm is established in Section 5. Finally, some concluding remarks are given in Section 6.

Notation. The space of symmetric $n \times n$ matrices will be denoted $\mathcal{S}$. Given matrices $X$ and $Y$ in $\Re^{m_1 \times m_2}$, the standard inner product is defined by $X \bullet Y = \operatorname{tr} X^T Y$, where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix. The notation $X \perp Y$ denotes orthogonality in the sense that $X \bullet Y = 0$. The Euclidean norm and its associated operator norm, viz. the spectral norm, are both denoted by $\|\cdot\|$. The Frobenius norm of $X$ is $\|X\|_F = \sqrt{X \bullet X}$. If $X \in \mathcal{S}$ is positive (semi-)definite, we write $X \succ 0$ ($X \succeq 0$). The cone of positive semidefinite matrices is denoted by $\mathcal{S}_+$ and the cone of positive definite matrices by $\mathcal{S}_{++}$. The identity matrix is denoted by $I$. We use the standard Landau notation ("big O" and "small o"). In particular, if $\{u(\mu) > 0\}$ and $\{w(\mu) > 0\}$ are real sequences, then $u(\mu) = O(w(\mu))$ means that $u(\mu)/w(\mu)$ is bounded independent of $\mu$; $u(\mu) = o(w(\mu))$ means that $\lim_{\mu \to 0} u(\mu)/w(\mu) = 0$; $u(\mu) \sim w(\mu)$ means that $\lim_{\mu \to 0} u(\mu)/w(\mu) = 1$; and $u(\mu) = \Theta(w(\mu))$ whenever $u(\mu)/w(\mu)$

and $w(\mu)/u(\mu)$ are both bounded. For a symmetric positive definite matrix, we use "$O$" and "$\Theta$" to denote the order of all its eigenvalues. Hence, for $U(\mu) \in \mathcal{S}_{++}$, the notation $U(\mu) = \Theta(w(\mu))$ signifies the existence of $\sigma, \tau > 0$ such that
\[ \sigma w(\mu) I \preceq U(\mu) \preceq \tau w(\mu) I \quad \mbox{for all } \mu > 0. \]
Therefore, the condition $U(\mu) = \Theta(w(\mu))$ implies that $\|U(\mu)\| = \Theta(w(\mu))$ and that the diagonal entries $U_{11}(\mu), U_{22}(\mu), \ldots, U_{nn}(\mu)$ are all $\Theta(w(\mu))$ too.

2. Problem formulation. A semidefinite programming (SDP) problem is given as
\[ \begin{array}{ll} \mbox{minimize} & C \bullet X \\ \mbox{subject to} & A^{(i)} \bullet X = b_i, \quad i = 1, 2, \ldots, m, \\ & X \succeq 0, \end{array} \tag{P} \]
where $C \in \mathcal{S}$, $A^{(1)}, A^{(2)}, \ldots, A^{(m)} \in \mathcal{S}$ and $b \in \Re^m$. The decision variable is $X \in \mathcal{S}$. The corresponding dual program can be formulated as
\[ \begin{array}{ll} \mbox{maximize} & b^T y \\ \mbox{subject to} & \sum_{i=1}^m y_i A^{(i)} + Z = C, \\ & Z \succeq 0. \end{array} \tag{D} \]
Denote the feasible sets of (P) and (D) by $\mathcal{F}_P$ and $\mathcal{F}_D$ respectively, i.e.
\[ \mathcal{F}_P = \{X \in \mathcal{S} : A^{(i)} \bullet X = b_i, \ i = 1, 2, \ldots, m, \ X \succeq 0\}, \]
\[ \mathcal{F}_D = \{Z \in \mathcal{S} : Z = C - \sum_{i=1}^m y_i A^{(i)} \mbox{ for some } y \in \Re^m, \ Z \succeq 0\}. \]
We make the following assumptions throughout this paper.

Assumption 2.1. There exist positive definite solutions $X \in \mathcal{F}_P$ and $Z \in \mathcal{F}_D$ for (P) and (D) respectively.

Assumption 2.2. There exists a pair of strictly complementary primal-dual optimal solutions for (P) and (D). Specifically, there exists $(X^*, Z^*) \in \mathcal{F}_P \times \mathcal{F}_D$ such that
\[ X^* Z^* = 0, \quad X^* + Z^* \succ 0. \]
Since $X^* Z^* = Z^* X^* = 0$, we can diagonalize $X^*$ and $Z^*$ simultaneously. Therefore, by applying an orthonormal transformation to the problem data if necessary, we can assume without loss of generality that $X^*$, $Z^*$ are both diagonal and of the form
\[ X^* = \begin{pmatrix} \Lambda_B & 0 \\ 0 & 0 \end{pmatrix}, \quad Z^* = \begin{pmatrix} 0 & 0 \\ 0 & \Lambda_N \end{pmatrix}, \tag{2.1} \]

where $\Lambda_B = \operatorname{diag}(\lambda_1, \ldots, \lambda_K)$, $\Lambda_N = \operatorname{diag}(\lambda_{K+1}, \ldots, \lambda_n)$ for some integer $0 \le K \le n$ and some positive scalars $\lambda_i > 0$, $i = 1, 2, \ldots, n$. Here the subscripts $B$ and $N$ signify the "basic" and "nonbasic" subspaces (following the terminology of linear programming). Throughout this paper, the decomposition of any $n \times n$ matrix $X$ is always made with respect to the above partition $B$ and $N$. In fact, we shall adhere to the following notation throughout:
\[ X = \begin{pmatrix} X_B & X_U \\ X_U^T & X_N \end{pmatrix}, \]
so $X_U$ will always denote the off-diagonal block of $X$, of size $K \times (n-K)$, etc.

Notice that $X \in \mathcal{F}_P$ is an optimal solution to (P) if and only if $X Z^* = 0$. Hence, by Assumption 2.2, the primal optimal solution set can be written as
\[ \mathcal{F}_P^* = \{X \in \mathcal{F}_P : X_U = 0 \mbox{ and } X_N = 0\}. \]
Analogously, the dual optimal solution set is given by
\[ \mathcal{F}_D^* = \{Z \in \mathcal{F}_D : Z_U = 0 \mbox{ and } Z_B = 0\}. \]
Given $\mu \in \Re_{++}$, the pair $(X, Z) \in \mathcal{F}_P \times \mathcal{F}_D$ is said to be the $\mu$-center $(X(\mu), Z(\mu))$ if and only if
\[ XZ = \mu I. \tag{2.2} \]
We refer to [8, 18] for a proof of the existence and uniqueness of $\mu$-centers. The central path of the problem (P) is the curve
\[ \{(X(\mu), Z(\mu)) : \mu > 0\}. \]
The logarithmic barrier function $\log \det X_B$ is strictly concave on the relative interior of the primal optimal solution set,
\[ \{X \in \mathcal{F}_P^* : X_B \succ 0\}. \]
Moreover, Assumption 2.1 implies that $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ are bounded. The unique maximizer $X^a$ of $\log \det X_B$ on the relative interior of $\mathcal{F}_P^*$ is called the analytic center of $\mathcal{F}_P^*$. It is characterized by the Karush-Kuhn-Tucker system
\[ \begin{cases} X^a_B Z_B = I, \\ \sum_{i=1}^m y_i A^{(i)}_B + Z_B = 0, \\ X^a \in \mathcal{F}_P^* \mbox{ and } Z_B \succ 0. \end{cases} \tag{2.3} \]
In a similar fashion, we define the analytic center $Z^a$ of $\mathcal{F}_D^*$ as the maximizer of the logarithmic barrier $\log \det Z_N$ on the relative interior of $\mathcal{F}_D^*$; $Z^a$ is characterized by the Karush-Kuhn-Tucker system
\[ \begin{cases} X_N Z^a_N = I, \\ A^{(i)} \bullet X = 0, \quad i = 1, 2, \ldots, m, \\ Z^a \in \mathcal{F}_D^* \mbox{ and } X_N \succ 0. \end{cases} \]
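A fact used repeatedly in the analysis below is that the difference of two primal feasible solutions is orthogonal, in the trace inner product, to the difference of two dual feasible solutions: the former lies in the null space of the constraint maps $A^{(i)} \bullet (\cdot)$, the latter in their span. A small numerical sketch of this (with hypothetical random data, not from the paper; numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

def sym(M):
    return (M + M.T) / 2

def inner(X, Y):            # trace inner product X • Y
    return float(np.trace(X.T @ Y))

# Hypothetical problem data: symmetric A^(i) and C.
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
C = sym(rng.standard_normal((n, n)))

# Orthonormalize {A^(i)} in the trace inner product (Gram-Schmidt).
basis = []
for Ai in A:
    Q = Ai - sum(inner(Ai, Bj) * Bj for Bj in basis)
    basis.append(Q / np.sqrt(inner(Q, Q)))

# Two matrices X, X2 with identical constraint values A^(i) • X:
# their difference is a symmetric step projected onto {N : A^(i) • N = 0}.
X = np.eye(n)
N = sym(rng.standard_normal((n, n)))
N_perp = N - sum(inner(N, Bj) * Bj for Bj in basis)
X2 = X + N_perp

# Two dual-feasible Z, Z2 = C − Σ y_i A^(i): their difference is in span{A^(i)}.
y, y2 = rng.standard_normal(m), rng.standard_normal(m)
Z = C - sum(yi * Ai for yi, Ai in zip(y, A))
Z2 = C - sum(yi * Ai for yi, Ai in zip(y2, A))

# (X − X2) • (Z − Z2) = 0: primal and dual feasible directions are orthogonal.
print(abs(inner(X - X2, Z - Z2)))   # tiny (rounding error only)
```

This orthogonality holds regardless of positive semidefiniteness, which is why it can be applied freely to differences of central path points and optimal solutions.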

3. Properties of the central path. The notion of central path plays a fundamental role in the development of interior point methods for linear programming. In this section, we shall study the analytic properties of the central path in the context of semidefinite programming. These properties will be used in Section 5, where we perform the convergence analysis of a predictor-corrector algorithm for SDP.

For linear programming (i.e., the $A^{(i)}$'s and $C$ are diagonal), it is known that the central path curve converges, $(X(\mu), Z(\mu)) \to (X^a, Z^a)$ as $\mu \to 0$, with $(X^a, Z^a)$ being the analytic centers of the primal and dual optimal solution sets $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ respectively ([10]). Another important property of the central path in the context of linear programming is that it never converges tangentially to the optimal face [3]. This means that for any point on the central path, the distance to the end of the central path is of the same order as the distance to the optimal face, viz. $O(\mu)$. The aim of this section is to establish a similar property of the central path for semidefinite programming. More specifically, we shall prove that
\[ \|X(\mu) - X^a\| + \|Z(\mu) - Z^a\| = O(\mu). \]
We begin with the following lemma, which shows that the set $\{(X(\mu), Z(\mu)) : 0 < \mu < \bar{\mu}\}$ is bounded for every $\bar{\mu} > 0$.

Lemma 3.1. For any $\mu > 0$, there holds $\|X(\mu)\| + \|Z(\mu)\| = O(1 + \mu)$.

Proof. Since $A^{(i)} \bullet (X(\mu) - X(1)) = 0$ for $i = 1, \ldots, m$, and $Z(\mu) - Z(1) \in \operatorname{Span}\{A^{(i)} : i = 1, \ldots, m\}$, it follows that $(X(\mu) - X(1)) \perp (Z(\mu) - Z(1))$. Using this property, we obtain
\[ n\mu + n = X(\mu) \bullet Z(1) + X(1) \bullet Z(\mu). \]
Since $X(1) \succ 0$ and $Z(1) \succ 0$, we have
\[ \|X(\mu)\| + \|Z(\mu)\| = O(X(\mu) \bullet Z(1) + X(1) \bullet Z(\mu)) = O(1 + \mu). \qquad \Box \]

It follows from Lemma 3.1 that the central path has a limit point. We will now show that any limit point of the central path $\{(X(\mu), Z(\mu))\}$ is a strictly complementary optimal primal-dual pair.

Lemma 3.2. For any $\mu \in (0, \bar{\mu})$, there holds
\[ X_B(\mu) = \Theta(1), \quad Z_B(\mu) = \Theta(\mu), \quad X_N(\mu) = \Theta(\mu), \quad Z_N(\mu) = \Theta(1). \]
Hence, any limit point of $\{(X(\mu), Z(\mu))\}$ as $\mu \to 0$ is a pair of strictly complementary primal-dual optimal solutions of (P) and (D).

Proof. Let $0 < \mu < \bar{\mu}$. For notational convenience, we will use $X$ and $Z$ to denote the matrices $X(\mu)$ and $Z(\mu)$. Let $(X^*, Z^*)$ be the pair of strictly complementary

primal-dual optimal solutions postulated by Assumption 2.2. Since $(X - X^*) \perp (Z - Z^*)$, we have
\[ 0 = (X - X^*) \bullet (Z - Z^*) = X \bullet Z - X^* \bullet Z - X \bullet Z^* = \operatorname{tr}(\mu I - X^* Z - X Z^*) = n\mu - \sum_{i=1}^K \lambda_i Z_{ii} - \sum_{i=K+1}^n \lambda_i X_{ii}, \]
where the last step follows from (2.1). Since $\lambda_i > 0$ for all $i$, and $X_{ii} \ge 0$ and $Z_{ii} \ge 0$ (by the positive semidefiniteness of $X$ and $Z$), we obtain
\[ Z_{ii} = O(\mu), \ i = 1, \ldots, K; \qquad X_{ii} = O(\mu), \ i = K+1, \ldots, n. \]
Since $X \succeq 0$, $Z \succeq 0$, it follows that
\[ \|X_N\| = O(\mu), \quad \|Z_B\| = O(\mu). \tag{3.1} \]
From $X \succ 0$ and $Z \succ 0$ we obtain
\[ X_N - X_U^T X_B^{-1} X_U \succ 0, \quad Z_B - Z_U Z_N^{-1} Z_U^T \succ 0. \]
Now consider the identities
\[ \log \det X = \log \det X_B + \log \det (X_N - X_U^T X_B^{-1} X_U), \]
\[ \log \det Z = \log \det Z_N + \log \det (Z_B - Z_U Z_N^{-1} Z_U^T). \]
Since $\det X \det Z = \det(\mu I) = \mu^n$, it follows that
\[ \log \det X + \log \det Z = n \log \mu \]
and
\[ 0 = \log \det X_B + \log \det \mu^{-1}(X_N - X_U^T X_B^{-1} X_U) + \log \det Z_N + \log \det \mu^{-1}(Z_B - Z_U Z_N^{-1} Z_U^T). \]
By the estimates (3.1) and using Lemma 3.1, we see that
\[ \|X_B\| = O(1), \quad \|\mu^{-1}(X_N - X_U^T X_B^{-1} X_U)\| = O(1), \quad \|Z_N\| = O(1), \quad \|\mu^{-1}(Z_B - Z_U Z_N^{-1} Z_U^T)\| = O(1). \]
Therefore each of the four logarithm terms in the preceding equation is bounded from above as $\mu \to 0$. Since these four terms sum to zero, we must have
\[ X_B = \Theta(1), \quad Z_N = \Theta(1), \quad \mu^{-1}(X_N - X_U^T X_B^{-1} X_U) = \Theta(1), \quad \mu^{-1}(Z_B - Z_U Z_N^{-1} Z_U^T) = \Theta(1). \]

Together with (3.1), this implies
\[ X_N = \Theta(\mu), \quad Z_B = \Theta(\mu). \]
This completes the proof of the lemma. $\Box$

Lemma 3.2 provides a precise result on the order of the eigenvalues of $X_B(\mu)$, $X_N(\mu)$, $Z_B(\mu)$ and $Z_N(\mu)$. We will now prove a preliminary result on the order of the off-diagonal blocks $X_U(\mu)$ and $Z_U(\mu)$.

Lemma 3.3. For $\mu \in (0, \bar{\mu})$, there holds $\|Z_U(\mu)\| = \Theta(1) \|X_U(\mu)\|$,
\[ -X_U(\mu) \bullet Z_U(\mu) = \Theta(1) \|X_U(\mu)\|^2, \tag{3.2} \]
and $\|X_U(\mu)\| = o(\sqrt{\mu})$, $\|Z_U(\mu)\| = o(\sqrt{\mu})$ as $\mu \to 0$.

Proof. By the central path definition, we have
\[ \mu I = \begin{pmatrix} X_B(\mu) & X_U(\mu) \\ X_U(\mu)^T & X_N(\mu) \end{pmatrix} \begin{pmatrix} Z_B(\mu) & Z_U(\mu) \\ Z_U(\mu)^T & Z_N(\mu) \end{pmatrix}. \]
Expanding the right-hand side and comparing the upper-right corner of the above identity, we have
\[ 0 = X_B(\mu) Z_U(\mu) + X_U(\mu) Z_N(\mu), \tag{3.3} \]
or equivalently,
\[ Z_U(\mu) = -X_B(\mu)^{-1} X_U(\mu) Z_N(\mu). \tag{3.4} \]
Using $X_B(\mu) = \Theta(1)$ and $Z_N(\mu) = \Theta(1)$ (see Lemma 3.2), this implies that
\[ \|Z_U(\mu)\| = \Theta(1) \|X_U(\mu)\|. \]
This proves the first part of the lemma.

We now prove (3.2). Pre-multiplying both sides of (3.4) by the matrix $X_U(\mu)^T$ yields
\[ X_U(\mu)^T Z_U(\mu) = -X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu). \]
Now taking the trace of the above matrices, we obtain
\[ X_U(\mu) \bullet Z_U(\mu) = -\operatorname{tr} X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu) = -\operatorname{tr} Z_N(\mu)^{1/2} X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu)^{1/2} = -\Theta(1) \|X_U(\mu)\|^2, \]
where we used the fact that $X_B(\mu) = \Theta(1)$ and $Z_N(\mu) = \Theta(1)$, as shown in Lemma 3.2. This establishes (3.2).

It remains to prove the last part of the lemma. We consider an arbitrary convergent sequence $\{(X(\mu_k), Z(\mu_k)) : k = 1, 2, \ldots\}$ on the central path with $\mu_k \to 0$; its limit is denoted by $(\bar{X}, \bar{Z})$, so that
\[ X(\mu_k) - \bar{X} = o(1), \quad Z(\mu_k) - \bar{Z} = o(1). \tag{3.5} \]

By Lemma 3.2, we have $\bar{X}_N = 0$, $\bar{X}_U = \bar{Z}_U = 0$ and $\bar{Z}_B = 0$. Since $(X(\mu_k) - \bar{X}) \perp (Z(\mu_k) - \bar{Z})$, we have
\[ 0 = (X_B(\mu_k) - \bar{X}_B) \bullet Z_B(\mu_k) + 2 X_U(\mu_k) \bullet Z_U(\mu_k) + X_N(\mu_k) \bullet (Z_N(\mu_k) - \bar{Z}_N). \tag{3.6} \]
Using (3.2) and (3.6), we obtain
\[ \|X_U(\mu_k)\|^2 = -\Theta(1) (X_U(\mu_k) \bullet Z_U(\mu_k)) = \Theta(1) \left( (X_B(\mu_k) - \bar{X}_B) \bullet Z_B(\mu_k) + X_N(\mu_k) \bullet (Z_N(\mu_k) - \bar{Z}_N) \right) = o(\mu_k), \tag{3.7} \]
where in the last step we used (3.5) and Lemma 3.2. This implies that $\|X_U(\mu)\|^2 = o(\mu)$ holds true on the entire central path curve, for otherwise there would exist a convergent subsequence $\{(X(\mu_k), Z(\mu_k)) : k = 1, 2, \ldots\}$ for which
\[ \liminf_{k \to \infty} \|X_U(\mu_k)\|^2 / \mu_k > 0, \]
contradicting (3.7). The proof is complete. $\Box$

We now use Lemma 3.2 and Lemma 3.3 to prove the convergence of the central path $\{(X(\mu), Z(\mu)) : \mu > 0\}$ to the analytic centers $(X^a, Z^a)$, and to estimate the rate at which it converges to this limit.

Lemma 3.4. The primal-dual central path $\{(X(\mu), Z(\mu)) : \mu > 0\}$ converges to the analytic centers $(X^a, Z^a)$ of $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ respectively. Moreover, if we let
\[ \delta(\mu) = \frac{\|X_U(\mu)\|}{\sqrt{\mu}}, \]
then
\[ \|X_B(\mu) - X^a_B\| = O((\delta(\mu) + \sqrt{\mu})^2), \quad \|Z_N(\mu) - Z^a_N\| = O((\delta(\mu) + \sqrt{\mu})^2). \]

Proof. Suppose $0 < \mu < \bar{\mu}$. By expanding $X(\mu) Z(\mu) = \mu I$ and comparing the upper-left block, we obtain
\[ \mu I_B = X_B(\mu) Z_B(\mu) + X_U(\mu) Z_U(\mu)^T. \]
Pre-multiplying both sides with $\mu^{-1} X_B(\mu)^{-1}$ yields
\[ X_B(\mu)^{-1} = \frac{Z_B(\mu)}{\mu} + \frac{1}{\mu} X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T. \tag{3.8} \]
Let $J$ be an index set of minimal cardinality such that
\[ \operatorname{Span}\{A^{(i)}_B : i \in J\} = \operatorname{Span}\{A^{(i)}_B : i = 1, 2, \ldots, m\}. \]
Since $Z^*_B = 0$, it follows from dual feasibility and (3.8) that
\[ \frac{Z_B(\mu)}{\mu} = \sum_{i \in J} \sigma_i(\mu) A^{(i)}_B = X_B(\mu)^{-1} - \frac{1}{\mu} X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T \tag{3.9} \]
for some scalars $\sigma_i(\mu)$.

From Lemma 3.2, we know that $Z_B(\mu)/\mu = \Theta(1)$. As the matrices $A^{(i)}_B$, $i \in J$, are linearly independent, this implies that the sequences $\{\sigma_i(\mu) : \mu \in (0, \bar{\mu})\}$, $i \in J$, are bounded. Moreover, we have from Lemma 3.3 that
\[ \lim_{\mu \to 0} \frac{1}{\mu} \|X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T\| = 0. \]
Hence, any limit $(\bar{X}_B, \bar{\sigma}_i, i \in J)$ as $\mu \to 0$ satisfies the following nonlinear system of equations:
\[ \begin{cases} \bar{X}_B^{-1} - \sum_{i \in J} \bar{\sigma}_i A^{(i)}_B = 0, \\ A^{(i)}_B \bullet \bar{X}_B = b_i, \quad i \in J. \end{cases} \tag{3.10} \]
Moreover, since $Z_B(\mu)/\mu = \Theta(1)$ and $X_B(\mu) = \Theta(1)$ for $\mu \in (0, \bar{\mu})$, we have
\[ \sum_{i \in J} \bar{\sigma}_i A^{(i)}_B \succ 0, \quad \bar{X}_B \succ 0. \]
By (2.3), this means that $\bar{X} = X^a$, the analytic center of $\mathcal{F}_P^*$, and hence
\[ \|X(\mu) - X^a\| = o(1) \quad \mbox{as } \mu \to 0. \]
Using the linear independence of the matrices $A^{(i)}_B$, $i \in J$, and using the fact that $X^a_B$ is positive definite, it can be checked that the Jacobian (with respect to the variables $X_B$ and $\sigma_i$, $i \in J$) of the nonlinear system (3.10) is nonsingular at the solution $(X^a_B, \bar{\sigma}_i, i \in J)$. Hence we can apply the classical inverse function theorem to the above nonlinear system at the point $X_B = X^a_B$, $\sigma_i = \bar{\sigma}_i$, $i \in J$, to obtain
\[ \|X_B(\mu) - X^a_B\| = O\left( \left\| X_B(\mu)^{-1} - \sum_{i \in J} \sigma_i(\mu) A^{(i)}_B \right\| + \sum_{i \in J} |A^{(i)}_B \bullet X_B(\mu) - b_i| \right). \tag{3.11} \]
By (3.9) and the definition of $\delta(\mu)$, we obtain from Lemma 3.3
\[ \left\| X_B(\mu)^{-1} - \sum_{i \in J} \sigma_i(\mu) A^{(i)}_B \right\| = \frac{1}{\mu} \|X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T\| = O(\delta(\mu)^2). \]
Also, we have from $X(\mu) \in \mathcal{F}_P$ that
\[ |A^{(i)}_B \bullet X_B(\mu) - b_i| = |2 A^{(i)}_U \bullet X_U(\mu) + A^{(i)}_N \bullet X_N(\mu)| = O(\delta(\mu) \sqrt{\mu} + \mu) \quad \mbox{for } i \in J. \]
Substituting the above two bounds into (3.11) yields
\[ \|X_B(\mu) - X^a_B\| = O((\delta(\mu) + \sqrt{\mu})^2). \]
It can be shown by an analogous argument that
\[ \|Z_N(\mu) - Z^a_N\| = O((\delta(\mu) + \sqrt{\mu})^2). \]
The proof is complete. $\Box$
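The convergence behavior in Lemma 3.4 can be seen concretely in the diagonal case, where (P) reduces to a linear program and the central path is available in closed form. The toy instance below is our own illustration, not from the paper: $n = 2$, a single constraint $x_1 + x_2 = 1$, cost $c = (0, 1)$. Its optimal set is the singleton $\{(1, 0)\}$ (so the analytic center is $x^* = (1, 0)$ itself, with $K = 1$), the dual optimum $z^* = (0, 1)$ is strictly complementary, and along the path the "basic" part $x_1(\mu) \to 1$ while $x_2(\mu) = \Theta(\mu)$ and $z_1(\mu) = \Theta(\mu)$:

```python
import numpy as np

# Toy LP (a diagonal SDP): min c·x  s.t.  x1 + x2 = 1, x >= 0, with c = (0, 1).
c = np.array([0.0, 1.0])

def central_point(mu):
    # The dual variable y solves  mu/(c1 - y) + mu/(c2 - y) = 1  with y < 0,
    # i.e. the quadratic  y^2 + (2 mu - 1) y - mu = 0  (negative root).
    y = ((1 - 2 * mu) - np.sqrt((2 * mu - 1) ** 2 + 4 * mu)) / 2
    z = c - y            # dual slacks z(mu) = c - y * (1, 1)
    x = mu / z           # x_i(mu) z_i(mu) = mu on the central path
    return x, z

for mu in [1e-1, 1e-2, 1e-3, 1e-4]:
    x, z = central_point(mu)
    # x1(mu) -> 1 = Theta(1); x2(mu), z1(mu) shrink like Theta(mu).
    print(mu, x, z, x * z)
```

The printed rows show $x(\mu) z(\mu) = \mu$ componentwise, $x_1(\mu)$ approaching the analytic center value 1 at rate $O(\mu)$, and the nonbasic entries shrinking proportionally to $\mu$, exactly the orders asserted by Lemma 3.2 and Lemma 3.4.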

Lemma 3.4 only provides a rough sketch of the convergence behavior of the central path as $\mu \to 0$. Our goal is to characterize this convergence behavior more precisely.

Theorem 3.5. Let $\mu \in (0, \bar{\mu})$. There holds
\[ X_B(\mu) = \Theta(1), \quad Z_N(\mu) = \Theta(1), \quad X_N(\mu) = \Theta(\mu), \quad Z_B(\mu) = \Theta(\mu), \tag{3.12} \]
and
\[ \|X(\mu) - X^a\| = O(\mu), \quad \|Z(\mu) - Z^a\| = O(\mu). \tag{3.13} \]

Proof. The estimate (3.12) is already known from Lemma 3.2, so we only need to prove (3.13). By Lemma 3.3 and Lemma 3.4, it is sufficient to show that $\|X_U(\mu)\| = O(\mu)$. Suppose to the contrary that there exists a sequence
\[ \{(X(\mu_k), Z(\mu_k)) : k = 1, 2, \ldots\} \tag{3.14} \]
with $\|X_U(\mu_k)\| > 0$ for all $k$ and
\[ \lim_{k \to \infty} \frac{\mu_k}{\|X_U(\mu_k)\|} = 0. \tag{3.15} \]
With the notation of Lemma 3.4, the condition (3.15) implies
\[ \delta(\mu_k) + \sqrt{\mu_k} \sim \frac{\|X_U(\mu_k)\|}{\sqrt{\mu_k}}. \tag{3.16} \]
By virtue of Lemma 3.2, we can choose the subsequence (3.14) such that $\lim_{k \to \infty} Z_B(\mu_k)/\mu_k$ exists, and using Lemma 3.4 and relation (3.16), we can also assume the existence of
\[ \bar{x}_B = \lim_{k \to \infty} \frac{\mu_k}{\|X_U(\mu_k)\|^2} (X_B(\mu_k) - X^a_B). \]
From the existence of the above limits, we obtain
\[ \lim_{k \to \infty} \frac{(X_B(\mu_k) - X^a_B) \bullet Z_B(\mu_k)}{\|X_U(\mu_k)\|^2} = \lim_{k \to \infty} \bar{x}_B \bullet \frac{Z_B(\mu_k)}{\mu_k}. \tag{3.17} \]
Notice that the hypothesis (3.15) implies that
\[ \lim_{k \to \infty} \frac{\mu_k}{\|X_U(\mu_k)\|^2} (X(\mu_k) - X^a) = \begin{pmatrix} \bar{x}_B & 0 \\ 0 & 0 \end{pmatrix}. \]
Using also $Z^a_B = 0$, we thus obtain for any $l = 1, 2, \ldots$ that
\[ \bar{x}_B \bullet Z_B(\mu_l) = \bar{x}_B \bullet (Z_B(\mu_l) - Z^a_B) = \lim_{j \to \infty} \frac{\mu_j}{\|X_U(\mu_j)\|^2} \left( (X(\mu_j) - X^a) \bullet (Z(\mu_l) - Z^a) \right) = 0,
\]

where the last step is due to the orthogonality condition $(X(\mu_j) - X^a) \perp (Z(\mu_l) - Z^a)$ for all $j$ and $l$. Therefore,
\[ 0 = \lim_{l \to \infty} \bar{x}_B \bullet \frac{Z_B(\mu_l)}{\mu_l} = \lim_{l \to \infty} \frac{(X_B(\mu_l) - X^a_B) \bullet Z_B(\mu_l)}{\|X_U(\mu_l)\|^2}, \tag{3.18} \]
where we used (3.17). Analogously, it can be shown that
\[ \lim_{k \to \infty} \frac{X_N(\mu_k) \bullet (Z_N(\mu_k) - Z^a_N)}{\|X_U(\mu_k)\|^2} = 0. \tag{3.19} \]
Since $(X(\mu_k) - X^a) \perp (Z(\mu_k) - Z^a)$, we have from (3.18) and (3.19) that
\[ 0 = \lim_{k \to \infty} \frac{(X(\mu_k) - X^a) \bullet (Z(\mu_k) - Z^a)}{\|X_U(\mu_k)\|^2} = \lim_{k \to \infty} \frac{2 X_U(\mu_k) \bullet Z_U(\mu_k)}{\|X_U(\mu_k)\|^2}, \]
which clearly contradicts (3.2). The proof is complete. $\Box$

Theorem 3.5 characterizes completely the limiting behavior of the primal-dual central path as $\mu \to 0$. We point out that this limiting behavior was already well understood in the context of linear programming and the monotone horizontal linear complementarity problem; see Güler [3] and Monteiro and Tsuchiya [13] respectively. Notice that under a nondegeneracy assumption (the reader is referred to Alizadeh, Haeberly and Overton [2] or Kojima, Shida and Shindoh [7] for a discussion of this concept), i.e., when the Jacobian of the nonlinear system
\[ \begin{cases} A^{(i)} \bullet X = b_i, \quad i = 1, 2, \ldots, m, \\ \sum_{i=1}^m y_i A^{(i)} + Z = C, \\ XZ + ZX = 0, \\ X \in \mathcal{S}, \ Z \in \mathcal{S}, \end{cases} \]
is nonsingular at $(X^a, y^a, Z^a)$, the estimates (3.13) follow immediately from the application of the classical inverse function theorem. Thus, the real contribution of Theorem 3.5 lies in establishing these estimates in the absence of the nondegeneracy assumption.

It is known that in the case of linear programming the proof of quadratic convergence of predictor-corrector interior point algorithms required an error bound result of Hoffman. This error bound states that the distance from any vector $x \in \Re^n$ to a polyhedral set $P = \{x : Ax \le a\}$ can be bounded in terms of the "amount of constraint violation" at $x$, namely $\|[Ax - a]^+\|$, where $[\cdot]^+$ denotes the positive part of a vector.
More precisely, Hoffman's error bound ([5]) states that there exists some constant $\tau > 0$ such that
\[ \operatorname{dist}(x, P) \le \tau \|[Ax - a]^+\| \quad \mbox{for all } x \in \Re^n. \]
Unfortunately, this error bound no longer holds for linear systems over the cone of positive semidefinite matrices (see the example below). In fact, much of the difficulty in the local analysis of interior point algorithms for SDP can be attributed to this lack

of an analog of Hoffman's error bound result (see the analysis of [7, 16]). Specifically, without such an error bound result, it is difficult to estimate the distance from the current iterates to the optimal solution set. In essence, what we have established in Theorem 3.5 is an error bound result along the central path. In other words, although a Hoffman type error bound cannot hold over the entire feasible set of (P), it nevertheless still holds true in the restricted region "near the central path". One consequence of this restriction to the central path is that we will need to require the iterates to stay "sufficiently close" to the central path in order to establish the superlinear convergence of the algorithm. Such a requirement on the iterates was called "tangential convergence to the optimal solution set" by Kojima et al. [7]. Notice that the analysis in that reference required the additional nondegeneracy assumption to establish their superlinear convergence result. In contrast, this assumption is no longer needed in our analysis, because Theorem 3.5 holds without the nondegeneracy assumption.

Example. Consider the following linear system over the cone of positive semidefinite matrices in $\Re^{2 \times 2}$:
\[ X_{11} = 0, \quad X_{22} = 1, \quad X = \begin{pmatrix} X_{11} & X_{12} \\ X_{12} & X_{22} \end{pmatrix} \succeq 0. \]
Clearly, there is exactly one solution $X^*$ to the above linear system, namely
\[ X^* = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \]
For each $\epsilon > 0$, consider the matrix
\[ X(\epsilon) = \begin{pmatrix} \epsilon^2 & \epsilon \\ \epsilon & 1 \end{pmatrix}. \]
Clearly, $X(\epsilon) \succeq 0$. The amount of constraint violation is equal to $\epsilon^2$. However, the distance $\|X(\epsilon) - X^*\|_F = \Theta(\epsilon)$. Thus, there cannot exist some fixed $\tau > 0$ such that $\|X(\epsilon) - X^*\| \le \tau \epsilon^2$ for all $\epsilon > 0$.

4. A polynomial predictor-corrector algorithm. In [14], Nesterov and Todd developed a theoretical foundation for interior point methods for self-scaled cones. This framework was then used in [15] to generalize primal-dual interior point methods from linear programming to conic convex programming over self-scaled cones.
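Returning briefly to the $2 \times 2$ example at the end of Section 3: the failure of a Hoffman type bound there is easy to check numerically. The constraint violation of $X(\epsilon)$ shrinks like $\epsilon^2$ while the Frobenius distance to the unique solution $X^*$ shrinks only like $\epsilon$, so their ratio blows up (a numpy sketch of exactly that instance):

```python
import numpy as np

# Unique solution of  X11 = 0, X22 = 1, X psd  (Section 3 example).
X_star = np.array([[0.0, 0.0],
                   [0.0, 1.0]])

for eps in [1e-1, 1e-2, 1e-3]:
    X = np.array([[eps ** 2, eps],
                  [eps,      1.0]])
    # X(eps) is positive semidefinite: it is the rank-one matrix (eps,1)(eps,1)^T.
    assert np.min(np.linalg.eigvalsh(X)) >= -1e-12
    violation = abs(X[0, 0] - 0.0) + abs(X[1, 1] - 1.0)   # = eps^2
    distance = np.linalg.norm(X - X_star, "fro")          # = Theta(eps)
    print(eps, violation, distance, distance / violation)  # ratio grows like 1/eps
```

No fixed constant $\tau$ can dominate the last column, which is precisely why a global Hoffman bound cannot exist over the semidefinite cone and why the error bound of Theorem 3.5 is restricted to the central path.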
The results of Nesterov and Todd can be specialized to semidefinite programming, since the cone $\mathcal{S}_+$ of positive semidefinite matrices is self-scaled. In fact, the main results in Sturm and Zhang [17] can be viewed as such a specialization. However, unlike Nesterov and Todd [14, 15], Sturm and Zhang [17] adopt the $V$-space framework, thus generalizing the approach of Kojima et al. [6]. We begin by summarizing some of the results on $V$-space path following for SDP that were obtained in [17]. Let $(X, Z) \in \mathcal{F}_P \times \mathcal{F}_D$ with $X \succ 0$, $Z \succ 0$. Then there exists a unique positive definite matrix $D \in \mathcal{S}_{++}$ such that ([17, Lemma 2.1])

(4.1)    $X = DZD$.

Let $L$ be such that

(4.2)    $LL^T = D$,

SUPERLINEAR CONVERGENCE FOR SDP

and let $V = L^T Z L$. It follows that
$$V = L^{-1} X L^{-T} = L^T Z L.$$
The quantity
$$\delta(X, Z) = \|I - L^{-1} X Z L / \mu\|_F$$
serves as a centrality measure, with $\mu = X \bullet Z / n$. Considering the identity
$$\delta(X, Z) = \|I - L^{-1} X Z L / \mu\|_F = \|I - X^{1/2} Z X^{1/2} / \mu\|_F,$$
it becomes apparent that $\delta(X, Z)$ is the same centrality measure used in many other papers, including [8] and [12]. It is easy to see that the central path is the set of solutions $(X, Z)$ with $\delta(X, Z) = 0$ or, equivalently, those solutions for which $V = \sqrt{\mu}\, I$. Moreover, we have

(4.3)    $(1 + \delta(X, Z))\mu I \succeq V^2 \succeq (1 - \delta(X, Z))\mu I$.

In $V$-space path following, we want to drive the $V$-iterates towards the origin by Newton's method, in such a way that the iterates reside in a cone around the identity matrix. Before stating the Newton equation, we need to introduce the linear space $\mathcal{A}(L)$ and its orthogonal complement in $\mathcal{S}$:
$$\mathcal{A}(L) = \operatorname{Span}\{L^T A^{(i)} L,\ i = 1, \dots, m\},$$
$$\mathcal{A}^{\perp}(L) = \{X \in \mathcal{S} : (L^T A^{(i)} L) \bullet X = 0 \ \text{for}\ i = 1, \dots, m\}.$$
A Newton direction for obtaining a $(\gamma\mu)$-center, for some $\gamma \in [0, 1]$, is the solution $(\Delta X, \Delta Z)$ of the following system of linear equations ([17], equation (17) therein):

(4.4)    $\Delta X + D \Delta Z D = \gamma\mu Z^{-1} - X$, with $\Delta X \in \mathcal{A}^{\perp}(I)$, $\Delta Z \in \mathcal{A}(I)$.

For $\gamma = 0$, we denote the solution of (4.4) by $(\Delta X_p, \Delta Z_p)$, the predictor direction. For $\gamma = 1$, the solution is denoted by $(\Delta X_c, \Delta Z_c)$, the corrector direction. If we let
$$\overline{\Delta X} = L^{-1} \Delta X L^{-T}, \qquad \overline{\Delta Z} = L^T \Delta Z L,$$
then we can rewrite (4.4) as
$$\overline{\Delta X} + \overline{\Delta Z} = \gamma\mu V^{-1} - V, \qquad \overline{\Delta X} \in \mathcal{A}^{\perp}(L),\ \overline{\Delta Z} \in \mathcal{A}(L).$$
It follows from orthogonality that

(4.5)    $\|\overline{\Delta X}_p\|_F^2 + \|\overline{\Delta Z}_p\|_F^2 = \|V\|_F^2 = n\mu$.
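Lemma 2.1 of [17] gives existence and uniqueness of the scaling matrix $D$ in (4.1). A standard closed form (our choice of construction; the text does not spell one out) is $D = Z^{-1/2}(Z^{1/2} X Z^{1/2})^{1/2} Z^{-1/2}$. The sketch below builds $D$ for a random positive definite pair and checks (4.1), (4.2), and the two expressions for $V$:

```python
# Sketch (ours, not code from the paper): compute the unique positive definite
# D with X = D Z D via D = Z^{-1/2} (Z^{1/2} X Z^{1/2})^{1/2} Z^{-1/2},
# then factor D = L L^T and check V = L^T Z L = L^{-1} X L^{-T}.
import numpy as np

def sqrtm_psd(M):
    """Symmetric PSD square root via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

def nt_scaling(X, Z):
    Zh = sqrtm_psd(Z)
    Zih = np.linalg.inv(Zh)
    return Zih @ sqrtm_psd(Zh @ X @ Zh) @ Zih

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); X = A @ A.T + np.eye(3)   # random X > 0
B = rng.standard_normal((3, 3)); Z = B @ B.T + np.eye(3)   # random Z > 0
D = nt_scaling(X, Z)
assert np.allclose(D @ Z @ D, X)             # (4.1): X = D Z D
L = np.linalg.cholesky(D)                    # (4.2): L L^T = D
V1 = L.T @ Z @ L
V2 = np.linalg.inv(L) @ X @ np.linalg.inv(L).T
assert np.allclose(V1, V2)                   # V = L^T Z L = L^{-1} X L^{-T}
```

Any factor $L$ with $LL^T = D$ works; the Cholesky factor is simply a convenient choice.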

The corrector direction does not change the duality gap,

(4.6)    $(X + \Delta X_c) \bullet (Z + \Delta Z_c) = X \bullet Z$,

whereas

(4.7)    $(X + t \Delta X_p) \bullet (Z + t \Delta Z_p) = (1 - t)\, X \bullet Z$

for any $t \in \mathbb{R}$; see equation (16) of [17].

Algorithm SDP($\bar\epsilon$)
Given $(X^0, Z^0) \in \mathcal{F}_P \times \mathcal{F}_D$ with $\delta(X^0, Z^0) \le 1/4$. Parameters: $\bar\epsilon$ with $0 < \bar\epsilon < (X^0 \bullet Z^0)/n$, and a positive integer $r$. Let $k = 0$.
REPEAT (main iteration)
    Let $X = X^k$, $Z = Z^k$ and $\mu = X \bullet Z / n$.
    Predictor: compute $(\Delta X_p, \Delta Z_p)$ from (4.4) with $\gamma = 0$. Compute the largest step $\bar t$ such that for all $0 \le t \le \bar t$ there holds
    $$\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le \min\big(1/2,\ ((1 - t)\mu/\bar\epsilon)^{2^{-r}}\big).$$
    Let $X' = X + \bar t \Delta X_p$, $Z' = Z + \bar t \Delta Z_p$, and $\bar\delta = \min(1/4,\ (1 - \bar t)\mu/\bar\epsilon)$.
    Corrector: FOR $i = 1$ to $r$ DO
        Let $X = X'$, $Z = Z'$.
        IF $\delta(X, Z) \le \bar\delta$ THEN exit loop.
        Compute $(\Delta X_c, \Delta Z_c)$ from (4.4) with $\gamma = 1$.
        Set $X' = X + \Delta X_c$, $Z' = Z + \Delta Z_c$.
    END FOR
    $X^{k+1} = X'$, $Z^{k+1} = Z'$. Set $k = k + 1$.
UNTIL convergence.

Interestingly, each corrector step reduces $\delta(\cdot, \cdot)$ at a quadratic rate, as stated in the following result.

Lemma 4.1. If $\delta(X, Z) \le 1/2$, then
$$\delta(X + \Delta X_c, Z + \Delta Z_c) \le \delta(X, Z)^2.$$
Proof. It follows from Lemma 4.5 in [17] that
$$X + \Delta X_c \succ 0, \qquad Z + \Delta Z_c \succ 0.$$
Hence, the desired result is an immediate consequence of Lemma 4.4 in [17].

Lemma 4.1 implies that for any $k$, we have

(4.8)    $\delta(X^k, Z^k) \le \bar\delta_{k-1}$.

Also, it follows from (4.6) and (4.7) that for any $k \ge 1$,

(4.9)    $\delta(X^k, Z^k) \le \bar\delta_{k-1} \le (1 - \bar t_{k-1})\mu_{k-1}/\bar\epsilon = \mu_k/\bar\epsilon = O(\mu_k)$.
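The predictor gap identity (4.7) only uses that $\overline{\Delta X}_p + \overline{\Delta Z}_p = -V$ with the two components lying in orthogonal subspaces: then $(V + t\overline{\Delta X}_p) \bullet (V + t\overline{\Delta Z}_p) = \|V\|_F^2 + t\, V \bullet (\overline{\Delta X}_p + \overline{\Delta Z}_p) = (1 - t)\|V\|_F^2$. A small numeric toy (ours; the subspace and dimensions are arbitrary) illustrates this mechanism:

```python
# Toy illustration (ours, not from the paper): split -V into orthogonal parts
# dX in A^perp(L), dZ in A(L) and verify the predictor gap identity (4.7):
#   <V + t dX, V + t dZ> = (1 - t) <V, V>.
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
sym = lambda M: (M + M.T) / 2
vec = lambda M: M.reshape(-1)

# a random positive definite "V" and a random m-dimensional span playing A(L)
G = rng.standard_normal((n, n)); V = G @ G.T + n * np.eye(n)
basis = np.linalg.qr(np.column_stack(
    [vec(sym(rng.standard_normal((n, n)))) for _ in range(m)]))[0]

proj = basis @ basis.T                      # orthogonal projector onto A(L)
dZ = (proj @ vec(-V)).reshape(n, n)         # component in A(L)
dX = -V - dZ                                # component in A^perp(L)
assert abs(np.sum(dX * dZ)) < 1e-9          # orthogonality of the two parts

for t in [0.1, 0.5, 0.9]:
    gap = np.sum((V + t * dX) * (V + t * dZ))
    assert np.isclose(gap, (1 - t) * np.sum(V * V))   # (4.7)
```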

Furthermore, if $\bar\delta = 1/4$, then only one (instead of $r$) corrector step is needed to recenter the iterate (see [17]). In other words, the iterations of Algorithm SDP($\bar\epsilon$) are identical to those of the primal-dual predictor-corrector algorithm of [17] for all $k$ with
$$\mu_k \ge 4\bar\epsilon.$$
We can therefore conclude from Theorem 5.2 in [17] that the algorithm yields $\mu_k \le 4\bar\epsilon$ for all $k \ge \Gamma \sqrt{n} \log(\mu_0/\bar\epsilon)$, where $\Gamma$ is a universal constant, independent of the problem data. Thus, we have the following polynomial complexity result.

Theorem 4.2. For each $0 < \bar\epsilon < (X^0 \bullet Z^0)/n$, Algorithm SDP($\bar\epsilon$) will generate an iterate $(X^k, Z^k) \in \mathcal{F}_P \times \mathcal{F}_D$ with $\mu_k = X^k \bullet Z^k / n \le 4\bar\epsilon$ in at most $\Gamma \sqrt{n} \log(\mu_0/\bar\epsilon)$ predictor-corrector steps, where $\Gamma$ is a constant that is independent of the problem data.

In addition to having polynomial complexity, Algorithm SDP($\bar\epsilon$) also possesses a superlinear rate of convergence. We prove this in the next section.

5. Convergence analysis. We begin by establishing the global convergence of Algorithm SDP($\bar\epsilon$). Notice that Algorithm SDP($\bar\epsilon$) chooses the predictor step length $\bar t$ to be the largest step such that for all $0 \le t \le \bar t$ there holds

(5.1)    $\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le \min\big(1/2,\ ((1 - t)\mu/\bar\epsilon)^{2^{-r}}\big)$.

It was shown in [17] (equation (21) therein) that

(5.2)    $(1 - t)\,\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le (1 - t)\,\delta(X, Z) + t^2 \|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|_F / \mu$.

Using (4.5), we thus obtain for $0 \le t < 1$ that

(5.3)    $\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le \delta(X, Z) + \dfrac{n t^2}{2(1 - t)}$.

Combining (5.1) and (5.3), we can easily establish the global convergence of Algorithm SDP($\bar\epsilon$).

Theorem 5.1. There holds $\lim_{k \to \infty} \mu_k = 0$, i.e. Algorithm SDP($\bar\epsilon$) is globally convergent.

Proof. Due to (4.7), the sequence $\{\mu_k\}_{k \ge 0}$ is monotonically decreasing. Hence, $\lim_{k \to \infty} \mu_k$ exists. Suppose, contrary to the statement of the theorem, that

(5.4)    $\mu_* = \lim_{k \to \infty} \mu_k > 0$.

Consider $k \ge \Gamma \sqrt{n} \log(\mu_0/\bar\epsilon)$. From Theorem 4.2, we have

(5.5)    $\mu_k \le 4\bar\epsilon$,

and hence, using (4.8),

(5.6)    $\delta(X^k, Z^k) \le \bar\delta_{k-1} = \min(1/4,\ \mu_k/\bar\epsilon)$.

Now consider a step length $0 \le t \le 0.5\sqrt{\bar\epsilon/(n\mu_k)}$ and note from (5.5) that $1 - t \ge 3/4$. We obtain from (5.3) and (5.6) that
$$\delta\big(X^k + t (\Delta X_p)^k,\ Z^k + t (\Delta Z_p)^k\big) \le \delta(X^k, Z^k) + \frac{n t^2}{2(1 - t)} < \Big(\frac{(1 - t)\mu_k}{\bar\epsilon}\Big)^{2^{-r}},$$
where we used (5.5) and the fact that $r \ge 1$. By the definition of $\bar t_k$, this implies that
$$\bar t_k \ge 0.5\sqrt{\bar\epsilon/(n\mu_k)} = \Omega(1).$$
This, together with (4.7), implies that $\mu_{k+1} \le (1 - \Omega(1))\mu_k$, which contradicts (5.4).

Next we proceed to establish the superlinear convergence of Algorithm SDP($\bar\epsilon$). In light of (4.7), we only need to show that the predictor step length $\bar t$ approaches 1. Hence we are led to bound $\bar t$ from below. For this purpose, we note from (5.2) that, for $t \in (0, 1)$,

(5.7)    $\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le \delta(X, Z) + \dfrac{t^2}{1 - t}\, \|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|_F / \mu$.

Thus, if we can properly bound $\|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|_F$, then we will obtain a lower bound on the predictor step length $\bar t$. To begin, let us consider $L_\mu$ with
$$L_\mu L_\mu^T = X(\mu)/\sqrt{\mu}.$$
Remark that
$$\sqrt{\mu}\, I = L_\mu^{-1} X(\mu) L_\mu^{-T} = L_\mu^T Z(\mu) L_\mu.$$
Now define the predictor direction starting from the solution $(X(\mu), Z(\mu))$ on the central path as follows:
$$\widehat{\Delta X}_p(\mu) + \widehat{\Delta Z}_p(\mu) = -\sqrt{\mu}\, I, \qquad \widehat{\Delta X}_p(\mu) \in \mathcal{A}^{\perp}(L_\mu),\ \widehat{\Delta Z}_p(\mu) \in \mathcal{A}(L_\mu).$$
Let $(\widehat X_a, \widehat Z_a)$ be the analytic center of the optimal solution set in the $L_\mu$-transformed space, i.e.
$$\widehat X_a = L_\mu^{-1} X_a L_\mu^{-T}, \qquad \widehat Z_a = L_\mu^T Z_a L_\mu.$$

We will show in Lemma 5.2 below that $\widehat{\Delta X}_p(\mu)$ is close to the optimal step $\widehat X_a - \sqrt{\mu}\, I$ for small $\mu$. We will bound the difference between $\widehat{\Delta X}_p(\mu)$ and $\overline{\Delta X}_p$ afterwards.

Lemma 5.2. There holds
$$\big\|\sqrt{\mu}\, I + \widehat{\Delta X}_p(\mu) - \widehat X_a\big\| + \big\|\sqrt{\mu}\, I + \widehat{\Delta Z}_p(\mu) - \widehat Z_a\big\| = O(\mu^{3/2}).$$
Proof. Since
$$\widehat X_a \widehat Z_a = L_\mu^{-1} X_a Z_a L_\mu = 0,$$
it follows that
$$(\sqrt{\mu}\, I - \widehat X_a)(\sqrt{\mu}\, I - \widehat Z_a) = (\sqrt{\mu}\, I - \widehat Z_a)(\sqrt{\mu}\, I - \widehat X_a).$$
Therefore, the matrix $(\sqrt{\mu}\, I - \widehat X_a)(\sqrt{\mu}\, I - \widehat Z_a)$, or equivalently the matrix $L_\mu^{-1}(X(\mu) - X_a)(Z(\mu) - Z_a)L_\mu$, is symmetric. By the property of the Frobenius norm, we obtain

(5.8)    $\|(\sqrt{\mu} I - \widehat X_a)(\sqrt{\mu} I - \widehat Z_a)\|_F = \|L_\mu^{-1}(X(\mu) - X_a)(Z(\mu) - Z_a)L_\mu\|_F = \big(\mathrm{tr}\,(X(\mu) - X_a)(Z(\mu) - Z_a)(X(\mu) - X_a)(Z(\mu) - Z_a)\big)^{1/2} = O(\mu^2)$,

where the last step follows from Theorem 3.5. Now since $\widehat X_a \widehat Z_a = 0$ and $\widehat{\Delta X}_p(\mu) + \widehat{\Delta Z}_p(\mu) = -\sqrt{\mu}\, I$, we have
$$(\sqrt{\mu} I - \widehat X_a)(\sqrt{\mu} I - \widehat Z_a) = \mu I - \sqrt{\mu}(\widehat X_a + \widehat Z_a) = \sqrt{\mu}\big(\sqrt{\mu} I + \widehat{\Delta X}_p(\mu) - \widehat X_a\big) + \sqrt{\mu}\big(\sqrt{\mu} I + \widehat{\Delta Z}_p(\mu) - \widehat Z_a\big).$$
As
$$\sqrt{\mu} I + \widehat{\Delta X}_p(\mu) - \widehat X_a \in \mathcal{A}^{\perp}(L_\mu), \qquad \sqrt{\mu} I + \widehat{\Delta Z}_p(\mu) - \widehat Z_a \in \mathcal{A}(L_\mu),$$
it follows that
$$\big\|\sqrt{\mu} I + \widehat{\Delta X}_p(\mu) - \widehat X_a\big\|_F^2 + \big\|\sqrt{\mu} I + \widehat{\Delta Z}_p(\mu) - \widehat Z_a\big\|_F^2 = \frac{1}{\mu}\big\|(\widehat X_a - \sqrt{\mu} I)(\widehat Z_a - \sqrt{\mu} I)\big\|_F^2 = O(\mu^3),$$
where the last step is due to (5.8). This proves the lemma.

Lemma 5.2 applies only to $(\widehat{\Delta X}_p(\mu), \widehat{\Delta Z}_p(\mu))$, namely the predictor directions for the points located exactly on the central path. What we need is a similar bound for $(\overline{\Delta X}_p, \overline{\Delta Z}_p)$, obtained at points close to the central path. This leads us to bound the difference $\widehat{\Delta X}_p(\mu) - \overline{\Delta X}_p$. Indeed, our next goal is to show (Lemma 5.6) that
$$\big\|\widehat{\Delta X}_p(\mu) - \overline{\Delta X}_p\big\|_F = O(\sqrt{\mu}\,\delta(X, Z)).$$
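The commutation step that opens the proof of Lemma 5.2 can be seen on a toy pair (our construction; the shared eigenbasis and eigenvalues are arbitrary): for a strictly complementary PSD pair with $\widehat X_a \widehat Z_a = 0$, the product $(\sqrt{\mu} I - \widehat X_a)(\sqrt{\mu} I - \widehat Z_a)$ is symmetric and collapses to $\mu I - \sqrt{\mu}(\widehat X_a + \widehat Z_a)$.

```python
# Toy check (ours) of the first step in the proof of Lemma 5.2: if the PSD
# pair satisfies Xa_hat @ Za_hat = 0, then
#   (sqrt(mu) I - Xa_hat)(sqrt(mu) I - Za_hat)
# is symmetric and equals mu I - sqrt(mu) (Xa_hat + Za_hat).
import numpy as np

rng = np.random.default_rng(2)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # shared eigenbasis
# strictly complementary supports: Xa_hat on the first two eigendirections,
# Za_hat on the last two
Xa_hat = Q @ np.diag([2.0, 1.0, 0.0, 0.0]) @ Q.T
Za_hat = Q @ np.diag([0.0, 0.0, 3.0, 0.5]) @ Q.T
assert np.allclose(Xa_hat @ Za_hat, 0)             # complementarity

mu = 0.01
M = (np.sqrt(mu) * np.eye(n) - Xa_hat) @ (np.sqrt(mu) * np.eye(n) - Za_hat)
assert np.allclose(M, M.T)                         # the product is symmetric
assert np.allclose(M, mu * np.eye(n) - np.sqrt(mu) * (Xa_hat + Za_hat))
```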

We prove this bound by a sequence of lemmas. Let $D$ be given by (4.1) and define
$$\bar D = L_\mu^{-1} D L_\mu^{-T},$$
so that $\bar D = I$ if $X = X(\mu)$ and $Z = Z(\mu)$. Choose $L$ by $L = L_\mu \bar D^{1/2}$, and notice that indeed $LL^T = D$, as stipulated by (4.2).

Lemma 5.3. Suppose $\delta(X, Z) \le 1/2$. There holds
$$\|L^{-1}(X(\mu) - X)L^{-T}\| + \|L^T(Z(\mu) - Z)L\| = O(\sqrt{\mu}\,\delta(X, Z)).$$
Proof. Let
$$\Delta_x(\mu) = L^{-1}(X(\mu) - X)L^{-T}, \qquad \Delta_z(\mu) = L^T(Z(\mu) - Z)L.$$
Clearly, $\Delta_x(\mu)$ and $\Delta_z(\mu)$ are symmetric and $\Delta_x(\mu) \perp \Delta_z(\mu)$. Let $\lambda$ denote the smallest eigenvalue of $\Delta_x(\mu) + \Delta_z(\mu)$, i.e.
$$\lambda = \arg\max\{\lambda : \Delta_x(\mu) + \Delta_z(\mu) \succeq \lambda I\}.$$
Since $X \bullet Z = X(\mu) \bullet Z(\mu) = n\mu$, we have
$$\mathrm{tr}\,\big(Z(X(\mu) - X) + X(Z(\mu) - Z)\big) = -\,\mathrm{tr}\,\big((X(\mu) - X)(Z(\mu) - Z)\big) - \mathrm{tr}\, XZ + \mathrm{tr}\, X(\mu)Z(\mu) = 0,$$
where the last step follows from $(X(\mu) - X) \perp (Z(\mu) - Z)$. Recall that $V = L^{-1} X L^{-T} = L^T Z L$. Consider
$$\mathrm{tr}\,\big(V(\Delta_x(\mu) + \Delta_z(\mu))\big) = \mathrm{tr}\,\big(L^T Z (X(\mu) - X) L^{-T} + L^{-1} X (Z(\mu) - Z) L\big) = \mathrm{tr}\,\big(Z(X(\mu) - X) + X(Z(\mu) - Z)\big) = 0.$$
By (4.3), the matrix $V$ is symmetric positive definite and $V = \Theta(\sqrt{\mu})$. Diagonalize the symmetric matrix
$$\Delta_x(\mu) + \Delta_z(\mu) = Q^T \Lambda Q$$
and consider
$$0 = \mathrm{tr}\,\big(V(\Delta_x(\mu) + \Delta_z(\mu))\big) = \mathrm{tr}\,(V Q^T \Lambda Q) = \mathrm{tr}\,(\Lambda\, Q V Q^T).$$
Since $Q V Q^T = \Theta(\sqrt{\mu})$, the diagonal entries of $Q V Q^T$ must be $\Theta(\sqrt{\mu})$. Therefore, the preceding equation implies that the diagonal matrix $\Lambda$ must have a nonpositive eigenvalue and that its diagonal entries are of the same order of magnitude. In other words, $\lambda \le 0$ and $\|\Lambda\| = O(|\lambda|)$. This further implies

(5.9)    $\|\Delta_x(\mu) + \Delta_z(\mu)\| = O(|\lambda|)$.

By the definition of the central path, we have
$$\mu I = (V + \Delta_x(\mu))(V + \Delta_z(\mu)) = \left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)^2 - \frac{1}{4}\big(\Delta_x(\mu) - \Delta_z(\mu)\big)^2 + \text{(skew-symmetric cross terms)}.$$

Since the left hand side matrix is symmetric, the skew-symmetric cross term must cancel when we expand the matrix product in the right hand side. It follows that
$$\mu I = \left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)^2 - \frac{1}{4}\big(\Delta_x(\mu) - \Delta_z(\mu)\big)^2,$$
and therefore
$$\left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)^2 \succeq \mu I.$$
Using (4.3), we obtain
$$V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2} \succeq \sqrt{\mu}\, I, \qquad |\lambda| = O(\sqrt{\mu}\,\delta(X, Z)).$$
Combining this with (5.9) and using the fact that $\Delta_x(\mu) \perp \Delta_z(\mu)$, we have
$$\|\Delta_x(\mu)\| + \|\Delta_z(\mu)\| = O(|\lambda|) = O(\sqrt{\mu}\,\delta(X, Z)).$$

Lemma 5.4. Suppose $\delta(X, Z) \le 1/2$. Then there holds
$$\|\bar D - I\| = O(\delta(X, Z)).$$
Proof. Notice that

(5.10)    $L_\mu^{-1} X L_\mu^{-T} = \sqrt{\mu}\, I + L_\mu^{-1}(X - X(\mu))L_\mu^{-T}$

and

(5.11)    $L_\mu^T Z L_\mu = \sqrt{\mu}\, I + L_\mu^T(Z - Z(\mu))L_\mu$.

Now using
$$L_\mu^{-1} X L_\mu^{-T} = \bar D\,(L_\mu^T Z L_\mu)\,\bar D,$$
we have, by pre- and post-multiplying (5.10) by $\bar D^{-1/2}$ and (5.11) by $\bar D^{1/2}$ and rearranging terms,
$$\sqrt{\mu}\,(\bar D^{-1} - \bar D) = L^{-1}(X(\mu) - X)L^{-T} + L^T(Z - Z(\mu))L.$$
Together with Lemma 5.3, this implies $\bar D = \Theta(1)$ and
$$\|\bar D - I\| = O(\delta(X, Z)).$$
The lemma is proved.

Now, let
$$\widehat{\Delta X}_p = \bar D^{1/2}\, \overline{\Delta X}_p\, \bar D^{1/2}, \qquad \widehat{\Delta Z}_p = \bar D^{-1/2}\, \overline{\Delta Z}_p\, \bar D^{-1/2}.$$
Notice that $(\widehat{\Delta X}_p, \widehat{\Delta Z}_p) \in \mathcal{A}^{\perp}(L_\mu) \times \mathcal{A}(L_\mu)$.

Lemma 5.5. Suppose $\delta(X, Z) \le 1/2$. We have
$$\|\widehat{\Delta X}_p - \overline{\Delta X}_p\| + \|\widehat{\Delta Z}_p - \overline{\Delta Z}_p\| = O(\sqrt{\mu}\,\delta(X, Z)).$$

Proof. We have
$$\widehat{\Delta X}_p = \bar D^{1/2}\, \overline{\Delta X}_p\, \bar D^{1/2} = \overline{\Delta X}_p + (\bar D^{1/2} - I)\, \overline{\Delta X}_p\, \bar D^{1/2} + \overline{\Delta X}_p\, (\bar D^{1/2} - I).$$
Now using Lemma 5.4 and (4.5), we see that
$$\|\widehat{\Delta X}_p - \overline{\Delta X}_p\| = O(\sqrt{\mu}\,\delta(X, Z)).$$
It can be shown in an analogous way that
$$\|\widehat{\Delta Z}_p - \overline{\Delta Z}_p\| = O(\sqrt{\mu}\,\delta(X, Z)).$$

Now we are ready to bound the difference between $\widehat{\Delta X}_p(\mu)$ and $\overline{\Delta X}_p$.

Lemma 5.6. Suppose $\delta(X, Z) \le 1/2$. We have
$$\|\widehat{\Delta X}_p(\mu) - \overline{\Delta X}_p\| + \|\widehat{\Delta Z}_p(\mu) - \overline{\Delta Z}_p\| = O(\sqrt{\mu}\,\delta(X, Z)).$$
Proof. By definition of the predictor directions, we have
$$\widehat{\Delta X}_p(\mu) + \widehat{\Delta Z}_p(\mu) = -\sqrt{\mu}\, I \qquad \text{and} \qquad \overline{\Delta X}_p + \overline{\Delta Z}_p = -V.$$
Combining these two relations yields
$$\widehat{\Delta X}_p(\mu) - \widehat{\Delta X}_p + \widehat{\Delta Z}_p(\mu) - \widehat{\Delta Z}_p = V - \sqrt{\mu}\, I + \overline{\Delta X}_p - \widehat{\Delta X}_p + \overline{\Delta Z}_p - \widehat{\Delta Z}_p.$$
Now using Lemma 5.5 and using the fact that
$$\|V - \sqrt{\mu}\, I\| = \|(V + \sqrt{\mu}\, I)^{-1}(V^2 - \mu I)\| \le \sqrt{\mu}\,\delta(X, Z),$$
we obtain
$$\|\widehat{\Delta X}_p(\mu) - \widehat{\Delta X}_p + \widehat{\Delta Z}_p(\mu) - \widehat{\Delta Z}_p\| = O(\sqrt{\mu}\,\delta(X, Z)).$$
Since $(\widehat{\Delta X}_p(\mu) - \widehat{\Delta X}_p) \perp (\widehat{\Delta Z}_p(\mu) - \widehat{\Delta Z}_p)$, the lemma follows from the above relation, after applying Lemma 5.5 once more.

Combining (5.8), Lemma 5.2 and Lemma 5.6, we can now estimate the order of $\|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|$, and hence, using (5.7), we can estimate the predictor step length $\bar t$.

Lemma 5.7. We have
$$\|\overline{\Delta X}_p\, \overline{\Delta Z}_p\| = O(\mu(\mu + \delta(X, Z))).$$
Proof. Combining Lemma 5.6 with Lemma 5.2, we have

(5.12)    $\|\sqrt{\mu}\, I + \overline{\Delta X}_p - \widehat X_a\| + \|\sqrt{\mu}\, I + \overline{\Delta Z}_p - \widehat Z_a\| = O(\sqrt{\mu}\,(\mu + \delta(X, Z)))$,

so that, using (4.5),

(5.13)    $\|\sqrt{\mu}\, I - \widehat X_a\| + \|\sqrt{\mu}\, I - \widehat Z_a\| = O(\sqrt{\mu})$.

Moreover,
$$\overline{\Delta X}_p\, \overline{\Delta Z}_p = (\widehat X_a - \sqrt{\mu} I)(\widehat Z_a - \sqrt{\mu} I) + (\widehat X_a - \sqrt{\mu} I)(\sqrt{\mu} I + \overline{\Delta Z}_p - \widehat Z_a) + (\sqrt{\mu} I + \overline{\Delta X}_p - \widehat X_a)\, \overline{\Delta Z}_p.$$
Applying (5.8), (5.12), (5.13) and (4.5) to the above relation yields
$$\|\overline{\Delta X}_p\, \overline{\Delta Z}_p\| = O(\mu(\mu + \delta(X, Z))).$$

Theorem 5.8. The iterates $(X^k, Z^k)$ generated by Algorithm SDP($\bar\epsilon$) converge to $(X_a, Z_a)$ superlinearly with order $2/(1 + 2^{-r})$. The duality gap converges to zero at the same rate.
Proof. From (5.7) we see that for any $t \ge 0$ satisfying
$$\delta(X, Z) + \|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|_F / \mu \le (1 - t)\big((1 - t)\mu/\bar\epsilon\big)^{2^{-r}},$$
there holds
$$\delta(X + t \Delta X_p, Z + t \Delta Z_p) \le \big((1 - t)\mu/\bar\epsilon\big)^{2^{-r}}.$$
This implies, using (4.9) and Lemma 5.7, that
$$(1 - \bar t_k)^{1 + 2^{-r}} \le \big(\delta(X^k, Z^k) + \|\overline{\Delta X}_p\, \overline{\Delta Z}_p\|_F / \mu_k\big)\,(\mu_k/\bar\epsilon)^{-2^{-r}} = O(\mu_k^{1 - 2^{-r}}),$$
so that
$$\mu_{k+1} = (1 - \bar t_k)\mu_k = O\big(\mu_k^{2/(1 + 2^{-r})}\big).$$
This shows that the duality gap converges to zero superlinearly with order $2/(1 + 2^{-r})$. It remains to prove that the iterates converge to the analytic center with the same order. Notice that

(5.14)    $\|X^k - X(\mu_k)\|_F^2 = \mathrm{tr}\,\big((X^k - X(\mu_k))L^{-T}(L^T L)L^{-1}(X^k - X(\mu_k))\big) \le \|L^T L\|\;\mathrm{tr}\,\big(L^{-1}(X^k - X(\mu_k))(X^k - X(\mu_k))L^{-T}\big) \le \|L^T L\|^2\,\|L^{-1}(X^k - X(\mu_k))L^{-T}\|_F^2$.

However, using the definition of the Frobenius norm and applying Lemma 5.4,
$$\|L^T L\|_F = \|L L^T\|_F = \|L_\mu \bar D L_\mu^T\|_F = O(\|L_\mu L_\mu^T\|_F).$$
Recall that $L_\mu L_\mu^T = X(\mu_k)/\sqrt{\mu_k}$ by definition, so that, using Lemma 3.1,

(5.15)    $\|L^T L\|_F = O(1/\sqrt{\mu_k})$.

Combining (5.14) and (5.15) with Lemma 5.3, we have
$$\|X^k - X(\mu_k)\|_F = O\big(\mu_k^{-1/2}\,\|L^{-1}(X^k - X(\mu_k))L^{-T}\|_F\big) = O(\delta(X^k, Z^k)) = O(\mu_k).$$
Hence, we obtain from Theorem 3.5 that
$$\|X^k - X_a\|_F = O(\mu_k).$$
Similarly, it can be shown that
$$\|Z^k - Z_a\|_F = O(\mu_k).$$
This shows that the iterates converge to the analytic center R-superlinearly, with the same order as $\mu_k$ converges to zero.

6. Conclusions. We have shown the global and superlinear convergence of the predictor-corrector algorithm SDP($\bar\epsilon$), assuming only the existence of a strictly complementary solution pair. The local convergence analysis is based on Theorem 3.5, which states that $\|X(\mu) - X_a\| + \|Z(\mu) - Z_a\| = O(\mu)$. By enforcing $\delta(X^k, Z^k) \to 0$, the iterates "inherit" this property of the central path. For the generalization of the Mizuno-Todd-Ye predictor-corrector algorithm in [17], we do not enforce $\delta(X^k, Z^k) \to 0$, and hence we cannot yet conclude superlinear convergence for it. In this respect, it will be interesting to study the asymptotic behavior of the corrector steps. Finally, it is likely that our line of argument can be applied to the infeasible primal-dual path following algorithms of Kojima-Shindoh-Hara [8] and Potra-Sheng [16].

Acknowledgment. We wish to thank the referees for providing careful comments which led to substantial improvement in the presentation of the paper.

REFERENCES

[1] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization problems, SIAM J. Optim. 5 (1995), pp. 13-51.
[2] F. Alizadeh, J. A. Haeberly and M. Overton, Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results, Technical report, Computer Science Department, New York University, New York, 1996.
[3] O. Guler, Limiting behavior of weighted central paths in linear programming, Math. Programming 65 (1994), pp. 347-363.
[4] C. Helmberg, F. Rendl, R. J. Vanderbei and H. Wolkowicz, An interior-point method for semidefinite programming, SIAM J. Optim. 6 (1996), pp. 342-361.
[5] A. J. Hoffman, On approximate solutions of systems of linear inequalities, Journal of Research of the National Bureau of Standards 49 (1952), pp. 263-265.
[6] M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Springer-Verlag, Berlin, 1991.
[7] M. Kojima, M. Shida and S. Shindoh, Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs, Research Reports on Information Sciences B-306, Dept. of Information Sciences, Tokyo Institute of Technology, Oh-Okayama, Meguro-ku, Tokyo 152, Japan, 1995.
[8] M. Kojima, S. Shindoh and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim. 7 (1997), pp. 86-125.
[9] C. J. Lin and R. Saigal, An infeasible start predictor corrector method for semi-definite linear programming, Technical report, Dept. of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, USA, 1995.
[10] N. Megiddo, Pathways to the optimal solution set in linear programming, in Progress in Mathematical Programming: Interior-Point and Related Methods, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 131-158.

SUPERLINEAR CONVERGENCE FOR SDP 23 [] S. Mizuno, M. J. Todd and Y. Ye, On Adaptive-Step Primal-Dual Interior-Point Algorithms for Linear Programming, Math. Oper. Res. 8 (993), pp. 964{98. [2] R. D. C. Monteiro, Primal-dual path following algorithms for semidenite programming, Technical report, School of Industrial and Systems Engineering, Georgia Tech, Atlanta, Georgia, U.S.A., 995. To appear in SIAM J. Optim. [3] and T. Tsuchiya, Limiting behavior of the derivatives of certain trajectories associated with a monotone horizontal linear complementarity problem, Math. Oper. Res. 2 (996), pp. 793{84. [4] Y. Nesterov and M. J. Todd, Self-scaled barriers and interior-point methods for convex programming, Math. Oper. Res. 22 (997), pp. {42. [5] and, Primal-dual interior-point methods for self-scaled cones, Technical report 25, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New Yor, 995. [6] F. A. Potra and R. Sheng, A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidenite programming, Report on Computational Mathematics No. 78, Dept. of Mathematics, The University of Iowa, Iowa City, USA, 995. [7] J. F. Sturm and S. Zhang, Symmetric primal-dual path following algorithms for semidefinite programming, Technical report 9554/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, 995. [8] L. Vandenberghe and S. Boyd, A primal-dual potential reduction method for problems involving matrix inequalities, Math. Programming 69 (995), pp. 205{236. [9] and, Semidenite programming, SIAM Rev. 38 (996), pp. 49{95. [20] Y. Ye and K. Anstreicher, On quadratic and O( p nl) convergence of a predictor-corrector algorithm for LCP, Math. Programming 62 (993), pp. 537{55. [2] Y. Zhang, On extending primal-dual interior-point algorithms from linear programming to semidenite programming, Technical report, Dept. 
of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland, USA, 1995.