A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION


J Syst Sci Complex (2012) 25: 1108–1121

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

Mingwang ZHANG

DOI: 10.1007/s11424-012-0317-9
Received: 3 December 2010 / Revised: 18 August 2011
© The Editorial Office of JSSC & Springer-Verlag Berlin Heidelberg 2012

Abstract  The Mehrotra-type predictor-corrector algorithm is one of the most effective primal-dual interior-point methods. This paper presents an extension of the recent variant of the second order Mehrotra-type predictor-corrector algorithm proposed by Salahi, et al. (2006) for linear optimization. Using the NT direction as the Newton search direction, it is shown that the iteration-complexity bound of the algorithm for semidefinite optimization is $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$, which is similar to that of the corresponding algorithm for linear optimization.

Key words  Mehrotra-type algorithm, polynomial complexity, predictor-corrector algorithm, semidefinite optimization.

Mingwang ZHANG, College of Science, China Three Gorges University, Yichang 443002, China. E-mail: zmwang@ctgu.edu.cn. This research was supported by the Natural Science Foundation of Hubei Province under Grant No. 2008CDZ047. This paper was recommended for publication by Editor Shouyang WANG.

1 Introduction

After the landmark paper of Karmarkar [1], linear optimization (LO) was revitalized as an active area of research. Since then, interior-point methods (IPMs) have shown their power in solving LO problems and large classes of other optimization problems (see [2]). IPMs are also powerful tools for solving other mathematical programming problems, such as the complementarity problem (CP), second order conic optimization (SOCO), and semidefinite optimization (SDO). SDO is a generalization of LO, and it has various applications in diverse areas, such as system and control theory [3] and combinatorial optimization [4].

The generalization of IPMs from LO to the context of SDO started in the early 1990s. The first IPMs for SDO were developed independently by Alizadeh [4] and Nesterov and Nemirovskii [5]. Alizadeh [4] applied Ye's potential reduction idea to SDO and showed how variants of dual IPMs could be extended to SDO. Almost at the same time, in their milestone book [5], Nesterov and Nemirovskii proved that IPMs are able to solve general conic optimization problems, in particular SDO problems, in polynomial time. Other IPMs designed for LO have also been successfully extended to SDO. For an overview of these results we refer to the monographs [6–7] and the references therein. Most of the more recent work concentrates on primal-dual methods.

The Mehrotra-type predictor-corrector algorithm is one of the most remarkable primal-dual methods, and it is also the basis of IPM software packages such as [8–10] and many others.

In spite of the extensive use of this method, not much was known about its complexity before the recent paper by Salahi, et al. [11], which presents a new variant of the Mehrotra-type predictor-corrector algorithm for LO. By introducing certain safeguards, this variant enjoys polynomial iteration complexity, while the practical efficiency of the algorithm is preserved. Later on, Salahi and Amiri [12] analyzed a new variant of the second order Mehrotra-type predictor-corrector algorithm. They also proved that the algorithm has polynomial iteration complexity. Recently, Koulaei and Terlaky [13] extended the Mehrotra-type predictor-corrector algorithm of [11] from LO to SDO.

This paper studies the extension of the second order Mehrotra-type algorithm of [12] to SDO. The analysis for SDO is more complicated than for LO; a large part of the theoretical difficulty is due to the issue of maintaining symmetry in the linearized complementarity condition [13]. The aim of this paper is to establish an iteration-complexity bound for a generalization of the Mehrotra-type algorithm of [12], based on the NT direction. Borrowing analytic tools from [14], we derive the iteration bound $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$ for the algorithm, which is analogous to the linear case.

The rest of the paper is organized as follows. In Section 2, we introduce the SDO problem and review some basic concepts of IPMs for solving it, such as the central path, the NT search direction, etc. We conclude that section by presenting a second order Mehrotra-type predictor-corrector algorithm for SDO. In Section 3, we state and prove some technical results; based on these results, the iteration-complexity bound of the algorithm is established. Finally, conclusions and final remarks are given in Section 4.

The following notation is used throughout the paper. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n\times n}$ the set of $n\times n$ real matrices. $\|\cdot\|_F$ and $\|\cdot\|$ denote the Frobenius norm and the spectral norm for matrices, respectively. $S^n$, $S^n_+$, and $S^n_{++}$ denote the cone of symmetric, symmetric positive semidefinite, and symmetric positive definite $n\times n$ matrices, respectively. For $M \in S^n$, $M \succeq 0$ ($M \succ 0$) means that $M$ is positive semidefinite (positive definite). $\mathrm{Tr}(M)$ denotes the trace of a matrix $M \in \mathbb{R}^{n\times n}$, $\mathrm{Tr}(M) = \sum_{i=1}^n M_{ii}$. The matrix inner product is defined by $A \bullet B = \mathrm{Tr}(A^T B)$. For $M \in S^n$, we denote by $\lambda_i(M)$ the eigenvalues of $M$, and $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ denote the largest and the smallest eigenvalue of $M$, respectively. Moreover, the spectral condition number of $M$ is denoted by $\mathrm{cond}(M) = \lambda_{\max}(M)/\lambda_{\min}(M)$. The Kronecker product of two matrices $X$ and $S$ is denoted by $X \otimes S$ (see [15]). For $X \in \mathbb{R}^{n\times n}$, the operator $\mathrm{vec}(X)$ maps an $n\times n$ matrix into a vector of length $n^2$ by stacking the columns of the matrix argument. Finally, $I$ denotes the $n\times n$ identity matrix.

2 The SDO Problem and Preliminaries

In this section, we introduce the SDO problem and state the symmetrization scheme which is used to derive the Newton direction. We also recall some existing results and describe our variant of the second order Mehrotra-type predictor-corrector algorithm.

We consider the following SDO problem

$$\min\ C \bullet X \quad \text{s.t.}\quad A_i \bullet X = b_i,\ i = 1, 2, \cdots, m,\quad X \succeq 0, \qquad (1)$$

where $C, X \in S^n$, the matrices $A_i \in S^n$, $i = 1, 2, \cdots, m$, are linearly independent, and $b = (b_1, b_2, \cdots, b_m)^T \in \mathbb{R}^m$. We call problem (1) in the given form the primal problem, and $X$ is the primal matrix variable.
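The notation above maps directly onto a few one-line routines. The following sketch is an editorial illustration, not part of the original paper: it renders the inner product, the spectral condition number, the vec operator, and the primal residuals of problem (1) in NumPy.

```python
# Illustrative sketch (not part of the original paper): NumPy renderings
# of the notation just introduced and of the constraints of problem (1).
import numpy as np

def inner(A, B):
    """Matrix inner product A . B = Tr(A^T B)."""
    return float(np.trace(A.T @ B))

def cond_spectral(M):
    """Spectral condition number cond(M) = lambda_max(M)/lambda_min(M)."""
    lam = np.linalg.eigvalsh(M)          # eigenvalues in ascending order
    return lam[-1] / lam[0]

def vec(M):
    """Stack the columns of an n x n matrix into a vector of length n^2."""
    return M.flatten(order="F")

def primal_residuals(A_list, b, X):
    """Residuals A_i . X - b_i of the equality constraints in (1)."""
    return np.array([inner(Ai, X) - bi for Ai, bi in zip(A_list, b)])
```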

Corresponding to every primal problem (1), there exists a dual problem

$$\max\ b^T y \quad \text{s.t.}\quad \sum_{i=1}^m y_i A_i + S = C,\quad S \succeq 0, \qquad (2)$$

where $y \in \mathbb{R}^m$, $S \in S^n$, and $(y, S)$ is the dual variable. The primal-dual feasible set is defined as

$$\mathcal{F} = \Big\{(X, y, S) \in S^n \times \mathbb{R}^m \times S^n \ \Big|\ A_i \bullet X = b_i,\ i = 1, 2, \cdots, m,\ \sum_{i=1}^m y_i A_i + S = C,\ X \succeq 0,\ S \succeq 0\Big\},$$

and the relative interior of the primal-dual feasible set is

$$\mathcal{F}^0 = \Big\{(X, y, S) \in S^n_{++} \times \mathbb{R}^m \times S^n_{++} \ \Big|\ A_i \bullet X = b_i,\ i = 1, 2, \cdots, m,\ \sum_{i=1}^m y_i A_i + S = C\Big\}.$$

Under the assumptions that $\mathcal{F}^0$ is nonempty and the matrices $A_i$, $i = 1, 2, \cdots, m$, are linearly independent, $X^*$ and $(y^*, S^*)$ are optimal if and only if they satisfy the optimality conditions [7]

$$A_i \bullet X = b_i,\ X \succeq 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m y_i A_i + S = C,\ S \succeq 0,\qquad XS = 0, \qquad (3)$$

where the last equality is called the complementarity equation.

The central path consists of the points $(X(\mu), y(\mu), S(\mu))$ satisfying the perturbed system

$$A_i \bullet X = b_i,\ X \succ 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m y_i A_i + S = C,\ S \succ 0,\qquad XS = \mu I, \qquad (4)$$

where $\mu \in \mathbb{R}$, $\mu > 0$. It is proved in [5] that there is a unique solution $(X(\mu), y(\mu), S(\mu))$ to the central path equations (4) for any barrier parameter $\mu > 0$, provided that $\mathcal{F}^0$ is nonempty and the matrices $A_i$, $i = 1, 2, \cdots, m$, are linearly independent. Moreover, the limit point $(X^*, y^*, S^*)$ as $\mu$ goes to 0 is a primal-dual optimal solution of the SDO problem. In what follows, we derive the Newton direction for the system (4).
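To make the role of system (4) concrete, here is a minimal editorial sketch (dense data assumed) that evaluates the three residual blocks; all of them vanish exactly at the central-path point $(X(\mu), y(\mu), S(\mu))$.

```python
# A minimal sketch (editorial illustration, dense data assumed): the three
# residual blocks of the perturbed system (4).
import numpy as np

def central_path_residuals(A_list, b, C, X, y, S, mu):
    n = X.shape[0]
    r_p = np.array([np.trace(Ai.T @ X) - bi           # A_i . X - b_i
                    for Ai, bi in zip(A_list, b)])
    r_d = sum(yi * Ai for yi, Ai in zip(y, A_list)) + S - C
    r_c = X @ S - mu * np.eye(n)                      # XS - mu I
    return r_p, r_d, r_c
```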

Observe that for $X, S \in S^n$, the product $XS$ is generally not in $S^n$. Hence, the left-hand side of (4) is a map from $S^n \times \mathbb{R}^m \times S^n$ to $\mathbb{R}^{n\times n} \times \mathbb{R}^m \times S^n$. Thus, the system (4) is not a square system when $X$ and $S$ are restricted to $S^n$, which is needed for applying Newton-like methods. A remedy for this is to make the perturbed optimality system (4) square by modifying the left-hand side to a map from $S^n \times \mathbb{R}^m \times S^n$ to itself. To achieve this, Zhang [16] introduced a general symmetrization scheme based on the so-called similar symmetrization operator $H_P: \mathbb{R}^{n\times n} \to S^n$ defined as

$$H_P(M) \equiv \frac{1}{2}\big[PMP^{-1} + (PMP^{-1})^T\big],\quad \forall M \in \mathbb{R}^{n\times n},$$

where $P \in \mathbb{R}^{n\times n}$ is some nonsingular matrix. Zhang [16] also observed that

$$H_P(M) = \mu I \iff M = \mu I$$

for any nonsingular matrix $P$, any matrix $M$ with real spectrum, and any $\mu \in \mathbb{R}$. Therefore, for any given nonsingular matrix $P$, (4) is equivalent to

$$A_i \bullet X = b_i,\ X \succ 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m y_i A_i + S = C,\ S \succ 0,\qquad H_P(XS) = \mu I. \qquad (5)$$

A Newton-like method applied to system (5) leads to the following linear system:

$$A_i \bullet \Delta X = 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m \Delta y_i A_i + \Delta S = 0,\qquad H_P(X\Delta S + \Delta X S) = \sigma\mu_g I - H_P(XS), \qquad (6)$$

where $(\Delta X, \Delta y, \Delta S) \in S^n \times \mathbb{R}^m \times S^n$ is the unknown direction (see [14] for more details), $\sigma \in [0, 1]$ is the centering parameter, and $\mu_g = X \bullet S / n$ is the normalized duality gap corresponding to $(X, y, S)$. We refer to the directions derived from (6) as the Monteiro-Zhang (MZ) family. The matrix $P$ used in (6) is called the scaling matrix for the search direction. As for the choice of $P$: when $P = I$, the direction obtained from (6) coincides with the AHO direction [17]. If $P = X^{-1/2}$ or $P = S^{1/2}$, then (6) gives the HKM directions [18–20], respectively. Further, we obtain the NT direction when $P = W_{NT}^{-1/2}$, where $W_{NT}$ is the solution of the system $W_{NT}^{-1} X W_{NT}^{-1} = S$. Nesterov and Todd [21] proved the existence and uniqueness of such a matrix, namely

$$W_{NT} = X^{1/2}\big(X^{1/2} S X^{1/2}\big)^{-1/2} X^{1/2}.$$

In this paper, we restrict the scaling matrix $P$ to the specific class

$$\mathcal{P}(X, S) \equiv \{P \in S^n_{++} \mid P^2 X S = S X P^2\}, \qquad (7)$$

where $X, S \in S^n_{++}$. We should mention that this restriction on $P$ is common for large neighborhood primal-dual IPMs, as proposed in [13–14]. Furthermore, this restriction on $P$ does not lose any generality in terms of the solution set of system (6), as Monteiro and Zhang indicated in [14]. Apparently, $P = X^{-1/2}$, $S^{1/2}$, and $W_{NT}^{-1/2}$ belong to this specific class; however, $P = I$ does not.

In what follows we describe the variant of the second order Mehrotra-type predictor-corrector algorithm. Let us define

$$(X(\alpha), y(\alpha), S(\alpha)) = (X, y, S) + \alpha(\Delta X^a, \Delta y^a, \Delta S^a) + \alpha^2(\Delta X, \Delta y, \Delta S), \qquad (8)$$

$$\mu_g(\alpha) = \frac{X(\alpha) \bullet S(\alpha)}{n}. \qquad (9)$$
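The operator $H_P$ and the NT scaling admit a direct, if numerically naive, transcription. The sketch below is ours, following the formulas above; a production solver would use Cholesky-based factorizations rather than dense matrix square roots.

```python
# Editorial sketch of H_P and the NT scaling, transcribing the formulas
# above directly (dense SciPy matrix functions, no factorization tricks).
import numpy as np
from scipy.linalg import sqrtm, inv

def H(P, M):
    """H_P(M) = (1/2) [P M P^{-1} + (P M P^{-1})^T]."""
    PMP = P @ M @ inv(P)
    return 0.5 * (PMP + PMP.T)

def nt_scaling(X, S):
    """Return P = W_NT^{-1/2}, where
    W_NT = X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}
    is the unique matrix in S^n_++ with W_NT^{-1} X W_NT^{-1} = S."""
    Xh = np.real(sqrtm(X))
    W = Xh @ np.real(inv(sqrtm(Xh @ S @ Xh))) @ Xh
    return np.real(inv(sqrtm(W)))
```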

To prove convergence, a certain neighborhood of the central path is considered in which the algorithm operates. In this paper, the algorithm uses the so-called negative infinity norm neighborhood, which is a large neighborhood, defined as

$$\mathcal{N}_\infty^-(\gamma) = \{(X, y, S) \in \mathcal{F}^0 \mid \lambda_{\min}(XS) \ge \gamma\mu_g\},$$

where $\gamma \in (0, 1)$ is a given constant. In the predictor step the algorithm computes the affine search direction, i.e., the solution of

$$A_i \bullet \Delta X^a = 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m \Delta y_i^a A_i + \Delta S^a = 0,\qquad H_P(X\Delta S^a + \Delta X^a S) = -H_P(XS). \qquad (10)$$

Then the maximum feasible step size is computed, i.e., the largest $\alpha_a$ for which

$$X(\alpha_a) = X + \alpha_a \Delta X^a \succeq 0,\qquad S(\alpha_a) = S + \alpha_a \Delta S^a \succeq 0.$$

However, the algorithm does not take such a step. Based on this step size, the algorithm chooses $\sigma = (1 - \alpha_a)^3$ to compute the corrector direction, which is defined as the solution of the system

$$A_i \bullet \Delta X = 0,\ i = 1, 2, \cdots, m,\qquad \sum_{i=1}^m \Delta y_i A_i + \Delta S = 0,\qquad H_P(X\Delta S + \Delta X S) = \sigma\mu_g I - H_P(\Delta X^a \Delta S^a). \qquad (11)$$

Finally, the algorithm computes the maximum step size $\alpha$ that keeps the next iterate in $\mathcal{N}_\infty^-(\gamma)$. Based on the aforementioned discussion, we can now outline the second order Mehrotra-type predictor-corrector algorithm as Algorithm 1; a schematic rendering follows the outline.

Algorithm 1
Input: a proximity parameter $\gamma \in (0, 1/4)$; an accuracy parameter $\varepsilon > 0$; a starting point $(X^0, y^0, S^0) \in \mathcal{N}_\infty^-(\gamma)$.
begin
  while $X \bullet S \ge \varepsilon$ do
    Compute the scaling matrix $P = \big(X^{1/2}(X^{1/2}SX^{1/2})^{-1/2}X^{1/2}\big)^{-1/2}$.
    begin Predictor step
      Solve (10) and compute the maximum step size $\alpha_a$ such that $(X(\alpha_a), y(\alpha_a), S(\alpha_a)) \in \mathcal{F}$;
    end
    begin Corrector step
      If $\alpha_a \ge 0.1$, solve (11) with $\sigma = (1 - \alpha_a)^3$ and compute the maximum step size $\alpha$ such that $(X(\alpha), y(\alpha), S(\alpha)) \in \mathcal{N}_\infty^-(\gamma)$;
        if this gives $\alpha < \frac{\gamma^{3/2}}{2\sqrt{3}\,n^{3/2}}$, re-solve (11) with $\sigma = \frac{\gamma}{2(1-\gamma)}$ and compute the maximum step size $\alpha$ such that $(X(\alpha), y(\alpha), S(\alpha)) \in \mathcal{N}_\infty^-(\gamma)$;
      else ($\alpha_a < 0.1$) solve (11) with $\sigma = \frac{\gamma}{2(1-\gamma)}$ and compute the maximum step size $\alpha$ such that $(X(\alpha), y(\alpha), S(\alpha)) \in \mathcal{N}_\infty^-(\gamma)$;
    end
    set $(X, y, S) = (X(\alpha), y(\alpha), S(\alpha))$.
  end
end
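The following schematic rendering of Algorithm 1 is an editorial sketch, not a tested implementation: solve_predictor and solve_corrector (for systems (10) and (11)) and the two line-search routines max_feasible_step and max_step_in_neighborhood are hypothetical placeholders, and nt_scaling is the routine sketched earlier.

```python
# Schematic outline of Algorithm 1 -- an editorial sketch. The helper
# functions named below are hypothetical placeholders for the linear
# system solves of (10)/(11) and the two line searches.
import numpy as np

def algorithm_1(A_list, b, C, X, y, S, gamma=0.2, eps=1e-8):
    n = X.shape[0]
    threshold = gamma**1.5 / (2.0 * np.sqrt(3.0) * n**1.5)
    sigma_safe = gamma / (2.0 * (1.0 - gamma))       # safeguard value
    while np.trace(X @ S) >= eps:                    # X . S >= eps
        mu_g = np.trace(X @ S) / n
        P = nt_scaling(X, S)
        # predictor: affine direction (10) and maximal feasible step
        dXa, dya, dSa = solve_predictor(A_list, X, S, P)
        alpha_a = max_feasible_step(X, S, dXa, dSa)
        # corrector: Mehrotra sigma with the paper's safeguards
        sigma = (1 - alpha_a)**3 if alpha_a >= 0.1 else sigma_safe
        dX, dy, dS = solve_corrector(A_list, X, S, P, sigma, mu_g, dXa, dSa)
        alpha = max_step_in_neighborhood(X, y, S, dXa, dya, dSa,
                                         dX, dy, dS, gamma)
        if alpha_a >= 0.1 and alpha < threshold:
            # corrector step too small: re-solve (11) with the safeguard sigma
            dX, dy, dS = solve_corrector(A_list, X, S, P, sigma_safe,
                                         mu_g, dXa, dSa)
            alpha = max_step_in_neighborhood(X, y, S, dXa, dya, dSa,
                                             dX, dy, dS, gamma)
        X = X + alpha * dXa + alpha**2 * dX          # second order update (8)
        y = y + alpha * dya + alpha**2 * dy
        S = S + alpha * dSa + alpha**2 * dS
    return X, y, S
```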

3 Complexity Analysis of the Algorithm

In this section, we present the complexity proof for Algorithm 1. To simplify the proofs of the main results, we write the third equation of system (11) in the form

$$H(\hat X \Delta\hat S + \Delta\hat X \hat S) = \sigma\mu_g I - H(\Delta\hat X^a \Delta\hat S^a), \qquad (12)$$

where $H \equiv H_I$ is the plain symmetrization operator and

$$\hat X \equiv PXP,\quad \Delta\hat X \equiv P\Delta X P,\quad \hat S \equiv P^{-1}SP^{-1},\quad \Delta\hat S \equiv P^{-1}\Delta S P^{-1}. \qquad (13)$$

Moreover, in terms of the Kronecker product, Equation (12) becomes

$$\hat E\,\mathrm{vec}\,\Delta\hat X + \hat F\,\mathrm{vec}\,\Delta\hat S = \mathrm{vec}\big(\sigma\mu_g I - H(\Delta\hat X^a \Delta\hat S^a)\big), \qquad (14)$$

where

$$\hat E \equiv \frac{1}{2}\big(\hat S \otimes I + I \otimes \hat S\big),\qquad \hat F \equiv \frac{1}{2}\big(\hat X \otimes I + I \otimes \hat X\big).$$

In [14], Monteiro and Zhang proved that $\hat E$ and $\hat F$ are $n^2 \times n^2$ symmetric positive semidefinite matrices. Similarly, the third equation of (10) can be rewritten as

$$\hat E\,\mathrm{vec}\,\Delta\hat X^a + \hat F\,\mathrm{vec}\,\Delta\hat S^a = -\mathrm{vec}\big(H(\hat X\hat S)\big). \qquad (15)$$

Using (7) and (13), it is easy to see that for $X, S \in S^n_{++}$ one has

$$\mathcal{P}(X, S) = \{P \in S^n_{++} \mid \hat X\hat S = \hat S\hat X\}, \qquad (16)$$

i.e., we require $P$ to make $\hat X$ and $\hat S$ commute after scaling, which implies that $\hat X\hat S$ is symmetric as long as $X$ and $S$ are both symmetric. This requirement on $P$ also guarantees that $\hat E$ and $\hat F$ commute. These properties play a crucial role in the proofs of the following technical lemmas; a small numerical sketch of these operators is given below. In order to establish the iteration bound of Algorithm 1, we need to find a lower bound for the maximum step size $\alpha$ in the corrector step. The following lemmas are needed to derive a lower bound on the size of the centering step.
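As an editorial illustration, the operators $\hat E$ and $\hat F$ and the condition number $\mathrm{cond}(G)$ can be formed explicitly for small $n$; for the NT scaling $\hat X = \hat S$, hence $\hat E = \hat F$ and $\mathrm{cond}(G) = 1$.

```python
# Editorial illustration: forming E_hat and F_hat of (14)-(15) explicitly
# (practical only for small n) and evaluating cond(G) for G = E_hat^{-1} F_hat.
import numpy as np

def kron_operators(X_hat, S_hat):
    n = X_hat.shape[0]
    I = np.eye(n)
    E_hat = 0.5 * (np.kron(S_hat, I) + np.kron(I, S_hat))
    F_hat = 0.5 * (np.kron(X_hat, I) + np.kron(I, X_hat))
    return E_hat, F_hat

def cond_G(E_hat, F_hat):
    """cond(G); G is symmetric positive definite when E_hat, F_hat commute."""
    G = np.linalg.solve(E_hat, F_hat)
    lam = np.linalg.eigvalsh(0.5 * (G + G.T))   # symmetrize against round-off
    return lam[-1] / lam[0]
```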

Lemma 3.1  Suppose that $(X, y, S) \in S^n_{++} \times \mathbb{R}^m \times S^n_{++}$, $(\Delta X^a, \Delta y^a, \Delta S^a)$ is the solution of (10), and $(\Delta X, \Delta y, \Delta S)$ is the solution of (11). Then

$$H_P(X(\alpha)S(\alpha)) = (1-\alpha)H_P(XS) + \alpha^2\sigma\mu_g I + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S), \qquad (17)$$

$$\mu_g(\alpha) = (1 - \alpha + \alpha^2\sigma)\mu_g. \qquad (18)$$

Proof  By Equation (8), we have

$$X(\alpha)S(\alpha) = (X + \alpha\Delta X^a + \alpha^2\Delta X)(S + \alpha\Delta S^a + \alpha^2\Delta S) = XS + \alpha(X\Delta S^a + \Delta X^a S) + \alpha^2(X\Delta S + \Delta X S + \Delta X^a\Delta S^a) + \alpha^3(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4\,\Delta X\Delta S.$$

Applying the linearity of $H_P(\cdot)$ to this equality, and using the third equations of (10) and (11), we obtain

$$H_P(X(\alpha)S(\alpha)) = H_P(XS) + \alpha H_P(X\Delta S^a + \Delta X^a S) + \alpha^2 H_P(X\Delta S + \Delta X S + \Delta X^a\Delta S^a) + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S)$$
$$= H_P(XS) - \alpha H_P(XS) + \alpha^2\sigma\mu_g I - \alpha^2 H_P(\Delta X^a\Delta S^a) + \alpha^2 H_P(\Delta X^a\Delta S^a) + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S)$$
$$= (1-\alpha)H_P(XS) + \alpha^2\sigma\mu_g I + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S).$$

Using (9) and the identity $\mathrm{Tr}(H_P(M)) = \mathrm{Tr}(M)$, we have

$$X(\alpha)\bullet S(\alpha) = \mathrm{Tr}\big[(1-\alpha)H_P(XS) + \alpha^2\sigma\mu_g I + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S)\big] = (1-\alpha)\,X\bullet S + \alpha^2\sigma\mu_g n + \alpha^3\,\Delta X^a\bullet\Delta S + \alpha^3\,\Delta X\bullet\Delta S^a + \alpha^4\,\Delta X\bullet\Delta S. \qquad (19)$$

Using the first two equations of (10) and (11) and the fact that $(X, y, S)$ is a primal-dual feasible solution, we conclude that $\Delta X^a\bullet\Delta S = 0$, $\Delta X\bullet\Delta S^a = 0$, and $\Delta X\bullet\Delta S = 0$. Thus, dividing (19) by $n$ gives (18). That completes the proof.

Lemma 3.2  Suppose that the current iterate $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$, let $(\Delta X^a, \Delta y^a, \Delta S^a)$ be the solution of (10) and $(\Delta X, \Delta y, \Delta S)$ be the solution of (11). Then

$$\|H_P(\Delta X^a\Delta S)\|_F \le \mathrm{cond}(G)\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\sqrt{\gamma}}\,\mu_g,$$
$$\|H_P(\Delta X\Delta S^a)\|_F \le \mathrm{cond}(G)\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\sqrt{\gamma}}\,\mu_g,$$

where $G = \hat E^{-1}\hat F$ and $\mathrm{cond}(G) = \lambda_{\max}(G)/\lambda_{\min}(G)$.

Proof  By applying Lemma A.2 to (15), we obtain

$$\big\|(\hat F\hat E)^{-1/2}\hat E\,\mathrm{vec}\,\Delta\hat X^a\big\|^2 + \big\|(\hat F\hat E)^{-1/2}\hat F\,\mathrm{vec}\,\Delta\hat S^a\big\|^2 + 2\,\Delta\hat X^a\bullet\Delta\hat S^a = \big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\hat X\hat S))\big\|^2.$$

Since $P \in \mathcal{P}(X, S)$, the matrices $\hat E$ and $\hat F$ commute, which implies that

$$(\hat F\hat E)^{-1/2}\hat E = (\hat E^{-1}\hat F)^{-1/2} = G^{-1/2},\qquad (\hat F\hat E)^{-1/2}\hat F = (\hat E^{-1}\hat F)^{1/2} = G^{1/2}.$$

It follows that

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X^a\big\|^2 + \big\|G^{1/2}\mathrm{vec}\,\Delta\hat S^a\big\|^2 + 2\,\Delta\hat X^a\bullet\Delta\hat S^a = \big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\hat X\hat S))\big\|^2.$$

Using $\Delta\hat X^a\bullet\Delta\hat S^a = 0$ and Lemma A.5 with $\sigma = 0$, we have

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X^a\big\| \le \sqrt{n\mu_g}, \qquad (20)$$
$$\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S^a\big\| \le \sqrt{n\mu_g}. \qquad (21)$$

Carrying out the same procedure for relation (14), one has

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X\big\|^2 + \big\|G^{1/2}\mathrm{vec}\,\Delta\hat S\big\|^2 + 2\,\Delta\hat X\bullet\Delta\hat S = \big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu_g I - H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2 \le \big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_g I)\big\|^2 + \big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\Delta\hat X^a\Delta\hat S^a))\big\|^2 + 2\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_g I)\big\|\,\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\Delta\hat X^a\Delta\hat S^a))\big\|.$$

The upper bound for the first expression on the right-hand side follows from Lemma A.1 and the fact that $\|A\| = (\rho(A^TA))^{1/2}$:

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_g I)\big\|^2 \le \rho\big((\hat F\hat E)^{-1}\big)\,\|\sigma\mu_g I\|_F^2 = \frac{n\sigma^2\mu_g^2}{4\lambda_1} \le \frac{n\sigma^2\mu_g}{4\gamma}, \qquad (22)$$

since $\lambda_1 \ge \gamma\mu_g$ for $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$. By Corollary A.7, the upper bound for the second expression can be obtained in the same way as in the proof of (22):

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\Delta\hat X^a\Delta\hat S^a))\big\|^2 \le \mathrm{cond}(G)\,\frac{n^2\mu_g}{16\gamma}. \qquad (23)$$

For the third expression, (22) and (23) imply

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_g I)\big\|\,\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\Delta\hat X^a\Delta\hat S^a))\big\| \le \Big(\frac{n\sigma^2\mu_g}{4\gamma}\Big)^{1/2}\Big(\mathrm{cond}(G)\,\frac{n^2\mu_g}{16\gamma}\Big)^{1/2} = \sqrt{\mathrm{cond}(G)}\,\frac{\sigma n^{3/2}\mu_g}{8\gamma}. \qquad (24)$$

From (22), (23), and (24), we obtain

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu_g I - H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2 \le \frac{n\sigma^2\mu_g}{4\gamma} + \mathrm{cond}(G)\,\frac{n^2\mu_g}{16\gamma} + \sqrt{\mathrm{cond}(G)}\,\frac{\sigma n^{3/2}\mu_g}{4\gamma} \le \mathrm{cond}(G)\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)\frac{n^2\mu_g}{\gamma}. \qquad (25)$$

Therefore, using $\Delta\hat X\bullet\Delta\hat S = 0$, we have

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X\big\| \le \sqrt{\mathrm{cond}(G)}\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)^{1/2} n\sqrt{\frac{\mu_g}{\gamma}}, \qquad (26)$$
$$\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S\big\| \le \sqrt{\mathrm{cond}(G)}\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)^{1/2} n\sqrt{\frac{\mu_g}{\gamma}}. \qquad (27)$$

Finally, from Lemma A.3, (20), and (27), we obtain

$$\|H_P(\Delta X^a\Delta S)\|_F = \|H_I(\Delta\hat X^a\Delta\hat S)\|_F \le \|\Delta\hat X^a\|_F\,\|\Delta\hat S\|_F = \|\mathrm{vec}(\Delta\hat X^a)\|\,\|\mathrm{vec}(\Delta\hat S)\| \le \sqrt{\mathrm{cond}(G)}\,\big\|G^{-1/2}\mathrm{vec}(\Delta\hat X^a)\big\|\,\big\|G^{1/2}\mathrm{vec}(\Delta\hat S)\big\| \le \mathrm{cond}(G)\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\sqrt{\gamma}}\,\mu_g,$$

and analogously one has the second statement of the lemma, which completes the proof.

Lemma 3.3  Let a point $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$ and $P \in \mathcal{P}(X, S)$ be given, and define $G \equiv \hat E^{-1}\hat F$. Then the Newton step corresponding to system (11) satisfies

$$\|H_P(\Delta X\Delta S)\|_F \le \big(\mathrm{cond}(G)\big)^{3/2}\Big(\frac{\sigma^2}{4} + \frac{1}{16} + \frac{\sigma}{4}\Big)\frac{n^2\mu_g}{\gamma}.$$

Proof  The proof is analogous to the proof of Lemma 3.2.

Lemma 3.4 (see [13], Lemma 3.6)  Let $P$ be the NT scaling and let $t$ be defined as follows:

$$t = \max_{\|u\|=1}\frac{u^T H_P(\Delta X^a\Delta S^a)\,u}{u^T H_P(XS)\,u}. \qquad (28)$$

Then $t$ satisfies $t \le \frac{1}{4}$.
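The quantity $t$ in (28) is the largest eigenvalue of a symmetric-definite pencil, so it can be computed directly; the sketch below is ours, reusing the H operator sketched earlier. For the NT scaling, Lemma 3.4 guarantees the returned value is at most 1/4.

```python
# Editorial sketch: t of (28) as the largest eigenvalue of the
# symmetric-definite pencil (H_P(dXa dSa), H_P(XS)), computed with SciPy.
import numpy as np
from scipy.linalg import eigh

def compute_t(P, X, S, dXa, dSa):
    A = H(P, dXa @ dSa)    # H_P(Delta X^a Delta S^a)
    B = H(P, X @ S)        # H_P(XS), positive definite for the NT scaling
    return eigh(A, B, eigvals_only=True)[-1]   # largest generalized eigenvalue
```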

Theorem 3.5  Suppose that the current iterate $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$, $(\Delta X^a, \Delta y^a, \Delta S^a)$ is the solution of (10), and $(\Delta X, \Delta y, \Delta S)$ is the solution of (11) with $\sigma = (1 - \alpha_a)^3$. Then, for $\alpha_a$ satisfying

$$\alpha_a < 1 - \Big(\frac{\gamma t}{1-\gamma}\Big)^{1/3} \qquad (29)$$

with $t$ defined by (28), the algorithm always takes a step with positive step size in the corrector step.

Proof  The goal is to determine the maximum step size $\alpha \in (0, 1]$ such that

$$\lambda_{\min}[X(\alpha)S(\alpha)] \ge \gamma\mu_g(\alpha).$$

By Lemma A.4, this is equivalent to

$$\lambda_{\min}[H_P(X(\alpha)S(\alpha))] \ge \gamma\mu_g(\alpha), \qquad (30)$$

where $P \in \mathcal{P}(X, S)$. By (17) and the fact that $\lambda_{\min}(\cdot)$ is a homogeneous and superadditive function on the space of symmetric matrices [15], it follows that

$$\lambda_{\min}(H_P(X(\alpha)S(\alpha))) = \lambda_{\min}\big((1-\alpha)H_P(XS) + \alpha^3 H_P(\Delta X^a\Delta S^a) - \alpha^3 H_P(\Delta X^a\Delta S^a) + \alpha^2\sigma\mu_g I + \alpha^3 H_P(\Delta X^a\Delta S + \Delta X\Delta S^a) + \alpha^4 H_P(\Delta X\Delta S)\big)$$
$$\ge \alpha^2\sigma\mu_g + \lambda_{\min}\big((1-\alpha)H_P(XS) - \alpha^3 H_P(\Delta X^a\Delta S^a)\big) + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)).$$

Let $Q(\alpha) = (1-\alpha)H_P(XS) - \alpha^3 H_P(\Delta X^a\Delta S^a)$. Since $Q(\alpha)$ is symmetric, we have

$$\lambda_{\min}(Q(\alpha)) = \min_{\|u\|=1} u^T Q(\alpha)u.$$

Therefore, there is a vector $\bar u$ with $\|\bar u\| = 1$ such that $\lambda_{\min}(Q(\alpha)) = \bar u^T Q(\alpha)\bar u$, which implies

$$\lambda_{\min}(H_P(X(\alpha)S(\alpha))) \ge \alpha^2\sigma\mu_g + \bar u^T\big[(1-\alpha)H_P(XS) - \alpha^3 H_P(\Delta X^a\Delta S^a)\big]\bar u + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)).$$

The facts that $H_P(XS)$ is positive definite and $\mathrm{Tr}(H_P(\Delta X^a\Delta S^a)) = 0$ imply $t \ge 0$ in (28), and thus it follows that

$$u^T H_P(\Delta X^a\Delta S^a)\,u \le t\,u^T H_P(XS)\,u,\quad \forall u,\ \|u\| = 1,$$

which enables us to derive

$$\lambda_{\min}(H_P(X(\alpha)S(\alpha))) \ge \alpha^2\sigma\mu_g + (1-\alpha)\bar u^T H_P(XS)\bar u - \alpha^3 t\,\bar u^T H_P(XS)\bar u + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S))$$
$$\ge \alpha^2\sigma\mu_g + (1-\alpha-\alpha^3 t)\,\lambda_{\min}(H_P(XS)) + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)),$$

where the last inequality holds for $(1-\alpha-\alpha^3 t) \ge 0$. Thus, using the fact that $\mu_g(\alpha) = (1-\alpha+\alpha^2\sigma)\mu_g$, (30) holds whenever

$$\alpha^2\sigma\mu_g + (1-\alpha-\alpha^3 t)\,\lambda_{\min}(H_P(XS)) + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)) \ge \gamma(1-\alpha+\alpha^2\sigma)\mu_g. \qquad (31)$$

The worst case for inequality (31) happens when $\lambda_{\min}(H_P(XS)) = \lambda_{\min}(XS) = \gamma\mu_g$, $\lambda_{\min}(H_P(\Delta X^a\Delta S^a)) + \lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a)) < 0$, and $\lambda_{\min}(H_P(\Delta X\Delta S)) < 0$, so one has to have

$$\alpha^2\sigma\mu_g + (1-\alpha-\alpha^3 t)\gamma\mu_g > \gamma(1-\alpha+\alpha^2\sigma)\mu_g,$$

or

$$(1-\gamma)(1-\alpha_a)^3 - \alpha t\gamma > 0.$$

It is sufficient to have

$$(1-\gamma)(1-\alpha_a)^3 - \gamma t > 0.$$

This definitely holds whenever

$$\alpha_a < 1 - \Big(\frac{\gamma t}{1-\gamma}\Big)^{1/3},$$

which completes the proof.

Similarly to [12] for LO, we let $\alpha_a = 1 - \big(\frac{\gamma}{2(1-\gamma)}\big)^{1/3}$ whenever the maximum step size in the corrector step falls below a certain threshold. In the following we give a lower bound for the maximum step size in the corrector step for this specific choice. Note that for $\alpha_a = 1 - \big(\frac{\gamma}{2(1-\gamma)}\big)^{1/3}$, using $\sigma = (1-\alpha_a)^3$ one has $\sigma = \frac{\gamma}{2(1-\gamma)}$. The following two corollaries, which follow from Lemmas 3.2 and 3.3, give explicit upper bounds for this specific $\sigma$.

Corollary 3.6  Let $\sigma = \frac{\gamma}{2(1-\gamma)}$, where $0 < \gamma < \frac{1}{4}$, and let $P$ be the NT scaling. Then

$$\|H_P(\Delta X^a\Delta S)\|_F \le \frac{1}{2}\sqrt{\frac{1}{2\gamma}}\,n^{3/2}\mu_g \quad\text{and}\quad \|H_P(\Delta X\Delta S^a)\|_F \le \frac{1}{2}\sqrt{\frac{1}{2\gamma}}\,n^{3/2}\mu_g.$$

Proof  Using $\sigma = \frac{\gamma}{2(1-\gamma)}$ and Lemma 3.2, we can derive

$$\|H_P(\Delta X^a\Delta S)\|_F \le \frac{1}{2}\,\mathrm{cond}(G)\sqrt{\frac{1}{2\gamma}}\,n^{3/2}\mu_g \quad\text{and}\quad \|H_P(\Delta X\Delta S^a)\|_F \le \frac{1}{2}\,\mathrm{cond}(G)\sqrt{\frac{1}{2\gamma}}\,n^{3/2}\mu_g.$$

Since $P$ is the NT scaling, we have $\hat X = \hat S$ and consequently $\hat E = \hat F$, which implies $\mathrm{cond}(G) = 1$. This completes the proof.

Corollary 3.7  Let $\sigma = \frac{\gamma}{2(1-\gamma)}$, where $0 < \gamma < \frac{1}{4}$, and let $P$ be the NT scaling. Then

$$\|H_P(\Delta X\Delta S)\|_F \le \frac{n^2}{4\gamma}\,\mu_g.$$

Proof  The proof is analogous to that of Corollary 3.6.
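For orientation, the following worked numbers are illustrative only, using the constants as reconstructed in this transcription: the size of the safeguard $\sigma$ and of the corrector step guaranteed by Theorem 3.8 below.

```python
# Worked numbers (illustrative only; constants as reconstructed here):
# the safeguard sigma = gamma/(2(1-gamma)) and the corrector step of
# Theorem 3.8, alpha >= gamma^{3/2} / (2*sqrt(3)*n^{3/2}).
import numpy as np

gamma, n = 0.2, 100
sigma = gamma / (2 * (1 - gamma))               # 0.125
alpha = gamma**1.5 / (2 * np.sqrt(3) * n**1.5)  # about 2.6e-5
shrink = 1 - alpha + alpha**2 * sigma           # duality-gap factor, cf. (18)
print(sigma, alpha, shrink)
```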

Theorem 3.8  Suppose that the current iterate $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$, $(\Delta X^a, \Delta y^a, \Delta S^a)$ is the solution of (10), and $(\Delta X, \Delta y, \Delta S)$ is the solution of (11) with $\sigma = \frac{\gamma}{2(1-\gamma)}$. Then

$$\alpha \ge \frac{\gamma^{3/2}}{2\sqrt{3}\,n^{3/2}}.$$

Proof  The goal is to determine the maximum step size $\alpha \in (0, 1]$ in the corrector step such that (30) holds. Following an analysis similar to that of the previous theorem, it is sufficient to have

$$(1-\alpha)\lambda_{\min}(H_P(XS)) + \alpha^2\sigma\mu_g + \alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S)) + \lambda_{\min}(H_P(\Delta X\Delta S^a))\big] + \alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)) \ge \gamma(1-\alpha+\alpha^2\sigma)\mu_g.$$

Using Corollaries 3.6 and 3.7, it is sufficient to have

$$(1-\alpha)\gamma\mu_g + \alpha^2\sigma\mu_g - \alpha^3\sqrt{\frac{1}{2\gamma}}\,n^{3/2}\mu_g - \alpha^4\frac{n^2}{4\gamma}\mu_g \ge \gamma(1-\alpha+\alpha^2\sigma)\mu_g,$$

or

$$\frac{\gamma}{2} - \sqrt{\frac{1}{2\gamma}}\,n^{3/2}\alpha - \frac{n^2}{4\gamma}\alpha^2 \ge 0.$$

This inequality definitely holds for $\alpha = \frac{\gamma^{3/2}}{2\sqrt{3}\,n^{3/2}}$, which completes the proof.

Now, we are ready to give the iteration complexity of Algorithm 1.

Theorem 3.9  Algorithm 1 stops after at most

$$O\Big(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon}\Big)$$

iterations with a solution for which $X \bullet S \le \varepsilon$.

Proof  If $\alpha_a \ge 0.1$ and $\alpha \ge \frac{\gamma^{3/2}}{2\sqrt{3}n^{3/2}}$, then using (18) we obtain

$$\mu_g(\alpha) = (1-\alpha+\alpha^2\sigma)\mu_g \le \Big(1 - \frac{\gamma^{3/2}}{25\,n^{3/2}}\Big)\mu_g.$$

If $\alpha_a \ge 0.1$ and $\alpha < \frac{\gamma^{3/2}}{2\sqrt{3}n^{3/2}}$, then

$$\mu_g(\alpha) = (1-\alpha+\alpha^2\sigma)\mu_g \le \Big(1 - \frac{(2-3\gamma)\gamma^{3/2}}{26(1-\gamma)n^{3/2}}\Big)\mu_g.$$

Finally, if $\alpha_a < 0.1$, then one has

$$\mu_g(\alpha) = (1-\alpha+\alpha^2\sigma)\mu_g \le \Big(1 - \frac{(2-\gamma)\gamma^{3/2}}{26(1-\gamma)n^{3/2}}\Big)\mu_g.$$

This completes the proof.

4 Conclusions

In this paper, we have extended the recently proposed second order Mehrotra-type predictor-corrector algorithm of Salahi and Amiri [12] to SDO and derived the iteration bound $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$ for the algorithm, which is the same iteration bound as in the LO case. By slightly modifying the algorithm, we can easily obtain the generalization of the modified version of [12], whose iteration complexity improves to $O(n\log\frac{X^0\bullet S^0}{\varepsilon})$; the details are omitted here. Some interesting topics remain for further research. Firstly, the search directions used in this paper are based on the NT symmetrization scheme; it may be possible to design similar algorithms using other symmetrization schemes and still obtain polynomial-time iteration bounds. Secondly, the extension to SOCO and to general convex optimization deserves to be investigated. Furthermore, numerical testing is an interesting topic, so that the behavior of the algorithm can be compared with other approaches.

References

[1] N. K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 1984, 4: 373–395.
[2] Y. Ye, Interior Point Algorithms: Theory and Analysis, Wiley, UK, 1997.
[3] S. Boyd, L. El Ghaoui, E. Feron, et al., Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[4] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM Journal on Optimization, 1995, 5: 13–51.
[5] Y. E. Nesterov and A. S. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, SIAM, Philadelphia, PA, 1994.
[6] H. Wolkowicz, R. Saigal, and L. Vandenberghe, Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.
[7] E. de Klerk, Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
[8] J. Czyzyk, S. Mehrotra, M. Wagner, et al., PCx: An interior-point code for linear programming, Optimization Methods and Software, 1999, 11/12: 397–430.
[9] Y. Zhang, Solving large-scale linear programs by interior-point methods under the Matlab environment, Optimization Methods and Software, 1999, 10: 1–31.
[10] CPLEX: ILOG Optimization, http://www.ilog.com.
[11] M. Salahi, J. Peng, and T. Terlaky, On Mehrotra-type predictor-corrector algorithms, Technical Report 2005/4, Advanced Optimization Lab., Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada.
[12] M. Salahi and N. M. Amiri, Polynomial time second order Mehrotra-type predictor-corrector algorithms, Applied Mathematics and Computation, 2006, 183: 646–658.
[13] M. H. Koulaei and T. Terlaky, On the extension of a Mehrotra-type algorithm for semidefinite optimization, Technical Report 2007/4, Advanced Optimization Lab., Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada.
[14] R. D. C. Monteiro and Y. Zhang, A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming, Mathematical Programming, 1998, 81: 281–299.
[15] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, UK, 1986.
[16] Y. Zhang, On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming, SIAM Journal on Optimization, 1998, 8: 365–386.
[17] F. Alizadeh, J. A. Haeberly, and M. Overton, Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results, SIAM Journal on Optimization, 1998, 8: 746–768.
[18] C. Helmberg, F. Rendl, R. J. Vanderbei, et al., An interior-point method for semidefinite programming, SIAM Journal on Optimization, 1996, 6: 342–361.
[19] M. Kojima, S. Shindoh, and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM Journal on Optimization, 1997, 7: 86–125.
[20] R. D. C. Monteiro, Primal-dual path-following algorithms for semidefinite programming, SIAM Journal on Optimization, 1997, 7: 663–678.
[21] Y. E. Nesterov and M. J. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, 1997, 22: 1–42.

Appendix

The following results, introduced in [14], are used in the analysis.

Lemma A.1  Let $\lambda_1$ be the smallest eigenvalue of the matrix $\hat X\hat S$. Then for any $P \in \mathcal{P}(X, S)$ one has

$$\rho\big((\hat F\hat E)^{-1}\big) = \frac{1}{4\lambda_1}.$$

Lemma A.2  Let $u, v, r \in \mathbb{R}^{n^2}$ and $E, F \in \mathbb{R}^{n^2\times n^2}$ satisfy $Eu + Fv = r$. If $FE^T \in S^{n^2}_{++}$, then

$$\big\|(FE^T)^{-1/2}Eu\big\|^2 + \big\|(FE^T)^{-1/2}Fv\big\|^2 + 2u^Tv = \big\|(FE^T)^{-1/2}r\big\|^2.$$

Lemma A.3  For any $u, v \in \mathbb{R}^{n^2}$ and $G \in S^{n^2}_{++}$, we have

$$\|u\|\,\|v\| \le \sqrt{\mathrm{cond}(G)}\,\big\|G^{-1/2}u\big\|\,\big\|G^{1/2}v\big\| \le \frac{\sqrt{\mathrm{cond}(G)}}{2}\Big(\big\|G^{-1/2}u\big\|^2 + \big\|G^{1/2}v\big\|^2\Big).$$

Let the spectrum of $\hat X\hat S$ be $\{\lambda_i : i = 1, 2, \cdots, n\}$. Then the following lemma holds.

Lemma A.4  Suppose that $(X, y, S) \in S^n_{++}\times\mathbb{R}^m\times S^n_{++}$, $P \in S^n_{++}$, and $Q \in \mathcal{P}(X, S)$. Then

$$\lambda_{\min}[H_P(XS)] \le \lambda_{\min}[XS] = \lambda_{\min}[H_Q(XS)].$$

Lemma A.5  Let $P \in \mathcal{P}(X, S)$ be given. Then

$$\Big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu_g I - H(\hat X\hat S)\big)\Big\|^2 \le \Big(1 - 2\sigma + \frac{\sigma^2}{\gamma}\Big)n\mu_g.$$

Lemma A.6  Let $(X, y, S) \in \mathcal{N}_\infty^-(\gamma)$ and $P \in \mathcal{P}(X, S)$ be given, and define $G = \hat E^{-1}\hat F$. Then the Newton step corresponding to system (6) satisfies

$$\|H_P(\Delta X\Delta S)\|_F \le \frac{\sqrt{\mathrm{cond}(G)}}{2}\Big(1 - 2\sigma + \frac{\sigma^2}{\gamma}\Big)n\mu_g.$$

Corollary A.7  If we set $\sigma = 0$ in Lemma A.6, then the search direction in the predictor step satisfies

$$\|H_P(\Delta X^a\Delta S^a)\|_F \le \frac{\sqrt{\mathrm{cond}(G)}}{2}\,n\mu_g.$$