PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINITE OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION


International Journal of Pure and Applied Mathematics
Volume 114, No. 4 (2017), 797-818
ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version)
url: http://www.ijpam.eu
doi: 10.12732/ijpam.v114i4.10

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINITE OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION

M. El Ghami
Faculty of Education and Arts, Mathematics Section
Nord University-Nesna
8700 Nesna, NORWAY

Abstract: Recently, M. Bouafia et al. [5] (Journal of Optimization Theory and Applications, August 2016) investigated a new kernel function which differs from the self-regular kernel functions. The kernel function has a trigonometric barrier term. In this paper we generalize the analysis presented in the above paper to semidefinite optimization problems (SDO). It is shown that for large-update methods the iteration bound of the interior-point methods based on this function is improved significantly, while for small-update methods the iteration bound is the best currently known for primal-dual interior-point methods. The analysis for SDO deviates significantly from the analysis for linear optimization; several new tools and techniques are derived in this paper.

AMS Subject Classification: 90C22, 90C31
Key Words: interior-point methods, kernel function, primal-dual methods, semidefinite optimization, large update, small update

Received: January 4, 2017; Revised: March 7, 2017; Published: June 7, 2017. (c) 2017 Academic Publications, Ltd., url: www.acadpubl.eu

1. Introduction

We consider the standard semidefinite optimization problem (SDO)

(SDP)   p* = inf_X { Tr(CX) : Tr(A_i X) = b_i (1 ≤ i ≤ m), X ⪰ 0 },

and its dual problem

(SDD)   d* = sup_{y,S} { b^T y : Σ_{i=1}^m y_i A_i + S = C, S ⪰ 0 },

where C and the A_i are symmetric n×n matrices, b, y ∈ R^m, X ⪰ 0 means that X is symmetric positive semidefinite, and Tr(A) denotes the trace of A (i.e., the sum of its diagonal elements). Without loss of generality the matrices A_i are assumed to be linearly independent. Recall that for any two n×n matrices A and B their natural inner product is given by

Tr(A^T B) = Σ_{i=1}^n Σ_{j=1}^n A_ij B_ij.

IPMs provide a powerful approach for solving SDO problems. A comprehensive list of publications on SDO can be found in the SDO homepage maintained by Alizadeh [2]. Pioneering works are due to Alizadeh [1, 2] and Nesterov et al. [11]. Most IPMs for SDO can be viewed as natural extensions of IPMs for linear optimization (LO), and have similar polynomial complexity results. However, obtaining valid search directions is much more difficult than in the LO case. In the sequel we describe how the usual search directions are obtained for primal-dual methods for solving SDO problems. Our aim is to show that the kernel-function-based approach that we presented for LO in [7] can be generalized and applied also to SDO problems.

1.1. Classical search direction

We assume that (SDP) and (SDD) satisfy the interior-point condition (IPC), i.e., there exist X^0 ≻ 0 and (y^0, S^0) with S^0 ≻ 0 such that X^0 is feasible for (SDP) and (y^0, S^0) is feasible for (SDD). Moreover, we may assume that X^0 = S^0 = E, where E is the n×n identity matrix [12]. Assuming the IPC, one can write the optimality conditions for the primal-dual pair of problems as follows:

Tr(A_i X) = b_i,  i = 1, ..., m,
Σ_{i=1}^m y_i A_i + S = C,                                        (1)
XS = 0,
X, S ⪰ 0.
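As a side illustration (not part of the paper), the pair (SDP)/(SDD) and the optimality conditions (1) can be checked numerically on a toy instance with the off-the-shelf modeling package CVXPY; the data below is chosen so that X = E is strictly feasible and the feasible set is bounded, hence the IPC and strong duality hold. The instance and all names are our own assumptions, not the paper's.

import numpy as np
import cvxpy as cp

n, m = 3, 2
rng = np.random.default_rng(1)
sym = lambda M: (M + M.T) / 2
C = sym(rng.standard_normal((n, n)))
# A_1 = E bounds the feasible set (fixed trace); X = E is strictly feasible
A = [np.eye(n)] + [sym(rng.standard_normal((n, n))) for _ in range(m - 1)]
b = np.array([np.trace(Ai) for Ai in A])

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(A[i] @ X) == b[i] for i in range(m)])
prob.solve()

# recover a dual pair (y, S) with sum_i y_i A_i + S = C, S >= 0; the sign
# convention of equality duals differs between solvers, so we test both
y = np.array([float(c.dual_value) for c in prob.constraints])
S = C - sum(y[i] * A[i] for i in range(m))
if np.linalg.eigvalsh(S).min() < -1e-6:
    y = -y
    S = C - sum(y[i] * A[i] for i in range(m))
print("duality gap Tr(CX) - b^T y =", prob.value - b @ y)       # ~ 0
print("complementarity ||XS||    =", np.linalg.norm(X.value @ S))  # ~ 0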

The basic idea of primal-dual IPMs is to replace the complementarity condition XS = 0 by the parameterized equation XS = µE, X, S ≻ 0, where µ > 0. The resulting system has a unique solution for each µ > 0. This solution is denoted by (X(µ), y(µ), S(µ)) for each µ > 0; X(µ) is called the µ-center of (SDP) and (y(µ), S(µ)) the µ-center of (SDD). The set of µ-centers (with µ > 0) defines a homotopy path, which is called the central path of (SDP) and (SDD) [12, 13]. The principal idea of IPMs is to follow this central path and approach the optimal set as µ goes to zero. Newton's method amounts to linearizing the system (1), thus yielding the following system of equations:

Tr(A_i ΔX) = 0,  i = 1, ..., m,
Σ_{i=1}^m Δy_i A_i + ΔS = 0,                                      (2)
ΔX S + X ΔS = µE − XS.

This so-called Newton system has a unique solution (ΔX, Δy, ΔS). Note that ΔS is symmetric, due to the second equation in (2). However, a crucial point is that ΔX may not be symmetric. Many researchers have proposed various ways of symmetrizing the third equation in the Newton system so that the new system has a unique symmetric solution. All these proposals can be described by using a symmetric nonsingular scaling matrix P and by replacing (2) by the system

Tr(A_i ΔX) = 0,  i = 1, ..., m,
Σ_{i=1}^m Δy_i A_i + ΔS = 0,                                      (3)
ΔX + P ΔS P^T = µS^{-1} − X.

Now ΔX is automatically a symmetric matrix.

1.2. Nesterov-Todd direction

In this paper we consider the symmetrization scheme of Nesterov and Todd [14]. So we use

P = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2} = S^{-1/2}(S^{1/2} X S^{1/2})^{1/2} S^{-1/2},

where the last equality can be easily verified. Let D = P^{1/2}, where P^{1/2} denotes the symmetric square root of P. The matrix D can be used to scale X and S to the same matrix V, namely [12, 15]:

V := (1/√µ) D^{-1} X D^{-1} = (1/√µ) D S D.                       (4)

Obviously the matrices D and V are symmetric and positive definite. Let us further define

Ā_i := (1/√µ) D A_i D,  i = 1, 2, ..., m,

and

D_X := (1/√µ) D^{-1} ΔX D^{-1};   D_S := (1/√µ) D ΔS D.           (5)

We refer to D_X and D_S as the scaled search directions. Now (3) can be rewritten as follows:

Tr(Ā_i D_X) = 0,  i = 1, ..., m,
Σ_{i=1}^m Δy_i Ā_i + D_S = 0,                                     (6)
D_X + D_S = V^{-1} − V.

In the sequel, we use the following notational conventions. Throughout this paper, ‖·‖ denotes the 2-norm of a vector. The nonnegative and the positive orthants are denoted by R^n_+ and int R^n_+, respectively, and S^n, S^n_+, and int S^n_+ denote the cone of symmetric, symmetric positive semidefinite, and symmetric positive definite n×n matrices, respectively. For any V ∈ S^n, we denote by λ(V) the vector of eigenvalues of V arranged in increasing order, λ_1(V) ≤ λ_2(V) ≤ ... ≤ λ_n(V). For any square matrix A, we denote by η_1(A) ≤ η_2(A) ≤ ... ≤ η_n(A) the singular values of A; if A is symmetric, then one has η_i(A) = |λ_i(A)|, i = 1, 2, ..., n. If z ∈ R^n and f : R → R, then f(z) denotes the vector in R^n whose i-th component is f(z_i), 1 ≤ i ≤ n, and if D is a diagonal matrix, then f(D) denotes the diagonal matrix with f(D_ii) as i-th diagonal component. For X ∈ S^n with eigendecomposition X = Q^T Λ Q (Q orthogonal, Λ diagonal), f(X) = Q^T f(Λ) Q. Finally, if v is a vector, diag(v) denotes the diagonal matrix with diagonal elements v_i.
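The identities in (4) are easy to check numerically. The following numpy sketch (our illustration; the paper itself contains no code) forms the NT scaling matrix P, takes D = P^{1/2}, and verifies that both expressions in (4) produce the same symmetric positive definite V.

import numpy as np

def sym_sqrt(M):
    # symmetric square root of a symmetric positive definite matrix
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

rng = np.random.default_rng(0)
n, mu = 4, 0.7
G = rng.standard_normal((n, n)); X = G @ G.T + n * np.eye(n)   # X > 0
H = rng.standard_normal((n, n)); S = H @ H.T + n * np.eye(n)   # S > 0

Xh = sym_sqrt(X)
P = Xh @ np.linalg.inv(sym_sqrt(Xh @ S @ Xh)) @ Xh             # NT scaling
D = sym_sqrt(P); Dinv = np.linalg.inv(D)

V1 = Dinv @ X @ Dinv / np.sqrt(mu)     # first expression in (4)
V2 = D @ S @ D / np.sqrt(mu)           # second expression in (4)
assert np.allclose(V1, V2)             # both scalings give the same V
assert np.allclose(P @ S @ P, X)       # equivalent characterization of P
print("eigenvalues of V:", np.linalg.eigvalsh(V1))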

2. New search direction

In this section we introduce the new search direction, starting with the definition of a matrix function [16, 17].

Definition 1. Let X be a symmetric matrix, and let

X = Q_X^T diag(λ_1(X), λ_2(X), ..., λ_n(X)) Q_X

be an eigenvalue decomposition of X, where λ_i(X), 1 ≤ i ≤ n, denotes the i-th eigenvalue of X and Q_X is orthogonal. If ψ(t) is any univariate function whose domain contains {λ_i(X); 1 ≤ i ≤ n}, then the matrix function ψ(X) is defined by

ψ(X) = Q_X^T diag(ψ(λ_1(X)), ψ(λ_2(X)), ..., ψ(λ_n(X))) Q_X,

and the scalar function Ψ(X) is defined as follows [13]:

Ψ(X) := Σ_{i=1}^n ψ(λ_i(X)) = Tr(ψ(X)).                           (7)

The univariate function ψ is called the kernel function of the scalar function Ψ. In this paper, when we use the function ψ(·) and its first three derivatives ψ'(·), ψ''(·), and ψ'''(·) without any specification, it denotes a matrix function if the argument is a matrix and a univariate function (from R to R) if the argument is in R.

Analogous to the LO case, the kernel-function-based approach to SDO is obtained by modifying the Nesterov-Todd direction [13]. The observation underlying our approach is that the right-hand side V^{-1} − V in the third equation of (6) is precisely −ψ'(V) if ψ(t) = (t^2 − 1)/2 − log t, the latter being the kernel function of the well-known logarithmic barrier function. Note that this kernel function is strictly convex and nonnegative, its domain contains all positive reals, and it vanishes at t = 1. As we will now show, any continuously differentiable kernel function ψ(t) with these properties gives rise to a primal-dual algorithm for SDO. Given such a kernel function ψ(t), we replace the right-hand side V^{-1} − V in the third equation of (6) by −ψ'(V), with ψ'(V) defined according to Definition 1. Thus we use the following system to define the (scaled) search directions D_X and D_S:

Tr(Ā_i D_X) = 0,  i = 1, ..., m,
Σ_{i=1}^m Δy_i Ā_i + D_S = 0,                                     (8)
D_X + D_S = −ψ'(V).
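Definition 1 is straightforward to implement via an eigendecomposition. The sketch below (ours, not from the paper) applies it to the logarithmic-barrier kernel ψ(t) = (t^2 − 1)/2 − log t and confirms that −ψ'(V) equals V^{-1} − V, so that system (8) indeed reduces to (6) for this particular ψ.

import numpy as np

def matrix_function(V, f):
    # Definition 1: apply the univariate f to the eigenvalues of symmetric V
    w, Q = np.linalg.eigh(V)
    return Q @ np.diag(f(w)) @ Q.T

psi  = lambda t: (t**2 - 1) / 2 - np.log(t)   # log-barrier kernel
dpsi = lambda t: t - 1 / t                    # its derivative

rng = np.random.default_rng(0)
n = 4
G = rng.standard_normal((n, n))
V = G @ G.T + n * np.eye(n)                   # a symmetric V > 0

assert np.allclose(-matrix_function(V, dpsi), np.linalg.inv(V) - V)

# the scalar barrier (7): Psi(V) = Tr(psi(V)) = sum_i psi(lambda_i(V))
Psi = np.trace(matrix_function(V, psi))
assert np.isclose(Psi, psi(np.linalg.eigvalsh(V)).sum())
print("Psi(V) =", Psi)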

Having D_X and D_S, ΔX and ΔS can be calculated from (5). Due to the orthogonality of ΔX and ΔS, it is trivial to see that D_X ⊥ D_S, and so

Tr(D_X D_S) = Tr(D_S D_X) = 0.                                    (9)

The algorithm considered in this paper is described in Figure 1.

Generic Primal-Dual Algorithm for SDO

Input:
  a kernel function ψ(t);
  a threshold parameter τ ≥ 1;
  an accuracy parameter ε > 0;
  a barrier update parameter θ, 0 < θ < 1;
begin
  X := E; S := E; µ := 1; V := E;
  while nµ ≥ ε do
  begin
    µ := (1 − θ)µ;
    V := V / √(1 − θ);
    while Ψ(V) ≥ τ do
    begin
      Find search directions by solving system (8);
      Determine a step size α;
      X := X + αΔX;
      y := y + αΔy;
      S := S + αΔS;
      Compute V from (4);
    end
  end
end

Figure 1: Generic primal-dual interior-point algorithm for SDO.

The inner while loop in the algorithm is called an inner iteration and the outer while loop an outer iteration. So each outer iteration consists of an update of the barrier parameter and a sequence of one or more inner iterations.
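Figure 1 can be turned into running code most easily in the LO special case, where all matrices are diagonal and V reduces to the vector v = √(xs/µ). The sketch below is our own illustration: the function name, the toy data, and the crude step-size rule (halving α until positivity and a decrease of Ψ hold, standing in for the default step size derived in Section 5) are all assumptions, and the kernel used is the trigonometric one studied in Section 3, with p = 2.

import numpy as np

p = 2
h    = lambda t: np.pi / (2 * t + 2)
dh   = lambda t: -np.pi / (2 * (t + 1) ** 2)
psi  = lambda t: (t**2 - 1) / 2 + 4 / (p * np.pi) * (np.tan(h(t))**p - 1)
dpsi = lambda t: t + (4 * dh(t) / np.pi) * np.tan(h(t))**(p - 1) / np.cos(h(t))**2

def generic_ipm_lo(A, b, c, x, y, s, theta=0.5, tau=3.0, eps=1e-6):
    # Figure 1 specialized to LO (diagonal SDO); x*s = mu*e is assumed initially
    m, n = A.shape
    mu = x @ s / n
    while n * mu >= eps:
        mu *= 1 - theta                                   # outer iteration
        while psi(np.sqrt(x * s / mu)).sum() >= tau:      # inner iterations
            v = np.sqrt(x * s / mu)
            # scaled system (8) pulled back to (dx, dy, ds):
            #   A dx = 0,  A^T dy + ds = 0,  s dx + x ds = -mu v psi'(v)
            K = np.block([[A, np.zeros((m, m)), np.zeros((m, n))],
                          [np.zeros((n, n)), A.T, np.eye(n)],
                          [np.diag(s), np.zeros((n, m)), np.diag(x)]])
            rhs = np.concatenate([np.zeros(m), np.zeros(n), -mu * v * dpsi(v)])
            d = np.linalg.solve(K, rhs)
            dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
            alpha, cur = 1.0, psi(v).sum()                # damped step
            while (np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0)
                   or psi(np.sqrt((x + alpha*dx)*(s + alpha*ds)/mu)).sum() >= cur):
                alpha /= 2
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

A = np.array([[1., 1., 0.], [0., 1., 1.]])
x0, y0, s0 = np.ones(3), np.zeros(2), np.ones(3)   # strictly feasible, x0*s0 = e
x, y, s = generic_ipm_lo(A, A @ x0, A.T @ y0 + s0, x0, y0, s0)
print("x =", x.round(5), "  duality gap =", x @ s)  # -> x ~ (0, 2, 0), gap ~ 0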

Note that by using the embedding strategy [12], we can initialize the algorithm with X = S = E. Since then XS = µE for µ = 1, it follows from (4) that V = E at the start of the algorithm, whence Ψ(V) = 0. We then decrease µ to µ := (1 − θ)µ, for some θ ∈ (0, 1). In general this will increase the value of Ψ(V) above the threshold value τ. To get this value small again, thus coming closer to the current µ-center, we solve the scaled search directions from (8) and unscale these directions by using (5). By choosing an appropriate step size α, we move along the search direction and construct a new triple (X_+, y_+, S_+) with

X_+ = X + αΔX,
y_+ = y + αΔy,                                                    (10)
S_+ = S + αΔS.

If necessary, we repeat the procedure until we find iterates such that Ψ(V) no longer exceeds the threshold value τ, which means that the iterates are in a small enough neighborhood of (X(µ), y(µ), S(µ)). Then µ is again reduced by the factor 1 − θ and we apply the same procedure targeting the new µ-centers. This process is repeated until µ is small enough, i.e. until nµ ≤ ε. At this stage we have found an ε-solution of (SDP) and (SDD).

Just as in the LO case, the parameters τ, θ, and the step size α should be chosen in such a way that the algorithm is optimized, in the sense that the number of iterations required by the algorithm is as small as possible. Obviously, the resulting iteration bound will depend on the kernel function underlying the algorithm, and our main task becomes to find a kernel function that minimizes the iteration bound.

The rest of the paper is organized as follows. In Section 3 we introduce the kernel function ψ(t) considered in this paper and discuss some of its properties that are needed in the analysis of the corresponding algorithm. In Section 4 we derive the properties of the barrier function Ψ(V). The step size α and the resulting decrease of the barrier function are discussed in Section 5. The total iteration bound of the algorithm and the complexity results are derived in Section 6. Finally, some concluding remarks follow in Section 7.

3. Our kernel function and some of its properties

Recently, Bouafia et al. [5] investigated a new kernel function with a trigonometric barrier term for linear optimization. The extension to the P_*(κ)-linear complementarity problem was presented in [10]. The complexity obtained for large-update methods improves significantly on the complexity obtained in [6, 7]. In this paper we consider kernel functions of the form

ψ(t) = (t^2 − 1)/2 + (4/(pπ))(tan^p(h(t)) − 1),  with h(t) = π/(2t + 2), p ≥ 2,    (11)

and show that the interior-point methods for SDO based on this function have favorable complexity results. Note that the growth term of our kernel function is quadratic. However, the function (11) deviates from all other kernel functions, since its barrier term (4/(pπ)) tan^p(π/(2t + 2)) is trigonometric. In order to study the new kernel function, several new arguments had to be developed for the analysis. We start this section with a technical lemma, and then derive some properties of the new kernel function introduced in this paper.

3.1. Some technical results

In the analysis of the algorithm based on ψ(t) we need its first three derivatives. These are given by

ψ'(t) = t + (4h'(t)/π) sec^2(h(t)) tan^{p−1}(h(t)),               (12)

ψ''(t) = 1 + (4/π) sec^2(h(t)) g(t),                              (13)

ψ'''(t) = (4/π) sec^2(h(t)) (k(t) h'(t)^3 + r(t) h'(t) h''(t) + tan^{p−1}(h(t)) h'''(t)),   (14)

with

g(t) := ((p − 1) tan^{p−2}(h(t)) + (p + 1) tan^p(h(t))) h'(t)^2 + h''(t) tan^{p−1}(h(t)),   (15)

and

k(t) := (p − 1)(p − 2) tan^{p−3}(h(t)) + 2p^2 tan^{p−1}(h(t)) + (p + 1)(p + 2) tan^{p+1}(h(t)),
r(t) := 3(p − 1) tan^{p−2}(h(t)) + 3(p + 1) tan^p(h(t)),          (16)

and the first three derivatives of h are given by

h'(t) = −π/(2(t + 1)^2);   h''(t) = π/(t + 1)^3;   h'''(t) = −3π/(t + 1)^4.
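Formulas (12)-(16) are easy to mistype, so the following sketch (ours, not from the paper) validates ψ' and ψ'' against central finite differences and checks the eligibility conditions (17-a)-(17-c) stated in Lemma 2 below on a sample grid.

import numpy as np

p = 2.0
h, dh, d2h = (lambda t: np.pi / (2*t + 2),
              lambda t: -np.pi / (2*(t + 1)**2),
              lambda t: np.pi / (t + 1)**3)
tan  = lambda t: np.tan(h(t))
sec2 = lambda t: 1 / np.cos(h(t))**2

psi  = lambda t: (t**2 - 1)/2 + 4/(p*np.pi) * (tan(t)**p - 1)              # (11)
dpsi = lambda t: t + (4*dh(t)/np.pi) * sec2(t) * tan(t)**(p - 1)           # (12)
g    = lambda t: ((p-1)*tan(t)**(p-2) + (p+1)*tan(t)**p) * dh(t)**2 \
                 + d2h(t) * tan(t)**(p-1)                                  # (15)
d2psi = lambda t: 1 + (4/np.pi) * sec2(t) * g(t)                           # (13)

t, e = np.linspace(0.1, 5.0, 400), 1e-6
assert np.allclose((psi(t+e) - psi(t-e)) / (2*e), dpsi(t), rtol=1e-4)
assert np.allclose((dpsi(t+e) - dpsi(t-e)) / (2*e), d2psi(t), rtol=1e-4)
assert np.all(d2psi(t) > 1)                 # (17-a)
assert np.all(t * d2psi(t) + dpsi(t) > 0)   # (17-b)
assert np.all(t * d2psi(t) - dpsi(t) > 0)   # (17-c)
print("(12), (13), (15) consistent; (17-a)-(17-c) hold on the grid")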

The next lemma serves to prove that the new kernel function (11) is eligible.

Lemma 2 (Lemma 3.1 in [5]). Let ψ be as defined in (11) and t > 0. Then

ψ''(t) > 1,                                                       (17-a)
tψ''(t) + ψ'(t) > 0,                                              (17-b)
tψ''(t) − ψ'(t) > 0,                                              (17-c)
ψ'''(t) < 0.                                                      (17-d)

It follows that ψ(1) = ψ'(1) = 0 and ψ''(t) > 0, proving that ψ is completely determined by its second derivative:

ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ.                                  (18)

The second property (17-b) in Lemma 2 is related to Definition 2.1.1 and Lemma 2.1.2 in [13]. This property is equivalent to convexity of the composed function z ↦ ψ(e^z), and it holds if and only if ψ(√(t_1 t_2)) ≤ (1/2)(ψ(t_1) + ψ(t_2)) for any t_1, t_2 > 0. Following [13], we therefore say that ψ is exponentially convex, or shortly e-convex, whenever t > 0.

Lemma 3. Let ψ be as defined in (11). One has

ψ(t) < (ψ''(1)/2)(t − 1)^2,  if t > 1.

Proof. By Taylor's theorem and ψ(1) = ψ'(1) = 0, we obtain

ψ(t) = (ψ''(1)/2)(t − 1)^2 + (1/6)ψ'''(ξ)(t − 1)^3,

where 1 < ξ < t if t > 1. Since ψ'''(ξ) < 0, the lemma follows.

Lemma 4. Let ψ be as defined in (11). One has

tψ'(t) ≥ ψ(t),  if t ≥ 1.

Proof. Defining q(t) := tψ'(t) − ψ(t), one has q(1) = 0 and q'(t) = tψ''(t) ≥ 0. Hence q(t) ≥ 0 for t ≥ 1, and the lemma follows.

At some places below we apply the function Ψ to a positive vector v. The interpretation of Ψ(v) is compatible with Definition 1 when identifying the vector v with its diagonal matrix diag(v). When applying Ψ to this matrix we obtain

Ψ(v) = Σ_{i=1}^n ψ(v_i),  v ∈ int R^n_+.

4. Properties of Ψ(V) and δ(V)

In this section we extend Theorem 4.9 in [4] to the cone of positive definite matrices. The next theorem gives a lower bound on the norm-based proximity measure δ(V), defined by

δ(V) := (1/2)‖ψ'(V)‖ = (1/2)√(Σ_{i=1}^n ψ'(λ_i(V))^2) = (1/2)‖D_X + D_S‖,    (19)

in terms of Ψ(V). Since Ψ(V) is strictly convex and attains its minimal value zero at V = E, we have

Ψ(V) = 0  ⟺  δ(V) = 0  ⟺  V = E.

We denote by ϱ : [0, ∞) → [1, ∞) the inverse function of ψ(t) for t ≥ 1. In other words,

s = ψ(t)  ⟺  t = ϱ(s),  t ≥ 1.                                    (20)

Theorem 5. Let ϱ be as defined in (20). Then

δ(V) ≥ (1/2) ψ'(ϱ(Ψ(V))).

Proof. If V = E then δ(V) = Ψ(V) = 0. Since ϱ(0) = 1 and ψ'(1) = 0, the inequality holds with equality in this case. Otherwise, by the definitions of δ(V) in (19) and Ψ(V) in (7), we have δ(V) > 0 and Ψ(V) > 0. Let v_i := λ_i(V), 1 ≤ i ≤ n. Then v > 0 and

δ(V) = (1/2)√(Σ_{i=1}^n ψ'(λ_i(V))^2) = (1/2)√(Σ_{i=1}^n ψ'(v_i)^2).

Since ψ(t) satisfies (17-d), we may apply Theorem 4.9 in [4] to the vector v. This gives

δ(V) ≥ (1/2) ψ'(ϱ(Σ_{i=1}^n ψ(v_i))).

Since Σ_{i=1}^n ψ(v_i) = Σ_{i=1}^n ψ(λ_i(V)) = Ψ(V), the proof of the theorem is complete.

Lemma 6. If Ψ(V) ≥ 1, then

δ(V) ≥ (1/6)√Ψ(V).                                                (21)

Proof. The proof of this lemma uses Theorem 5 and Lemma 4. Putting s = Ψ(V), we obtain from Theorem 5 that δ(V) ≥ (1/2)ψ'(ϱ(s)). Putting t = ϱ(s), we have by (20)

ψ(t) = (t^2 − 1)/2 + (4/(pπ))(tan^p(h(t)) − 1) = s,  with h(t) = π/(2t + 2), t ≥ 1.

We derive an upper bound for t, as this suffices for our goal. One has from (18) and ψ''(t) ≥ 1:

s = ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ ≥ ∫_1^t ∫_1^ξ dζ dξ = (1/2)(t − 1)^2,

which implies

t = ϱ(s) ≤ 1 + √(2s).                                             (22)

Assuming s ≥ 1, we get t = ϱ(s) ≤ √s + √(2s) ≤ 3√s. Now applying Lemma 4 we may write

δ(V) ≥ (1/2)ψ'(ϱ(s)) ≥ ψ(ϱ(s))/(2ϱ(s)) = s/(2ϱ(s)) ≥ s/(6√s) = (1/6)√s = (1/6)√Ψ(V).

This proves the lemma.

Note that since τ ≥ 1, at the start of each inner iteration we have Ψ(V) ≥ τ ≥ 1. Substitution in (21) gives

δ(V) ≥ 1/6.                                                       (23)
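The quantities Ψ(V) and δ(V) in (7) and (19) depend only on the spectrum of V, so both are one eigenvalue computation away. The sketch below (ours; the sampling scheme is an arbitrary choice) samples random positive definite matrices and checks the bound (21) of Lemma 6 empirically for the kernel (11) with p = 2.

import numpy as np

p = 2.0
h  = lambda t: np.pi / (2*t + 2)
dh = lambda t: -np.pi / (2*(t + 1)**2)
psi  = lambda t: (t**2 - 1)/2 + 4/(p*np.pi) * (np.tan(h(t))**p - 1)
dpsi = lambda t: t + (4*dh(t)/np.pi) * np.tan(h(t))**(p-1) / np.cos(h(t))**2

def Psi_delta(V):
    lam = np.linalg.eigvalsh(V)                            # spectrum of V > 0
    return psi(lam).sum(), 0.5 * np.linalg.norm(dpsi(lam)) # (7) and (19)

rng = np.random.default_rng(0)
checked = 0
for _ in range(1000):
    n = int(rng.integers(2, 6))
    G = rng.standard_normal((n, n))
    V = G @ G.T + 0.1 * np.eye(n)                          # random V > 0
    Psi, delta = Psi_delta(V)
    if Psi >= 1:                                           # hypothesis of Lemma 6
        assert delta >= np.sqrt(Psi) / 6
        checked += 1
print("Lemma 6 confirmed on", checked, "random matrices")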

5. Analysis of the algorithm

In the analysis of the algorithm the concept of exponential convexity [4, 8] is again a crucial ingredient. In this section we derive a default value for the step size and we obtain an upper bound for the decrease in Ψ(V) during a Newton step.

5.1. Three technical lemmas

The next lemma is cited from [16, Lemma 3.3.14(c)].

Lemma 7. Let A, B ∈ S^n be two nonsingular matrices and let f(t) be a given real-valued function such that f(e^t) is convex. One has

Σ_{i=1}^n f(η_i(AB)) ≤ Σ_{i=1}^n f(η_i(A) η_i(B)),

where η_i(A) and η_i(B), i = 1, 2, ..., n, denote the singular values of A and B, respectively.

Lemma 8. Let A, A + B ∈ S^n_+. Then one has

λ_i(A + B) ≥ λ_1(A) − λ_n(−B),  i = 1, 2, ..., n.

Proof. It is obvious that λ_i(A + B) ≥ λ_1(A + B). By the Rayleigh-Ritz theorem (see [18]), there exists a nonzero X_0 ∈ R^n such that

λ_1(A + B) = X_0^T(A + B)X_0 / X_0^T X_0 = X_0^T A X_0 / X_0^T X_0 + X_0^T B X_0 / X_0^T X_0.

We therefore may write

λ_1(A + B) ≥ min_{X≠0} X^T A X / X^T X − max_{X≠0} X^T(−B)X / X^T X = λ_1(A) − λ_n(−B).

This completes the proof of the lemma.

A consequence of condition (17-b) is that any eligible kernel function is exponentially convex [13, Eq. (2.10)]:

ψ(√(t_1 t_2)) ≤ (1/2)(ψ(t_1) + ψ(t_2)),  t_1 > 0, t_2 > 0.        (24)

This implies the following lemma, which is crucial for our purpose.

Lemma 9. Let V_1 and V_2 be two symmetric positive definite matrices. Then

Ψ((V_1^{1/2} V_2 V_1^{1/2})^{1/2}) ≤ (1/2)(Ψ(V_1) + Ψ(V_2)).

Proof. For any nonsingular matrix U, we have

η_i(U) = λ_i((U^T U)^{1/2}) = λ_i((U U^T)^{1/2}),  i = 1, 2, ..., n.

Taking U = V_1^{1/2} V_2^{1/2}, we may write

η_i(V_1^{1/2} V_2^{1/2}) = λ_i((V_2^{1/2} V_1 V_2^{1/2})^{1/2}) = λ_i((V_1^{1/2} V_2 V_1^{1/2})^{1/2}),  i = 1, 2, ..., n.

Since V_1^{1/2} and V_2^{1/2} are symmetric positive definite, using Lemma 7 one has

Ψ((V_1^{1/2} V_2 V_1^{1/2})^{1/2}) = Σ_{i=1}^n ψ(η_i(V_1^{1/2} V_2^{1/2})) ≤ Σ_{i=1}^n ψ(η_i(V_1^{1/2}) η_i(V_2^{1/2})).

Since η_i(V_1^{1/2}), η_i(V_2^{1/2}) > 0, we may use that ψ(t) satisfies (17-b) for t > 0. Using (24), we hence obtain

Ψ((V_1^{1/2} V_2 V_1^{1/2})^{1/2}) ≤ Σ_{i=1}^n (1/2)(ψ(η_i(V_1^{1/2})^2) + ψ(η_i(V_2^{1/2})^2)) = (1/2) Σ_{i=1}^n (ψ(λ_i(V_1)) + ψ(λ_i(V_2))) = (1/2)(Ψ(V_1) + Ψ(V_2)).

This completes the proof.

5.2. The decrease of the proximity in the inner iteration

In this subsection we compute a default value for the step size α in order to yield a new triple (X_+, y_+, S_+) as defined in (10). After a damped step, using (5) we have

X_+ = X + αΔX = X + α√µ D D_X D = √µ D(V + αD_X)D,
y_+ = y + αΔy,
S_+ = S + αΔS = S + α√µ D^{-1} D_S D^{-1} = √µ D^{-1}(V + αD_S)D^{-1}.

Denoting the matrix V after the step as V_+, we have

V_+ = (1/√µ)(D^{-1} X_+ S_+ D)^{1/2}.

Note that V_+^2 is unitarily similar to the matrix (1/µ) X_+^{1/2} S_+ X_+^{1/2} and hence also to (V + αD_X)^{1/2}(V + αD_S)(V + αD_X)^{1/2}. This implies that the eigenvalues of V_+ are the same as those of the matrix

Ṽ_+ := ((V + αD_X)^{1/2}(V + αD_S)(V + αD_X)^{1/2})^{1/2}.

The definition of Ψ(V) implies that its value depends only on the eigenvalues of V. Hence we have Ψ(Ṽ_+) = Ψ(V_+). Our aim is to find α such that the decrement

f(α) := Ψ(V_+) − Ψ(V) = Ψ(Ṽ_+) − Ψ(V)                             (25)

is as small as possible. Due to Lemma 9, it follows that

Ψ(Ṽ_+) = Ψ(((V + αD_X)^{1/2}(V + αD_S)(V + αD_X)^{1/2})^{1/2}) ≤ (1/2)(Ψ(V + αD_X) + Ψ(V + αD_S)).

From the definition (25) of f(α), we now have f(α) ≤ f_1(α), where

f_1(α) := (1/2)(Ψ(V + αD_X) + Ψ(V + αD_S)) − Ψ(V).

Note that f_1(α) is convex in α, since Ψ is convex. Obviously, f(0) = f_1(0) = 0. Taking the derivative with respect to α, we get

f_1'(α) = (1/2) Tr(ψ'(V + αD_X) D_X + ψ'(V + αD_S) D_S).

Using the last equality in (8) and also (9), this gives

f_1'(0) = (1/2) Tr(ψ'(V)(D_X + D_S)) = −(1/2) Tr(ψ'(V)^2) = −2δ(V)^2.

Differentiating once more, we obtain

f_1''(α) = (1/2) Tr(ψ''(V + αD_X) D_X^2 + ψ''(V + αD_S) D_S^2).    (26)

In the sequel we use the following notation:

λ_1 := min_i(λ_i(V)),  δ := δ(V).

Lemma 10. One has

f_1''(α) ≤ 2δ^2 ψ''(λ_1 − 2αδ).

Proof. The last equality in (8) and (9) imply that ‖D_X + D_S‖^2 = ‖D_X‖^2 + ‖D_S‖^2 = 4δ^2. Thus we have ‖D_X‖ ≤ 2δ and ‖D_S‖ ≤ 2δ, whence λ_n(−D_X) ≤ 2δ and λ_n(−D_S) ≤ 2δ. Using Lemma 8 and V + αD_X ≻ 0, we have, as a consequence, for each i,

λ_i(V + αD_X) ≥ λ_1(V) − αλ_n(−D_X) ≥ λ_1 − 2αδ,
λ_i(V + αD_S) ≥ λ_1(V) − αλ_n(−D_S) ≥ λ_1 − 2αδ.

Due to (17-d), ψ'' is monotonically decreasing. So the above inequalities imply that

ψ''(λ_i(V + αD_X)) ≤ ψ''(λ_1 − 2αδ),  ψ''(λ_i(V + αD_S)) ≤ ψ''(λ_1 − 2αδ).

Substitution into (26) gives

f_1''(α) ≤ (1/2) ψ''(λ_1 − 2αδ) Tr(D_X^2 + D_S^2) = (1/2) ψ''(λ_1 − 2αδ)(‖D_X‖^2 + ‖D_S‖^2).

Now, using that D_X and D_S are orthogonal, by (9), and also ‖D_X + D_S‖^2 = 4δ^2, we obtain

f_1''(α) ≤ 2δ^2 ψ''(λ_1(V) − 2αδ).

This proves the lemma.

Using the notation v_i = λ_i(V), 1 ≤ i ≤ n, again, we have

f_1''(α) ≤ 2δ^2 ψ''(v_1 − 2αδ),                                    (27)

which is exactly the same inequality as in Lemma 3.1 in [7]. This means that our analysis closely resembles the analysis of the LO case in [7], and from this stage on we can apply similar arguments as in the LO case. In particular, the following two lemmas can be stated without proof.

Lemma 11 (Lemmas 3.3 and 3.4 in [9]). Let ρ be the inverse function of −(1/2)ψ'(t) for t ∈ (0, 1]. Then the largest value of the step size α satisfying (27) is given by

ᾱ := (1/(2δ))(ρ(δ) − ρ(2δ)).

Moreover,

ᾱ ≥ 1/ψ''(ρ(2δ)).

For future use we define

α̃ := 1/ψ''(ρ(2δ)).                                                (28)

By Lemma 11 this step size satisfies (27).
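Since ψ'' > 0, the map t ↦ −(1/2)ψ'(t) is strictly decreasing on (0, 1], from +∞ down to −(1/2)ψ'(1) = 0, so ρ and the default step size (28) can be evaluated by simple bisection. A small sketch (ours; the bracket, iteration count, and sample values of δ are arbitrary choices):

import numpy as np

p = 2.0
h, dh, d2h = (lambda t: np.pi / (2*t + 2),
              lambda t: -np.pi / (2*(t + 1)**2),
              lambda t: np.pi / (t + 1)**3)
tan  = lambda t: np.tan(h(t))
sec2 = lambda t: 1 / np.cos(h(t))**2
dpsi = lambda t: t + (4*dh(t)/np.pi) * sec2(t) * tan(t)**(p - 1)
g    = lambda t: ((p-1)*tan(t)**(p-2) + (p+1)*tan(t)**p) * dh(t)**2 \
                 + d2h(t) * tan(t)**(p-1)
d2psi = lambda t: 1 + (4/np.pi) * sec2(t) * g(t)

def rho(s, lo=1e-9, hi=1.0, iters=100):
    # solve -psi'(t)/2 = s on (0, 1]; the left side decreases in t
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if -dpsi(mid) / 2 > s else (lo, mid)
    return (lo + hi) / 2

for delta in (1/6, 0.5, 2.0, 10.0):        # recall delta >= 1/6 by (23)
    t = rho(2 * delta)
    alpha = 1 / d2psi(t)                    # default step size (28)
    print(f"delta = {delta:6.3f}   rho(2 delta) = {t:.4f}   alpha~ = {alpha:.3e}")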

Lemma 12. If the step size α is such that α ≤ ᾱ, then

f(α) ≤ −αδ^2.

Using the above lemmas from [7] we proceed as follows.

Theorem 13. Let ρ be as defined in Lemma 11, let α̃ be as in (28), and let Ψ(V) ≥ 1. Then

f(α̃) ≤ −δ^2/ψ''(ρ(2δ)) ≤ −δ^{p/(1+p)}/(1320p).

Proof. Since α̃ ≤ ᾱ, Lemma 12 gives f(α̃) ≤ −α̃δ^2, where α̃ = 1/ψ''(ρ(2δ)) as defined in (28). Thus the first inequality follows. To obtain the inverse function t = ρ(s) of −(1/2)ψ'(t) for t ∈ (0, 1], we need to solve t from the equation

−(1/2)(t + (4h'(t)/π) sec^2(h(t)) tan^{p−1}(h(t))) = s,

which, using sec^2(h(t)) tan^{p−1}(h(t)) = csc^2(h(t)) tan^{p+1}(h(t)), can be rewritten as

csc^2(h(t)) tan^{p+1}(h(t)) = −(π/(4h'(t)))(2s + t) = ((t + 1)^2/2)(2s + t).

For t ≤ 1, since csc^2(h(t)) ≥ 1 and (t + 1)^2 ≤ 4, we get

tan^{p+1}(h(t)) ≤ ((t + 1)^2/2)(2s + t) ≤ 2(2s + 1).

Hence, putting t = ρ(2δ), which is equivalent to −ψ'(t) = 4δ (i.e., s = 2δ), we get

tan(h(t)) ≤ (8δ + 2)^{1/(1+p)}.                                    (29)

Since sec^2(h(t)) = 1 + tan^2(h(t)), by (29) we thus have

tan^2(h(t)) ≤ (8δ + 2)^{2/(1+p)},  tan^{p−2}(h(t)) ≤ (8δ + 2)^{(p−2)/(1+p)},
tan^{p−1}(h(t)) ≤ (8δ + 2)^{(p−1)/(1+p)},  tan^p(h(t)) ≤ (8δ + 2)^{p/(1+p)}.

Since h'(t)^2 = π^2/(4(t + 1)^4) ≤ π^2/4 and |h''(t)| = π/(t + 1)^3 ≤ π for all 0 < t ≤ 1, and using also 8δ + 2 ≥ 1, this implies

ψ''(t) ≤ 1 + (4/π)(1 + tan^2(h(t)))((π^2/4)((p − 1)tan^{p−2}(h(t)) + (p + 1)tan^p(h(t))) + π tan^{p−1}(h(t))) ≤ (9 + 4pπ)(8δ + 2)^{(p+2)/(1+p)}.

By (28), we thus have

α̃ = 1/ψ''(ρ(2δ)) ≥ 1/((9 + 4pπ)(8δ + 2)^{(p+2)/(1+p)}).

Also using (23), i.e., 2 ≤ 12δ, and p ≥ 2, we get

α̃ ≥ 1/((9 + 4pπ)(8δ + 12δ)^{(p+2)/(1+p)}) = 1/((9 + 4pπ)(20δ)^{(p+2)/(1+p)}) ≥ 1/(1320p δ^{(p+2)/(1+p)}).

Hence

f(α̃) ≤ −α̃δ^2 ≤ −δ^2/(1320p δ^{(p+2)/(1+p)}) = −δ^{p/(1+p)}/(1320p).

Thus the theorem follows.

Substitution in (21) gives

f(α̃) ≤ −δ^{p/(1+p)}/(1320p) ≤ −Ψ(V)^{p/(2(1+p))}/(1320p · 6^{p/(1+p)}) ≤ −Ψ(V)^{p/(2(1+p))}/(7920p).

5.3. A uniform upper bound for Ψ

In this subsection we extend Theorem 3.2 in [4] to the cone of positive definite matrices. As we will see, the proof of the next theorem follows easily from Theorem 3.2 in [4].

Theorem 14. Let ϱ be as defined in (20). Then for any positive definite matrix V and any β ≥ 1 we have

Ψ(βV) ≤ nψ(βϱ(Ψ(V)/n)).

Proof. Let v_i := λ_i(V), 1 ≤ i ≤ n. Then v > 0 and

Ψ(βV) = Σ_{i=1}^n ψ(λ_i(βV)) = Σ_{i=1}^n ψ(βλ_i(V)) = Σ_{i=1}^n ψ(βv_i) = Ψ(βv).

Due to the fact that ψ(t) satisfies (17-c), at this stage we may use Theorem 3.2 in [4], which gives

Ψ(βv) ≤ nψ(βϱ(Ψ(v)/n)).

Since Ψ(v) = Σ_{i=1}^n ψ(v_i) = Σ_{i=1}^n ψ(λ_i(V)) = Ψ(V), the theorem follows.

Before the update of µ we have Ψ(V) ≤ τ, and after the update of µ to (1 − θ)µ we have V_+ = V/√(1 − θ). Application of Theorem 14, with β = 1/√(1 − θ), yields

Ψ(V_+) ≤ nψ(ϱ(τ/n)/√(1 − θ)).

Therefore we define

L = L(n, θ, τ) := nψ(ϱ(τ/n)/√(1 − θ)).                             (30)

In the sequel the value L(n, θ, τ) is simply denoted by L. A crucial (but trivial) observation is that during the course of the algorithm the value of Ψ(V) will never exceed L, because during the inner iterations the value of Ψ always decreases.

6. Complexity

We are now ready to derive the iteration bounds for large-update methods. An upper bound for the total number of (inner) iterations is obtained by multiplying an upper bound for the number of inner iterations between two successive updates of µ by the number of barrier parameter updates. The latter number is bounded above by (cf. [19, Lemma II.17, page 116])

(1/θ) log(n/ε).

To obtain an upper bound K for the number of inner iterations between two successive updates, we need a few more technical lemmas.

The following lemma is taken from Proposition 1.3.2 in [13]. Its relevance is due to the fact that the barrier function values between two successive updates of µ yield a decreasing sequence of positive numbers. We will denote this sequence by Ψ_0, Ψ_1, ....

Lemma 15. Let t_0, t_1, ..., t_K be a sequence of positive numbers such that

t_{k+1} ≤ t_k − κ t_k^{1−γ},  k = 0, 1, ..., K − 1,

where κ > 0 and 0 < γ ≤ 1. Then

K ≤ ⌈t_0^γ/(κγ)⌉.

Lemma 16. If K denotes the number of inner iterations between two successive updates of µ, then

K ≤ 7920p Ψ_0^{(p+2)/(2(p+1))}.

Proof. The definition of K implies Ψ_{K−1} > τ and Ψ_K ≤ τ and, according to Theorem 13,

Ψ_{k+1} ≤ Ψ_k − κ Ψ_k^{1−γ},  k = 0, 1, ..., K − 1,

with κ = 1/(7920p) and γ = (p + 2)/(2(p + 1)). Application of Lemma 15, with t_k = Ψ_k, yields the desired inequality.

Using Ψ_0 ≤ L, where the number L is as given in (30), and Lemma 16, we obtain the following upper bound on the total number of iterations:

(7920p L^{(p+2)/(2(p+1))}/θ) log(n/ε).                             (31)
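To get a feel for (31), one can evaluate it numerically using the estimate for L derived in the next subsection. The snippet below is our own illustration only; the parameter choices are arbitrary, and it simply compares p = 2 with the choice p = ⌈log n⌉ that reappears in the concluding remarks.

import numpy as np

def iteration_bound(n, p, theta, tau, eps=1e-8):
    # evaluates (31) with the large-update estimate of L from Section 6.1:
    # L <= (theta*n + 2*sqrt(2*tau*n) + 2*tau) / (2*(1 - theta))
    L = (theta*n + 2*np.sqrt(2*tau*n) + 2*tau) / (2*(1 - theta))
    return 7920 * p * L**((p + 2) / (2*(p + 1))) / theta * np.log(n / eps)

for n in (100, 10_000, 1_000_000):
    p_log = max(2, int(np.ceil(np.log(n))))
    print(n,
          f"{iteration_bound(n, 2, 0.5, n):.3e}",      # large update, p = 2
          f"{iteration_bound(n, p_log, 0.5, n):.3e}")  # large update, p ~ log n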

6.1. Large-update methods

We just established that (31) is an upper bound for the total number of iterations. Since tan(h(t)) ≤ 1 for t ≥ 1, we have ψ(t) ≤ (t^2 − 1)/2 for t ≥ 1, and by substitution in (30), also using (22), we obtain

L ≤ (n/2)(ϱ(τ/n)^2/(1 − θ) − 1) ≤ (n/2)((1 + √(2τ/n))^2/(1 − θ) − 1) = (θn + 2√(2τn) + 2τ)/(2(1 − θ)).

Using (31), the total number of iterations is thus bounded above by

(K/θ) log(n/ε) ≤ (7920p/(θ(2(1 − θ))^{(p+2)/(2(p+1))})) (θn + 2√(2τn) + 2τ)^{(p+2)/(2(p+1))} log(n/ε).

A large-update method uses τ = O(n) and θ = Θ(1). The right-hand side expression is then

O(p n^{(p+2)/(2(p+1))} log(n/ε)),

as may easily be verified.

6.2. Small-update methods

For small-update methods one has τ = O(1) and θ = Θ(1/√n). Using Lemma 3, with ψ''(1) = (pπ + 8)/4, we then obtain

L ≤ (n(pπ + 8)/8)(ϱ(τ/n)/√(1 − θ) − 1)^2.

Using (22), this gives

L ≤ (n(pπ + 8)/8)((1 + √(2τ/n))/√(1 − θ) − 1)^2.

Using 1 − √(1 − θ) = θ/(1 + √(1 − θ)) ≤ θ, this leads to

L ≤ ((pπ + 8)/(8(1 − θ)))(θ√n + √(2τ))^2.

We conclude that the total number of iterations is bounded above by

(K/θ) log(n/ε) ≤ 7920p ((pπ + 8)/(8(1 − θ)))^{(p+2)/(2(p+1))} (1/θ)(θ√n + √(2τ))^{(p+2)/(p+1)} log(n/ε).

The right-hand side expression is then O(√n log(n/ε)).

7. Concluding Remarks

In this paper we extended the results obtained for kernel-function-based IPMs in [5] for LO to semidefinite optimization problems. The analysis in this paper is new and different from the one used for LO; several new tools and techniques are derived in this paper. The proposed function has a trigonometric barrier term, but the function is neither logarithmic nor self-regular.

For this parametric kernel function, we have shown that the best known iteration bounds for large-update and small-update methods can be achieved, namely O(√n (log n) log(n/ε)) for large-update methods (obtained by taking p = O(log n) in the bound of Section 6.1) and O(√n log(n/ε)) for small-update methods.

References

[1] F. Alizadeh, Combinatorial Optimization with Interior Point Methods and Semi-Definite Matrices, PhD thesis, University of Minnesota, Minneapolis, Minnesota, USA (1991).

[2] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM Journal on Optimization, 5, No 1 (1995), 13-51, doi: 10.1137/0805002.

[3] Y.Q. Bai, M. El Ghami, C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM Journal on Optimization, 13, No 3 (2003), 766-782, doi: 10.1137/S1052623401398132.

[4] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization, 15, No 1 (2004), 101-128, doi: 10.1137/S1052623403423114.

[5] M. Bouafia, D. Benterki, A. Yassine, An efficient primal-dual interior point method for linear programming problems based on a new kernel function with a trigonometric barrier term, Journal of Optimization Theory and Applications, 170, No 2 (2016), 528-545, doi: 10.1007/s10957-016-0895-0.

[6] X.Z. Cai, G.Q. Wang, M. El Ghami, Y.J. Yue, Complexity analysis of primal-dual interior-point methods for linear optimization based on a parametric kernel function with a trigonometric barrier term, Abstract and Applied Analysis, 2014 (2014), Article ID 710158, doi: 10.1155/2014/710158.

[7] M. El Ghami, Z.A. Guennoun, S. Bouali, T. Steihaug, Primal-dual interior-point methods for linear optimization based on a kernel function with trigonometric barrier term, Journal of Computational and Applied Mathematics, 236, No 15 (2012), 3613-3623, doi: 10.1016/j.cam.2011.05.036.

[8] M. El Ghami, A Kernel Function Approach for Interior Point Methods: Analysis and Implementation, LAP Lambert Academic Publishing, Germany, 168 pp., ISBN: 978-3-8443-3333-6 (2011).

[9] M. El Ghami, C. Roos, Generic primal-dual interior point methods based on a new kernel function, RAIRO-Operations Research, 42, No 2 (2008), 199-213, doi: 10.1051/ro:2008009.

[10] M. El Ghami, G.Q. Wang, Interior-point methods for P_*(κ)-linear complementarity problem based on generalized trigonometric barrier function, International Journal of Applied Mathematics, 30, No 1 (2017), 11-33, doi: 10.12732/ijam.v30i1.2.

[11] Y.E. Nesterov, A.S. Nemirovskii, Interior Point Polynomial Methods in Convex Programming: Theory and Algorithms, SIAM, Philadelphia, USA (1994).

[12] E. de Klerk, Aspects of Semidefinite Programming, Volume 65 of Applied Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands (2002).

[13] J. Peng, C. Roos, T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press (2002).

[14] Y.E. Nesterov, M.J. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, 22, No 1 (1997), 1-42.

[15] J.F. Sturm, S. Zhang, Symmetric primal-dual path following algorithms for semidefinite programming, Applied Numerical Mathematics, 29 (1999), 301-315, doi: 10.1016/S0168-9274(98)00099-3.

[16] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK (1985).

[17] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill Book Company, New York (1978).

[18] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press (1991).

[19] C. Roos, T. Terlaky, J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior-Point Approach, Springer Science (2005).