
Journal of Mathematical Modeling, Vol. 4, No. 1, 2016, pp. 35-58 (JMM)

A path following interior-point algorithm for the semidefinite optimization problem based on a new kernel function

El Amir Djeffal (a) and Lakhdar Djeffal (b)

(a) Department of Mathematics, University of Batna 2, Batna, Algeria. email: djeffal_elamir@yahoo.fr
(b) Department of Mathematics, University of Batna 2, Batna, Algeria. email: lakdar_djeffal@yahoo.fr

Abstract. In this paper, we obtain new complexity results for solving the semidefinite optimization (SDO) problem by interior-point methods (IPMs). We define a new proximity function for SDO induced by a new kernel function. Furthermore, we formulate a primal-dual interior-point algorithm for SDO using this proximity function and give its complexity analysis; we then show that the worst-case iteration bound for our IPM is
$$O\left(6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}\,\frac{1}{\theta}\log\frac{n\mu^0}{\varepsilon}\right), \quad \text{where } m > 4.$$

Keywords: quadratic programming, convex nonlinear programming, interior point methods.

AMS Subject Classification: 90C25, 90C30, 90C51, 90C20.

1 Introduction

Infeasible interior-point methods (IIPMs) start with an arbitrary positive point, and feasibility is reached as optimality is approached. The choice of the starting point in IPMs is crucial for their performance. Lustig [8] and Tanabe [16] were the first to present IIPMs for linear optimization (LO). Kojima et al. [6] were the first to prove the global convergence of a primal-dual IIPM for LO.

Corresponding author. Received: 3 December 2015 / Revised: 28 May 2016 / Accepted: 29 May 2016.
(c) 2016 University of Guilan. http://jmm.guilan.ac.ir

Zhang [18] was the first to present a primal-dual IIPM with polynomial iteration complexity for LO. Primal-dual interior-point methods (IPMs) for semidefinite optimization have been widely studied; the reader is referred to de Klerk [4]. Recently, a full-Newton step infeasible interior-point algorithm for linear programming (LP) was presented by Roos [14]; its polynomial complexity coincides with the best known one for IIPMs. Mansouri and Roos [9, 10] extended this algorithm to semidefinite optimization by using a specific feasibility step. The barrier function is determined by a simple univariate function, called its kernel function. Bai et al. [2] introduced a new barrier function which is not a barrier function in the usual sense.

In this paper, we define a new proximity function for SDO by a new kernel function. We also formulate an interior-point algorithm for SDO based on this proximity function and give its complexity analysis; the worst-case iteration bound for our IPM is
$$O\left(6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}\,\frac{1}{\theta}\log\frac{n\mu^0}{\varepsilon}\right).$$
Furthermore, the complexity bounds obtained by the algorithm are $O\big(m^{\frac{3m+4}{2m+2}}\, n^{\frac{m+2}{2m+2}}\,\log\frac{\operatorname{Tr}(X^0S^0)}{\varepsilon}\big)$ and $O\big(m^{\frac{3m+4}{2m+2}}\,\sqrt{n}\,\log\frac{\operatorname{Tr}(X^0S^0)}{\varepsilon}\big)$ for large-update and small-update methods, respectively.

The paper is organized as follows. In Section 2, we recall the preliminaries. In Section 3, we define a new kernel function and give the properties which are essential for the complexity analysis. In Section 4, we derive the complexity results for both large-update and small-update methods. Finally, concluding remarks are given in Section 5.

Some of the notations used throughout the paper are as follows: $\mathbb{R}^n$, $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the set of vectors with $n$ components, the set of nonnegative vectors and the set of positive vectors, respectively; $\mathbb{R}^{n\times n}$ denotes the set of $n\times n$ real matrices; $\|\cdot\|_F$ and $\|\cdot\|_2$ denote the Frobenius norm and the spectral norm for matrices, respectively; $S^n$, $S^n_+$ and $S^n_{++}$ denote the cone of symmetric, symmetric positive semidefinite and symmetric positive definite $n\times n$ matrices, respectively; $E$ denotes the $n\times n$ identity matrix. The Löwner partial order $\succeq$ (or $\succ$) on positive semidefinite (or positive definite) matrices means $A \succeq B$ (or $A \succ B$) if $A - B$ is positive semidefinite (or positive definite). We use the matrix inner product $A \bullet B = \operatorname{Tr}(A^TB)$. For any $Q \in S^n_{++}$, the expression $Q^{1/2}$ (or $\sqrt{Q}$) denotes its symmetric square root. For any $V \in S^n_+$, $\lambda_{\min}(V)$ denotes the minimal eigenvalue of $V$.
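To fix these notational conventions numerically, here is a small illustrative sketch (not part of the original paper) using NumPy and SciPy; the matrices are random test data.

```python
# Illustrative sketch: the matrix notions used above, checked numerically.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # a symmetric positive definite matrix
B = np.eye(n)

inner = np.trace(A.T @ B)          # matrix inner product A . B = Tr(A^T B)
fro = np.linalg.norm(A, "fro")     # Frobenius norm ||A||_F
spec = np.linalg.norm(A, 2)        # spectral norm ||A||_2

# Loewner order: A >= B iff A - B is positive semidefinite.
loewner_geq = np.all(np.linalg.eigvalsh(A - B) >= -1e-12)

Q_half = sqrtm(A)                  # symmetric square root Q^(1/2)
assert np.allclose(Q_half @ Q_half, A)
print(inner, fro, spec, loewner_geq, np.linalg.eigvalsh(A).min())
```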

2 Preliminaries

2.1 The central path

We consider the semidefinite optimization problem in the following form:
$$\min \big\{\operatorname{Tr}(CX) : \operatorname{Tr}(A_iX) = b_i,\ i = 1,\dots,m,\ X \succeq 0\big\}, \qquad \mathrm{(SDO)}$$
and its dual
$$\max \Big\{ b^Ty : \sum_{i=1}^m y_iA_i + S = C,\ S \succeq 0 \Big\}, \qquad \mathrm{(SDD)}$$
where $A_i$, $i = 1,\dots,m$, and $C$ are symmetric $n\times n$ matrices and $b, y \in \mathbb{R}^m$. Throughout the paper, we make the following assumptions.

Assumption 1: The matrices $A_i$, $i = 1,\dots,m$, are linearly independent.

Assumption 2: The initial iterate $(X^0, y^0, S^0)$ is strictly feasible:
$$\operatorname{Tr}(A_iX^0) = b_i,\ i = 1,\dots,m,\quad X^0 \succ 0, \qquad \sum_{i=1}^m y_i^0A_i + S^0 = C,\quad S^0 \succ 0.$$

We have the following well-known lemma.

Lemma 1. [3] The following statements are equivalent:
(1) $X \succeq 0$, $S \succeq 0$ and $\operatorname{Tr}(XS) = 0$;
(2) $X \succeq 0$, $S \succeq 0$ and $\|X^{1/2}S^{1/2}\|_F^2 = 0$;
(3) $X \succeq 0$, $S \succeq 0$ and $XS = 0$.

It is well known that finding an optimal solution $(X, y, S)$ of (SDO) and (SDD) is equivalent to solving the following system:
$$\operatorname{Tr}(A_iX) = b_i,\ i = 1,\dots,m,\ X \succeq 0, \qquad \sum_{i=1}^m y_iA_i + S = C,\ S \succeq 0, \qquad XS = 0. \qquad (1)$$
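As an illustration of system (1), the following sketch (assumption: random test data, not from the paper) sets up a small strictly feasible instance and evaluates the three residuals at a trial point.

```python
# Illustrative sketch: residuals of system (1) at a strictly feasible point.
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3

def rand_sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

A = [rand_sym(n) for _ in range(m)]
X0 = np.eye(n)                     # strictly feasible primal point (X0 > 0)
y0 = rng.standard_normal(m)
S0 = np.eye(n)                     # strictly feasible dual slack (S0 > 0)
b = np.array([np.trace(Ai @ X0) for Ai in A])     # force Tr(A_i X0) = b_i
C = sum(yi * Ai for yi, Ai in zip(y0, A)) + S0    # force sum y_i A_i + S = C

# Residuals of (1): primal feasibility, dual feasibility, complementarity.
primal_res = np.array([np.trace(Ai @ X0) for Ai in A]) - b
dual_res = sum(yi * Ai for yi, Ai in zip(y0, A)) + S0 - C
gap = np.trace(X0 @ S0)            # Tr(XS) = n * mu at the starting point
print(np.linalg.norm(primal_res), np.linalg.norm(dual_res), gap)
```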

The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (SDO) and (SDD), by the parameterized equation $XS = \mu E$ with $\mu > 0$, where $E$ denotes the $n\times n$ identity matrix. Thus we consider the system
$$\operatorname{Tr}(A_iX) = b_i,\ i = 1,\dots,m,\ X \succeq 0, \qquad \sum_{i=1}^m y_iA_i + S = C,\ S \succeq 0, \qquad XS = \mu E. \qquad (2)$$
If both (SDO) and (SDD) satisfy the interior-point condition (IPC), then for each $\mu > 0$ the parameterized system (2) has a unique solution $(X(\mu), y(\mu), S(\mu))$ (see [7, 17]), which is called the $\mu$-center of (SDO) and (SDD). The set of $\mu$-centers, that is,
$$\Lambda = \{(X(\mu), y(\mu), S(\mu)) : \mu > 0\},$$
is called the central path of (SDO) and (SDD). The central path converges to a solution pair of (SDO) and (SDD) as $\mu$ tends to zero [11, 15].

In general, IPMs for SDO consist of two strategies. The first one, called the inner iteration scheme, is to keep the iterative sequence in a certain neighborhood of the central path, or to keep it in a certain neighborhood of the $\mu$-center; the second one, called the outer iteration scheme, is to decrease the parameter $\mu$ to $\mu_+ = (1-\theta)\mu$ for some $\theta \in (0,1)$.

2.2 The search directions

IPMs follow the central path approximately. We briefly describe the usual approach. Without loss of generality, we assume that $(X(\mu), y(\mu), S(\mu))$ is known for some positive $\mu$. For example, by Assumption 2, we may assume this for $\mu^0 = \frac{\operatorname{Tr}(X^0S^0)}{n}$, with $X^0 \succ 0$ and $S^0 \succ 0$. We then decrease $\mu$ to $\mu_+ = (1-\theta)\mu$ for some $\theta \in (0,1)$ and solve the following Newton system:
$$\operatorname{Tr}(A_i\Delta X) = 0,\ i = 1,\dots,m, \qquad \sum_{i=1}^m \Delta y_iA_i + \Delta S = 0, \qquad \Delta XS + X\Delta S = \mu E - XS. \qquad (3)$$
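For concreteness, a minimal sketch of how system (3) can be assembled and solved by vectorization follows; this is illustrative, not the paper's method as such. Note that the solution $\Delta X$ of this unsymmetrized system is in general not symmetric, which is precisely why the NT symmetrization scheme described next is introduced.

```python
# Illustrative sketch: solve the Newton system (3) via Kronecker products.
# vec is column-major, so vec(U M W) = (W^T kron U) vec(M).
import numpy as np

def newton_step(A, X, S, mu):
    n = X.shape[0]; m = len(A)
    I = np.eye(n); N = n * n
    vec = lambda M: M.flatten(order="F")
    rows = []
    # Tr(A_i DX) = 0, i = 1..m
    for Ai in A:
        rows.append(np.concatenate([vec(Ai), np.zeros(m + N)]))
    # sum_i dy_i A_i + DS = 0
    mid = np.hstack([np.zeros((N, N)),
                     np.column_stack([vec(Ai) for Ai in A]),
                     np.eye(N)])
    # DX S + X DS = mu E - X S   (S symmetric, so S^T = S)
    bot = np.hstack([np.kron(S, I), np.zeros((N, m)), np.kron(I, X)])
    K = np.vstack([np.array(rows), mid, bot])
    rhs = np.concatenate([np.zeros(m + N), vec(mu * I - X @ S)])
    sol = np.linalg.solve(K, rhs)    # unknowns: [vec(DX), dy, vec(DS)]
    DX = sol[:N].reshape(n, n, order="F")
    dy = sol[N:N + m]
    DS = sol[N + m:].reshape(n, n, order="F")
    return DX, dy, DS
```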

We consider the symmetrization scheme that yields the Nesterov-Todd (NT) direction. Let us define the matrix
$$P = X^{\frac12}\big(X^{\frac12}SX^{\frac12}\big)^{-\frac12}X^{\frac12} = S^{-\frac12}\big(S^{\frac12}XS^{\frac12}\big)^{\frac12}S^{-\frac12}, \qquad (4)$$
and also define $D = P^{\frac12}$. The matrix $D$ can be used to scale $X$ and $S$ to the same matrix $V$, because
$$V = \frac{1}{\sqrt{\mu}}\,D^{-1}XD^{-1} = \frac{1}{\sqrt{\mu}}\,DSD. \qquad (5)$$
Note that the matrices $D$ and $V$ are symmetric and positive definite. Similarly to the SDO case in [10], we can conclude that the system has a unique symmetric solution. Let us define
$$\bar{A}_i := \frac{1}{\sqrt{\mu}}\,DA_iD,\ i = 1,\dots,m, \qquad D_X := \frac{1}{\sqrt{\mu}}\,D^{-1}\Delta XD^{-1}, \qquad D_S := \frac{1}{\sqrt{\mu}}\,D\Delta SD. \qquad (6)$$
The NT search direction is obtained from the system
$$\operatorname{Tr}(\bar{A}_iD_X) = 0,\ i = 1,\dots,m, \qquad \sum_{i=1}^m \Delta y_i\bar{A}_i + D_S = 0, \qquad D_X + D_S = V^{-1} - V = -\nabla\Psi_l(V). \qquad (7)$$
Clearly, $\operatorname{Tr}(D_XD_S) = 0$, which follows from the first and second equations of (7), that is, from the orthogonality of $\Delta X$ and $\Delta S$. The classical kernel function is defined by
$$\psi_l(t) = \frac{t^2-1}{2} - \log t.$$

2.3 The generic interior-point algorithm for SDO

We call $\psi_l(t)$ the kernel function of the logarithmic barrier function $\Psi_l(V)$. In this paper, we replace $\psi_l(t)$ with a new kernel function $\psi(t)$, which is defined in the next section, and we assume that $\tau \ge 1$. The new interior-point algorithm works as follows. Assume that we are given a strictly feasible point $(X, y, S)$ in a $\tau$-neighborhood of the given $\mu$-center. Then we decrease $\mu$ to $\mu_+ = (1-\theta)\mu$ for some fixed $\theta \in (0,1)$ and solve the Newton system to obtain the unique search direction. The positivity of the new iterate is ensured by a suitable choice of the step size $\alpha$, defined by some line search rule. This procedure is repeated until we find a new iterate $(X_+, y_+, S_+)$ that is in a $\tau$-neighborhood of the $\mu_+$-center. Then $\mu$ is again reduced by the factor $1-\theta$, and we solve the Newton system targeting the new $\mu_+$-center, and so on. This process is repeated until $\mu$ is small enough, i.e., until $n\mu \le \varepsilon$.
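The NT scaling (4)-(5) can be computed directly from symmetric square roots; the following minimal sketch (assumption: $X, S$ symmetric positive definite) checks that both forms of (5) agree, and that $V = E$ on the central path.

```python
# Illustrative sketch: the NT scaling matrices of (4)-(5).
import numpy as np
from scipy.linalg import sqrtm, inv

def nt_scaling(X, S, mu):
    Xh = sqrtm(X)                              # X^(1/2)
    P = Xh @ inv(sqrtm(Xh @ S @ Xh)) @ Xh      # equation (4)
    D = sqrtm(P)
    Dinv = inv(D)
    V1 = Dinv @ X @ Dinv / np.sqrt(mu)         # equation (5), first form
    V2 = D @ S @ D / np.sqrt(mu)               # equation (5), second form
    assert np.allclose(V1, V2, atol=1e-8)
    return D, V1

# Example: on the central path (X S = mu E) the scaled matrix V equals E.
n, mu = 3, 0.5
X = np.diag([1.0, 2.0, 4.0]) * mu
S = inv(X) * mu
D, V = nt_scaling(X, S, mu)
print(np.round(V, 6))                          # approximately the identity
```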

The parameters $\tau$, $\theta$ and the step size $\alpha$ should be chosen so that the algorithm is optimized, in the sense that the number of iterations required by the algorithm is as small as possible. The choice of the so-called barrier update parameter $\theta$ plays an important role both in the theory and in the practice of IPMs. Usually, if $\theta$ is a constant independent of the dimension $n$ of the problem, for instance $\theta = \frac12$, then the algorithm is called a large-update (or long-step) method. If $\theta$ depends on the dimension of the problem, such as $\theta = \frac{1}{\sqrt{n}}$, then the algorithm is called a small-update (or short-step) method. The algorithm for our primal-dual IPM for SDO is stated as follows.

Primal-Dual IPM for SDO

Input:
  an accuracy parameter ε > 0;
  an update parameter θ, 0 < θ < 1;
  a threshold parameter τ ≥ 1;
  a strictly feasible point (X⁰, y⁰, S⁰) and μ⁰ = Tr(X⁰S⁰)/n such that Ψ(X⁰S⁰; μ⁰) ≤ τ.
begin
  X := X⁰; S := S⁰; μ := μ⁰;
  while nμ ≥ ε do
  begin
    μ := (1 − θ)μ;
    while Ψ(V) > τ do
    begin
      solve system (13) to obtain (ΔX, Δy, ΔS);
      determine a step size α;
      X := X + αΔX; y := y + αΔy; S := S + αΔS;
    end while
  end while
end

In the next section, we define a new kernel function and give its properties, which are essential to our complexity analysis.
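The control flow of the algorithm above can be rendered as the following structural sketch. Here `nt_scaling` is the helper from the scaling sketch above, while `modified_newton_step`, `line_search` and `Psi` are hypothetical placeholders standing for system (13), a step-size rule, and the proximity function defined later; none of them are given in the paper in code form.

```python
# Structural sketch of the generic primal-dual IPM (placeholders noted above).
import numpy as np

def generic_ipm(A, b, C, X, y, S, eps=1e-6, theta=0.5, tau=1.0):
    n = X.shape[0]
    mu = np.trace(X @ S) / n                     # mu^0 = Tr(X^0 S^0) / n
    while n * mu >= eps:                         # outer iterations
        mu *= (1.0 - theta)                      # mu := (1 - theta) mu
        D, V = nt_scaling(X, S, mu)
        while Psi(V) > tau:                      # inner iterations
            DX, dy, DS = modified_newton_step(A, X, S, mu)   # system (13)
            alpha = line_search(X, S, DX, DS, mu)
            X, y, S = X + alpha * DX, y + alpha * dy, S + alpha * DS
            D, V = nt_scaling(X, S, mu)
        # here (X, y, S) lies in the tau-neighborhood of the mu-center
    return X, y, S
```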

3 The new kernel function

We call $\psi : \mathbb{R}_{++} \to \mathbb{R}_+$ a kernel function if $\psi$ is twice differentiable and satisfies the following conditions [1]:
$$\psi(1) = \psi'(1) = 0, \qquad \psi''(t) > 0, \qquad \lim_{t\to 0^+}\psi(t) = \lim_{t\to\infty}\psi(t) = +\infty. \qquad (8)$$
For our IPM, we use the following new kernel function:
$$\psi(t) = (m+1)t^2 - (m+2)t + \frac{1}{t^m}, \quad \text{for all } t > 0, \qquad (9)$$
where $m > 4$. Then we have
$$\psi'(t) = 2(m+1)t - (m+2) - mt^{-m-1}, \quad \psi''(t) = 2(m+1) + m(m+1)t^{-m-2}, \quad \psi'''(t) = -m(m+1)(m+2)t^{-m-3}. \qquad (10)$$
From (8), $\psi(t)$ is clearly a kernel function, and
$$\psi''(t) > 2(m+1), \quad \text{for all } t > 0. \qquad (11)$$

In this paper, we replace the function $\Psi_l(V)$ in (7) with the function $\Psi(V)$ as follows:
$$D_X + D_S = -\nabla\Psi(V), \qquad (12)$$
where $\Psi(V) = \operatorname{Tr}(\psi(V)) = \sum_{i=1}^n \psi(\lambda_i(V))$ and $\psi(t)$ is defined in (9). Hence, the new search direction $(\Delta X, \Delta y, \Delta S)$ is obtained by solving the following modified Newton system:
$$\operatorname{Tr}(A_i\Delta X) = 0,\ i = 1,\dots,m, \qquad \sum_{i=1}^m \Delta y_iA_i + \Delta S = 0, \qquad \Delta XS + X\Delta S = -\mu V\nabla\Psi(V). \qquad (13)$$
Note that $D_X$ and $D_S$ are orthogonal, because the matrix $D_X$ belongs to the null space and the matrix $D_S$ to the row space of the matrices $\bar{A}_i$, $i = 1,\dots,m$. Since $D_X$ and $D_S$ are orthogonal, we have
$$D_X = D_S = 0 \iff \nabla\Psi(V) = 0 \iff V = E \iff \Psi(V) = 0 \iff X = X(\mu),\ S = S(\mu). \qquad (14)$$
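To make the new kernel concrete, the following sketch implements (9) and (10) and numerically checks the kernel conditions (8) and the bound (11); the choice m = 5 is an arbitrary value satisfying m > 4.

```python
# Illustrative sketch: the kernel (9), its derivatives (10), and checks of (8), (11).
import numpy as np

m = 5.0                                   # the paper assumes m > 4

def psi(t):   return (m + 1) * t**2 - (m + 2) * t + t**(-m)
def dpsi(t):  return 2 * (m + 1) * t - (m + 2) - m * t**(-m - 1)
def d2psi(t): return 2 * (m + 1) + m * (m + 1) * t**(-m - 2)

assert abs(psi(1.0)) < 1e-12 and abs(dpsi(1.0)) < 1e-12   # psi(1) = psi'(1) = 0
ts = np.linspace(0.05, 10.0, 1000)
assert np.all(d2psi(ts) > 2 * (m + 1))                    # bound (11)
print(psi(0.05), psi(10.0))                               # grows at both ends
```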

We use $\Psi(V)$ as the proximity function to measure the distance between the current iterate and the $\mu$-center for given $\mu > 0$. We also define the norm-based proximity measure $\delta : S^n_{++} \to \mathbb{R}_+$ as follows:
$$\delta(XS;\mu) = \delta(V) = \tfrac12\,\|\nabla\Psi(V)\|_F = \tfrac12\,\|D_X + D_S\|_F. \qquad (15)$$

Lemma 2. For $\psi(t)$, we have the following:
(i) $\psi(t)$ is exponentially convex for all $t > 0$;
(ii) $\psi''(t)$ is monotonically decreasing for all $t > 0$;
(iii) $t\psi''(t) - \psi'(t) > 0$ for all $t > 0$.

Proof. For (i), by the lemma on exponential convexity in [1], it suffices to show that $t\psi''(t) + \psi'(t) \ge 0$ for all $t > 0$. Using (10), we have
$$t\psi''(t) + \psi'(t) = t\big(2(m+1) + m(m+1)t^{-m-2}\big) + 2(m+1)t - (m+2) - mt^{-m-1} = 4(m+1)t + m^2t^{-m-1} - (m+2).$$
Let $g(t) = 4(m+1)t + m^2t^{-m-1} - (m+2)$. Then
$$g'(t) = 4(m+1) - m^2(m+1)t^{-m-2}, \qquad g''(t) = m^2(m+1)(m+2)t^{-m-3} > 0, \quad \text{for all } t > 0.$$
Setting $g'(t) = 0$, we get $t = \big(\frac{m^2}{4}\big)^{\frac{1}{m+2}}$. Since $g(t)$ is strictly convex, it attains its global minimum at this point, and $g\big((\frac{m^2}{4})^{\frac{1}{m+2}}\big) > 0$. By the corresponding result in [13], we have the result.

For (ii), using (10), we have $\psi'''(t) < 0$ for all $t > 0$, so $\psi''(t)$ is monotonically decreasing.

For (iii), using (10), we have
$$t\psi''(t) - \psi'(t) = m(m+2)t^{-m-1} + (m+2) > 0, \quad \text{for all } t > 0.$$
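Since $\nabla\Psi(V) = \psi'(V)$ shares the eigenvectors of $V$, both $\Psi(V)$ and $\delta(V)$ reduce to computations on the eigenvalues. A small illustrative sketch (using `psi` and `dpsi` from the kernel sketch above):

```python
# Illustrative sketch: Psi(V) = sum_i psi(lambda_i(V)) and the measure (15),
# delta(V) = ||psi'(V)||_F / 2, computed via an eigendecomposition.
import numpy as np

def Psi(V):
    lam = np.linalg.eigvalsh(V)        # eigenvalues of the symmetric matrix V
    return np.sum(psi(lam))

def delta(V):
    lam = np.linalg.eigvalsh(V)
    return 0.5 * np.sqrt(np.sum(dpsi(lam) ** 2))

V = np.diag([0.8, 1.0, 1.3])
print(Psi(V), delta(V))                # both vanish exactly at V = E
```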

Lemma 3. For $\psi(t)$, we have
$$(m+1)(t-1)^2 \le \psi(t) \le \frac{1}{4(m+1)}\,\psi'(t)^2, \quad \text{for all } t > 0, \qquad (16)$$
$$\psi(t) \le \frac{(m+1)(m+2)}{2}\,(t-1)^2, \quad \text{for all } t \ge 1. \qquad (17)$$

Proof. For (16), using (8) and (11), we have
$$\psi(t) = \int_1^t\!\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi \ge \int_1^t\!\!\int_1^\xi 2(m+1)\,d\zeta\,d\xi = (m+1)(t-1)^2;$$
also, since $\psi''(t) > 2(m+1)$,
$$\psi(t) = \int_1^t\!\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi \le \frac{1}{2(m+1)}\int_1^t\!\!\int_1^\xi \psi''(\xi)\,\psi''(\zeta)\,d\zeta\,d\xi = \frac{1}{2(m+1)}\int_1^t \psi''(\xi)\,\psi'(\xi)\,d\xi = \frac{1}{2(m+1)}\int_1^t \psi'(\xi)\,d\psi'(\xi) = \frac{1}{4(m+1)}\,\psi'(t)^2.$$
For (17), using Taylor's theorem together with $\psi(1) = \psi'(1) = 0$, $\psi''(1) = (m+1)(m+2)$ and $\psi'''(t) < 0$, we have, for some $\xi$ between $1$ and $t$,
$$\psi(t) = \psi(1) + \psi'(1)(t-1) + \tfrac12\psi''(1)(t-1)^2 + \tfrac16\psi'''(\xi)(t-1)^3 = \tfrac12\psi''(1)(t-1)^2 + \tfrac16\psi'''(\xi)(t-1)^3 \le \tfrac12\psi''(1)(t-1)^2 = \frac{(m+1)(m+2)}{2}(t-1)^2.$$

Now we define $\gamma : [0,\infty) \to [1,\infty)$ to be the inverse function of $\psi(t)$ for $t \ge 1$, and $\rho : [0,\infty) \to (0,1]$ to be the inverse function of $-\frac12\psi'(t)$ for $t \in (0,1]$. Then we have the following lemma.

Lemma 4. For $\psi(t)$, we have
$$\sqrt{1 + \frac{s}{m+1}} \le \gamma(s) \le 1 + \sqrt{\frac{s}{m+1}}, \quad s \ge 0, \qquad (18)$$
and
$$\rho(s) \ge \left(\frac{m}{2s+m}\right)^{\frac{1}{m+1}}, \quad s \ge 0. \qquad (19)$$

Proof. For (18), let $s = \psi(t)$, $t \ge 1$, i.e., $\gamma(s) = t$, $t \ge 1$. Then we have
$$(m+1)t^2 = s + (m+2)t - \frac{1}{t^m}.$$
Because $(m+2)t - t^{-m}$ is monotonically increasing with respect to $t$ and $t \ge 1$, we have $(m+1)t^2 \ge s + (m+1)$, which implies
$$t = \gamma(s) \ge \sqrt{1 + \frac{s}{m+1}}.$$
By (16), we have $s = \psi(t) \ge (m+1)(t-1)^2$, so
$$t = \gamma(s) \le 1 + \sqrt{\frac{s}{m+1}}.$$
For (19), let $z = -\frac12\psi'(t)$ for $t \in (0,1]$. By the definition of $\rho$, we have $\rho(z) = t$ and $-2z = \psi'(t)$; hence
$$mt^{-m-1} = 2z + 2(m+1)t - (m+2).$$
Because $2(m+1)t - (m+2)$ is monotonically increasing with respect to $t$ and $t \in (0,1]$, we have $mt^{-m-1} \le 2z + m$, which implies
$$\rho(z) = t \ge \left(\frac{m}{2z+m}\right)^{\frac{1}{m+1}}.$$

Lemma 5. Let $\gamma : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$ for $t \ge 1$. Then, for any $V \succ 0$ and any $\beta \ge 1$, we have
$$\Psi(\beta V) \le n\,\psi\!\left(\beta\,\gamma\!\left(\frac{\Psi(V)}{n}\right)\right). \qquad (20)$$

Proof. Using Theorem 3.2 in [1], we get the result. This completes the proof.
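The inverse functions $\gamma$ and $\rho$ have no closed form, but they are easily evaluated numerically; the following sketch does so by root bracketing and spot-checks the bounds (18) and (19), reusing `psi`, `dpsi` and `m` from the kernel sketch above.

```python
# Illustrative sketch: evaluate gamma and rho by bisection; check (18), (19).
import numpy as np
from scipy.optimize import brentq

def gamma(s):   # inverse of psi on t >= 1
    return brentq(lambda t: psi(t) - s, 1.0, 1e6)

def rho(s):     # inverse of -psi'(t)/2 on 0 < t <= 1
    return brentq(lambda t: -0.5 * dpsi(t) - s, 1e-9, 1.0)

for s in [0.5, 2.0, 10.0]:
    g, r = gamma(s), rho(s)
    assert np.sqrt(1 + s / (m + 1)) <= g <= 1 + np.sqrt(s / (m + 1))  # (18)
    assert r >= (m / (2 * s + m)) ** (1.0 / (m + 1))                  # (19)
    print(s, g, r)
```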

Lemma 6. Let $0 \le \theta < 1$ and $V_+ = \frac{V}{\sqrt{1-\theta}}$. If $\Psi(V) \le \tau$, then we have
$$\Psi(V_+) \le \frac{(m+1)(m+2)}{2(1-\theta)}\left(\theta\sqrt{n} + \sqrt{\frac{\tau}{m+1}}\right)^2. \qquad (21)$$

Proof. Since $\frac{1}{\sqrt{1-\theta}} \ge 1$ and $\gamma\big(\frac{\Psi(V)}{n}\big) \ge 1$, we have $\frac{1}{\sqrt{1-\theta}}\,\gamma\big(\frac{\Psi(V)}{n}\big) \ge 1$. Using Lemma 5 with $\beta = \frac{1}{\sqrt{1-\theta}}$, then (17), (18) and $\Psi(V) \le \tau$, we have
$$\Psi(V_+) \le n\,\psi\!\left(\frac{\gamma(\Psi(V)/n)}{\sqrt{1-\theta}}\right) \le \frac{n(m+1)(m+2)}{2}\left(\frac{\gamma(\Psi(V)/n)}{\sqrt{1-\theta}} - 1\right)^2 = \frac{(m+1)(m+2)}{2(1-\theta)}\,n\left(\gamma\!\Big(\frac{\Psi(V)}{n}\Big) - \sqrt{1-\theta}\right)^2$$
$$\le \frac{(m+1)(m+2)}{2(1-\theta)}\,n\left(1 + \sqrt{\frac{\Psi(V)}{n(m+1)}} - \sqrt{1-\theta}\right)^2 \le \frac{(m+1)(m+2)}{2(1-\theta)}\,n\left(\theta + \sqrt{\frac{\tau}{n(m+1)}}\right)^2 = \frac{(m+1)(m+2)}{2(1-\theta)}\left(\theta\sqrt{n} + \sqrt{\frac{\tau}{m+1}}\right)^2,$$
where the second-to-last inequality uses $1 - \sqrt{1-\theta} \le \theta$.

Denote
$$\Psi_0 = L(n,\theta,\tau) = \frac{(m+1)(m+2)}{2(1-\theta)}\left(\theta\sqrt{n} + \sqrt{\frac{\tau}{m+1}}\right)^2; \qquad (22)$$
then $\Psi_0$ is an upper bound for $\Psi(V)$ during the execution of the algorithm.
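The bound (22) is a simple closed-form quantity; the sketch below evaluates it for typical large-update and small-update parameter choices (the settings $\tau = n$ and $\tau = 1$ are illustrative assumptions, not prescribed by the paper).

```python
# Illustrative sketch: the upper bound (22) on the proximity after a mu-update.
def Psi0(n, theta, tau, m):
    return (m + 1) * (m + 2) / (2 * (1 - theta)) \
        * (theta * n**0.5 + (tau / (m + 1))**0.5) ** 2

n, m_par = 50, 5.0
print(Psi0(n, theta=0.5, tau=float(n), m=m_par))    # large-update setting
print(Psi0(n, theta=1 / n**0.5, tau=1.0, m=m_par))  # small-update setting
```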

4 Analysis of the algorithm

The aim of this paper is to define a new kernel function and to obtain new complexity results for the SDO problem, using the proximity function defined by this kernel function and following the approach of Bai et al. [1]. Using the concept of a matrix function [3], the definition of the kernel function $\psi$ can be extended to any diagonalizable matrix with positive eigenvalues. In particular, for a given eigen-decomposition
$$V = Q_V\,\mathrm{diag}\big(\lambda_1(V), \lambda_2(V), \dots, \lambda_n(V)\big)\,Q_V^{-1}$$
of $V$ with a nonsingular matrix $Q_V$, the matrix function $\psi(V)$ is defined by
$$\psi(V) = Q_V\,\mathrm{diag}\big(\psi(\lambda_1(V)), \psi(\lambda_2(V)), \dots, \psi(\lambda_n(V))\big)\,Q_V^{-1}. \qquad (23)$$
In this section, we compute a proper step size $\alpha$ and the decrease of the proximity function during an inner iteration, and we give the complexity results of the algorithm.

4.1 Determining a default step size

Taking a step size $\alpha$, we obtain new iterates $X_+ = X + \alpha\Delta X$, $y_+ = y + \alpha\Delta y$ and $S_+ = S + \alpha\Delta S$. By (5) and (6), we have
$$X_+ = X + \alpha\Delta X = \sqrt{\mu}\,D\big(V + \alpha D_X\big)D, \qquad S_+ = S + \alpha\Delta S = \sqrt{\mu}\,D^{-1}\big(V + \alpha D_S\big)D^{-1}.$$
So we have
$$V_+ = \Big[\big(V + \alpha D_X\big)^{\frac12}\big(V + \alpha D_S\big)\big(V + \alpha D_X\big)^{\frac12}\Big]^{\frac12},$$
and the proximity after one step is given by
$$\Psi(V_+) = \Psi\!\left(\Big[\big(V + \alpha D_X\big)^{\frac12}\big(V + \alpha D_S\big)\big(V + \alpha D_X\big)^{\frac12}\Big]^{\frac12}\right).$$
By (i) of Lemma 2 (exponential convexity), we have
$$\Psi(V_+) \le \tfrac12\big[\Psi(V + \alpha D_X) + \Psi(V + \alpha D_S)\big].$$
Define, for $\alpha > 0$, $f(\alpha) = \Psi(V_+) - \Psi(V)$. Therefore we have $f(\alpha) \le f_1(\alpha)$, where
$$f_1(\alpha) = \tfrac12\big[\Psi(V + \alpha D_X) + \Psi(V + \alpha D_S)\big] - \Psi(V). \qquad (24)$$
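The matrix-function recipe (23) is one line of code once an eigendecomposition is available; for symmetric $V$ the decomposition is orthogonal, so $Q_V^{-1} = Q_V^T$. An illustrative sketch, reusing `psi` and `dpsi` from the kernel sketch:

```python
# Illustrative sketch: apply the kernel as a matrix function via (23).
import numpy as np

def mat_fun(f, V):
    lam, Q = np.linalg.eigh(V)             # V = Q diag(lam) Q^T
    return Q @ np.diag(f(lam)) @ Q.T

V = np.diag([0.8, 1.0, 1.3])
print(np.round(mat_fun(psi, V), 6))        # psi(V); the zero matrix iff V = E
print(np.round(mat_fun(dpsi, V), 6))       # psi'(V) = grad Psi(V)
```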

Obviously,
$$f(0) = f_1(0) = 0.$$
Now we recall some further concepts related to matrix functions in matrix theory [5].

Definition 1. [5] A matrix $X(t)$ is said to be a matrix of functions if each entry of $X(t)$ is a function of $t$, that is, $X(t) = [X_{ij}(t)]$.

The concepts of continuity, differentiability and integrability extend naturally to matrix-valued functions of a scalar, by interpreting them componentwise. Thus we can write
$$\frac{d}{dt}X(t) = \left[\frac{d}{dt}X_{ij}(t)\right] = X'(t).$$
Suppose that the matrix-valued functions $G(t)$ and $H(t)$ are differentiable with respect to $t$. Then we have
$$\frac{d}{dt}\operatorname{Tr}\big(G(t)\big) = \operatorname{Tr}\!\left(\frac{d}{dt}G(t)\right) = \operatorname{Tr}\big(G'(t)\big), \qquad \frac{d}{dt}\big(G(t)H(t)\big) = G'(t)H(t) + G(t)H'(t).$$
For any function $\psi(t)$, let us denote by $\psi[t_1, t_2]$ the divided difference of $\psi(t)$:
$$\psi[t_1, t_2] = \frac{\psi(t_1) - \psi(t_2)}{t_1 - t_2}, \quad t_1 \ne t_2,\ t_1, t_2 \in \mathbb{R};$$
if $t_1 = t_2$, we simply write $\psi[t_1, t_2] = \psi'(t_1)$.
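A tiny illustrative helper for this divided-difference convention (with `psi`, `dpsi` from the kernel sketch; the tolerance is an implementation assumption):

```python
# Illustrative sketch: first divided difference, with psi[t, t] = psi'(t).
def div_diff(f, df, t1, t2, tol=1e-10):
    if abs(t1 - t2) < tol:
        return df(0.5 * (t1 + t2))          # diagonal case: psi[t, t] = psi'(t)
    return (f(t1) - f(t2)) / (t1 - t2)

print(div_diff(psi, dpsi, 0.9, 1.2), div_diff(psi, dpsi, 1.0, 1.0))
```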

Let $Q_\alpha$ be an orthogonal matrix such that
$$V + \alpha D_X = Q_\alpha^T\,\mathrm{diag}\big(\lambda_1(V + \alpha D_X), \lambda_2(V + \alpha D_X), \dots, \lambda_n(V + \alpha D_X)\big)\,Q_\alpha,$$
and let $D_i$ denote the diagonal matrix that has a one in its $(i,i)$ position and zeros elsewhere. It follows from [5, 12, 13] that
$$\frac{d}{d\alpha}\psi(V + \alpha D_X) = Q_\alpha^T\left[\sum_{j,k=1}^n \psi\big[\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big]\,D_j\,Q_\alpha D_XQ_\alpha^T\,D_k\right]Q_\alpha. \qquad (25)$$
By the choice of the $D_i$, it holds that $\operatorname{Tr}\big(D_jQ_\alpha D_XQ_\alpha^TD_k\big) = 0$ for $j \ne k$. Thus it follows that
$$\frac{d}{d\alpha}\operatorname{Tr}\big(\psi(V + \alpha D_X)\big) = \sum_{i=1}^n \psi'\big(\lambda_i(V + \alpha D_X)\big)\big(Q_\alpha D_XQ_\alpha^T\big)_{ii} = \operatorname{Tr}\big(\psi'(V + \alpha D_X)\,D_X\big),$$
and, analogously, with $Q_\alpha$ now diagonalizing $V + \alpha D_S$,
$$\frac{d}{d\alpha}\psi(V + \alpha D_S) = Q_\alpha^T\left[\sum_{j,k=1}^n \psi\big[\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big]\,D_j\,Q_\alpha D_SQ_\alpha^T\,D_k\right]Q_\alpha, \qquad (26)$$
so that
$$\frac{d}{d\alpha}\operatorname{Tr}\big(\psi(V + \alpha D_S)\big) = \operatorname{Tr}\big(\psi'(V + \alpha D_S)\,D_S\big).$$
Now we can write
$$f_1'(\alpha) = \tfrac12\operatorname{Tr}\Big(\psi'(V + \alpha D_X)\,D_X + \psi'(V + \alpha D_S)\,D_S\Big).$$
This gives, using (12) and (15),
$$f_1'(0) = \tfrac12\operatorname{Tr}\big(\psi'(V)(D_X + D_S)\big) = -\tfrac12\operatorname{Tr}\big((D_X + D_S)^2\big) = -2\delta^2.$$

Furthermore,
$$\frac{d^2}{d\alpha^2}\operatorname{Tr}\big(\psi(V + \alpha D_X)\big) = \operatorname{Tr}\!\left(\frac{d}{d\alpha}\psi'(V + \alpha D_X)\,D_X\right) = \sum_{j,k=1}^n \psi'\big[\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big]\Big[\big(Q_\alpha D_XQ_\alpha^T\big)_{jk}\Big]^2 \le \omega_1\,\|D_X\|_F^2,$$
where we denote
$$\omega_1 = \max\Big\{\psi'\big[\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big] : j, k = 1,\dots,n\Big\},$$

and, in the same way,
$$\frac{d^2}{d\alpha^2}\operatorname{Tr}\big(\psi(V + \alpha D_S)\big) = \sum_{j,k=1}^n \psi'\big[\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big]\Big[\big(Q_\alpha D_SQ_\alpha^T\big)_{jk}\Big]^2 \le \omega_2\,\|D_S\|_F^2,$$
where
$$\omega_2 = \max\Big\{\psi'\big[\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big] : j, k = 1,\dots,n\Big\}.$$
Therefore,
$$f_1''(\alpha) = \tfrac12\frac{d^2}{d\alpha^2}\operatorname{Tr}\big(\psi(V + \alpha D_X) + \psi(V + \alpha D_S)\big) \le \tfrac12\big(\omega_1\,\|D_X\|_F^2 + \omega_2\,\|D_S\|_F^2\big). \qquad (27)$$

Lemma 7. Let $\delta(V)$ be as defined in (15). Then we have
$$\delta(V) \ge \sqrt{(m+1)\,\Psi(V)}.$$

Proof. Using (16), we have
$$\Psi(V) = \operatorname{Tr}\big(\psi(V)\big) = \sum_{i=1}^n \psi\big(\lambda_i(V)\big) \le \sum_{i=1}^n \frac{1}{4(m+1)}\,\psi'\big(\lambda_i(V)\big)^2 = \frac{4\,\delta(V)^2}{4(m+1)} = \frac{\delta(V)^2}{m+1}.$$
This gives $\delta(V) \ge \sqrt{(m+1)\,\Psi(V)}$.

Throughout the paper, we assume that $\tau \ge 1$. Using Lemma 7 and the assumption $\Psi(V) \ge \tau \ge 1$, we have $\delta(V) \ge \sqrt{m+1}$. We have the following lemma.

Lemma 8. Let $f_1(\alpha)$ be as defined in (24), let $\delta(V)$ be as defined in (15), and let $\psi$ be the kernel function defined in (9). Then we have
$$f_1''(\alpha) \le 2\delta^2\,\psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$

Proof. We have $f_1''(\alpha) \le 2\max\{\omega_1, \omega_2\}\,\delta^2$, so it suffices to prove the inequality
$$\max\{\omega_1, \omega_2\} \le \psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$
We can choose $j, k \in \{1, 2, \dots, n\}$ such that
$$\omega_1 = \psi'\big[\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big] = \frac{\psi'\big(\lambda_j(V + \alpha D_X)\big) - \psi'\big(\lambda_k(V + \alpha D_X)\big)}{\lambda_j(V + \alpha D_X) - \lambda_k(V + \alpha D_X)}.$$

By the mean value theorem, there exists
$$\eta \in \Big[\min\big\{\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big\},\ \max\big\{\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big\}\Big]$$
such that
$$\psi''(\eta) = \psi'\big[\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big].$$
Since the $D_X$ are symmetric matrices, from the definition of $\delta$ and the Frobenius norm we have
$$\eta \ge \min\big\{\lambda_j(V + \alpha D_X), \lambda_k(V + \alpha D_X)\big\} \ge \lambda_{\min}(V) - 2\alpha\delta;$$
because $\psi''$ is monotonically decreasing, we obtain
$$\omega_1 \le \psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$
By the same method as for $\omega_1$, we can choose $j, k$ such that
$$\omega_2 = \psi'\big[\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big] = \frac{\psi'\big(\lambda_j(V + \alpha D_S)\big) - \psi'\big(\lambda_k(V + \alpha D_S)\big)}{\lambda_j(V + \alpha D_S) - \lambda_k(V + \alpha D_S)}.$$
By the mean value theorem, there exists
$$\eta \in \Big[\min\big\{\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big\},\ \max\big\{\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)\big\}\Big]$$
such that $\psi''(\eta) = \psi'[\lambda_j(V + \alpha D_S), \lambda_k(V + \alpha D_S)]$. Since the $D_S$ are symmetric matrices, from the definition of $\delta$ and the Frobenius norm we have $\eta \ge \lambda_{\min}(V) - 2\alpha\delta$, and because $\psi''$ is monotonically decreasing we obtain
$$\omega_2 \le \psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$
Therefore,
$$\max\{\omega_1, \omega_2\} \le \psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big),$$
which gives
$$f_1''(\alpha) \le 2\delta^2\,\psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$
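As a quick numerical spot-check of Lemma 7, the sketch below verifies $\delta(V) \ge \sqrt{(m+1)\Psi(V)}$ on random symmetric positive definite matrices, reusing `Psi`, `delta` and `m` from the earlier sketches.

```python
# Illustrative sketch: spot-check of Lemma 7 on random test matrices.
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    M = rng.standard_normal((4, 4))
    V = M @ M.T / 4 + 0.2 * np.eye(4)      # a random V > 0
    assert delta(V) >= np.sqrt((m + 1) * Psi(V)) - 1e-9
print("Lemma 7 holds on all sampled matrices")
```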

Since $f_1(0) = 0$ and $f_1'(0) = -2\delta(V)^2$, Lemma 8 yields
$$f(\alpha) \le f_1(\alpha) = f_1(0) + f_1'(0)\,\alpha + \int_0^\alpha\!\!\int_0^\xi f_1''(\zeta)\,d\zeta\,d\xi \le -2\delta^2\alpha + 2\delta^2\int_0^\alpha\!\!\int_0^\xi \psi''\big(\lambda_{\min}(V) - 2\zeta\delta\big)\,d\zeta\,d\xi =: f_2(\alpha).$$
Note that $f_2(0) = 0$. Furthermore, since
$$f_2'(\alpha) = -2\delta^2 + \delta\Big[\psi'\big(\lambda_{\min}(V)\big) - \psi'\big(\lambda_{\min}(V) - 2\alpha\delta\big)\Big],$$
we have $f_2'(0) = -2\delta^2$, which is the same value as $f_1'(0)$, and
$$f_2''(\alpha) = 2\delta^2\,\psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big),$$
which is increasing in $\alpha$ on $\big[0, \frac{\lambda_{\min}(V)}{2\delta}\big)$. So we can rewrite $f_2(\alpha)$ as
$$f_2(\alpha) = f_2(0) + f_2'(0)\,\alpha + 2\delta^2\int_0^\alpha\!\!\int_0^\xi \psi''\big(\lambda_{\min}(V) - 2\zeta\delta\big)\,d\zeta\,d\xi.$$
Now, using $f_1'(0) = f_2'(0)$ and $f_1''(\alpha) \le f_2''(\alpha)$, we can easily check that
$$f_1'(\alpha) = f_1'(0) + \int_0^\alpha f_1''(\xi)\,d\xi \le f_2'(\alpha).$$
This gives $f_1'(\alpha) \le 0$ whenever $f_2'(\alpha) \le 0$.

For each $\mu > 0$, we compute a feasible iterate such that the proximity measure decreases. We want to compute a step size $\alpha$ for which $f_2'(\alpha) \le 0$ holds, with $\alpha$ as large as possible. Since $f_2''(\alpha) > 0$, that is, $f_2'(\alpha)$ is monotonically increasing in $\alpha$, the largest possible value of $\alpha$ satisfying $f_2'(\alpha) \le 0$ occurs when $f_2'(\alpha) = 0$, that is, when
$$\psi'\big(\lambda_{\min}(V)\big) - \psi'\big(\lambda_{\min}(V) - 2\alpha\delta\big) = 2\delta. \qquad (28)$$
Since $\psi''(t)$ is monotonically decreasing, the derivative of the left-hand side of (28) with respect to $\lambda_{\min}(V)$ is

$$\psi''\big(\lambda_{\min}(V)\big) - \psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big) \le 0.$$
So the left-hand side of (28) is decreasing in $\lambda_{\min}(V)$. This implies that if $\lambda_{\min}(V)$ gets smaller, then the largest $\alpha$ satisfying (28) gets smaller, with $\delta$ fixed. Note that
$$\delta = \frac12\sqrt{\sum_{i=1}^n \psi'\big(\lambda_i(V)\big)^2} \ge \frac12\big|\psi'\big(\lambda_{\min}(V)\big)\big| \ge -\frac12\psi'\big(\lambda_{\min}(V)\big),$$
with equality if and only if $\lambda_{\min}(V)$ is the only coordinate of $\big(\lambda_1(V), \lambda_2(V), \dots, \lambda_n(V)\big)$ different from $1$ and $\lambda_{\min}(V) < 1$, that is, $\psi'(\lambda_{\min}(V)) < 0$. Hence the worst situation for the largest step size $\alpha$ occurs when $\lambda_{\min}(V)$ satisfies
$$-\frac12\psi'\big(\lambda_{\min}(V)\big) = \delta. \qquad (29)$$
In that case the largest $\alpha$ satisfying (28) is minimal. For our purpose we need to deal with the worst case, and so we assume that (29) holds. By the definition of $\rho$, this implies
$$\lambda_{\min}(V) = \rho(\delta). \qquad (30)$$
Using (28) and (29), we immediately obtain
$$-\frac12\psi'\big(\lambda_{\min}(V) - 2\alpha\delta\big) = 2\delta,$$
so that $\lambda_{\min}(V) - 2\alpha\delta = \rho(2\delta)$. By the definition of $\rho$ and using (30), the largest step size in the worst case is given by
$$\bar\alpha = \frac{\rho(\delta) - \rho(2\delta)}{2\delta}. \qquad (31)$$

Lemma 9. Let $\rho$ and $\bar\alpha$ be as defined above and in (31). Then we have
$$\bar\alpha \ge \frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}}.$$

Proof. Using Lemma 4.4 in [1], the definition of $\psi''(t)$ in (10), and (19), we have
$$\bar\alpha \ge \frac{1}{\psi''\big(\rho(2\delta)\big)} = \frac{1}{2(m+1) + m(m+1)\,\rho(2\delta)^{-(m+2)}} \ge \frac{1}{2(m+1) + m(m+1)\left(\frac{4\delta + m}{m}\right)^{\frac{m+2}{m+1}}} \ge \frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}},$$
which completes the proof.

For use as the default step size in the algorithm, we define $\tilde\alpha$ as
$$\tilde\alpha = \frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}}, \qquad (32)$$
so that $\tilde\alpha \le \bar\alpha$ by Lemma 9.

4.2 Decrease of the proximity function during an inner iteration

Now we show that our proximity function $\Psi$ decreases with the default step size $\tilde\alpha$. This can be established by using the following result.

Lemma 10. [12] Let $h(t)$ be a twice differentiable convex function with $h(0) = 0$ and $h'(0) < 0$, and let $h(t)$ attain its global minimum at $t^* > 0$. If $h''(t)$ is increasing for $t \in [0, t^*]$, then
$$h(t) \le \frac{t\,h'(0)}{2}, \quad 0 \le t \le t^*.$$
Let the univariate function $h$ be such that
$$h(0) = f_2(0) = 0, \qquad h'(0) = f_2'(0) = -2\delta^2, \qquad h''(\alpha) = 2\delta^2\,\psi''\big(\lambda_{\min}(V) - 2\alpha\delta\big).$$
Since $f_2(\alpha)$ satisfies the conditions of the above lemma, we have
$$f(\alpha) \le f_1(\alpha) \le f_2(\alpha) \le \frac{f_2'(0)}{2}\,\alpha = -\delta^2\alpha, \quad \text{for all } 0 \le \alpha \le \bar\alpha.$$
By this lemma we obtain an upper bound for the decrease of the proximity function in an inner iteration.
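The two step sizes can be compared numerically; the sketch below evaluates the worst-case step size (31) using the numerical inverse `rho` from the earlier sketch, against the default step size (32). Lemma 9 guarantees that the first dominates the second, and the final column reports the guaranteed decrease $f(\tilde\alpha) \le -\delta^2\tilde\alpha$.

```python
# Illustrative sketch: worst-case step size (31) vs. default step size (32).
import numpy as np

def alpha_bar(d):
    return (rho(d) - rho(2 * d)) / (2 * d)                          # (31)

def alpha_tilde(d):
    return 1.0 / (3 * (m + 1) * (m + 2) * d ** ((m + 2) / (m + 1)))  # (32)

for d in [np.sqrt(m + 1), 5.0, 20.0]:      # recall delta >= sqrt(m + 1)
    ab, at = alpha_bar(d), alpha_tilde(d)
    assert ab >= at                         # Lemma 9
    print(d, ab, at, "decrease <=", -d**2 * at)
```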

Theorem 1. Let $\tilde\alpha$ be the step size defined in (32), and let $\Psi(V) \ge \tau \ge 1$. Then we have
$$f(\tilde\alpha) \le -\frac{(m+1)^{-\frac{m+2}{2m+2}}}{3(m+2)}\,\Psi(V)^{\frac{m}{2m+2}}.$$

Proof. For all $\alpha \le \bar\alpha$, we have $f(\alpha) \le -\alpha\delta^2$. Hence, using (32) and Lemma 7,
$$f(\tilde\alpha) \le -\tilde\alpha\,\delta^2 = -\frac{\delta^2}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}} = -\frac{\delta^{\frac{m}{m+1}}}{3(m+1)(m+2)} \le -\frac{\big((m+1)\Psi(V)\big)^{\frac{m}{2m+2}}}{3(m+1)(m+2)} = -\frac{(m+1)^{-\frac{m+2}{2m+2}}}{3(m+2)}\,\Psi(V)^{\frac{m}{2m+2}}.$$

4.3 Iteration bound

We need to count how many inner iterations are required to return to the situation $\Psi(V) \le \tau$ after a $\mu$-update. We denote the value of $\Psi(V)$ after a $\mu$-update by $\Psi_0$; the subsequent values in the same outer iteration are denoted by $\Psi_k$, $k = 1, 2, \dots$ If $K$ denotes the total number of inner iterations in an outer iteration, then, according to (22), we have
$$\Psi_0 \le L(n,\theta,\tau), \qquad \Psi_{K-1} > \tau, \qquad 0 \le \Psi_K \le \tau,$$
and, by Theorem 1,
$$\Psi_{k+1} \le \Psi_k - \frac{(m+1)^{-\frac{m+2}{2m+2}}}{3(m+2)}\,\Psi_k^{\frac{m}{2m+2}}, \quad k = 0, 1, \dots, K-1.$$
At this stage we invoke the following lemma from [12].

Lemma 11. [12] Let $t_0, t_1, \dots, t_K$ be a sequence of positive numbers such that
$$t_{k+1} \le t_k - \beta t_k^{1-\nu}, \quad k = 0, 1, \dots, K-1,$$
where $\beta > 0$ and $0 < \nu \le 1$. Then
$$K \le \frac{t_0^\nu}{\beta\nu}.$$
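The recursion of Theorem 1 and the bound of Lemma 11 can be compared directly; the following sketch simulates the worst-case inner decrease from an assumed starting value $\Psi_0 = 100$ (illustrative only) and prints the observed count next to $\Psi_0^\nu/(\beta\nu)$, reusing `m` from the kernel sketch.

```python
# Illustrative sketch: simulate the worst-case inner recursion vs. Lemma 11.
beta = (m + 1) ** (-(m + 2) / (2 * m + 2)) / (3 * (m + 2))
nu = (m + 2) / (2 * m + 2)

tau, Psi_k = 1.0, 100.0                 # Psi_0 = 100 after a mu-update (assumed)
K = 0
while Psi_k > tau:
    Psi_k -= beta * Psi_k ** (1 - nu)   # worst-case decrease per inner step
    K += 1
print(K, 100.0**nu / (beta * nu))       # observed count vs. Lemma 11 bound
```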

Letting $t_k = \Psi_k$,
$$\beta = \frac{(m+1)^{-\frac{m+2}{2m+2}}}{3(m+2)} \qquad \text{and} \qquad \nu = \frac{m+2}{2m+2},$$
we obtain the following lemma.

Lemma 12. Let $K$ be the total number of inner iterations in an outer iteration. Then we have
$$K \le 6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}.$$

Proof. Using Lemma 11, we have
$$K \le \frac{\Psi_0^\nu}{\beta\nu} = 6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}.$$

Now we estimate the total number of iterations of our algorithm.

Theorem 2. If $\tau \ge 1$, the total number of iterations is not more than
$$6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}\,\frac{1}{\theta}\log\frac{n\mu^0}{\varepsilon}.$$

Proof. In the algorithm, we stop as soon as $n\mu \le \varepsilon$, where $\mu_k = (1-\theta)^k\mu^0$ and $\mu^0 = \frac{\operatorname{Tr}(X^0S^0)}{n}$. By a simple computation, the number of outer iterations is bounded above by
$$\frac{1}{\theta}\log\frac{n\mu^0}{\varepsilon}.$$
Multiplying the number of outer iterations by the number of inner iterations, we get an upper bound for the total number of iterations, namely,
$$6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}\,\frac{1}{\theta}\log\frac{n\mu^0}{\varepsilon}.$$

5 Conclusion

We have proposed a new barrier function and a primal-dual interior-point algorithm for SDO problems, and we have analyzed the iteration complexity of the algorithm based on the kernel function. We obtain the bounds $O\big(m^{\frac{3m+4}{2m+2}}\,n^{\frac{m+2}{2m+2}}\log\frac{\operatorname{Tr}(X^0S^0)}{\varepsilon}\big)$ for large-update methods and $O\big(m^{\frac{3m+4}{2m+2}}\,\sqrt{n}\,\log\frac{\operatorname{Tr}(X^0S^0)}{\varepsilon}\big)$ for small-update methods, which match the best known iteration bounds for such methods. Future research might focus on the extension to symmetric cone optimization. Finally, numerical testing would be an interesting direction for investigating the behavior of the algorithm in comparison with other existing approaches.

Acknowledgements

The authors thank the referees for their careful reading and their precious comments. Their help is much appreciated.

References

[1] Y.Q. Bai, M. El Ghami and C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM J. Optim. 15 (2004) 101-128.

[2] Y.Q. Bai, M. El Ghami and C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM J. Optim. 13 (2002) 766-782.

[3] R. Bellman, Introduction to Matrix Analysis, Classics in Applied Mathematics, SIAM, Philadelphia, 1995.

[4] E. de Klerk, Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.

[5] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, New York, 1991.

[6] M. Kojima, N. Megiddo and S. Mizuno, A primal-dual infeasible-interior-point algorithm for linear programming, Math. Program. 61 (1993) 263-280.

[7] M. Kojima, M. Shida and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim. 7 (1997) 86-125.

[8] I.J. Lustig, Feasibility issues in a primal-dual interior-point method for linear programming, Math. Program. 49 (1990-91) 145-162.

[9] H. Mansouri and C. Roos, A new full-Newton step O(n) infeasible interior-point algorithm for semidefinite optimization, Numer. Algorithms 52 (2009) 225-255.

[10] H. Mansouri and C. Roos, Simplified O(nL) infeasible interior-point algorithm for linear optimization using full Newton steps, Optim. Methods Softw. 22 (2007) 519-530.

[11] Y.E. Nesterov and A.S. Nemirovskii, Interior Point Polynomial Algorithms in Convex Programming, SIAM Studies in Applied Mathematics, SIAM, Philadelphia, 1994.

[12] J. Peng, C. Roos and T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Math. Program. 93 (2002) 129-171.

[13] J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, Princeton, NJ, 2002.

[14] C. Roos, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, SIAM J. Optim. 16 (2006) 1110-1136.

[15] J. Sturm, Theory and algorithms of semidefinite programming, in: H. Frenk, C. Roos, T. Terlaky, S. Zhang (Eds.), High Performance Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.

[16] K. Tanabe, Centered Newton method for linear programming: interior and exterior point method (in Japanese), in: K. Tone (Ed.), New Methods for Linear Programming 3, Institute of Statistical Mathematics, Tokyo, Japan, 1990.

[17] L. Tuncel, Potential reduction and primal-dual methods, in: H. Wolkowicz, R. Saigal, L. Vandenberghe (Eds.), Handbook of Semidefinite Programming: Theory, Algorithms and Applications, Kluwer Academic Publishers, Boston, MA, 2000.

[18] Y. Zhang, On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem, SIAM J. Optim. 4 (1994) 208-227.