A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

Size: px
Start display at page:

Download "A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization"

Transcription

1 A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, PO Box 503, 600 GA Delft, The Netherlands yqbai@satffshueducn, guoq wang@hotmailcom, croos@ewitudelftnl Abstract We present a primal-dual interior-point algorithm for second-order conic optimization problems based on a specific class of kernel functions This class has been investigated earlier for the case of linear optimization problems In this paper we derive the complexity bounds O N log N) log N ɛ ) for large- and O N log N ɛ ) for small- update methods, respectively Here N denotes the number of second order cones in the problem formulation Keywords: second-order conic optimization, interior-point methods, primal-dual method, large- and small-update methods, polynomial complexity AMS Subject Classification: 90C, 90C3 Introduction Second-order conic optimization SOCO) problems are convex optimization problems because their objective is a linear function and their feasible set is the intersection of an affine space with the Cartesian product of a finite number of second-order also called Lorentz or ice cream) cones Any second order cone in R n has the form { } n K = x, x,, x n ) R n : x x i, x 0, ) where n is some natural number Thus a second-order conic optimization problem has a form i= P ) min { c T x : Ax = b, x K }, where K R n is the Cartesian product of several second-order cones, ie, K = K K K N, ) The first author kindly acknowledges the support of Dutch Organization for Scientific Researches NWO grant )

2 with K j R n j for each j, and n = N j= n j We partition the vector x accordingly: x = x, x,, x N ) with x j K j Furthermore, A R m n, c R n and b R m The dual problem of P ) is given by D) max { b T y : A T y + s = c, s K }, Without loss of generality we assume that A has full rank: rank A = m As a consequence, if the pair y, s) is dual feasible then y is uniquely determined by s Therefore, we will feel free to say that s is dual feasible, without mentioning y It is well-known that SOCO problems include linear and convex quadratic programs as special cases On the other hand, SOCO problems are special cases of semidefinite optimization SDO) problems, and hence can be solved by using an algorithm for SDO problems Interior-point methods IPMs) that exploit the special structure of SOCO problems, however, have much better complexity than when using an IPM for SDO for solving SOCO problems In the last few years the SOCO problem has received considerable attention from researchers because of its wide range of applications see, eg, [8, ]) and because of the existence of efficient IPM algorithms see, eg, [, 3, 7, 6, 8, 9]) Many researchers have studied SOCO and achieved plentiful and beautiful results Several IPMs designed for LO see eg, [4]) have been successfully extended to SOCO Important work in this direction was done by Nesterov and Todd [0, ] who showed that the primal-dual algorithm maintains its theoretical efficiency when the nonnegativity constrains in LO are replaced by a convex cone, as long as the cone is homogeneous and self-dual Adler and Alizadeh [] studied a unified primal-dual approach for SDO and SOCO, and proposed a direction for SOCO analogous to the AHO-direction for SDO Later, Schmieta and Alizadeh [5] presented a way to transfer the Jordan algebra associated with the second-order cone into the so-called Clifford algebra in the cone of matrices and then carried out a unified analysis of the analysis for many IPMs in symmetric cones Faybusovich [6], using Jordan algebraic techniques, analyzed the Nesterov-Todd method for SOCO Monteiro [9] and Tsuchiya [9] applied Jordan algebra to the analysis of IPMs for SOCO with specialization to various search directions Other researchers have worked on IPMs for special cases of SOCO, such as convex quadratic programming, minimizing a sum of norms, etc For an overview of these results we refer to [] and its related references Recently, J Peng et al [] designed primal-dual interior-point algorithms for LO, SDO and SOCO based on so-called self-regular SR) proximity functions Moreover, they derived a O N log N) log N ɛ complexity bound for SOCO with large-update methods, the currently best bound for such methods Their work was extended in [4, 5] to other proximity functions based on univariate so-called kernel functions Motivated by [] and [4, 5], in this paper we present a primal-dual IPM for SOCO problems based on kernel functions of the form ψt) := tp+ p + + t q, t > 0, 3) q where p [0, ] and q > 0 are the parameters; p is called the growth degree and q the barrier degree of the kernel function

3 One may easily verify that ψ) = ψ ) = 0, lim t 0 ψt) = lim t ψt) = + and that ψt) is strictly convex It is worth pointing out that only when p = the function ψt) is SR Thus the functions considered here do not belong to the class of functions studied in [], except if p = The same class of kernel functions 3) has been studied for the LO case in [4], and for the SDO [0] in case p = 0 As discussed in [4], every kernel function ψt) gives rise to an IPM We will borrow several tools for the analysis of the algorithm in this paper from [4], and some of them from [] These analytic tools reveal that the iteration bound highly depends on the choice of ψt), especially on the inverse functions of ψt) and its derivatives Our aim will be to investigate the dependence of the iteration bound on the parameters p and q We will consider both large- and small-update methods The outline of the paper is as follows In Section, after briefly recalling some relevant properties of the second-order cone and its associated Euclidean Jordan algebra, we review some basic concepts for IPMs for solving the SOCO problem, such as central path, NT-search direction, etc We also present the primal-dual IPM for SOCO considered in this paper at the end of Section In Section 3, we study the properties of the kernel function ψt) and its related vector-valued barrier function and real-valued barrier function The step size and the resulting decrease of the barrier function are discussed in Section 4 In Section 5, we analyze the algorithm to derive the complexity bound for large- and small-update methods Finally, some concluding remarks follow in Section 6 Some notations used throughout the paper are as follows R n, R n + and R n ++ denote the set of all vectors with n components), the set of nonnegative vectors and the set of positive vectors, respectively As usual, denotes the Frobenius norm for matrices, and the -norm for vectors The Löwner partial ordering of R n defined by a second-order cone K is defined by x K s if x s K The interior of K is denoted as K + and we write x K s if x s K + Finally, E n denotes the n n identity matrix Preliminaries Algebraic properties of second-order cones In this section we briefly recall some algebraic properties of the second-order cone K as defined by ) and its associated Euclidean Jordan algebra Our main sources for this section are [, 7, 9,, 5, 8] If, for any two vectors x, s R n, the bilinear operator is defined by x s := x T s; x s + s x ; ; x s n + s x n ), then R n, ) is a commutative Jordan algebra Note that the map s x s is linear The matrix of this linear map is denoted as Lx), and one may easily verify that it is an arrow-shaped matrix: Lx) := x x T :n x :n x E n, 4) We use here and elsewhere Matlab notation: u; v) is the column vector obtained by concatenating the column vectors u and v 3

4 where x :n = x ; ; x n ) We have the following five properties x K if and only if Lx) is positive semidefinite; x s = Lx)s = Ls)x = s x; 3 x K if and only if x = y y for some y K 4 e x = x for all x K, where e = ; 0; ; 0); 5 if x K + then Lx) e = x, ie, x Lx) e = e The first property implies that x Lx) provides a natural embedding of K into the cone of positive semidefinite matrices, which makes SOCO essentially a specific case of SDO From the third property we see that K is just the set of squares in the Jordan algebra Due to the fourth property, the vector e is the unique) unit element of the Jordan algebra Let us point out that the cone K is not closed under the product and that this product is not associative The maximal and minimal eigenvalues of Lx) are denoted as λ max x) and λ min x), respectively These are given by It readily follows that λ max x) := x + x :n, λ min x) := x x :n 5) x K λ min x) 0, x K + λ min x) > 0 Lemma If x R n, then λ max x) x and λ min x) x Proof: By 5), we have λ maxx) + λ min x) = x, which implies the lemma The trace and the determinant of x R n are defined by Trx) := λ max x) + λ min x) = x, detx) := λ max x)λ min x) = x x :n 6) From the above definition, for any x, s K, it is obvious that It is also straightforward to verify that Lemma Let x, s R n Then Trx s) = x T s, Trx x) = x Trx s) t) = Trx s t)) 7) λ min x + s) λ min x) s Recall that the Jordan product itself is not associative But one may easily verify that the first coordinate of x s) t equals x s t nx + x s it i + s x it i + t x is i) = x T s + x T t + s T t x s t, i= which is invariant under permutations of x, s and t, and hence 7) follows 4

5 Proof: By using the well-known triangle inequality we obtain λ min x + s) = x + s x + s) :n x + s x :n s :n = λ min x) + λ min s) By Lemma, we have λ min s) s This implies the lemma Lemma 3 Lemma 63 in []) Let x, s R n Then λ max x)λ min s) + λ min x)λ max s) Trx s) λ max x)λ max s) + λ min x)λ min s), det x s) det x) det s) The last inequality holds with equality if and only if the vectors x :n and s :n are linearly dependent Corollary 4 Let x R n and s K Then λ min x) Trs) Trx s) λ max x) Trs) Proof: Since s K, we have λ min s) 0, and hence also λ max s) 0 Now Lemma 3 implies that λ min x) λ max s) + λ min s)) Trx s) λ max x) λ max s) + λ min s)) Since λ max s) + λ min s) = Trs), this implies the lemma We conclude this section by introducing the so-called spectral decomposition of a vector x R n This is given by x := λ max x) z + λ min x) z, 8) where z := ; ) x :n, z := ; x :n ) x :n x :n Here by convention x :n / x :n = 0 if x :n = 0 Note that the vectors z and z belong to K but not to K + ) The importance of this decomposition is that it enables us to extend the definition of any function ψ : R ++ R + to a function that maps the interior of K into K In particular this holds for our kernel function ψt), as given by 3) Definition 5 Let ψ : R R and x R n With z and z as defined in 8), we define ψx) := ψλ max x)) z + ψλ min x)) z In other words, ψλmax x)) + ψλ min x)) ; ψλ ) maxx)) ψλ min x)) x :n if x :n 0 ψx) = x :n ψλ max x)); 0; ; 0) if x :n = 0 Obviously, ψx) is a well-defined vector-valued function In the sequel we use ψ ) both to denote a vector function if the argument is a vector) and a univariate function if the argument is a scalar) The above definition respects the Jordan product in the sense described by the following lemma 5

6 Lemma 6 Lemma 65 in []) Let ψ and ψ be two functions ψt) = ψ t)ψ t) for t R Then one has ψx) = ψ x) ψ x), x K The above lemma has important consequences For example, for ψt) = t then ψx) = x, where x is the inverse of x in the Jordan algebra if it exists); if ψt) = t then ψx) = x x, etc Lemma 7 Let ψ : R ++ R + and x K + Then ψx) K Proof: Since x K +, its eigenvalues are positive Hence ψλ max x)) and ψλ min x)) are welldefined and nonnegative Since z and z belong to K, also ψλ max x)) z + ψλ min x)) z K If ψt) is twice differentiable, like our kernel function, then the derivatives ψ t) and ψ t) exist for t > 0, and we also have vector-valued functions ψ x) and ψ x), namely: ψ x) = ψ λ max x)) z + ψ λ min x)) z, ψ x) = ψ λ max x)) z + ψ λ min x)) z If x :n 0 then z and z satisfy z = z = / and z T z = 0 From this one easily deduces that ψx) = ψλmax x)) + ψλ min x)) 9) Note that 9) also holds if x :n = 0, because then λ max x) = λ min x) = x, whence ψx) = x and ψλ max x)) + ψλ min x)) = x From now we assume that ψt) is the kernel function 3) We associate to ψt) a real-valued barrier function Ψx) on K + as follows Ψx) := Trψx)) = ψx)) = ψλ max x)) + ψλ min x)), x K + 0) Obviously, since ψt) 0 for all t > 0, and λ max x) λ min x) > 0, we have Ψx) 0 for all x K + Moreover, since ψt) = 0 if and only if t =, we have Ψx) = 0 if and only if λ max x) = λ min x) =, which occurs if and only if x = e One may also verify that ψe) = 0 and that e is the only vector with this property Similarly, one easily verifies that ψ x) = 0 holds if and only if ψλ max x)) + ψλ min x)) = 0 and ψλ max x)) ψλ min x)) = 0 This is equivalent to ψλ max x)) = ψλ min x)) = 0, which holds if and only if λ max x) = λ min x) =, ie, if and only if x = e Summarizing, we have Ψx) = 0 ψx) = 0 ψ x) = 0 x = e ) We recall the following lemma without proof Lemma 8 cf Proposition 69 in []) Since ψt) is strictly convex for t > 0, Ψx) is strictly convex for x K + In the analysis of our algorithm, that will be presented below, we need to consider derivatives with respect to a real parameter t of the functions ψ xt)) and Ψxt)), where xt) = 6

7 x t); x t); ; x n t)) The usual concepts of continuity, differentiability and integrability can be naturally extended to vectors of functions, by interpreting them entry-wise Denoting we then have x t) = x t); x t); ; x nt)), d dt xt) st)) = x t) st) + xt) s t), ) d ψxt)) Trψxt))) = d = ψ xt)) T x t) = Trψ xt)) x t)) dt dt 3) Re-scaling the cone When defining the search direction in our algorithm, we need a re-scaling of the space in which the cone lives Given x 0, s 0 K +, we use an automorphism W x 0, s 0 ) of the cone K such that W x 0, s 0 ) x 0 = W x 0, s 0 ) s 0 The existence and uniqueness of such an automorphism is well-known To make the paper selfcontaining, a simple derivation of this result is given in the Appendix In the sequel we denote W x 0, s 0 ) simply as W Let us point out that the matrix W is symmetric and positive definite For any x, s R n we define x := W x, s := W s We call this Nesterov-Todd NT)-scaling of R n, after the inventors In the following lemma we recall several properties of the NT-scaling scheme Lemma 9 cf Proposition 633 in []) For any x, s R n one has i) Tr x s) = Trx s); ii) det x) = λ det x), det s) = λ det s), where λ = iii) x K 0, x K 0) x K 0, x K 0) dets 0 ) detx 0 ) ; Proof: The proof of i) is straightforward: Tr x s) = Tr W x W s ) = W x) T W s ) = x T W T W s = x T s = Trx s) For the proof of ii) we need the matrix Q = diag,,, ) R n n 4) Obviously, Q = E n where E n denotes the identity matrix of size n n Moreover, det x) = x T Qx, for any x By Lemma A and Lemma A4 we have W QW = λw Hence we may write det x) = x T Q x = W x) T Q W x) = x T W QW x = λ x T Qx = λ det x) In a similar way we can prove det s) = λ det s) Finally, since W is an automorphism of K, we have x K if and only x K; since det x) > 0 if and only det x) > 0, by ii), also x K + if and only x K + 7

8 3 The central path for SOCO To simplify the presentation in this and the first part of the next section, for the moment we assume that N = in ) So K is itself a second-order cone, which we denote as K We assume that both P ) and D) satisfy the interior-point condition IPC), ie, there exists x 0, y 0, s 0 ) such that Ax 0 = b, x 0 K +, A T y 0 + s 0 = c, s 0 K + Assuming the IPC, it is well-known that the optimality conditions for the pair of problems P ) and D) are Ax = b, x K, A T y + s = c, s K, 5) Lx) s = 0 The basic idea of primal-dual IPMs is to replace the third equation in 5) by the parameterized equation Lx) s = µe with µ > 0 Thus we consider the following system Ax = b, x K, A T y + s = c, s K, 6) Lx) s = µe For each µ > 0 the parameterized system 6) has a unique solution xµ), yµ), sµ)) and we call xµ) the µ-center of P ) and yµ), sµ)) the µ-center of D) Note that at the µ-center we have xµ) T sµ) = Trx s) = TrLx) s) = Trµe) = µ 7) The set of µ-center gives a homotopy path, which is called the central path If µ 0 then the limit of the central path exists and since the limit points satisfy the complementarity condition Lx) s = 0, it naturally yields optimal solution for both P ) and D) see, eg, []) 4 New search direction for SOCO IPMs follow the central path approximately and find an approximate solution of the underlying problems P ) and D) as µ goes to zero The search directions is usually derived from a certain Newton-type system We first want to point out that a straightforward approach to obtain such a system fails to define unique search directions For the moment we assume N = By linearizing system 6) we obtain the following linear system for the search directions A x = 0, A T y + s = 0, 8) Lx) s + Ls) x = µe Lx)s This system has a unique solution if and only if the matrix A Ls) Lx)A T is nonsingular as one may easily verify Unfortunately this might not be the case, even if A has full rank This is due to the fact that the matrix Z = Ls) Lx), which has positive eigenvalues, may not be symmetric As a consequence, the symmetrized matrix Z + Z T may have negative eigenvalues For a discussion of this phenomenon and an example we refer to [, page 43] The above discussion makes clear that we need some symmetrizing scheme There exist many such schemes See, eg, [, 3, 8] In this paper we follow the approach taken in [] and we 8

9 choose for the NT-scaling scheme, as introduced in Section This choice can be justified by recalling that until now large-update IMPs based on the NT search direction have the best known theoretical iteration bound Thus, letting W denote the unique) automorphism of K that satisfies W x = W s we define v := W x = W ) s 9) µ µ and Ā := µ AW, d x := W x µ, d s := W s µ 0) Lemma 0 One has µ det v ) = det x) det s), µ Trv ) = Trx s) Proof: Defining x = W x and s = W s, Lemma 9 yields det x) = det x) det s), which implies µ det v) = det x) det s) Since det v)) = det v v), by Lemma 3, the first equality follows The proof of the second equality goes in the same way The system 8) can be rewritten as follows: Ād x = 0, Ā T y + d s = 0, LW v)w d s + LW v)w d x = e LW v)w v Since this system is equivalent to 8), it may not have a unique solution To overcome this difficulty we replace the last equation by which is equivalent to Lv) d s + Lv) d x = e Lv) v, d s + d x = Lv) e v = v v Thus the system defining the scaled search directions becomes Ād x = 0, Ā T y + d s = 0, ) d s + d x = v v Since the matrix ĀT Ā is positive definite, this system has a unique solution We just outlined the approach to obtain the uniquely defined classical search direction for SOCO The approach in this paper differs only in one detail: we replace the right hand side in the last equation by ψ v), where ψt) is the kernel function given by 3) Thus we will use the following system to define our search direction: Ād x = 0, Ā T y + d s = 0, ) d s + d x = ψ v) 9

10 Since ) has the same matrix of coefficients as ), also ) has a unique solution 3 from ) that Recall Ψv) = 0 ψv) = 0 ψ v) = 0 v = e Since d x and d s are orthogonal, we will have d x = d s = 0 in ) if and only if ψ v) = 0, ie, if and only if v = e The latter implies x s = µe, which means that x = xµ) and s = sµ) This is the content of the next lemma Lemma One has ψ v) = 0 if and only if x s = µe Proof: We just established that ψ v) = 0 if and only if v = e According to 9), v = e holds if and only if x = µ W e and s = µ W e By Lemma A4 the matrix W has the form W = λ a ā ā T E n + āāt +a where a = a ; ā) is a vector such that det a) = and λ > 0 Using that W = λ W Qa, with Q as defined in 4), so Qa == a ; ā) see Lemma A3), it follows that v = e holds if and only if µ x = a, s = µ λ a λ ā ā If x and s have this form then x s = µ µ λ λ a āt ā a ā + a ā) = µ det a) 0 = µ e, and conversely, if x s = µe then there must exist a λ > 0 and a vector a such that x and s are of the above form This proves the lemma We conclude from Lemma that if x, y, s) xµ), yµ), sµ)) then ψ v) 0 and hence x, y, s) is nonzero We proceed by adapting the above definitions and those in the previous sections to the case where N >, when the cone underlying the given problems P ) and D) is the cartesian product of N cones K j, as given in ) First we partition any vector x R n according to the dimensions of the successive cones K j, so x = x ; ; x N), x j R n j, and we define the algebra R n, ) as a direct product of Jordan algebras: x s := x s ; x s ; ; x N s N) 3 It may be worth mentioning that if we use the kernel function of the classical logarithmic barrier function, ie, ψt) = t ) log t, then ψ t) = t t, whence ψ v) = v v, and hence system ) then coincides with the classical system ) 0

11 Obviously, if e j K j is the unit element in the Jordan algebra for the j-th cone, then the vector is the unit element in R n, ) e = e ; e ; ; e N ) The NT-scaling scheme in this general case is now obtained as follows Let x, s K +, and for each j, let W j denote the unique) automorphism of K j that satisfies W j x j = W j) s j and Let us denote λ j = det s j ) det x j ) W := diag W,, W N ) Obviously, W = W T and W is positive definite The matrix W can be used to re-scale x and s to the same vector v, as defined in 9) Note that we then may write v = ) v,, v N), v j := W j x j = W j s j, µ µ Modifying the definitions of ψx) and Ψx) as follows, ψv) = ψv ); ψv ); ; ψv N )), Ψv) = N Ψv j ), 3) and defining the matrix Ā as in 0) the scaled search directions are then defined by the system ), with ψ v) = ψ v ); ψ v ); ; ψ v N )) By transforming back to the x- and s-space, respectively, using 0), we obtain search directions x, y and s in the original spaces, with x = µ W d x, s = µ W d s By taking a step along the search direction, with a step size α defined by some line search rule, that will be specified later on, we construct a new triple x, y, s) according to x + = x + α x, j= y + = y + α y, 4) s + = s + α s 5 Primal-dual interior-point algorithm for SOCO Before presenting our algorithm, we define a barrier function in terms of the original variables x and s, and the barrier parameter µ, according to Φx, s, µ) := Ψv) The algorithm now is as presented in Figure It is clear from this description that closeness

12 Primal-Dual Algorithm for SOCO Input: A threshold parameter τ > ; an accuracy parameter ε > 0; a fixed barrier update parameter θ 0, ); a strictly feasible pair x 0, s 0 ) and µ 0 > 0 such that Ψx 0, s 0, µ 0 ) τ begin x := x 0 ; s := s 0 ; µ := µ 0 ; while Nµ ε do begin µ := θ)µ; while Φx, s, µ) > τ do begin x := x + α x; s := s + α s; y := y + α y; end end end Figure : Algorithm of x, y, s) to xµ), yµ), sµ)) is measured by the value of Ψv), with τ as a threshold value: if Ψv) τ then we start a new outer iteration by performing a µ-update, otherwise we enter an inner iteration by computing the search directions at the current iterates with respect to the current value of µ and apply 4) to get new iterates The parameters τ, θ and the step size α should be chosen in such a way that the algorithm is optimized in the sense that the number of iterations required by the algorithm is as small as possible The choice of the so-called barrier update parameter θ plays an important role both in theory and practice of IPMs Usually, if θ is a constant independent of the dimension n of the problem, for instance θ =, then we call the algorithm a large-update or long-step) method If θ depends on the dimension of the problem, such as θ = n, then the algorithm is named a small-update or short-step) method The choice of the step size α 0 α ) is another crucial issue in the analysis of the algorithm It has to be taken such that the closeness of the iterates to the current µ-center improves by a sufficient amount In the theoretical analysis the step size α is usually given a value that depends on the closeness of the current iterates to the µ-center Note that at the µ-center the duality gap equals Nµ, according to 7) The algorithm stops if the duality gap at the µ-center is less than ε For the analysis of this algorithm we need to derive some properties of our scaled) barrier function Ψv) This is the subject of the next section

13 3 Properties of the barrier function Ψv) 3 Properties of the kernel function ψt) First we derive some properties of ψt) as given by 3) The first three derivatives of ψt) are ψ t) = t p, 5) tq+ ψ t) = pt p + q + > 0, 6) tq+ ψ t) = pp )t p q + )q + ) t q+3 < 0 7) As mentioned before ψt) is strictly convex and ψ t) is monotonically decreasing for t 0, ) The following lemmas list some properties of ψt) that are needed for analysis of the algorithm When a proof is known in the literature we simply give a reference and omit the proof Lemma 3 If t > 0, t > 0, then one has ψ t t ) ψt ) + ψt )) Proof: By Lemma in [], we know that the property in the lemma holds if and only if tψ t) + ψ t) 0, whenever t > 0 Using 5), one has This implies the lemma tψ t) + ψ t) = pt p + q + t q+ + tp t q+ = p + )tp + q > 0 tq+ Lemma 3 If t, then ψt) p + q + t ) Proof: By using the Taylor expansion around t = and using ψ t) < 0 we obtain the inequality in the lemma, since ψ) = ψ ) = 0 and ψ ) = p + q + Lemma 33 If q p and t then t + tψt) Proof: Defining ft) := tψt) t ), one has f) = 0 and f t) = ψt) + tψ t) t ) Hence f ) = 0 and f t) = ψ t) + tψ t) = p + ) t p + q t q+ ptp + q t q+ p t p ) t q+ 0 Thus we obtain tψt) t ) 8) 3

14 Since t 0, this implies the lemma In the sequel ϱ : [0, ) [, ) will denote the inverse function of ψt) for t and ρ : [0, ) 0, ] the inverse function of ψ t) for t 0, ] Thus we have ϱs) = t ψt) = s, s 0, t, ρs) = t ψ t) = s, s 0, 0 < t 9) Lemma 34 For each q > 0 one has Moreover, if q p then p + )s + ) p+ ϱs) p + )s + p + q + ) p+, s 0, 30) q ϱs) + s Proof: Let ϱ s) = t, t Then s = ψt) Hence, using t, p + ) s + p + q + ) p+), s 0 3) q t p+ p + q s = tp+ p + + t q tp+ q p + The left inequality gives ϱ s) = t + + p) s + )) q p+ = p + )s + p + q + q ) p+ 3) and the right inequality proving 8) ϱ s) = t + + p)s) p+, Now we turn to the case that q p By Lemma 33 we have t + tψt) = + ts Substituting the upper bound for t given by 3) we obtain 3) The proof of the lemma is now completed Remark 35 Note that at s = 0 the upper bound 3) for ϱ s) is tighter than in 8) This will be important later on when we deal with the complexity of small-update methods Lemma 36 Theorem 49 in [4]) For any positive vector z R n one has n n )) ψ z i )) ψ ϱ ψz i ) i= Equality holds if and only if z i for at most coordinate i and z i i= 4

15 Lemma 37 Theorem 3 in [4]) For any positive vector z R n and any β we have )) n n ψβz i ) nψ βϱ ψz i ) n i= Equality holds if and only if there exists a scalar z such that z i = z for all i i= Lemma 38 One has ρs) s + ) q+, s > 0 Proof: write which implies proving the lemma Let ρs) = t, for some s 0 Then s = ψ t), with t 0, ] Using 5) we may s = t q+ tp, tq+ ρs) = t s + ) q+, 3 Properties of the barrier function Ψv) The following lemma is a consequence of Lemma 3 Lemma 39 Proposition 69 in []) If x, s, and v K + satisfy det v ) = det x) det s), Trv ) = Trx s), then Ψv) Ψx) + Ψs)) Below we use the following norm-based proximity δv) for v δv) = N ψ v j ) 33) The following lemma gives a lower bound for δv) in terms of Ψv) Lemma 30 If v K +, then δv) i= + p)ψv) + + p)ψv)) +p 5

16 Proof: Due to 33) and 9) we have δv) = N ψ λ max v j ))) + ψ λ min v j )))) ) i= and, due to 3) and 0), Ψv) = N ψλmax v j )) + ψλ min v j )) ) j= This makes clear that δv) and Ψv) depend only on the eigenvalues λ max v j ) and λ min v j ) of the vectors v j, for j = : N This observation makes it possible to apply Lemma 36, with z being the vector in R N consisting of all the eigenvalues This gives δv) ψ ϱ Ψv))) Since ψ t) is monotonically increasing for t, we may replace ϱ Ψv)) by a smaller value Thus, by using the left hand side inequality in 30 in Lemma 34 we find Finally, using 5) and q > 0, we obtain δv) p + )Ψv) + ) p p+ p + )Ψv) + ) p p+ δv) ψ p + )Ψv) + ) p+ ) p + )Ψv) + ) q+ p+ p + )Ψv) + ) p+ ) ) = p + )Ψv) p + )Ψv) + ) p+ This proves the lemma Lemma 3 If v K + and β, then Ψβv) Nψ )) Ψv) βϱ N Proof: Due to 0), and 5), we have Ψβv) = N ψβλmax v j )) + ψβλ min v j )) ) j= As in the previous lemma, the variables are essentially only the eigenvalues λ max v j ) and λ min v j ) of the vectors v j, for j = : N Applying Lemma 37, with z being the vector in R N consisting of all the eigenvalues, the inequality in the lemma immediately follows 6

17 Corollary 3 If Ψv) τ and v + = v θ, with 0 θ, then one has Ψv + ) Nψ ϱ τ ) ) N θ Proof: By taking β = / θ in Lemma 3, and using that ϱs) is monotonically increasing, this immediately follows As we will show in the next section, each subsequent inner iteration will give rise to a decrease of the value of Ψv) Hence during the course of the algorithm the largest values of Ψv) occur after the µ-updates Therefore, Corollary 3 provides a uniform upper bound for Ψv) during the execution of the algorithm As a consequence, by using the upper bounds for ϱs) in 8) and 3) we find that the numbers L and L introduced below are upper bounds for Ψv) during the course of the algorithm p + ) τ L := Nψ θ N + p+q+ q ) p+, q > 0, 34) + τ N p + ) ) ) τ N + p+q+ p+) q L := Nψ, q p 35) θ 4 Analysis of complexity bound for SOCO 4 Estimate of decrease of Ψv) during an inner iteration In each iteration we start with a primal-dual pair x, s) and then get the new iterates x + = x + α x and s + = s + α s from 4) The step size α is strictly feasible if and only if x + α x K + and s + α s K +, ie, if and only if x j + α x j K j + and sj + α s j K j + for each j Let W j denote the automorphism of K j that satisfies W j x j = W j s j and v j = W j x j / µ Then we have W j x j + α x j ) = µ v j + αd j ) x W j s j + α s j ) = µ v j + αd j s) Since W j is an automorphism of K j, we conclude that step size α is strictly feasible if and only if v j + αd j x K j and v j + αd j s K j, for each j Using Lemma 9, with x = ) µ v j + αd j x and s = ) µ v j + αd j s, we obtain µ det v j + αd j x) det v j + αd j ) s = det x j + α x j) det s j + α s j) µ Tr v j + αd j x) v j + αd j s)) = Tr x j + α x j) s j + α s j)) 7

18 On the other hand, if W j + is the automorphism that satisfies W j + x+j = W j + s +j and v j + = W j + x+j / µ, then, by Lemma 0 we have µ det v j + )) = det x j + α x j) det s j + α s j) µ Tr v j + )) = Tr x j + α x j) s j + α s j) ) We conclude from this that, for each j, det v j + )) = det v j + αd j x) det v j + αd j ) s Tr v j + )) = Tr v j + αd j x) v j + αd j s)) By Lemma 39 this implies that, for each j, Ψv j + ) Taking the sum over all j, j N, we get Ψv j + αd j x) + Ψv j + αd j s) ) Ψv + ) Ψv + αd x) + Ψv + αd s )) Hence, denoting the decrease in Ψv) during an inner iteration as fα) := Ψv + ) Ψv) we have fα) f α) := Ψv + αd x) + Ψv + αd s )) Ψv) One can easily verify that f0) = f 0) = 0 In the sequel we derive an upper bound for the function f α) in stead of fα), thereby taking advantage of the fact that f α) is convex fα) may be not convex!) We start by considering the first and second derivatives of f α) Using 3), ) and 7), one can easily verify that f α) = Trψ v + αd x ) d x + ψ v + αd s ) d s ) 36) and f α) = Trd x d x ) ψ v + αd x ) + d s d s ) ψ v + αd s )) 37) Using 36), the third equality in system ), and 33), we find f 0) = Trψ v) d x + d s )) = Trψ v) ψ v)) = δv) 38) To facilitate the analysis below, we define λ max v) = max { λ max v j ) : j N }, and λ min v) = min { λ min v j ) : j N } Furthermore, we denote δv) simply as δ 8

19 Lemma 4 One has f α) δ ψ λ min v) αδ) Proof: Since d x and d s are orthogonal, and d x + d s = ψ v), we have d x + d s = d x + d s = δ, whence d x δ and d s δ Hence, by Lemma, we have λ max v + αd x ) λ min v + αd x ) λ min v) αδ, λ max v + αd s ) λ min v + αd s ) λ min v) αδ By 6) and 7), ψ v) is positive and monotonically decreasing, respectively We thus obtain 0 < ψ λ max v + αd x )) ψ λ min v) αδ), 0 < ψ λ max v + αd s )) ψ λ min v) αδ) Applying Corollary 4, and using that Trz z) = z T z = z, for any z, we obtain Tr d x d x ) ψ v + αd x ) ) ψ λ min v) αδ) Trd x d x ) = ψ λ min v) αδ) d x, Tr d x d x ) ψ v + αd s ) ) ψ λ min v) αδ) Trd s d s ) = ψ λ min v) αδ) d s Substitution into 37) yields that f α) ψ λ min v) αδ) d x + d s ) = δ ψ λ min v) αδ) This prove the lemma Our aim is to find the step size α for which f α) is minimal The last lemma is exactly the same as Lemma 4 in [4] with v := λ min v)), where we dealt with the case of linear optimization As a consequence, as we use below the same arguments as in [4], we present several lemmas without repeating their proofs, and simply refer the reader to the corresponding result in [4] 4 Selection of step size α In this section we find a suitable default step size for the algorithm Recall that α should be chosen such that x + and s + are strictly feasible and such that Ψv + ) Ψv) decreases sufficiently Let us also recall that f α) is strictly convex, and f 0) < 0 due to 38), assuming that δ > 0) whereas f α) approaches infinity if the new iterates approach the boundary of the feasible region Hence, the new iterates will certainly be strictly feasible if f α) 0 Ideally we should find the unique α that minimizes f α) Our default step size will be an approximation for this ideal step size Lemma 4 Lemma 4 in [4]) f α) 0 certainly holds if the step size α satisfies ψ λ min v) αδ) + ψ λ min v)) δ 9

20 Lemma 43 Lemma 43 in [4]) The largest possible the step size α satisfying the inequality in Lemma 4 is given by ρδ) ρδ) ᾱ :=, δ with ρ as defined in 9) Lemma 44 Lemma 44 in [4]) With ρ and ᾱ as in Lemma 43, one has ᾱ ψ ρδ)) Since ψ t) is monotonically decreasing, Lemma 44 implies, when using also Lemma 38, ᾱ ψ ρδ)) ) ψ + 4δ) q+ Using 6) we obtain ᾱ p + 4δ) p q+ + q + ) + 4δ) q+ q+ p + q + ) + 4δ) q+ q+ We define our default step size as α by the last expression: α := p + q + ) + 4δ) q+ q+ 39) Lemma 45 Lemma 45 in [4]) If the step size α is such that α ᾱ, then The last lemma implies that fã) fα) αδ δ p + q + ) + 4δ) q+ q+ 40) According to the algorithm, at the start of each inner iteration we have Ψv) > τ So, assuming τ, and using Lemma 30, we conclude that δv) + p)ψv) + + p)ψv)) +p Ψv) + Ψv)) +p Ψv) 3Ψv)) +p 6 Ψv)) p +p Since the right hand side expression in 40) is monotonically decreasing in δ we obtain f α) 36p + q + ) Ψ p +p + 3 Ψ p p+ ) q+ q+ where we omitted the argument v in Ψv) Ψ p +p 36p + q + ) 5 3 Ψ p p+ ) q+ q+ Ψ pq q+)p+) 00p + q + ), 0

21 5 Iteration bounds 5 Complexity of large-update methods We need to count how many inner iterations are required to return to the situation where Ψv) τ We denote the value of Ψv) after the µ-update as Ψ 0 and the subsequent values in the same outer iteration as Ψ i, i =,,, T, where T denotes the total number of inner iterations in the outer iteration Recall that Ψ 0 L, with L as given by 34) Since ψt) t p+ / p + ) for t and 0 p, we obtain ) τ N + q+ ) p+ q τ + q+ q N Ψ 0 Nψ θ p + ) θ) p+ According to the expression for f α), we have where β = 00p+q+) and γ = Ψ i+ Ψ i βψ i ) γ, i = 0,,, T, 4) p+q+ q+)p+) Lemma 5 Proposition in []) If t 0, t,, t T is a sequence of positive numbers and with β > 0 and 0 < γ, then T tγ 0 βγ Hence, due to this lemma we obtain t i+ t i βt γ i, i = 0,,, T, ) p+q+ q+)p+) T 00p + )q + )Ψ0 00p + )q + ) τ + q+ q N p + ) θ) p+ p+q+ q+)p+) 4) The number of outer iterations is bounded above by θ log N ɛ, as one easily may verify By multiplying the number of outer iterations and the upper bound for the number T of inner iterations, we obtain an upper bound for the total number of iterations, namely, ) 00p + )q + ) τ + q+ q N θ θ) p+q+ q+) p + p+q+ q+)p+) log N ɛ 43) Large-update methods have θ = Θ), and τ = ON) Note that our bound behaves bad for small values of q, ie if q approaches 0 Assuming q, the iteration bound becomes ) O qn p+q+ q+)p+) log N ɛ It is interesting to consider the special case where τ = q q Then, when omitting expressions that are Θ) and the factor log N ɛ, the bound becomes p+q+ p + )q + ) 4N) q+)p+) 44)

22 Given q, the value for p that minimizes the above expression turns out to be p = q log4n) q + which in the cases of interest lies outside the interval [0, ] Thus, let us take p = Then the best value of q is given by q = log 4N, and this value of q reduces 44) to, log 4N 4N) +log 4N log 4N = 4Nlog 4N)) 4N) log4n) = exp) 4N log 4N)) Thus we may conclude that for a suitable choice of the parameters p and q the iteration bound for large-update methods is ) 4N N O log 4N)) log ɛ Note that this bound closely resembles the bound obtained in [4] for the kernel functions ψ 3 t) and ψ 7 t) in that paper 5 Complexity of small-update methods Small-update methods are characterized by θ = Θ/ N) and τ = O) The exponent of the expression between brackets in the iteration bound 43) is always larger than Hence, if θ = Θ/ N), the exponent of N in the bound will be at least For small-update methods we can do better, however For this we need the upper bound L for Ψv), as given by 35) Recall that this bound is valid only if q p Thus, using also Lemma 3, we find Ψ 0 p + q + ) N + τ p+)τ N N + p+q+ q ) p+) θ Since θ) θ, this can be simplified to Ψ 0 p + q + τ p + )τ N θ + θ N N + p + q + ) ) p+) q which, using p 0, can be further simplified to Ψ 0 p + q + θ N + τ θ) p + )τ N + p + q + ) ) q By using the first inequality in 4) we find that the total number of iterations per outer iteration is bounded above by 00p + )q + ) p + q + θ) θ N + τ p + )τ N + p + q + ) ) q p+q+ q+)p+)

23 Hence, using p+q+ q+)p+), we find that the total number of iterations is bounded above by 50p + )q + )p + q + ) θ θ) θ N + τ p + )τ N + p + q + ) ) log N q ɛ Taking, eg, p = 0, q =, τ = and θ = / N, this yields the iteration bound N ) which is O log N ε ) 00 + θ θ) N + log N ɛ 333 θ θ) log N ɛ, 6 Conclusions and remarks We presented a primal-dual interior-point algorithm for SOCO based on a specific kernel function and derived the complexity analysis for the corresponding algorithm, both with large- and smallupdate methods Besides, we developed some new analysis tools for SOCO Our analysis is a relatively simple and straightforward extension of analogous results for LO Some interesting topics for further research remain First, the search directions used in this paper are based on the NT-symmetrization scheme It may be possible to design similar algorithms using other symmetrization schemes and still obtain polynomial-time iteration bounds Finally, numerical tests should be performed to investigate the computational behavior of the algorithms in comparison with other existing approaches Acknowledgement The authors thank Manual Vieirra for some comments on an earlier version of the paper A Some technical lemmas Lemma A For any vector a R n one has En + aa T ) aa T = E n a T a Proof: Let A = E n +aa T The eigenvalues of A are +a T a multiplicity ) and multiplicity n ) Hence, if e,, e n is any orthonormal basis of the orthocomplement of a, then Therefore, A = + a T a ) aa T n a T a + e i i= A n = + a T a aat a T a + e i = i= + a T a aat a T a +A + a T a ) aa T ) aa T a T a = E n+ + a T a a T a 3

24 Multiplication of the numerator and denominator in the last term with + + a T a yields the lemma Obviously, we have for every x K that Qx K, and QK) = K This means that Q is an automorphism of K Without repeating the proof we recall the following lemma Lemma A Proposition 3 in [9]) The automorphism group of the cone K consists of all positive definite matrices W such that W QW = λq for some λ > 0 To any nonzero a = a ; ā) K we associate the matrix W a = a ā ā T E n + āāt +a = Q + e + a)e + a)t + a Note that W e = E n and W a e = a Moreover, the matrix W a is positive definite The latter can be made clear as follows By Schur s lemma W a is positive definite if and only if This is equivalent to E n + āāt + a āāt a 0 a + a )E n āā T 0, which certainly holds if a E n āā T 0 This holds if and only if for every x R n one has a x ā T x) By the Cauchy-Schwartz inequality the latter is true since a ā Lemma A3 Let the matrix W 0 be such that W QW = λq for some λ > 0 Then there exists a = a ; ā) K with deta) = such that W = λ W a Moreover, W = λ W Qa Proof: First assume λ = Since Q = E n, W QW = Q implies QW Q = W Without loss of generality we may assume that W has the form W = a ā ā T C, where C denotes a symmetric matrix Thus we obtain W = QW Q = a ā This holds if and only if ā T C By Lemma A, the last equation implies a ā T ā =, a ā Cā = 0, C āā T = E n 45) C = E n + āā T + + ā T ā = E n + āāt + a, 4

25 where the last equality follows from the first equation in 45); here we used that W 0, which implies a > 0 One may easily verify that this C also satisfies the second equation in 45), ie, Cā = a ā Note that deta) = a āt ā = This completes the proof if λ = If λ, then multiplication of W a by λ yields the desired expression Lemma A4 Let x, s K + Then there exists a unique automorphism W of K such that W x = W s Using the notation of Lemma A3, this automorphism is given by W = λ W a, where s + λ Qx det s) a = λ x T s +, λ = det x) det x) det s) Proof: By Lemma A3 every automorphism of K has the form W = λ W a, with λ > 0 and a = a ; ā) K with deta) = Since W x = W s holds if and only if W x = s, we need to find λ and a such that W x = s Some straightforward calculations yield W = λ a a ā T a ā E n + āā T So, denoting x = x ; x) and s = s ; s), we need to find λ and a such that This is equivalent to λ a a ā T x = s a ā E n + āā T x s a ) x + a ā T x = s λ a x ā + x + ā T x) ā = s λ Using a x + ā T x = a T x, this can be written as Thus we find a = a ā a T x) a = s λ + x a T x) ā = s λ x = a T x s λ + x s λ x = s + λqx λ a T x 46) Now observe that what we have shown so far is that if W = λ W a and W x = s then a satisfies 46) Hence, since W = λ W Qa and W s = x, we conclude that Qa can be expressed as follows Qa = x + λ Qs λ Qa)T s = λ x + Qs a T Qs 5

26 Multiplying both sides with Q, while using that Q = E n, we obtain Comparing this with the expression for a in 46) we conclude that a = s + λqx a T Qs 47) λ a T x = a T Qs 48) Taking the inner product of both sides in 47) with x and Qs, respectively, we get Since λ a T x = a T Qs, this implies which gives a T x = xt s + λx T Qx a T Qs, a T Qs = st Qs + λx T s a T Qs λ x T s + λx T Qx ) = s T Qs + λx T s, λ x T Qx = s T Qs Observing that x T Qx = det x) and s T Qs = det s)), and λ > 0, we obtain det s) λ = det x) 49) Now using that deta) = a T Qa = we derive from 47) that This implies or, equivalently, s + λqx) T Q s + λqx) 4 a T Qs) = λ x T Qx + s T Qs + λx T s = 4 a T Qs ), λ det x) + det s) + λx T s = 4 a T Qs ) Note that the expression at the left is positive, because x K + and s K + Because a K + and Qs K + also a T Qs > 0 Substituting the value of λ we obtain det s) det s) + det x) xt s = 4 a T Qs ), whence a T Qs = det s) det s) + det x) xt s = 4 Hence, with λ as defined in 49), the vector a is given by det s) det x) s + λ Qx a = λ x T s + det x) det s) x T s + det x) det s) This proves the lemma 6

27 Lemma A5 Let x, s K + and let W be unique automorphism W of K such that W x = W s Then W x = W s = α, where Proof: By Lemma A4, W x = λw a x = λ a ā W s = W Qa s = a λ λ with and hence Since W x = W s we get u x+u s)+α x s+s x) α+u s +u x u = λ, α = x T s + det x) det s) ā ā T E n + āāt +a ā T E n + āāt +a x x = u a T x x ā + x + s s = u s ā + s + αāt x) ā α+ux +u s a T Qs, αāt s) ā α+ux +u s a = αu s + λ Qx) = u s + u Qx ) 50) α + a = + α u ) α + ux + u s s + u x = α, W x = a u T x x ā + x + αāt x) ā α+ux +u s = ua T x) + u a T Qs) Due to 48) one has + u s ā + s + ux u s )ā + u x + u s + α āt u x+u s) ā α+ux +u s a T Qs αāt s) ā α+ux +u s Moreover, using 50), ua T x) + u a T Qs) = ua T x) + u λa T x) = u a T x ua T x = a T ux) = u s + u Qx ) T ux) = s T x + λ x T Qx ) = s T x + λ det x) ) α α α = s T x + ) det x) det s) = α α α = α Using 48) once more we obtain ā T u x + u s ) = uā T x + u ā T s = u a T x a x ) + u a s a T Qs ) = ua T x) u a T Qs) + u a x ) + u a s ) = ua T x) u λa T x) a ux u s ) = a u s ux ) 7

28 From 50) we derive that a = u s + u x )/α Thus we find that ā T u x + u s ) = u s ux ) u s + u x α = u s u x α Substitution of these results in the above expression for W gives, W x = α ux u s )ā + u x + u s + u s u x α+u s +u x ā The rest of the proof consists of simplifying the lower part of the above vector We may write ux u s )ā + u x + u s + u s u x α + u ā s + u x = ux u s + u s u x α + u s + u x = α ux u s ) α + u s + u x ā + u x + u s ) ā + u x + u s = ux u s u s α + u u x ) + u x + u s s + u x = + ux u ) s α + u u s + s + u x = α + ux ) u s + α + u ) s u x α + u s + u x = α u x + u s ) + x s + s x α + u s + u x ux u s α + u s + u x ) u x Substitution of the last expression gives W x = α αu x+u s)+x s+s x α+u s +u x = α u x+u s)+α x s+s x) α+u s +u x, which completes the proof References [] I Adler and F Alizadeh Primal-dual interior point algorithms for convex quadratically constrained and semidefinite optimization promblems Technical Report RRR--95, Rutger Center for Operations Research, Brunswick, NJ, 995 [] ED Andersen, J Gondzio, Cs Mészáros, and X Xu Implementation of interior point methods for large scale linear programming In T Terlaky, editor, Interior Point Methods of Mathematical Programming, pages 89 5 Kluwer Academic Publishers, Dordrecht, The Netherlands, 996 [3] ED Andersen, C Roos, and T Terlaky On implementating a primal-dual interior-point method for conic quadratic optimization Mathematical Programming Series B), 95:49 77, 003 8

29 [4] YQ Bai, M El Ghami, and C Roos A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization SIAM Journal on Optimization, 5):0 8 electronic), 004 [5] YQ Bai and C Roos A primal-dual interior-point method based on a new kernel function with linear growth rate, 00 To appear in Proceedings Perth Industrial Optimization meeting [6] L Faybusovich Linear system in jordan algebras and primal-dual interior-point algorithms Journal of Computational and Applitied Mathematics, 86):49 75, 997 [7] M Fukushima, ZQ Luo, and P Tseng A smoothing function for second-order-cone complementary problems SIAM Journal on Optimization, : , 00 [8] MS Lobo, L Vandenberghe, SE Boyd, and H Lebret Applications of the second-order cone programming Linear Algebra and Its Applications, 84:93 8, 998 [9] RDC Monteiro and T Tsuchiya Polynomial convergence of primal-dual algorithms for the secondorder program based the MZ-family of directions Mathematical Progamming, 88:6 93, 000 [0] YE Nesterov and MJ Todd Self-scaled barriers and interior-point methods for convex programming Mathematics of Operations Research, ): 4, 997 [] YE Nesterov and MJ Todd Primal-dual interior-point methods for self-scaled cones SIAM Journal on Optimization, 8):34 364, 998 [] J Peng, C Roos, and T Terlaky Self-Regularity A New Paradigm for Primal-Dual Interior-Point Algorithms Princeton University Press, 00 [3] S M Robinson Some continuity properties of polyhedral multifunctions Math Programming Stud, 4:06 4, 98 Mathematical Programming at Oberwolfach Proc Conf, Math Forschungsinstitut, Oberwolfach, 979) [4] C Roos, T Terlaky, and J-Ph Vial Theory and Algorithms for Linear Optimization An Interior- Point Approach John Wiley & Sons, Chichester, UK, 997 [5] SH Schmieta and F Alizadeh Associative and jordan algebras, and polynimal time interior-point algorithms for symmetric cones Mathematics of Operations Research, 63): , 00 [6] J F Sturm Implementation of interior point methods for mixed semidefinite and second order cone optimization problems Optimization Methods & Software, 76):05 54, 00 [7] JF Sturm Using SeDuMi 0, a MATLAB toolbox for optimization over symmetric cones Optimization Methods & Software, :65 653, 999 [8] T Tsuchiya A polynomial primal-dual path-following algorithm for second-order cone programming Research Memorandum No 649, The Institute of Statistical Mathematics, Tokyo, Japan, 997 [9] T Tsuchiya A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programing Optimization Mehtods & Software, -:4 8, 999 [0] GQ Wang, YQ Bai, and C Roos Primal-dual interior-point algorithms for semidefinite optimization based on a simple kernel function, 004 To appear in International Journal of Mathematical Algorithms [] H Wolkowicz, R Saigal, and L Vandenberghe, editors Handbook of semidefinite programming International Series in Operations Research & Management Science, 7 Kluwer Academic Publishers, Boston, MA, 000 Theory, algorithms, and applications 9

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function.   djeffal Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

Interior-point algorithm for linear optimization based on a new trigonometric kernel function

Interior-point algorithm for linear optimization based on a new trigonometric kernel function Accepted Manuscript Interior-point algorithm for linear optimization based on a new trigonometric kernel function Xin Li, Mingwang Zhang PII: S0-0- DOI: http://dx.doi.org/./j.orl.0.0.0 Reference: OPERES

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties. FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION G. GU, M. ZANGIABADI, AND C. ROOS Abstract. Some Jordan algebras were proved more than a decade ago to be an indispensable tool

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information

A new primal-dual path-following method for convex quadratic programming

A new primal-dual path-following method for convex quadratic programming Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions
