A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization


Appl. Math. Inf. Sci. 9, No. 2 (2015) 973

Applied Mathematics & Information Sciences
An International Journal

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization

Jingyong Tang 1,*, Li Dong 1, Liang Fang 2 and Jinchuan Zhou 3

1 College of Mathematics and Information Science, Xinyang Normal University, Xinyang 464000, P. R. China
2 College of Mathematics and System Science, Taishan University, Tai'an 271021, P. R. China
3 Department of Mathematics, Shandong University of Technology, Zibo 255049, P. R. China

Received: 19 Jun. 2014, Revised: 17 Sep. 2014, Accepted: 19 Sep. 2014
Published online: 1 Mar. 2015

Abstract: We present a weighted-path-following interior-point algorithm for solving second-order cone optimization. The algorithm starts from an initial point that is not necessarily on the central path, and it generates iterates that simultaneously get closer to optimality and closer to centrality. At each iteration we use only a full Nesterov-Todd step; no line searches are required. We derive the complexity bound of the algorithm with small-update method, namely O(√N log(N/ε)), where N denotes the number of second-order cones in the problem formulation and ε the desired accuracy. This bound is the currently best known iteration bound for second-order cone optimization.

Keywords: Second-order cone optimization, interior-point method, small-update method, polynomial complexity

1 Introduction

The second-order cone (SOC) in R^n, also called the Lorentz cone or the ice-cream cone, is defined as

    L := { (x_1, x_2, ..., x_n) ∈ R^n : x_1 ≥ ( Σ_{j=2}^n x_j^2 )^{1/2} },    (1)

where n is some natural number. Second-order cone optimization (SOCO) is a convex optimization problem in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order cones. In this paper we consider SOCO in the standard format

    (P)  min { c^T x : Ax = b, x ∈ K },

where A ∈ R^{m×n}, c ∈ R^n and b ∈ R^m, and K ⊆ R^n is the Cartesian product of second-order cones, i.e., K = L^1 × ... × L^N, with L^i ⊆ R^{n_i} for each i = 1, ..., N.
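To keep definition (1) and membership in the product cone K concrete, here is a minimal numerical sketch (NumPy assumed; the function names and block sizes are ours, purely illustrative):

```python
import numpy as np

def in_soc(x, tol=1e-12):
    # x ∈ L  ⇔  x_1 ≥ ||(x_2, ..., x_n)||, cf. definition (1)
    return x[0] >= np.linalg.norm(x[1:]) - tol

def in_K(x, dims):
    # K = L^1 × ... × L^N: split x by the block sizes n_1, ..., n_N and test every block
    blocks = np.split(x, np.cumsum(dims)[:-1])
    return all(in_soc(b) for b in blocks)

assert in_soc(np.array([5.0, 3.0, 4.0]))      # boundary point: 5 = sqrt(3^2 + 4^2)
assert not in_soc(np.array([1.0, 2.0, 0.0]))  # 1 < 2, outside the cone
assert in_K(np.array([5.0, 3.0, 4.0, 1.0, 0.0]), dims=[3, 2])
```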
The dual problem of(p) is given by (D) max { b T y : A T y+s= c, s K } Without loss of generality, throughout the paper, we assume that rows of the matrix A are linearly independent Hence, if the pair (y,s) is dual feasible then y is uniquely determined by s Therefore, we will feel free to say that s is dual feasible, without mentioning y The study of SOCO is vast important because it covers linear optimization, convex quadratic optimization, quadratically constraint convex quadratic optimization as well as other problems [1] In the last few years, the SOCO problem has received considerable attention from researchers for its wide range of applications in many fields, such as engineering, optimal control and design, machine learning, robust optimization and combinatorial optimization and so on We refer the interested readers to the survey paper [9] and the references therein Many researchers have worked on interior-point methods (IPMs) for solving SOCO (eg, [, 11, 13, 15]) otice that the IPMs in those papers required the initial point and iteration points to be on, or close to, the central path However, practical implementations often don t use perfectly centered starting points Therefore, it is worth analyzing the case when the starting point is not on the central path Recently, Jansen et al [7] presented a class of primal-dual target-following interior-point methods for linear optimization This class starts from an initial non-central point and generates iterates that Corresponding author jingyongtang@163com c 015 SP atural Sciences Publishing Cor

2 97 J Tang et al : A Weighted-Path-Following Interior-Point Algorithm simultaneously get closer to optimality and closer to centrality Darvay [, 5] investigated the weighted-path-following interior-point method, a particular case of target-following methods, for linear optimization These methods were also studied by Ding et al [3] for linear complementarity problems, by Jin et al [8] for convex quadratic optimization and by Roos et al [1] for linear optimization Motivated by their work, in this paper we propose a weighted-path-following interior-point algorithm for SOCO We adopt the basic analysis used in [,5] to the SOCO case The currently best known iteration bound for the ( algorithm with small-update method, namely, log ) O ε, is obtained To compare with the path-following (primal-dual) interior-point methods, our method has the following advantages: (i) it can start from any strictly feasible initial point, and works for optimality and centrality at the same time (ii) it uses only full esterov-todd step and no any line searches are required at each iteration While some pathfollowing (primal-dual) interior point methods (eg, [, 11, 13,15]) need do some line searches to find a suitable step size which keeps the iterations be feasible and be on, or close to, the central path The paper is organized as follows In Section, we recall some useful results on second-order cones and their associated Jordan algebra In Sections 3, we present the new search directions for SOCO In Section, we propose the weighted-path-following interior-point algorithm In Section 5, we analyze the algorithm and derive the currently best known iteration bound with small-update method The conclusions are given in Section 6 Some notations used throughout the paper are as follows R n,r n + and Rn ++ denote the set of vectors with n components, the set of nonnegative vectors and the set of positive vectors, respectively We use ; for adjoining vectors in a column For instance, for column vectors 
x,y and z we have (x; y; z) := (x T, y T, z T ) T As usual, I denotes the identity matrix with suitable dimension denotes the -norm of the vector x and the index set J is J = {1,,,} Finally, for any x,y R n, we write x K y (respectively, x K y) if x y K (respectively, x y intk, where intk denotes the interior of K) Algebraic properties of second-order cones In this section we briefly recall some algebraic properties of the SOC L as defined by (1) and its associated Euclidean Jordan algebra For any vectors x,s R n, their Jordan product associated with the SOC L is defined by x s :=(x T s; x 1 s :n + s 1 x :n ), where x :n := (x ;;x n ) and s :n := (s ;;s n ) (R n, ) is an Euclidean Jordan algebra with the vector e := (1; 0;; 0) as the identity element Obviously, e T (x s) = x T s In the sequel, we denote the vector (x ;;x n ) shortly as x :n So x = (x 1 ;x :n ) Given an element x R n, define the symmetric matrix L(x) :=[ x1 x T :n x :n x 1 I ] R n n, where I represents the (n 1) (n 1) identity matrix otice that x s=l(x)s for any x,s R n The so-called spectral decomposition of a vector x R n is given by x=λ min (x)u (1) + λ max (x)u (), () where λ min (x),λ max (x) and u (1),u () are the spectral values and the associated spectral vectors of x given by λ min (x) := x 1 x :n, λ max (x) := x 1 + x :n, ) 1 (1;( 1) i x :n u (i) x :n, x :n 0, = ),, 1 (1;( 1) i υ, x :n = 0, with υ R n 1 being any vector satisfying υ =1 Using (), for each x R n we define the following [1]: square root: x 1 = (λ min (x)) 1 u (1) +(λ max (x)) 1 u (), for any x L; inverse: x 1 =(λ min (x)) 1 u (1) +(λ max (x)) 1 u (), for any x intl; square: x =(λ min (x)) u (1) +(λ max (x)) u (), for any x R n Indeed, one has x = x x = ( x ;x 1 x :n ), and (x 1 ) = x If x 1 is defined, then x x 1 = e, and we call x be invertible Lemma 1 Let x,s R n, one has (i) λ min (x) + λ min (s) λ min (x + s) λ max (x + s) λ max (x)+λ max (s); (ii) λ min (x )=λ min (x), λ max (x )=λ max (x) ; (iii) If x is invertible, 
then λ min (x 1 ) = λ max (x) 1, and λ max (x 1 )=λ min (x) 1 The trace and the determinant of x R n are Tr(x) := λ min (x)+λ max (x)=x 1, det(x) := λ min (x)λ max (x)=x 1 x :n Lemma [] For all x,s,t L, one has Tr((x s) t)= Tr(x (s t)) The natural inner product is given by x,s := Tr(x s)=x T s, x,s R n Hence, the norm induced by this inner product, which is denoted as F, satisfies x F = x x = Tr(x x) c 015 SP atural Sciences Publishing Cor
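These definitions are easy to check numerically. The following is a small sketch for a single cone (NumPy assumed; the function names are ours, not the paper's):

```python
import numpy as np

def jordan_product(x, s):
    # x ∘ s := (x^T s; x_1 s_{2:n} + s_1 x_{2:n}) for one second-order cone block
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

def arrow(x):
    # The arrow-shaped matrix L(x), so that x ∘ s = L(x) s
    n = x.size
    M = x[0] * np.eye(n)
    M[0, 1:] = x[1:]
    M[1:, 0] = x[1:]
    return M

def spectral_values(x):
    # λ_min(x) = x_1 - ||x_{2:n}||,  λ_max(x) = x_1 + ||x_{2:n}||
    nrm = np.linalg.norm(x[1:])
    return x[0] - nrm, x[0] + nrm

x = np.array([3.0, 1.0, 2.0])
s = np.array([2.0, 0.5, -1.0])
e = np.array([1.0, 0.0, 0.0])

assert np.allclose(jordan_product(x, e), x)             # e is the identity element
assert np.allclose(arrow(x) @ s, jordan_product(x, s))  # x ∘ s = L(x) s
lmin, lmax = spectral_values(x)
assert np.isclose(lmin + lmax, 2 * x[0])                            # Tr(x) = 2 x_1
assert np.isclose(lmin * lmax, x[0]**2 - np.linalg.norm(x[1:])**2)  # det(x)
assert np.isclose(lmin**2 + lmax**2, 2 * (x @ x))                   # ||x||_F^2 = Tr(x ∘ x) = 2 x^T x
```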

It is obvious that

    ||x||_F^2 = λ_min(x)^2 + λ_max(x)^2 = 2 ||x||^2,    (3)

    |λ_min(x)| ≤ ||x||_F  and  |λ_max(x)| ≤ ||x||_F.    (4)

Using Lemma 2.2, one can easily verify that

    ||(x ∘ s) ∘ t||_F = ||x ∘ (s ∘ t)||_F,  x, s, t ∈ L.    (5)

Lemma 2.3 [16]. For any x, s ∈ R^n, one has
(i) ||x^2||_F ≤ ||x||_F^2; equality holds if and only if |x_1| = ||x_{2:n}||;
(ii) Tr((x ∘ s)^2) ≤ Tr(x^2 ∘ s^2);
(iii) ||x ∘ s||_F ≤ λ_max(x) ||s||_F ≤ ||x||_F ||s||_F.

Lemma 2.4. Let s ∈ R^n and x ∈ int L. If λ_min(x) > ||s||_F, then x - s ≻_L 0.

Proof. From (4) it follows that |λ_min(-s)| ≤ ||-s||_F = ||s||_F, hence λ_min(-s) ≥ -||s||_F. Thus, using Lemma 2.1 (i), we have

    λ_min(x - s) ≥ λ_min(x) + λ_min(-s) ≥ λ_min(x) - ||s||_F > 0,

which implies that x - s ≻_L 0. This proves the lemma.

Lemma 2.5. Let ρ ∈ int L. For any v ∈ int L, one has

    ||ρ - v||_F ≤ ||ρ ∘ ρ - v ∘ v||_F / (λ_min(ρ) + λ_min(v)).

Proof. From Lemma 2.1, Lemma 2.3 (iii) and (5), we have

    ||ρ - v||_F = ||((ρ + v)^{-1} ∘ (ρ + v)) ∘ (ρ - v)||_F
               = ||(ρ + v)^{-1} ∘ ((ρ + v) ∘ (ρ - v))||_F
               = ||(ρ + v)^{-1} ∘ (ρ ∘ ρ - v ∘ v)||_F
               ≤ λ_max((ρ + v)^{-1}) ||ρ ∘ ρ - v ∘ v||_F
               = ||ρ ∘ ρ - v ∘ v||_F / λ_min(ρ + v)
               ≤ ||ρ ∘ ρ - v ∘ v||_F / (λ_min(ρ) + λ_min(v)).

The proof is completed.

Now we proceed by adapting the above definitions to the general case K = L^1 × ... × L^N with N > 1. First we partition the vectors x, s ∈ R^n according to the dimensions of the successive cones L^i, so x := (x^1; ...; x^N) and s := (s^1; ...; s^N), and we define the algebra (R^n, ∘) as a direct product of Jordan algebras:

    x ∘ s := (x^1 ∘ s^1; ...; x^N ∘ s^N).

If e^i ∈ L^i is the unit element in the Jordan algebra for the i-th cone, then e := (e^1; ...; e^N) is the unit element in (R^n, ∘). Since L(x) := diag(L(x^1), ..., L(x^N)), it is easy to verify that [1]

    λ_min(x) = λ_min(L(x)) = min { λ_min(x^i) : i ∈ J },
    λ_max(x) = λ_max(L(x)) = max { λ_max(x^i) : i ∈ J }.

Furthermore,

    Tr(x) = Σ_{i=1}^N Tr(x^i) = Σ_{i=1}^N [ λ_min(x^i) + λ_max(x^i) ],
    ||x||_F^2 = Σ_{i=1}^N ||x^i||_F^2 = Σ_{i=1}^N [ λ_min(x^i)^2 + λ_max(x^i)^2 ],
    det(x) = Π_{i=1}^N det(x^i) = Π_{i=1}^N λ_min(x^i) λ_max(x^i).

3 The new search directions for SOCO

Throughout the paper we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e., there exists (x^0, y^0, s^0) such that

    A x^0 = b, x^0 ∈ int K,  A^T y^0 + s^0 = c, s^0 ∈ int K.

In fact, by using the self-dual embedding technique (e.g., [10]), we may (and will) assume that x^0 = s^0 = e.

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the following system:

    A x = b, x ∈ K,
    A^T y + s = c, s ∈ K,    (6)
    L(x) s = 0.

The basic idea of IPMs is to replace the third equation in system (6), the so-called complementarity condition for (P) and (D), by the parameterized equation L(x) s = µ e with µ > 0. Thus we consider the system

    A x = b, x ∈ K,
    A^T y + s = c, s ∈ K,    (7)
    L(x) s = µ e.

For each µ > 0 the parameterized system (7) has a unique solution (x(µ), y(µ), s(µ)). We call x(µ) the µ-center of (P) and (y(µ), s(µ)) the µ-center of (D). The set of µ-centers (with µ running through all positive real numbers) forms a homotopy path, which is called the central path of (P) and (D). If µ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition L(x) s = 0, it naturally yields an optimal solution for (P) and (D).

The weighted-path-following approach starts from the observation that system (7) can be generalized by replacing the vector µ e with an arbitrary vector κ ∈ int K. Thus we obtain the following system:

    A x = b, x ∈ K,
    A^T y + s = c, s ∈ K,    (8)
    L(x) s = κ.
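Before the scaling machinery is introduced, it may help to see the Newton linearization of the third equation of system (8) assembled and solved directly on a toy instance. This is a hedged sketch: the data are made up, there is a single cone, and the direction computed here is the plain (unscaled) Newton direction, not yet the NT-direction the algorithm uses:

```python
import numpy as np

def arrow(z):
    # L(z): z ∘ w = L(z) w for one second-order cone block
    n = z.size
    M = z[0] * np.eye(n)
    M[0, 1:] = z[1:]
    M[1:, 0] = z[1:]
    return M

# Illustrative data for one cone (n = 3, m = 1); x, s, kappa are interior points of L.
A = np.array([[1.0, 0.2, -0.1]])
x = np.array([2.0, 0.3, 0.4])
s = np.array([1.5, -0.2, 0.1])
kappa = np.array([1.0, 0.05, 0.0])

n, m = x.size, A.shape[0]
# Linearizing L(x)s = kappa gives:
#   A dx = 0,  A^T dy + ds = 0,  L(x) ds + L(s) dx = kappa - L(x) s
M = np.zeros((2 * n + m, 2 * n + m))
M[:m, :n] = A
M[m:m + n, n:n + m] = A.T
M[m:m + n, n + m:] = np.eye(n)
M[m + n:, :n] = arrow(s)
M[m + n:, n + m:] = arrow(x)
rhs = np.concatenate([np.zeros(m + n), kappa - arrow(x) @ s])

sol = np.linalg.solve(M, rhs)
dx, dy, ds = sol[:n], sol[n:n + m], sol[n + m:]
assert np.allclose(A @ dx, 0)
assert np.allclose(A.T @ dy + ds, 0)
assert np.allclose(arrow(x) @ ds + arrow(s) @ dx, kappa - arrow(x) @ s)
```

For this particular data the coefficient matrix happens to be nonsingular; that cannot be guaranteed in general, which is exactly what motivates the scaled direction introduced next.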

Since the IPC holds and A has full rank, system (8) has a unique solution. Hence we can apply Newton's method to system (8) to develop a weighted-path-following algorithm. Suppose that we have A x = b and A^T y + s = c for a triple (x, y, s) such that x ∈ int K and s ∈ int K; hence x and s are strictly feasible. By linearizing system (8) we obtain the following system for the search directions:

    A Δx = 0,
    A^T Δy + Δs = 0,    (9)
    L(x) Δs + L(s) Δx = κ - L(x) s.

This system has a unique solution if and only if the matrix A L(s)^{-1} L(x) A^T is nonsingular. Unfortunately, this might not be the case, even if A has full rank. This is due to the fact that x and s do not operator-commute in general (i.e., L(x) L(s) ≠ L(s) L(x)). However, the system can be solved after applying a scaling scheme. In this paper we take the NT-scaling scheme, which results in the well-known NT-direction. Below we only briefly outline the NT-scaling scheme; interested readers may find a detailed description in [15]. For any x^i, s^i ∈ int L^i and i ∈ J, we define the NT-scaling matrix W_i through

    ω_i = ( ((s^i_1)^2 - ||s^i_{2:n}||^2) / ((x^i_1)^2 - ||x^i_{2:n}||^2) )^{1/4} = ( det(s^i) / det(x^i) )^{1/4},
    s̃^i = (s̃^i_1, s̃^i_{2:n}) = ω_i^{-1} (s^i_1, s^i_{2:n}),
    x̃^i = (x̃^i_1, x̃^i_{2:n}) = ω_i (x^i_1, x^i_{2:n}),
    ζ^i = (ζ^i_1, ζ^i_{2:n}) = (x̃^i_1 + s̃^i_1, s̃^i_{2:n} - x̃^i_{2:n}),
    α_i = ζ^i_1 / γ(ζ^i),  β_i = ζ^i_{2:n} / γ(ζ^i),  γ(ζ^i) = ( (ζ^i_1)^2 - (ζ^i_{2:n})^T ζ^i_{2:n} )^{1/2},
    W_i = ω_i [ α_i  β_i^T ; β_i  I + β_i β_i^T / (1 + α_i) ],

where I represents the (n_i - 1) × (n_i - 1) identity matrix. For the above choices one has (see Proposition 7.6 in [15])

    W_i x^i = (W_i)^{-1} s^i,  i ∈ J.

We define W := diag(W_1, ..., W_N). Then let

    v := (v^1; ...; v^N) = W x (= W^{-1} s).    (10)

Let us further denote

    Ā := A W^{-1},  d_x := W Δx,  d_s := W^{-1} Δs.    (11)

Applying (10) and (11), system (9) can be rewritten as follows:

    Ā d_x = 0,
    Ā^T Δy + d_s = 0,    (12)
    L(W^{-1} v) W d_s + L(W v) W^{-1} d_x = κ - L(W^{-1} v) W v.

Since this system is equivalent to system (9), it may not have a unique solution. To overcome this difficulty, in the same way as Bai et al. did in [2], we replace the third equation in system (12) by

    L(v) d_s + L(v) d_x = κ - L(v) v,

which, after multiplying both sides from the left by L(v)^{-1}, becomes

    d_s + d_x = L(v)^{-1} κ - v.    (13)

Thus the system defining the scaled search directions becomes

    Ā d_x = 0,
    Ā^T Δy + d_s = 0,    (14)
    d_s + d_x = L(v)^{-1} κ - v.

Since the matrix Ā Ā^T is positive definite, this system has a unique solution. In this paper, in order to avoid calculating the inverse matrix of L(v) in system (14), motivated by [4,5] we replace the right-hand side of the last equation in system (14) by 2(κ^{1/2} - v). Thus we will use the following system to define our new search directions:

    Ā d_x = 0,
    Ā^T Δy + d_s = 0,    (15)
    d_s + d_x = 2(κ^{1/2} - v).

Since system (15) has the same coefficient matrix as system (14), system (15) also has a unique solution. By transforming back to the x- and s-spaces using (11), we obtain the search directions Δx and Δs in the original spaces, with

    Δx = W^{-1} d_x,  Δs = W d_s.    (16)

By taking a full NT-step, we construct a new triple (x^+, y^+, s^+) according to

    x^+ = x + Δx,  y^+ = y + Δy,  s^+ = s + Δs.

4 The weighted-path-following interior-point algorithm

We define

    p_v := 2(κ^{1/2} - v) = d_x + d_s.    (17)

Since κ ∈ int K, we have λ_min(κ) > 0. Now for any vector v ∈ int K, we define the following proximity measure:

    σ(v; κ) := σ(x, s; κ) := ||p_v||_F / (2 √λ_min(κ)) = ||κ^{1/2} - v||_F / √λ_min(κ),    (18)

where λ_min(κ) := min { λ_min(κ^i) : i ∈ J }. We introduce another measure,

    σ_c(κ) := λ_max(κ) / λ_min(κ),    (19)

where λ_max(κ) := max { λ_max(κ^i) : i ∈ J }. It is obvious that σ_c(κ) ≥ 1, and equality holds if and only if λ_max(κ) = λ_min(κ), i.e., κ = η e, where η > 0 is a positive constant, which implies that κ is centered. Thus σ_c(κ) can be used to measure the distance of κ to the central path.

Furthermore, let us introduce the notation

    q_v := d_x - d_s.    (20)

From the first two equations of system (15) we obtain that d_x and d_s are orthogonal, that is, d_x^T d_s = d_s^T d_x = 0. Thus we get ||p_v||_F = ||q_v||_F. Consequently, the proximity measure can also be written in the following form:

    σ(v; κ) = ||q_v||_F / (2 √λ_min(κ)).    (21)

Hence we have

    d_x = (p_v + q_v)/2,  d_s = (p_v - q_v)/2,    (22)

    d_x ∘ d_s = (p_v ∘ p_v - q_v ∘ q_v)/4.    (23)

We now give the weighted-path-following interior-point algorithm for SOCO as follows.

Algorithm 1 (The weighted-path-following interior-point algorithm for SOCO)
Step 1: Let (x^0, y^0, s^0) be a strictly feasible initial point and let κ^0 = v^0 ∘ v^0, where v^0 = W^0 x^0 = (W^0)^{-1} s^0. Choose an accuracy parameter ε > 0 and a fixed update parameter θ ∈ (0, 1). Set x := x^0, y := y^0, s := s^0, κ := κ^0.
Step 2: If x^T s < ε, then stop. Otherwise, perform the following steps.
Step 3: Calculate κ := (1 - θ) κ.
Step 4: Solve system (15) and use (16) to obtain (Δx, Δy, Δs).
Step 5: Update (x, y, s) := (x, y, s) + (Δx, Δy, Δs) and go to Step 2.

In the next section we will prove that this algorithm is well defined, and we will also give an upper bound for the number of iterations performed by the algorithm.

5 Analysis of the algorithm

5.1 Feasibility of the full NT-step

In this subsection we find a condition that guarantees feasibility of the iterates after a full NT-step. As before, let x, s ∈ int K, and let W be the NT-scaling matrix. Using (10) and (11), we obtain

    x^+ = x + Δx = W^{-1}(v + d_x),    (24)
    s^+ = s + Δs = W(v + d_s).    (25)

Since W and its inverse W^{-1} are automorphisms of K, x^+ and s^+ belong to int K if and only if v + d_x and v + d_s belong to int K. For 0 ≤ α ≤ 1, we define v_x(α) = v + α d_x and v_s(α) = v + α d_s.

Lemma 5.1. Let σ(v; κ) < 1. Then the full NT-step is strictly feasible; hence x^+ ∈ int K and s^+ ∈ int K.

Proof. From (22) and (23), we have

    v_x(α) ∘ v_s(α) = (v + α d_x) ∘ (v + α d_s)
                    = v ∘ v + α v ∘ (d_x + d_s) + α^2 d_x ∘ d_s
                    = v ∘ v + α v ∘ p_v + α^2 (p_v ∘ p_v - q_v ∘ q_v)/4
                    = (1 - α) v ∘ v + α (v ∘ v + v ∘ p_v) + α^2 (p_v ∘ p_v - q_v ∘ q_v)/4.

Moreover, (17) yields v + p_v/2 = κ^{1/2}, and thus

    v ∘ v + v ∘ p_v = κ^{1/2} ∘ κ^{1/2} - p_v ∘ p_v / 4.

Consequently,

    v_x(α) ∘ v_s(α) = (1 - α) v ∘ v + α ( κ^{1/2} ∘ κ^{1/2} - ((1 - α) p_v ∘ p_v + α q_v ∘ q_v)/4 ).    (26)

Since 0 ≤ α ≤ 1 and σ(v; κ) < 1, we have

    || ((1 - α) p_v ∘ p_v + α q_v ∘ q_v)/4 ||_F ≤ ((1 - α) ||p_v||_F^2 + α ||q_v||_F^2)/4 = σ(v; κ)^2 λ_min(κ) < λ_min(κ) = λ_min(κ^{1/2} ∘ κ^{1/2}).

Thus it follows from Lemma 2.4 that

    κ^{1/2} ∘ κ^{1/2} - ((1 - α) p_v ∘ p_v + α q_v ∘ q_v)/4 ≻_K 0.

Since K is a cone, we can conclude that

    (1 - α) v ∘ v + α ( κ^{1/2} ∘ κ^{1/2} - ((1 - α) p_v ∘ p_v + α q_v ∘ q_v)/4 ) ≻_K 0.

Thus det(v_x(α) ∘ v_s(α)) > 0, which, together with Lemma 3 in [2], implies that for each α ∈ [0, 1],

    det(v_x(α)) det(v_s(α)) ≥ det(v_x(α) ∘ v_s(α)) > 0.

Therefore, by Lemma 6.1 in [16] we get v_x(1) = v + d_x ≻_K 0 and v_s(1) = v + d_s ≻_K 0. This completes the proof.

The following lemma gives an upper bound for the duality gap obtained after a full NT-step.

Lemma 5.2. Let σ(x, s; κ) < 1. Then

    (x^+)^T s^+ = Tr(κ)/2 - ||q_v||_F^2 / 8,

and hence (x^+)^T s^+ ≤ Tr(κ)/2.

Proof. Due to (24) and (25), we may write

    (x^+)^T s^+ = ( W^{-1}(v + d_x) )^T ( W(v + d_s) ) = (v + d_x)^T (v + d_s).

From (26) with α = 1, we get

    (v + d_x) ∘ (v + d_s) = κ^{1/2} ∘ κ^{1/2} - q_v ∘ q_v / 4.

Thus

    (x^+)^T s^+ = e^T ((v + d_x) ∘ (v + d_s)) = e^T (κ^{1/2} ∘ κ^{1/2}) - e^T (q_v ∘ q_v)/4 = Tr(κ)/2 - ||q_v||_F^2 / 8.

This completes the proof.

5.2 Quadratic convergence

In this subsection we prove that the same condition, namely σ(x, s; κ) < 1, is sufficient for the quadratic convergence of the Newton process when taking full NT-steps.

Lemma 5.3. Let σ := σ(x, s; κ) < 1. Then

    σ(x^+, s^+; κ) ≤ σ^2 / (1 + √(1 - σ^2)).

Thus σ(x^+, s^+; κ) ≤ σ^2, which shows the quadratic convergence of the NT-steps.

Proof. Taking α = 1 in (26), we have

    v^+ ∘ v^+ = κ^{1/2} ∘ κ^{1/2} - q_v ∘ q_v / 4,    (27)

and then it follows from Lemma 2.1 (ii) that

    λ_min(v^+)^2 = λ_min(v^+ ∘ v^+) = λ_min( κ^{1/2} ∘ κ^{1/2} - q_v ∘ q_v / 4 ) ≥ λ_min(κ) - ||q_v||_F^2 / 4 = λ_min(κ)(1 - σ^2).

Thus

    λ_min(v^+) ≥ √λ_min(κ) √(1 - σ^2).    (28)

From (27) and (28), also using Lemma 2.5, one has

    σ(x^+, s^+; κ) = ||κ^{1/2} - v^+||_F / √λ_min(κ)
                   ≤ ||κ^{1/2} ∘ κ^{1/2} - v^+ ∘ v^+||_F / ( √λ_min(κ) ( λ_min(κ^{1/2}) + λ_min(v^+) ) )
                   ≤ ||q_v ∘ q_v||_F / ( 4 λ_min(κ) (1 + √(1 - σ^2)) )
                   ≤ ( 1/(1 + √(1 - σ^2)) ) ( ||q_v||_F / (2 √λ_min(κ)) )^2
                   = σ^2 / (1 + √(1 - σ^2)).

Consequently, we have σ(x^+, s^+; κ) ≤ σ^2. This completes the proof.

5.3 Iteration bound

Lemma 5.4. Let σ(x, s; κ) < 1 and κ^+ = (1 - θ) κ, where 0 < θ < 1. Then

    σ(x^+, s^+; κ^+) ≤ θ √(2 N σ_c(κ)) / √(1 - θ) + σ(x^+, s^+; κ) / √(1 - θ).

Furthermore, if σ(x, s; κ) ≤ 1/√2 and θ = 1/(5 √(2 N σ_c(κ))), then σ(x^+, s^+; κ^+) ≤ 1/√2.

Proof. With the triangle inequality and (κ^+)^{1/2} = √(1 - θ) κ^{1/2}, we have

    σ(x^+, s^+; κ^+) = ||(κ^+)^{1/2} - v^+||_F / √λ_min(κ^+)
                     ≤ ||(κ^+)^{1/2} - κ^{1/2}||_F / √λ_min(κ^+) + ||κ^{1/2} - v^+||_F / √λ_min(κ^+)
                     = (1 - √(1 - θ)) ||κ^{1/2}||_F / ( √(1 - θ) √λ_min(κ) ) + σ(x^+, s^+; κ) / √(1 - θ)
                     ≤ θ ||κ^{1/2}||_F / ( √(1 - θ) √λ_min(κ) ) + σ(x^+, s^+; κ) / √(1 - θ).

Since

    ||κ^{1/2}||_F^2 = Tr(κ) = Σ_{i=1}^N ( λ_min(κ^i) + λ_max(κ^i) ) ≤ 2 Σ_{i=1}^N λ_max(κ^i) ≤ 2 N λ_max(κ),

by (19) we have ||κ^{1/2}||_F / √λ_min(κ) ≤ √(2 N σ_c(κ)), and hence

    σ(x^+, s^+; κ^+) ≤ θ √(2 N σ_c(κ)) / √(1 - θ) + σ(x^+, s^+; κ) / √(1 - θ).

Thus the first part of the lemma is proved. Now let σ(x, s; κ) ≤ 1/√2 and θ = 1/(5 √(2 N σ_c(κ))). Since σ_c(κ) ≥ 1 and N ≥ 1, we have θ ≤ √2/10. Furthermore, since σ(x, s; κ) ≤ 1/√2, from Lemma 5.3 we deduce that σ(x^+, s^+; κ) ≤ 1/(2 + √2). Thus the above relations yield

    σ(x^+, s^+; κ^+) ≤ ( 1/5 + 1/(2 + √2) ) / √(1 - √2/10) ≤ 1/√2.

The proof is completed.
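The geometric shrinking of the target κ (and hence of the duality gap) behind these lemmas can be illustrated numerically. A hedged sketch: the values of N, σ_c(κ^0), the initial gap and ε below are made up, and θ is taken as the small-update value θ = 1/(5√(2Nσ_c)) from Lemma 5.4:

```python
import math

def iterations_to_eps(N, sigma_c, gap0, eps):
    # Small-update parameter (Lemma 5.4): theta = 1 / (5 * sqrt(2 * N * sigma_c))
    theta = 1.0 / (5.0 * math.sqrt(2.0 * N * sigma_c))
    k, gap = 0, gap0
    while gap > eps:        # the gap bound shrinks by the factor (1 - theta) each iteration
        gap *= 1.0 - theta
        k += 1
    return k, theta

# Illustrative values: N cones, perfectly centered start (sigma_c = 1), initial gap N.
N, sigma_c, eps = 10, 1.0, 1e-6
k, theta = iterations_to_eps(N, sigma_c, float(N), eps)
bound = math.ceil(math.log(N / eps) / theta)  # k >= (1/theta) * log(gap0/eps) always suffices
assert k <= bound
```

The observed count k stays below the (1/θ) log(gap0/ε) ceiling, which is exactly the O(√N log(N/ε)) flavor of the small-update bound.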
Remark 5.5. According to Algorithm 1, at the start of the algorithm we choose a strictly feasible pair (x^0, s^0) such that σ(x^0, s^0; κ^0) ≤ 1/√2. Note that σ_c(κ) = σ_c(κ^0) for all iterates produced by the algorithm. Thus, from Lemma 5.1 and Lemma 5.4 we know that, for θ = 1/(5 √(2 N σ_c(κ^0))), the conditions x ∈ int K, s ∈ int K and σ(x, s; κ) ≤ 1/√2 are maintained throughout the algorithm. Hence the algorithm is well defined.

In the next lemma, we give an upper bound for the total number of iterations performed by Algorithm 1.

Lemma 5.6. Assume that x^0 and s^0 are strictly feasible

and κ^0 = v^0 ∘ v^0, where v^0 = W^0 x^0 = (W^0)^{-1} s^0. Moreover, let x^k and s^k be the vectors obtained after k iterations. Then the inequality (x^k)^T s^k ≤ ε is satisfied for

    k ≥ (1/θ) log( (x^0)^T s^0 / ε ).

Proof. After k iterations we have κ = (1 - θ)^k κ^0. From Lemma 5.2 we have

    (x^k)^T s^k ≤ Tr(κ)/2 = (1 - θ)^k Tr(κ^0)/2 = (1 - θ)^k (x^0)^T s^0.

Thus the inequality (x^k)^T s^k ≤ ε holds if (1 - θ)^k (x^0)^T s^0 ≤ ε. Taking logarithms, we obtain

    k log(1 - θ) + log((x^0)^T s^0) ≤ log ε.

Using the inequality -log(1 - θ) ≥ θ, we deduce that the above relation holds if

    k θ ≥ log((x^0)^T s^0) - log ε = log( (x^0)^T s^0 / ε ).

This proves the lemma.

Theorem 5.7. Suppose that x^0 and s^0 are strictly feasible and κ^0 = v^0 ∘ v^0, where v^0 = W^0 x^0 = (W^0)^{-1} s^0. If θ = 1/(5 √(2 N σ_c(κ^0))), then Algorithm 1 requires at most

    5 √(2 N σ_c(κ^0)) log( (x^0)^T s^0 / ε )

iterations. The output is a primal-dual pair (x, s) satisfying x^T s ≤ ε.

Remark 5.8. Theorem 5.7 shows that the best iteration bound is obtained by following the central path. Indeed, we have σ_c(κ^0) = 1 in this case, and we get the well-known iteration bound 5 √(2N) log( (x^0)^T s^0 / ε ). If the starting point is not perfectly centered, then σ_c(κ^0) > 1 and thus the iteration bound is worse.

Corollary 5.9. If one takes x^0 = s^0 = e, then the iteration bound becomes O(√N log(N/ε)), which is the currently best known iteration bound for the algorithm with small-update method.

6 Conclusions

We have developed a weighted-path-following algorithm for SOCO with full NT-steps and derived the currently best known iteration bound for the algorithm with small-update method. Our analysis is a relatively simple and straightforward extension of analogous results for linear optimization. Some interesting topics remain for further research. Firstly, the search direction used in this paper is based on the NT-symmetrization scheme; it may be possible to design similar algorithms using other symmetrization schemes and to obtain polynomial-time iteration bounds. Secondly, the extension to symmetric cone optimization deserves to be investigated. In addition, it is well known that large-update methods are much more efficient than small-update methods in practice, although they have a worse worst-case iteration bound. Jansen et al. [6] proposed a long-step target-following method for linear optimization; it is an interesting question whether we can extend their method to SOCO by using Jordan-algebra techniques.

Acknowledgement

This paper was partly supported by the National Natural Science Foundation of China (111018), the Research Award Foundation for Outstanding Young Scientists of Shandong Province (BS011SF0, BS01SF05), the Science Technology Research Projects of the Education Department of Henan Province (13A110767) and the Basic and Frontier Technology Research Project of Henan Province.

References

[1] F. Alizadeh, D. Goldfarb, Second-order cone programming, Mathematical Programming, Ser. B 95, 3-51 (2003).
[2] Y.Q. Bai, G.Q. Wang, C. Roos, Primal-dual interior-point algorithms for second-order cone optimization based on kernel functions, Nonlinear Analysis, 70 (2009).
[3] J. Ding, T.Y. Li, An algorithm based on weighted logarithmic barrier functions for linear complementarity problems, Arabian Journal for Science and Engineering, 15 (1990).
[4] Z. Darvay, New interior-point algorithms in linear optimization, Advanced Modelling and Optimization, 5, 51-92 (2003).
[5] Z. Darvay, A new algorithm for solving self-dual linear optimization problems, Studia Universitatis Babeş-Bolyai, Series Informatica, 47, 15-26 (2002).
[6] B. Jansen, C. Roos, T. Terlaky, Long-step primal-dual target-following algorithms for linear programming, Mathematical Methods of Operations Research, 44 (1996).
[7] B. Jansen, C. Roos, T. Terlaky, J.-Ph. Vial, Primal-dual target-following algorithms for linear programming, Annals of Operations Research, 62 (1996).
[8] Z.J. Jin, Y.Q. Bai, B.S. Han, A weighted-path-following interior-point algorithm for convex quadratic optimization, OR Transactions, 14 (2010) (in Chinese).
[9] M.S. Lobo, L. Vandenberghe, S.E. Boyd, H. Lebret, Applications of second-order cone programming, Linear Algebra and Its Applications, 284 (1998).
[10] Z.Q. Luo, J.F. Sturm, S.Z. Zhang, Conic convex programming and self-dual embedding, Optimization Methods and Software, 14 (2000).
[11] R.D.C. Monteiro, T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions, Mathematical Programming, 88 (2000).

[12] Y.E. Nesterov, M.J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM Journal on Optimization, 8, 324-364 (1998).
[13] J. Peng, C. Roos, T. Terlaky, A new class of polynomial primal-dual interior-point methods for second-order cone optimization based on self-regular proximities, SIAM Journal on Optimization, 13 (2002).
[14] C. Roos, D. den Hertog, A polynomial method of approximate weighted centers for linear programming, Technical Report 89-13, Faculty of Technical Mathematics and Informatics, TU Delft, NL-2628 BL Delft, The Netherlands, 1994.
[15] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, 11 (1999).
[16] G.Q. Wang, Y.Q. Bai, C. Roos, A primal-dual interior-point algorithm for second-order cone optimization with full Nesterov-Todd step, Applied Mathematics and Computation, 215 (2009).

Jingyong Tang is a lecturer at Xinyang Normal University. He obtained his PhD from Shanghai Jiaotong University. His research interests are in the areas of nonlinear programming and optimization problems over symmetric cones.

Liang Fang is an associate professor at Taishan University. He obtained his PhD from Shanghai Jiaotong University in June 2010. His research interests are in the areas of cone optimization and complementarity problems.

Jinchuan Zhou is a lecturer at Shandong University of Technology. He obtained his PhD from Beijing Jiaotong University in July 2009. His research interests are in the areas of nonlinear programming, variational analysis, and cone optimization.

Li Dong is a lecturer at Xinyang Normal University. She obtained her MSc at Zhengzhou University in July 2006. Her research interests are in the areas of combinatorial optimization: theory and applications.


More information

Full Newton step polynomial time methods for LO based on locally self concordant barrier functions

Full Newton step polynomial time methods for LO based on locally self concordant barrier functions Full Newton step polynomial time methods for LO based on locally self concordant barrier functions (work in progress) Kees Roos and Hossein Mansouri e-mail: [C.Roos,H.Mansouri]@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/

More information

L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones. definition. spectral decomposition. quadratic representation. log-det barrier 18-1

L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones. definition. spectral decomposition. quadratic representation. log-det barrier 18-1 L. Vandenberghe EE236C (Spring 2016) 18. Symmetric cones definition spectral decomposition quadratic representation log-det barrier 18-1 Introduction This lecture: theoretical properties of the following

More information

A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization

A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization Jiming Peng Cornelis Roos Tamás Terlaky August 8, 000 Faculty of Information Technology and Systems, Delft University

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

New Interior Point Algorithms in Linear Programming

New Interior Point Algorithms in Linear Programming AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions

Local Self-concordance of Barrier Functions Based on Kernel-functions Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi

More information

Interior Point Methods for Nonlinear Optimization

Interior Point Methods for Nonlinear Optimization Interior Point Methods for Nonlinear Optimization Imre Pólik 1 and Tamás Terlaky 2 1 School of Computational Engineering and Science, McMaster University, Hamilton, Ontario, Canada, imre@polik.net 2 School

More information

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com

More information

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function.   djeffal Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar

More information

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,

More information

Interior Point Methods for Linear Programming: Motivation & Theory

Interior Point Methods for Linear Programming: Motivation & Theory School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm

More information

Semidefinite Programming

Semidefinite Programming Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

CCO Commun. Comb. Optim.

CCO Commun. Comb. Optim. Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.5-70 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interior-point method for the P -matrix linear complementarity problem

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Primal-dual path-following algorithms for circular programming

Primal-dual path-following algorithms for circular programming Primal-dual path-following algorithms for circular programming Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan July, 015 Abstract Circular programming problems are a

More information

A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Full-Newton Step On) Infeasible Interior-Point Algorithm for Linear Optimization C. Roos March 4, 005 February 19, 005 February 5, 005 Faculty of Electrical Engineering, Computer Science and Mathematics

More information

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software and Second-Order Cone Program () and Algorithms for Optimization Software Jared Erickson JaredErickson2012@u.northwestern.edu Robert 4er@northwestern.edu Northwestern University INFORMS Annual Meeting,

More information

IMPLEMENTATION OF INTERIOR POINT METHODS

IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial

More information

On implementing a primal-dual interior-point method for conic quadratic optimization

On implementing a primal-dual interior-point method for conic quadratic optimization On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

The Jordan Algebraic Structure of the Circular Cone

The Jordan Algebraic Structure of the Circular Cone The Jordan Algebraic Structure of the Circular Cone Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan Abstract In this paper, we study and analyze the algebraic structure

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

arxiv: v1 [math.oc] 14 Oct 2014

arxiv: v1 [math.oc] 14 Oct 2014 arxiv:110.3571v1 [math.oc] 1 Oct 01 An Improved Analysis of Semidefinite Approximation Bound for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints Yong Hsia a, Shu Wang a, Zi Xu

More information

Uniqueness of the Solutions of Some Completion Problems

Uniqueness of the Solutions of Some Completion Problems Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive

More information

The Q Method for Symmetric Cone Programmin

The Q Method for Symmetric Cone Programmin The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point

More information

Interior Point Methods in Mathematical Programming

Interior Point Methods in Mathematical Programming Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000

More information

Lagrangian Duality Theory

Lagrangian Duality Theory Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual

More information

Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras

Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras Lingchen Kong, Levent Tunçel, and Naihua Xiu 3 (November 6, 7; Revised December, 7) Abstract Recently, Gowda et al. [] established

More information

On the Sandwich Theorem and a approximation algorithm for MAX CUT

On the Sandwich Theorem and a approximation algorithm for MAX CUT On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/

More information

Fixed point theorems of nondecreasing order-ćirić-lipschitz mappings in normed vector spaces without normalities of cones

Fixed point theorems of nondecreasing order-ćirić-lipschitz mappings in normed vector spaces without normalities of cones Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 18 26 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa Fixed point theorems of nondecreasing

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

On well definedness of the Central Path

On well definedness of the Central Path On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil

More information

A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION

A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION J Nonlinear Funct Anal 08 (08), Article ID 3 https://doiorg/0395/jnfa083 A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS BEIBEI YUAN, MINGWANG

More information

DEPARTMENT OF MATHEMATICS

DEPARTMENT OF MATHEMATICS A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

A full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function

A full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function Algorithmic Operations Research Vol7 03) 03 0 A full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function B Kheirfam a a Department of Mathematics,

More information

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs * Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for

More information

The eigenvalue map and spectral sets

The eigenvalue map and spectral sets The eigenvalue map and spectral sets p. 1/30 The eigenvalue map and spectral sets M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland

More information

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University

More information

Lecture 10. Primal-Dual Interior Point Method for LP

Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 1 Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 2 Consider a linear program (P ) minimize c T x subject to Ax = b x 0 and its dual (D) maximize b T y subject to A T y + s = c s 0.

More information

arxiv: v1 [math.oc] 23 May 2017

arxiv: v1 [math.oc] 23 May 2017 A DERANDOMIZED ALGORITHM FOR RP-ADMM WITH SYMMETRIC GAUSS-SEIDEL METHOD JINCHAO XU, KAILAI XU, AND YINYU YE arxiv:1705.08389v1 [math.oc] 23 May 2017 Abstract. For multi-block alternating direction method

More information

Conic Linear Optimization and its Dual. yyye

Conic Linear Optimization and its Dual.   yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

An E cient A ne-scaling Algorithm for Hyperbolic Programming

An E cient A ne-scaling Algorithm for Hyperbolic Programming An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such

More information

More First-Order Optimization Algorithms

More First-Order Optimization Algorithms More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

GENERALIZED second-order cone complementarity

GENERALIZED second-order cone complementarity Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the

More information

Permutation invariant proper polyhedral cones and their Lyapunov rank

Permutation invariant proper polyhedral cones and their Lyapunov rank Permutation invariant proper polyhedral cones and their Lyapunov rank Juyoung Jeong Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA juyoung1@umbc.edu

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

On nonexpansive and accretive operators in Banach spaces

On nonexpansive and accretive operators in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive

More information

ICS-E4030 Kernel Methods in Machine Learning

ICS-E4030 Kernel Methods in Machine Learning ICS-E4030 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 28. September, 2016 Juho Rousu 28. September, 2016 1 / 38 Convex optimization Convex optimisation This

More information

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming Chek Beng Chua Abstract This paper presents the new concept of second-order cone approximations for convex conic

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

On Mehrotra-Type Predictor-Corrector Algorithms

On Mehrotra-Type Predictor-Corrector Algorithms On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky April 7, 005 Abstract In this paper we discuss the polynomiality of Mehrotra-type predictor-corrector algorithms. We consider

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National

More information

Summer School: Semidefinite Optimization

Summer School: Semidefinite Optimization Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory

More information

Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems. Hirokazu KATO

Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems. Hirokazu KATO Sequential Quadratic Programming Method for Nonlinear Second-Order Cone Programming Problems Guidance Professor Masao FUKUSHIMA Hirokazu KATO 2004 Graduate Course in Department of Applied Mathematics and

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Error bounds for symmetric cone complementarity problems

Error bounds for symmetric cone complementarity problems to appear in Numerical Algebra, Control and Optimization, 014 Error bounds for symmetric cone complementarity problems Xin-He Miao 1 Department of Mathematics School of Science Tianjin University Tianjin

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information