Appl. Math. Inf. Sci. 9, No. 2, 973-980 (2015) 973

Applied Mathematics & Information Sciences
An International Journal

http://dx.doi.org/10.12785/amis/0908

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization

Jingyong Tang 1,*, Li Dong 1, Liang Fang 2 and Jinchuan Zhou 3

1 College of Mathematics and Information Science, Xinyang Normal University, Xinyang 464000, P. R. China
2 College of Mathematics and System Science, Taishan University, Tai'an 271021, P. R. China
3 Department of Mathematics, Shandong University of Technology, Zibo 255049, P. R. China

Received: 19 Jun. 2014, Revised: 17 Sep. 2014, Accepted: 19 Sep. 2014
Published online: 1 Mar. 2015

Abstract: We present a weighted-path-following interior-point algorithm for solving second-order cone optimization. The algorithm starts from an initial point that is not on the central path and generates iterates that simultaneously get closer to optimality and closer to centrality. At each iteration we use only a full Nesterov-Todd step; no line searches are required. We derive the complexity bound of the algorithm with small-update method, namely O(√N log(N/ε)), where N denotes the number of second-order cones in the problem formulation and ε the desired accuracy. This bound is the currently best known iteration bound for second-order cone optimization.

Keywords: Second-order cone optimization, interior-point method, small-update method, polynomial complexity

1 Introduction

The second-order cone (SOC) in R^n, also called the Lorentz cone or the ice-cream cone, is defined as

  L := { (x_1, x_2, ..., x_n) ∈ R^n : x_1^2 ≥ Σ_{j=2}^n x_j^2, x_1 ≥ 0 }.   (1)

Second-order cone optimization (SOCO) is a convex optimization problem in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order cones. In this paper we consider SOCO in the standard format

  (P)  min { c^T x : Ax = b, x ∈ K },

where A ∈ R^{m×n}, c ∈ R^n and b ∈ R^m, and K ⊆ R^n is the Cartesian product of second-order cones, i.e., K
= L^1 × L^2 × ... × L^N, with L^i ⊆ R^{n_i} for each i = 1, 2, ..., N. The dual problem of (P) is given by

  (D)  max { b^T y : A^T y + s = c, s ∈ K }.

Without loss of generality, throughout the paper we assume that the rows of the matrix A are linearly independent. Hence, if the pair (y, s) is dual feasible then y is uniquely determined by s. Therefore, we will feel free to say that s is dual feasible, without mentioning y.

The study of SOCO is of great importance because it covers linear optimization, convex quadratic optimization, quadratically constrained convex quadratic optimization as well as other problems [1]. In the last few years, the SOCO problem has received considerable attention from researchers for its wide range of applications in many fields, such as engineering, optimal control and design, machine learning, robust optimization and combinatorial optimization. We refer the interested reader to the survey paper [9] and the references therein.

Many researchers have worked on interior-point methods (IPMs) for solving SOCO (e.g., [2, 11, 13, 15]). Notice that the IPMs in those papers require the initial point and the iterates to be on, or close to, the central path. However, practical implementations often do not use perfectly centered starting points. Therefore, it is worth analyzing the case when the starting point is not on the central path. Recently, Jansen et al. [7] presented a class of primal-dual target-following interior-point methods for linear optimization. This class starts from an initial non-central point and generates iterates that

* Corresponding author e-mail: jingyongtang@163.com

(c) 2015 NSP Natural Sciences Publishing Cor.
simultaneously get closer to optimality and closer to centrality. Darvay [4, 5] investigated the weighted-path-following interior-point method, a particular case of target-following methods, for linear optimization. These methods were also studied by Ding et al. [3] for linear complementarity problems, by Jin et al. [8] for convex quadratic optimization and by Roos et al. [14] for linear optimization.

Motivated by their work, in this paper we propose a weighted-path-following interior-point algorithm for SOCO. We adapt the basic analysis used in [4, 5] to the SOCO case. The currently best known iteration bound for the algorithm with small-update method, namely O(√N log(N/ε)), is obtained. Compared with path-following (primal-dual) interior-point methods, our method has the following advantages: (i) it can start from any strictly feasible initial point, and it works for optimality and centrality at the same time; (ii) it uses only full Nesterov-Todd steps and no line searches are required at each iteration, whereas some path-following (primal-dual) interior-point methods (e.g., [2, 11, 13, 15]) need line searches to find a suitable step size that keeps the iterates feasible and on, or close to, the central path.

The paper is organized as follows. In Section 2, we recall some useful results on second-order cones and their associated Jordan algebra. In Section 3, we present the new search directions for SOCO. In Section 4, we propose the weighted-path-following interior-point algorithm. In Section 5, we analyze the algorithm and derive the currently best known iteration bound with small-update method. The conclusions are given in Section 6.

Some notations used throughout the paper are as follows. R^n, R^n_+ and R^n_++ denote the set of vectors with n components, the set of nonnegative vectors and the set of positive vectors, respectively. We use ";" for adjoining vectors in a column. For instance, for column vectors
x, y and z we have (x; y; z) := (x^T, y^T, z^T)^T. As usual, I denotes the identity matrix of suitable dimension, ‖x‖ denotes the 2-norm of the vector x, and the index set J is J = {1, 2, ..., N}. Finally, for any x, y ∈ R^n, we write x ⪰_K y (respectively, x ≻_K y) if x − y ∈ K (respectively, x − y ∈ int K, where int K denotes the interior of K).

2 Algebraic properties of second-order cones

In this section we briefly recall some algebraic properties of the SOC L as defined by (1) and its associated Euclidean Jordan algebra. For any vectors x, s ∈ R^n, their Jordan product associated with the SOC L is defined by

  x ∘ s := (x^T s; x_1 s_{2:n} + s_1 x_{2:n}),

where x_{2:n} := (x_2; ...; x_n) and s_{2:n} := (s_2; ...; s_n). (R^n, ∘) is a Euclidean Jordan algebra with the vector e := (1; 0; ...; 0) as the identity element. Obviously, e^T (x ∘ s) = x^T s. In the sequel, we denote the vector (x_2; ...; x_n) shortly as x_{2:n}, so x = (x_1; x_{2:n}). Given an element x ∈ R^n, define the symmetric matrix

  L(x) := [ x_1, x_{2:n}^T ; x_{2:n}, x_1 I ] ∈ R^{n×n},

where I represents the (n−1)×(n−1) identity matrix. Notice that x ∘ s = L(x)s for any x, s ∈ R^n. The so-called spectral decomposition of a vector x ∈ R^n is given by

  x = λ_min(x) u^(1) + λ_max(x) u^(2),   (2)

where λ_min(x), λ_max(x) and u^(1), u^(2) are the spectral values and the associated spectral vectors of x, given by

  λ_min(x) := x_1 − ‖x_{2:n}‖,  λ_max(x) := x_1 + ‖x_{2:n}‖,

  u^(i) = (1/2) (1; (−1)^i x_{2:n}/‖x_{2:n}‖),  if x_{2:n} ≠ 0,
  u^(i) = (1/2) (1; (−1)^i υ),  if x_{2:n} = 0,

with υ ∈ R^{n−1} being any vector satisfying ‖υ‖ = 1. Using (2), for each x ∈ R^n we define the following [1]:

  square root: x^{1/2} = (λ_min(x))^{1/2} u^(1) + (λ_max(x))^{1/2} u^(2), for any x ∈ L;
  inverse: x^{−1} = (λ_min(x))^{−1} u^(1) + (λ_max(x))^{−1} u^(2), for any x ∈ int L;
  square: x^2 = (λ_min(x))^2 u^(1) + (λ_max(x))^2 u^(2), for any x ∈ R^n.

Indeed, one has x^2 = x ∘ x = (‖x‖^2; 2 x_1 x_{2:n}), and (x^{1/2})^2 = x. If x^{−1} is defined, then x ∘ x^{−1} = e, and we call x invertible.

Lemma 2.1. Let x, s ∈ R^n. One has
(i) λ_min(x) + λ_min(s) ≤ λ_min(x + s) ≤ λ_max(x + s) ≤ λ_max(x) + λ_max(s);
(ii) λ_min(x^2) = λ_min(x)^2, λ_max(x^2) = λ_max(x)^2;
(iii) If x is invertible,
then λ_min(x^{−1}) = λ_max(x)^{−1} and λ_max(x^{−1}) = λ_min(x)^{−1}.

The trace and the determinant of x ∈ R^n are

  Tr(x) := λ_min(x) + λ_max(x) = 2 x_1,
  det(x) := λ_min(x) λ_max(x) = x_1^2 − ‖x_{2:n}‖^2.

Lemma 2.2. [2] For all x, s, t ∈ L, one has Tr((x ∘ s) ∘ t) = Tr(x ∘ (s ∘ t)).

The natural inner product is given by

  <x, s> := Tr(x ∘ s) = 2 x^T s,  x, s ∈ R^n.

Hence, the norm induced by this inner product, which is denoted by ‖·‖_F, satisfies

  ‖x‖_F := √<x, x> = √Tr(x ∘ x).

(c) 2015 NSP Natural Sciences Publishing Cor.
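As a hedged illustration of the machinery above, the Jordan product, spectral values, trace, determinant and Frobenius norm for a single second-order cone can be checked numerically. The following Python sketch uses helper names of our own choosing (they are not from the paper), and verifies Tr(x ∘ s) = 2 x^T s and the spectral form of the square:

```python
import numpy as np

def jordan_prod(x, s):
    """Jordan product x∘s = (x^T s; x1*s_{2:n} + s1*x_{2:n}) for one SOC."""
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

def spectral_values(x):
    """(lambda_min, lambda_max) = (x1 - ||x_{2:n}||, x1 + ||x_{2:n}||)."""
    r = np.linalg.norm(x[1:])
    return x[0] - r, x[0] + r

def trace(x):
    lmin, lmax = spectral_values(x)
    return lmin + lmax               # equals 2*x1

def det_soc(x):
    lmin, lmax = spectral_values(x)
    return lmin * lmax               # equals x1^2 - ||x_{2:n}||^2

def fnorm(x):
    lmin, lmax = spectral_values(x)
    return np.sqrt(lmin**2 + lmax**2)  # equals sqrt(2)*||x||

x = np.array([2.0, 0.5, -1.0])
s = np.array([3.0, 1.0, 0.5])
e = np.array([1.0, 0.0, 0.0])

# e is the identity element, and Tr(x∘s) = 2 x^T s
assert np.allclose(jordan_prod(x, e), x)
assert np.isclose(trace(jordan_prod(x, s)), 2 * (x @ s))

# the square via the spectral decomposition (2) agrees with x∘x
lmin, lmax = spectral_values(x)
u1 = 0.5 * np.concatenate(([1.0], -x[1:] / np.linalg.norm(x[1:])))
u2 = 0.5 * np.concatenate(([1.0],  x[1:] / np.linalg.norm(x[1:])))
assert np.allclose(lmin**2 * u1 + lmax**2 * u2, jordan_prod(x, x))
```

The checks mirror identities stated in the text; in particular `fnorm` realizes ‖x‖_F^2 = λ_min(x)^2 + λ_max(x)^2 = 2‖x‖^2 from (3) below.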
It is obvious that

  ‖x‖_F^2 = λ_min(x)^2 + λ_max(x)^2 = 2 ‖x‖^2,   (3)

  |λ_min(x)| ≤ ‖x‖_F  and  |λ_max(x)| ≤ ‖x‖_F.   (4)

Using Lemma 2.2, one can easily verify that

  ‖(x ∘ s) ∘ t‖_F = ‖x ∘ (s ∘ t)‖_F,  x, s, t ∈ L.   (5)

Lemma 2.3. [16] For any x, s ∈ R^n, one has
(i) ‖x^2‖_F ≤ ‖x‖_F^2; equality holds if and only if x_1^2 = ‖x_{2:n}‖^2;
(ii) Tr((x ∘ s)^2) ≤ Tr(x^2 ∘ s^2);
(iii) ‖x ∘ s‖_F ≤ |λ_max(x)| ‖s‖_F ≤ ‖x‖_F ‖s‖_F.

Lemma 2.4. Let s ∈ R^n and x ∈ int L. If λ_min(x) > ‖s‖_F, then x − s ≻_L 0.

Proof. From (4), it follows that |λ_min(−s)| ≤ ‖−s‖_F = ‖s‖_F, hence λ_min(−s) ≥ −‖s‖_F. Thus, using Lemma 2.1 (i), we have

  λ_min(x − s) ≥ λ_min(x) + λ_min(−s) ≥ λ_min(x) − ‖s‖_F > 0,

which implies that x − s ≻_L 0. This proves the lemma.

Lemma 2.5. Let ρ ≻_L 0 and v ≻_L 0. Then

  ‖ρ − v‖_F ≤ ‖ρ ∘ ρ − v ∘ v‖_F / (λ_min(ρ) + λ_min(v)).

Proof. From Lemma 2.1, Lemma 2.3 (iii) and (5), we have

  ‖ρ − v‖_F = ‖((ρ + v)^{−1} ∘ (ρ + v)) ∘ (ρ − v)‖_F
            = ‖(ρ + v)^{−1} ∘ ((ρ + v) ∘ (ρ − v))‖_F
            = ‖(ρ + v)^{−1} ∘ (ρ ∘ ρ − v ∘ v)‖_F
            ≤ λ_max((ρ + v)^{−1}) ‖ρ ∘ ρ − v ∘ v‖_F
            = ‖ρ ∘ ρ − v ∘ v‖_F / λ_min(ρ + v)
            ≤ ‖ρ ∘ ρ − v ∘ v‖_F / (λ_min(ρ) + λ_min(v)).

The proof is completed.

Now we proceed by adapting the above definitions to the general case K = L^1 × L^2 × ... × L^N with N > 1. First we partition vectors x, s ∈ R^n according to the dimensions of the successive cones L^i, so x := (x^1; ...; x^N) and s := (s^1; ...; s^N), and we define the algebra (R^n, ∘) as a direct product of Jordan algebras:

  x ∘ s := (x^1 ∘ s^1; ...; x^N ∘ s^N).

If e^i ∈ L^i is the unit element in the Jordan algebra for the ith cone, then e := (e^1; ...; e^N) is the unit element in (R^n, ∘). Since L(x) := diag(L(x^1), ..., L(x^N)), it is easy to verify that [1]

  λ_min(x) = λ_min(L(x)) = min { λ_min(x^i), i ∈ J },
  λ_max(x) = λ_max(L(x)) = max { λ_max(x^i), i ∈ J }.

Furthermore,

  Tr(x) = Σ_{i=1}^N Tr(x^i) = Σ_{i=1}^N [ λ_min(x^i) + λ_max(x^i) ],
  ‖x‖_F^2 = Σ_{i=1}^N ‖x^i‖_F^2 = Σ_{i=1}^N ( λ_min(x^i)^2 + λ_max(x^i)^2 ),
  det(x) = Π_{i=1}^N det(x^i) = Π_{i=1}^N λ_min(x^i) λ_max(x^i).

3 The new search directions for SOCO

Throughout the paper, we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e., there exists (x^0, y^0, s^0) such that

  A x^0 = b, x^0 ∈ int K,  A^T y^0 + s^0 = c, s^0 ∈ int K.

In fact, by using the self-dual
embedding technique (e.g., [10]), we may (and will) assume that x^0 = s^0 = e.

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the following system:

  Ax = b, x ∈ K,
  A^T y + s = c, s ∈ K,   (6)
  L(x)s = 0.

The basic idea of IPMs is to replace the third equation in system (6), the so-called complementarity condition for (P) and (D), by the parameterized equation L(x)s = μe, with μ > 0. Thus we consider the system

  Ax = b, x ∈ K,
  A^T y + s = c, s ∈ K,   (7)
  L(x)s = μe.

For each μ > 0, the parameterized system (7) has a unique solution (x(μ), y(μ), s(μ)). We call x(μ) the μ-center of (P) and (y(μ), s(μ)) the μ-center of (D). The set of μ-centers (with μ running through all positive real numbers) gives a homotopy path, which is called the central path of (P) and (D). If μ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition L(x)s = 0, it naturally yields an optimal solution for (P) and (D).

The weighted-path-following approach starts from the observation that system (7) can be generalized by replacing the vector μe with κ^2, where κ ∈ int K is an arbitrary vector. Thus we obtain the following system:

  Ax = b, x ∈ K,
  A^T y + s = c, s ∈ K,   (8)
  L(x)s = κ^2.
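Since x ∘ s = L(x)s, the systems above can be sanity-checked numerically for a single cone. A minimal sketch (the helper names are ours, not from the paper) verifying the arrow-matrix identity and the μ-center condition of system (7):

```python
import numpy as np

def arrow(x):
    """Arrow matrix L(x) = [[x1, xbar^T], [xbar, x1*I]] with xbar = x_{2:n}."""
    n = x.size
    L = x[0] * np.eye(n)
    L[0, 1:] = x[1:]
    L[1:, 0] = x[1:]
    return L

def jordan_prod(x, s):
    """Jordan product x∘s = (x^T s; x1*s_{2:n} + s1*x_{2:n})."""
    return np.concatenate(([x @ s], x[0] * s[1:] + s[0] * x[1:]))

x = np.array([2.0, 1.0, 0.5])    # strictly interior: x1 > ||x_{2:n}||
s = np.array([1.5, -0.5, 0.2])   # strictly interior as well

# the identity x∘s = L(x)s used to state systems (6)-(8)
assert np.allclose(arrow(x) @ s, jordan_prod(x, s))

# on the central path (7): for x = s = sqrt(mu)*e one gets L(x)s = mu*e
mu, n = 0.3, 3
e = np.zeros(n); e[0] = 1.0
xc = np.sqrt(mu) * e
assert np.allclose(arrow(xc) @ xc, mu * e)

# for this particular interior pair, L(x)s itself lies in int L, so it could
# serve as the weighted right-hand side of system (8)
k = arrow(x) @ s
assert k[0] - np.linalg.norm(k[1:]) > 0
```

Note that for arbitrary interior x and s the product L(x)s need not lie in the cone; the last check only confirms it does for this particular pair.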
Since the IPC holds and A has full rank, system (8) has a unique solution. Hence we can apply Newton's method to system (8) to develop a weighted-path-following algorithm. Suppose that we have Ax = b and A^T y + s = c for a triple (x, y, s) such that x ∈ int K and s ∈ int K; hence x and s are strictly feasible. By linearizing system (8) we obtain the following system for the search directions:

  A Δx = 0,
  A^T Δy + Δs = 0,   (9)
  L(x) Δs + L(s) Δx = κ^2 − L(x)s.

This system has a unique solution if and only if the matrix A L(s)^{−1} L(x) A^T is nonsingular. Unfortunately, this might not be the case, even if A has full rank. This is due to the fact that x and s do not operator commute in general (i.e., L(x)L(s) ≠ L(s)L(x)). However, the system can be solved by using some scaling scheme. In this paper we take the NT-scaling scheme, which results in the well-known NT-direction. Below we only briefly outline the NT-scaling scheme; interested readers may find a detailed description in [15]. For any x^i, s^i ∈ int L^i and i ∈ J, we define the NT-scaling matrix W^i as follows. Let

  ω_i = ( ((s_1^i)^2 − ‖s_{2:n}^i‖^2) / ((x_1^i)^2 − ‖x_{2:n}^i‖^2) )^{1/4},
  s̃^i = (s̃_1^i, s̃_{2:n}^i) = ω_i^{−1} (s_1^i, s_{2:n}^i),
  x̃^i = (x̃_1^i, x̃_{2:n}^i) = ω_i (x_1^i, x_{2:n}^i),
  ζ^i = (ζ_1^i, ζ_{2:n}^i) = (x̃_1^i + s̃_1^i, s̃_{2:n}^i − x̃_{2:n}^i),
  α_i = ζ_1^i / γ(ζ^i),  β_i = ζ_{2:n}^i / γ(ζ^i),  γ(ζ^i) = √( (ζ_1^i)^2 − ‖ζ_{2:n}^i‖^2 ),

and

  W^i = ω_i [ α_i, β_i^T ; β_i, I + β_i β_i^T / (1 + α_i) ],

where I represents the (n_i − 1) × (n_i − 1) identity matrix. For the above choices one has (see Proposition 7.6 in [15])

  W^i x^i = (W^i)^{−1} s^i,  i ∈ J.

We define

  W := diag(W^1, ..., W^N).

Then let

  v := (v^1; ...; v^N) = W x (= W^{−1} s).   (10)

Let us further denote

  Ā := A W^{−1},  d_x := W Δx,  d_s := W^{−1} Δs.   (11)

Applying (10) and (11), system (9) can be rewritten as follows:

  Ā d_x = 0,
  Ā^T Δy + d_s = 0,   (12)
  L(W^{−1} v) W d_s + L(W v) W^{−1} d_x = κ^2 − L(W^{−1} v) W v.

Since this system is equivalent to system (9), it may not have a unique solution. To overcome this difficulty, in the same way as Bai et al. did in [2], we replace the third equation in
system (12) by

  L(v) d_s + L(v) d_x = κ^2 − L(v)v,

which, after multiplying both sides from the left with L(v)^{−1}, becomes

  d_s + d_x = L(v)^{−1} κ^2 − v.   (13)

Thus the system defining the scaled search directions becomes

  Ā d_x = 0,
  Ā^T Δy + d_s = 0,   (14)
  d_s + d_x = L(v)^{−1} κ^2 − v.

Since the matrix Ā Ā^T is positive definite, this system has a unique solution. In this paper, in order to avoid calculating the inverse matrix of L(v) in system (14), motivated by [4, 5], we replace the right-hand side of the last equation in system (14) by 2(κ − v). Thus we will use the following system to define our new search directions:

  Ā d_x = 0,
  Ā^T Δy + d_s = 0,   (15)
  d_s + d_x = 2(κ − v).

Since system (15) has the same matrix of coefficients as system (14), system (15) also has a unique solution. By transforming back to the x-space and s-space, respectively, using (11), we obtain the search directions Δx and Δs in the original spaces, with

  Δx = W^{−1} d_x,  Δs = W d_s.   (16)

By taking a full NT-step, we construct a new triple (x_+, y_+, s_+) according to

  x_+ = x + Δx,  y_+ = y + Δy,  s_+ = s + Δs.

4 The weighted-path-following interior-point algorithm

We define

  p_v := 2(κ − v) = d_x + d_s.   (17)

Since κ ∈ int K, we have λ_min(κ) > 0. Now for any vector v ∈ int K, we define the following proximity measure:

  σ(v; κ) := σ(x, s; κ) := ‖p_v‖_F / (2 λ_min(κ)) = ‖κ − v‖_F / λ_min(κ),   (18)

(c) 2015 NSP Natural Sciences Publishing Cor.
where λ_max(κ) = max { λ_max(κ^i), i ∈ J } and λ_min(κ) = min { λ_min(κ^i), i ∈ J }. We introduce another measure:

  σ_c(κ) := λ_max(κ) / λ_min(κ).   (19)

It is obvious that σ_c(κ) ≥ 1. Equality holds if and only if λ_max(κ) = λ_min(κ), i.e., κ = ηe, where η > 0 is a positive constant, which implies that κ is centered. Thus σ_c(κ) can be used to measure the distance of κ to the central path. Furthermore, let us introduce the notation

  q_v := d_x − d_s.   (20)

From the first two equations of system (15), we obtain that d_x and d_s are orthogonal, that is, d_x^T d_s = d_s^T d_x = 0. Thus we get ‖p_v‖_F = ‖q_v‖_F. Consequently, the proximity measure can be written in the following form:

  σ(v; κ) = ‖q_v‖_F / (2 λ_min(κ)).   (21)

Hence, we have

  d_x = (p_v + q_v)/2,  d_s = (p_v − q_v)/2,   (22)

  d_x ∘ d_s = (p_v ∘ p_v − q_v ∘ q_v)/4.   (23)

We now give the weighted-path-following interior-point algorithm for SOCO as follows.

Algorithm 4.1 (The weighted-path-following interior-point algorithm for SOCO)
Step 1: Let (x^0, y^0, s^0) be a strictly feasible initial point, and let κ^0 = W^0 x^0 = (W^0)^{−1} s^0. Choose an accuracy parameter ε > 0 and a fixed update parameter θ ∈ (0, 1). Set x := x^0, y := y^0, s := s^0, κ := κ^0.
Step 2: If x^T s < ε, then stop. Otherwise, perform the following steps.
Step 3: Calculate κ := (1 − θ)κ.
Step 4: Solve system (15) and use (16) to obtain (Δx, Δy, Δs).
Step 5: Update (x, y, s) := (x, y, s) + (Δx, Δy, Δs) and go to Step 2.

In the next section we will prove that this algorithm is well defined, and we will also give an upper bound for the number of iterations performed by the algorithm.

5 Analysis of the algorithm

5.1 Feasibility of the full NT-step

In this subsection, we find a condition that guarantees feasibility of the iterates after a full NT-step. As before, let x, s ∈ int K, and let W be the NT-scaling matrix. Using (10) and (11), we obtain

  x_+ = x + Δx = W^{−1}(v + d_x),   (24)
  s_+ = s + Δs = W(v + d_s).   (25)

Since W and its inverse W^{−1} are automorphisms of K, x_+ and s_+
+ belong to intk if and only if v+d x and v+d s belong to intk Let 0 α 1, we define v x (α)=v+αd x and v s (α)=v+αd s Lemma 51 Let σ(v;κ) < 1 Then the full T-step is strictly feasible, hence x + intk and s + intk Proof From () and (3), we have v x (α) v s (α) = (v+αd x ) (v+αd s ) = v v+αv (d x + d s )+α d x d s = v v+αv p v + α p v p v q v q v = (1 α)v v+α(v v+v p v )+α p v p v q v q v Moreover, (17) yields v+ p v = κ, and thus v v+v p v = κ κ p v p v Consequently ( v x (α) v s (α)=(1 α)v v+α κ κ (1 α) p v p v α q v q v (6) Since 0 α 1, using σ(v;κ)<1, we have (1 α) p v p v + α q v q v F (1 α) p v p v F + α q v q v F Thus, it follows from Lemma that κ κ (1 α) p v p v (1 α) p v F + α q v F = σ(v;κ) λ min (κ) < λ min (κ) = λ min (κ κ) α q v q v K 0 Since the set of the second-order cones is cone, we can conclude that ( (1 α)v v+ κ κ (1 α) p v p v α q ) v q v K 0 Thus, we have det(v x (α) v s (α)) > 0, which, together with Lemma 3 in [], implies that for each α [0,1], det(v x (α))det(v s (α)) det(v x (α) v s (α))>0 Therefore, by Lemma 61 in [16] we get v x (1)=v+d x K 0 and v s (1)=v+d s K 0 This completes the proof The following lemma gives an upper bound for the duality gap obtained after a full T-step Lemma 5 Let σ(x,s;κ)<1 Then (x + ) T s + = κ q v, ) c 015 SP atural Sciences Publishing Cor
and hence (x_+)^T s_+ ≤ ‖κ‖^2.

Proof. Due to (24) and (25), we may write

  (x_+)^T s_+ = ( W^{−1}(v + d_x) )^T ( W(v + d_s) ) = (v + d_x)^T (v + d_s).

From (26) with α = 1, we get

  (v + d_x) ∘ (v + d_s) = κ ∘ κ − (1/4) q_v ∘ q_v.

Thus,

  (x_+)^T s_+ = e^T ((v + d_x) ∘ (v + d_s)) = e^T (κ ∘ κ) − (1/4) e^T (q_v ∘ q_v) = ‖κ‖^2 − (1/4) ‖q_v‖^2.

This completes the proof.

5.2 Quadratic convergence

In this subsection, we prove that the same condition, namely σ(x, s; κ) < 1, is sufficient for the quadratic convergence of the Newton process when taking full NT-steps.

Lemma 5.3. Let σ := σ(x, s; κ) < 1. Then

  σ(x_+, s_+; κ) ≤ σ^2 / (1 + √(1 − σ^2)).

Thus σ(x_+, s_+; κ) < σ^2, which shows the quadratic convergence of the NT-steps.

Proof. Taking α = 1 in (26), we have

  v_+ ∘ v_+ = κ ∘ κ − (1/4) q_v ∘ q_v.   (27)

Then it follows from Lemma 2.1 (ii) and (4) that

  λ_min(v_+)^2 = λ_min(v_+ ∘ v_+) = λ_min(κ ∘ κ − (1/4) q_v ∘ q_v)
    ≥ λ_min(κ)^2 − (1/4) ‖q_v‖_F^2 = λ_min(κ)^2 (1 − σ^2).

Thus

  λ_min(v_+) ≥ λ_min(κ) √(1 − σ^2).   (28)

From (27) and (28), also using Lemma 2.5, one has

  σ(x_+, s_+; κ) = ‖κ − v_+‖_F / λ_min(κ)
    ≤ ‖κ ∘ κ − v_+ ∘ v_+‖_F / ( λ_min(κ) (λ_min(κ) + λ_min(v_+)) )
    ≤ (1/4) ‖q_v ∘ q_v‖_F / ( λ_min(κ)^2 (1 + √(1 − σ^2)) )
    ≤ ( 1 / (1 + √(1 − σ^2)) ) ( ‖q_v‖_F / (2 λ_min(κ)) )^2
    = σ^2 / (1 + √(1 − σ^2)).

Consequently, we have σ(x_+, s_+; κ) < σ^2. This completes the proof.

5.3 Iteration bound

Lemma 5.4. Let σ(x, s; κ) < 1, and κ_+ = (1 − θ)κ, where 0 < θ < 1. Then

  σ(x_+, s_+; κ_+) ≤ θ √(2N) σ_c(κ) / (1 − θ) + σ(x_+, s_+; κ) / (1 − θ).

Furthermore, if σ(x, s; κ) ≤ 1/2, θ = 1/(5 √(2N) σ_c(κ)) and N ≥ 2, then σ(x_+, s_+; κ_+) ≤ 1/2.

Proof. By the triangle inequality, we have

  σ(x_+, s_+; κ_+) = ‖κ_+ − v_+‖_F / λ_min(κ_+)
    ≤ ‖κ_+ − κ‖_F / λ_min(κ_+) + ‖κ − v_+‖_F / λ_min(κ_+)
    = θ ‖κ‖_F / ((1 − θ) λ_min(κ)) + (1/(1 − θ)) σ(x_+, s_+; κ).

Since

  ‖κ‖_F^2 = Σ_{i=1}^N ( λ_min(κ^i)^2 + λ_max(κ^i)^2 ) ≤ 2 Σ_{i=1}^N λ_max(κ^i)^2 ≤ 2N λ_max(κ)^2,

by (19) we have

  σ(x_+, s_+; κ_+) ≤ θ √(2N) σ_c(κ) / (1 − θ) + (1/(1 − θ)) σ(x_+, s_+; κ).

Thus the first part of the lemma is proved. Now, let σ(x, s; κ) ≤ 1/2, θ = 1/(5 √(2N) σ_c(κ)) and N ≥ 2. Since σ_c(κ) ≥ 1 and N ≥ 2, we have θ ≤ 1/10 and θ √(2N) σ_c(κ) = 1/5. Furthermore, if σ(x, s; κ) ≤ 1/2, then from Lemma 5.3 we deduce that σ(x_+, s_+; κ) ≤ 1/4. Thus, the above relations yield

  σ(x_+, s_+; κ_+) ≤ (1/5)/(1 − θ) + (1/4)/(1 − θ) ≤ (9/20)/(1 − 1/10) = 1/2.

The proof is
completed.

Remark 5.5. According to Algorithm 4.1, at the start of the algorithm we choose a strictly feasible pair (x^0, s^0) such that σ(x^0, s^0; κ^0) ≤ 1/2 (indeed, κ^0 = W^0 x^0 = v^0 gives σ(x^0, s^0; κ^0) = 0). Note that σ_c(κ) = σ_c(κ^0) for all iterates produced by the algorithm, since κ is only rescaled by the factor 1 − θ. Thus, from Lemma 5.1 and Lemma 5.4 we know that, for θ = 1/(5 √(2N) σ_c(κ^0)), the conditions x ∈ int K, s ∈ int K and σ(x, s; κ) ≤ 1/2 are maintained throughout the algorithm. Hence the algorithm is well defined.

In the next lemma, we give an upper bound for the total number of iterations performed by Algorithm 4.1.

Lemma 5.6. Assume that x^0 and s^0 are strictly feasible
and κ^0 = W^0 x^0 = (W^0)^{−1} s^0. Moreover, let x^k and s^k be the vectors obtained after k iterations. Then the inequality (x^k)^T s^k ≤ ε is satisfied for

  k ≥ (1/(2θ)) log( (x^0)^T s^0 / ε ).

Proof. After k iterations, we get κ = (1 − θ)^k κ^0. From Lemma 5.2 we have

  (x^k)^T s^k ≤ ‖κ‖^2 = (1 − θ)^{2k} ‖κ^0‖^2 = (1 − θ)^{2k} (x^0)^T s^0,

where we use ‖κ^0‖^2 = ‖v^0‖^2 = (x^0)^T s^0. Thus the inequality (x^k)^T s^k ≤ ε holds if

  (1 − θ)^{2k} (x^0)^T s^0 ≤ ε.

Taking logarithms, we obtain

  2k log(1 − θ) + log((x^0)^T s^0) ≤ log ε.

Using the inequality −log(1 − θ) ≥ θ, we deduce that the above relation holds if

  2kθ ≥ log((x^0)^T s^0) − log ε = log( (x^0)^T s^0 / ε ).

This proves the lemma.

Theorem 5.7. Suppose that x^0 and s^0 are strictly feasible and κ^0 = W^0 x^0 = (W^0)^{−1} s^0. If θ = 1/(5 √(2N) σ_c(κ^0)), then Algorithm 4.1 requires at most

  (5 √(2N) / 2) σ_c(κ^0) log( (x^0)^T s^0 / ε )

iterations. The output is a primal-dual pair (x, s) satisfying x^T s ≤ ε.

Remark 5.8. Theorem 5.7 shows that the best iteration bound is obtained by following the central path. Indeed, we have σ_c(κ^0) = 1 in this case, and we get the well-known iteration bound, namely

  (5 √(2N) / 2) log( (x^0)^T s^0 / ε ).

If the starting point is not perfectly centered, then σ_c(κ^0) > 1 and thus the iteration bound is worse.

Corollary 5.9. If one takes x^0 = s^0 = e, then the iteration bound becomes O( √N log(N/ε) ), which is the currently best known iteration bound for the algorithm with small-update method.

6 Conclusions

We have developed a weighted-path-following algorithm for SOCO with full NT-steps and derived the currently best known iteration bound for the algorithm with small-update method. Our analysis is a relatively simple and straightforward extension of analogous results for linear optimization. Some interesting topics remain for further research. Firstly, the search direction used in this paper is based on the NT-symmetrization scheme. It may be possible to design similar algorithms using other symmetrization schemes and to obtain polynomial-time iteration bounds. Secondly, the extension
to symmetric cone optimization deserves to be investigated. In addition, it is well known that large-update methods are much more efficient in practice than small-update methods, even though they have a worse worst-case iteration bound. Jansen et al. [6] proposed a long-step target-following method for linear optimization. It is an interesting question whether we can extend their method to SOCO by using Jordan algebra techniques.

Acknowledgement

This paper was partly supported by the National Natural Science Foundation of China (111018), the Research Award Foundation for Outstanding Young Scientists of Shandong Province (BS011SF0, BS01SF05), the Science Technology Research Projects of the Education Department of Henan Province (13A110767) and the Basic and Frontier Technology Research Project of Henan Province (130010318).

References

[1] F. Alizadeh, D. Goldfarb, Second-order cone programming, Mathematical Programming, Ser. B 95, 3-51 (2003).
[2] Y. Q. Bai, G. Q. Wang, C. Roos, Primal-dual interior-point algorithms for second-order cone optimization based on kernel functions, Nonlinear Analysis, 70, 3584-3602 (2009).
[3] J. Ding, T. Y. Li, An algorithm based on weighted logarithmic barrier functions for linear complementarity problems, Arabian Journal for Science and Engineering, 15, 679-685 (1990).
[4] Z. Darvay, New interior-point algorithms in linear optimization, Advanced Modelling and Optimization, 5, 51-92 (2003).
[5] Z. Darvay, A new algorithm for solving self-dual linear optimization problems, Studia Universitatis Babes-Bolyai, Series Informatica, 47, 15-26 (2002).
[6] B. Jansen, C. Roos, T. Terlaky, Long-step primal-dual target-following algorithms for linear programming, Mathematical Methods of Operations Research, 44, 11-30 (1996).
[7] B. Jansen, C. Roos, T. Terlaky, J.-Ph. Vial, Primal-dual target-following algorithms for linear programming, Annals of Operations Research, 62, 197-231 (1996).
[8] Z. J. Jin, Y. Q. Bai, B. S. Han, A weighted-path-following interior-point algorithm for convex quadratic optimization, OR Transactions, 14, 55-65 (2010) (in
Chinese).
[9] M. S. Lobo, L. Vandenberghe, S. E. Boyd, H. Lebret, Applications of second-order cone programming, Linear Algebra and Its Applications, 284, 193-228 (1998).
[10] Z. Q. Luo, J. F. Sturm, S. Z. Zhang, Conic convex programming and self-dual embedding, Optimization Methods and Software, 14, 169-218 (2000).
[11] R. D. C. Monteiro, T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions, Mathematical Programming, 88, 61-93 (2000).
[12] Y. E. Nesterov, M. J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM Journal on Optimization, 8, 324-364 (1998).
[13] J. Peng, C. Roos, T. Terlaky, A new class of polynomial primal-dual interior-point methods for second-order cone optimization based on self-regular proximities, SIAM Journal on Optimization, 13, 179-203 (2002).
[14] C. Roos, D. den Hertog, A polynomial method of approximate weighted centers for linear programming, Technical Report 89-13, Faculty of Technical Mathematics and Informatics, TU Delft, NL-2628 BL Delft, The Netherlands, 1994.
[15] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, 11, 141-182 (1999).
[16] G. Q. Wang, Y. Q. Bai, C. Roos, A primal-dual interior-point algorithm for second-order cone optimization with full Nesterov-Todd step, Applied Mathematics and Computation, 215, 1047-1061 (2009).

Jingyong Tang is a lecturer at Xinyang Normal University. He obtained his PhD from Shanghai Jiaotong University in July 2014. His research interests are in the areas of nonlinear programming and optimization problems over symmetric cones.

Liang Fang is an associate professor at Taishan University. He obtained his PhD from Shanghai Jiaotong University in June 2010. His research interests are in the areas of cone optimization and complementarity problems.

Jinchuan Zhou is a lecturer at Shandong University of Technology. He obtained his PhD from Beijing Jiaotong University in July 2009. His research interests are in the areas of nonlinear programming, variational analysis, and cone optimization.

Li Dong is a lecturer at Xinyang Normal University. She obtained her MSc at Zhengzhou University in July 2006. Her research interests are in the areas of combinatorial optimization: theory and applications.