Primal-dual path-following algorithms for circular programming


Primal-dual path-following algorithms for circular programming

Baha Alzalg¹
Department of Mathematics, The University of Jordan, Amman 11942, Jordan

July 2015

Abstract

Circular programming problems are a new class of convex optimization problems in which we minimize a linear function over the intersection of an affine linear manifold with the Cartesian product of circular cones. It has very recently been discovered that, unlike what has previously been believed, circular programming is a special case of symmetric programming, and that it lies between second-order cone programming and semidefinite programming. Monteiro [SIAM J. Optim. 7 (1997) 663-678] introduced primal-dual path-following algorithms for solving semidefinite programming problems. Alizadeh and Goldfarb [Math. Program. Ser. B 95 (2003) 3-51] introduced primal-dual path-following algorithms for solving second-order cone programming problems. In this paper, we utilize their work and use the machinery of Euclidean Jordan algebras to derive primal-dual path-following interior point algorithms for circular programming problems. We prove polynomial convergence of the proposed algorithms by showing that the circular logarithmic barrier is a strongly self-concordant barrier.

Keywords: Circular cones; Second-order cone programming; Interior point methods; Euclidean Jordan algebra; Self-concordance

1 Introduction

We introduce and study the primal-dual pair of circular programming (CP) problems:

    min  ⟨c, x⟩_θ                 max  b^T y
    s.t. (Ax)_θ = b,              s.t. A^T y + s = c,                                  (1)
         x ∈ Q_θ^n,                    s ∈ Q_θ^n,  y ∈ ℝ^m,

where θ ∈ (0, π/2) is a given angle, the inner product ⟨c, x⟩_θ denotes the circular inner product between c and x, which we define as ⟨c, x⟩_θ := c^T I_{θ,n} x, the matrix-vector product (Ax)_θ denotes the circular matrix-vector product between A and x, which we define as (Ax)_θ := A I_{θ,n} x, and the matrix I_{θ,n} is called the circular identity matrix, which we define as

    I_{θ,n} := [ 1        0^T        ]  ∈ ℝ^{n×n}
               [ 0   cot²θ I_{n−1}   ]

¹Corresponding author: b.alzalg@ju.edu.jo

to act as a generalization of the (standard) identity matrix I_n = I_{π/4,n}, the cone Q_θ^n denotes the circular cone [1, 2] of dimension n, which is defined as

    Q_θ^n := { (x_0; x̄) ∈ ℝ × ℝ^{n−1} : ‖x̄‖ ≤ x_0 tan θ } = { (x_0; x̄) ∈ ℝ × ℝ^{n−1} : cot θ ‖x̄‖ ≤ x_0 },        (2)

and, finally, the norm ‖·‖ denotes the standard Euclidean norm.

An important special case is that in which θ = π/4. In this case, the circular cone Q_θ^n reduces to the well-known second-order cone Q^n, which is defined as

    Q^n := { (x_0; x̄) ∈ ℝ × ℝ^{n−1} : ‖x̄‖ ≤ x_0 } = Q_{π/4}^n,

the circular identity matrix I_{θ,n} reduces to the identity matrix I_n, the circular inner product between c and x (i.e., ⟨c, x⟩_θ = c^T I_{θ,n} x) reduces to the standard inner product c^T x, the circular matrix-vector product between A and x (i.e., A I_{θ,n} x) reduces to the standard matrix-vector product Ax, and, therefore, the CP problems (1) reduce to second-order cone programming problems [3]. So CP includes second-order cone programming as a special case. We will also see that CP is a special case of semidefinite programming [4].

It is known that the circular cone is a closed, convex, pointed, solid cone. It is also popularly known (see for example [1, 2]) that the dual of the circular cone (2), denoted by Q_θ^{n∗}, is the circular cone

    Q_{π/2−θ}^n = { (x_0; x̄) ∈ ℝ × ℝ^{n−1} : x_0 ≥ cot(π/2 − θ) ‖x̄‖ } = { (x_0; x̄) ∈ ℝ × ℝ^{n−1} : x_0 ≥ tan θ ‖x̄‖ }.

In [10], we have shown that this fact (i.e., the fact that Q_θ^{n∗} = Q_{π/2−θ}^n), which holds under the standard inner product, is no longer true once the circular inner product is used. Moreover, we have shown that, under the circular inner product, the circular cone Q_θ^n is self-dual (i.e., Q_θ^{n∗} = Q_θ^n; see Lemma 2) and homogeneous, hence it is a symmetric cone. In fact, there is a one-to-one correspondence between Euclidean Jordan algebras and symmetric cones. As a result, the circular cone is indeed the cone of squares of some Euclidean Jordan algebra. This motivates us to establish a Euclidean Jordan algebra associated with the circular cone. In [10], we have set up the Jordan algebra associated with the circular cone by establishing a new spectral decomposition (different from, and much more efficient than, the one established in [1]) associated with this cone and a circular Jordan multiplication associated with its algebra. We have also demonstrated that this algebra forms a Euclidean Jordan algebra with the circular inner product.

Being optimization over a symmetric cone, circular programming becomes one of the special cases of symmetric programming [5]. In fact, linear programming, second-order cone programming, CP and semidefinite programming are considered the most important subclasses of symmetric programming. The applications of the circular cone lie in various real-world engineering problems, for example the optimal grasping manipulation problems for multi-fingered robots [6, 7, 8]. In addition, CP is applied in the perturbation analysis of second-order cone programming problems [9].

Monteiro [11] introduced primal-dual path-following algorithms for solving semidefinite programming (see also Zhang [12]). Alizadeh and Goldfarb [3] introduced primal-dual path-following algorithms for solving second-order cone programs. These primal-dual interior point algorithms and their analysis have been extended by Schmieta and Alizadeh [5] to optimization problems over all symmetric cones. The purpose of this paper is to utilize the work of Monteiro [11], Alizadeh and Goldfarb [3] and Schmieta and Alizadeh [5] to derive primal-dual path-following algorithms for the primal-dual pair of CP problems (1). We prove the polynomial convergence results

of the proposed algorithms by showing that the logarithmic barrier in the circular case is a strongly self-concordant barrier [13].

This paper is organized as follows. We end this section by introducing some notations that will be used in the sequel, and then proving the self-duality of the circular cone. The Euclidean Jordan algebra of the circular cone mentioned above is the substance of Section 2. In Section 3, we introduce the logarithmic barrier in the circular case for our problem formulation, and then state the self-concordance property of this barrier. Based on this property, we write the optimality conditions for the CP problems (1) and describe the commutative class of directions for the central path in Section 4, and then we present short-, semi-long-, and long-step variants of the path-following algorithm for CP in Section 5. Section 6 is devoted to proving the self-concordance property stated in Section 3. The last section contains some concluding remarks.

1.1 Notations

We use "," for adjoining vectors and matrices in a row, and use ";" for adjoining them in a column. So, for example, if x, y, and z are vectors, then (x^T, y^T, z^T)^T = (x; y; z).

We use ℝ to denote the field of real numbers. For each vector x ∈ ℝ^n whose first entry is indexed with 0, we write x̄ for the subvector consisting of entries 1 through n − 1 (therefore x = (x_0; x̄) ∈ ℝ × ℝ^{n−1}). We let E^n denote the n-dimensional real vector space ℝ × ℝ^{n−1} whose elements x are indexed with 0.

For a vector x ∈ E^n and a matrix A ∈ ℝ^{m×n}, we let

    x̂ := (x_0; cot θ x̄) = I_{θ,n}^{1/2} x,    Â := A I_{θ,n}^{1/2},    x̌ := (x_0; cot²θ x̄) = I_{θ,n} x,    Ǎ := A I_{θ,n}.        (3)

Hence the barred part of x̂ is cot θ x̄ and the barred part of x̌ is cot²θ x̄. Therefore, we can now rewrite the circular cone Q_θ^n as

    Q_θ^n = { x ∈ E^n : x_0 ≥ cot θ ‖x̄‖ }.

As a result,

    x ∈ Q_θ^n if and only if x̂ ∈ Q^n.                                                  (4)

We can also rewrite the circular inner product between two vectors x, y ∈ E^n as

    ⟨x, y⟩_θ := x^T I_{θ,n} y = x̌^T y = x^T y̌ = x̂^T ŷ = x_0 y_0 + cot²θ x̄^T ȳ,          (5)

and rewrite the circular matrix-vector product between a matrix A ∈ ℝ^{m×n} and a vector x ∈ E^n as

    (Ax)_θ := A I_{θ,n} x = Ǎ x = A x̌.                                                  (6)

In general, we define the circular matrix product between two matrices A, B ∈ ℝ^{n×n} as

    (AB)_θ := A I_{θ,n} B = Ǎ B.
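The following is a minimal numerical sketch, not taken from the paper, of the notation of Subsection 1.1 in NumPy; the helper names (I_theta, hat, check, inner_theta, in_circular_cone) are my own.

```python
import numpy as np

def I_theta(n, theta):
    """Circular identity matrix I_{theta,n} = diag(1, cot^2(theta) I_{n-1})."""
    d = np.ones(n)
    d[1:] = 1.0 / np.tan(theta) ** 2                 # cot^2(theta)
    return np.diag(d)

def hat(x, theta):
    """x_hat = (x_0; cot(theta) x_bar) = I_{theta,n}^{1/2} x."""
    y = np.array(x, dtype=float)
    y[1:] /= np.tan(theta)
    return y

def check(x, theta):
    """x_check = (x_0; cot^2(theta) x_bar) = I_{theta,n} x."""
    y = np.array(x, dtype=float)
    y[1:] /= np.tan(theta) ** 2
    return y

def inner_theta(x, y, theta):
    """Circular inner product <x, y>_theta = x_0 y_0 + cot^2(theta) x_bar^T y_bar."""
    return x[0] * y[0] + (x[1:] @ y[1:]) / np.tan(theta) ** 2

def in_circular_cone(x, theta, tol=1e-12):
    """x in Q_theta^n  iff  cot(theta) ||x_bar|| <= x_0, equivalently x_hat in Q^n."""
    return x[0] + tol >= np.linalg.norm(x[1:]) / np.tan(theta)

# Small check of (4) and (5); for theta = pi/4 everything collapses to the SOC case.
theta = np.pi / 3
x = np.array([2.0, 0.5, 1.0])
assert np.isclose(inner_theta(x, x, theta), x @ I_theta(3, theta) @ x)
print(in_circular_cone(x, theta), hat(x, theta), check(x, theta))
```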

1.2 Self-duality of the circular cone

Lemma 1 illustrates why the circular cone Q_θ^n, with the standard inner product, is not self-dual, and illustrates the reason why it is popularly known that the dual of Q_θ^n is Q_{π/2−θ}^n.

Lemma 1. Under the standard inner product, the circular cone Q_θ^n is not self-dual. Moreover, Q_θ^{n∗} = Q_{π/2−θ}^n.

Proof. Under the standard inner product, the dual of Q_θ^n is defined as Q_θ^{n∗} := { x ∈ E^n : x^T y ≥ 0, ∀ y ∈ Q_θ^n }. We first show that Q_{π/2−θ}^n ⊆ Q_θ^{n∗}. Let x = (x_0; x̄) ∈ Q_{π/2−θ}^n; we show that x ∈ Q_θ^{n∗} by verifying that x^T y ≥ 0 for any y ∈ Q_θ^n. So let y = (y_0; ȳ) ∈ Q_θ^n. Then

    x^T y = x_0 y_0 + x̄^T ȳ ≥ (cot(π/2 − θ) ‖x̄‖)(cot θ ‖ȳ‖) + x̄^T ȳ = (tan θ ‖x̄‖)(cot θ ‖ȳ‖) + x̄^T ȳ = ‖x̄‖ ‖ȳ‖ + x̄^T ȳ ≥ |x̄^T ȳ| + x̄^T ȳ ≥ 0,

where the first inequality follows from the fact that x ∈ Q_{π/2−θ}^n and y ∈ Q_θ^n, and the second follows from the Cauchy-Schwarz (Hölder) inequality. Thus, Q_{π/2−θ}^n ⊆ Q_θ^{n∗}.

Now, we prove that (Q_{π/2−θ}^n)∗ ⊆ Q_θ^n. Let y = (y_0; ȳ) ∈ (Q_{π/2−θ}^n)∗ and consider x := (tan θ ‖ȳ‖; −ȳ) = (cot(π/2 − θ) ‖ȳ‖; −ȳ) ∈ Q_{π/2−θ}^n. Then, using the Cauchy-Schwarz inequality with equality attained, we obtain

    0 ≤ x^T y = tan θ ‖ȳ‖ y_0 − ȳ^T ȳ = ‖ȳ‖ (tan θ y_0 − ‖ȳ‖).

This implies that tan θ y_0 ≥ ‖ȳ‖, or equivalently y_0 ≥ cot θ ‖ȳ‖. Hence y ∈ Q_θ^n, so (Q_{π/2−θ}^n)∗ ⊆ Q_θ^n. Since Q_{π/2−θ}^n is a closed convex cone, taking duals in this inclusion gives Q_θ^{n∗} ⊆ ((Q_{π/2−θ}^n)∗)∗ = Q_{π/2−θ}^n, which together with the first part demonstrates that Q_{π/2−θ}^n = Q_θ^{n∗} and completes the proof.

In fact, the inner product that should be considered with the circular cone is the circular inner product defined in (5). Under the circular inner product, the dual of Q_θ^n is defined as

    Q_θ^{n∗} := { x ∈ E^n : ⟨x, y⟩_θ ≥ 0, ∀ y ∈ Q_θ^n }.

The proof of the following lemma can be found in [10, Section 3] and is similar in its composition to the proof of Lemma 1. We prefer to present a shorter and more direct proof here because it takes advantage of our notation.

Lemma 2. Under the circular inner product, the circular cone Q_θ^n is self-dual. That is, Q_θ^{n∗} = Q_θ^n.

Proof. The proof follows from the following equivalences:

    x ∈ Q_θ^n ⟺ x̂ ∈ Q^n                              (by (4))
              ⟺ x̂ ∈ Q^{n∗}                            (as Q^{n∗} = Q^n)
              ⟺ x̂^T ŷ ≥ 0,  ∀ ŷ ∈ Q^n                 (by definition of Q^{n∗})
              ⟺ ⟨x, y⟩_θ ≥ 0,  ∀ y ∈ Q_θ^n             (by (4) and (5))
              ⟺ x ∈ Q_θ^{n∗}                           (by definition of Q_θ^{n∗}).
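As a quick sanity check of Lemmas 1 and 2, the following small Monte-Carlo sketch (my own illustration, not part of the paper) samples points of Q_θ^n and Q_{π/2−θ}^n and verifies the two duality statements numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 4, np.pi / 5

def sample_cone(angle, size):
    """Random points of Q_angle^n: choose x_0 >= cot(angle) * ||x_bar||."""
    xs = []
    for _ in range(size):
        xbar = rng.standard_normal(n - 1)
        x0 = np.linalg.norm(xbar) / np.tan(angle) * (1.0 + rng.random())
        xs.append(np.concatenate(([x0], xbar)))
    return xs

def inner_theta(x, y, angle):
    return x[0] * y[0] + (x[1:] @ y[1:]) / np.tan(angle) ** 2

P = sample_cone(theta, 200)              # points of Q_theta^n
D = sample_cone(np.pi / 2 - theta, 200)  # points of Q_{pi/2-theta}^n

# Lemma 1: x^T y >= 0 whenever x is in Q_{pi/2-theta}^n and y is in Q_theta^n.
print(min(x @ y for x in D for y in P) >= -1e-10)
# Lemma 2: <x, y>_theta >= 0 whenever both x and y lie in Q_theta^n.
print(min(inner_theta(x, y, theta) for x in P for y in P) >= -1e-10)
```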

2 The Euclidean Jordan algebra of the circular cone

Let x ∈ E^n. The circular spectral decomposition of x with respect to the angle θ ∈ (0, π/2) is obtained as follows [10]:

    x = (x_0 + cot θ ‖x̄‖) · (1/2)(1; tan θ x̄/‖x̄‖) + (x_0 − cot θ ‖x̄‖) · (1/2)(1; −tan θ x̄/‖x̄‖)
      = λ_{θ,1}(x) c_{θ,1}(x) + λ_{θ,2}(x) c_{θ,2}(x),                                                   (7)

where λ_{θ,1}(x) := x_0 + cot θ ‖x̄‖ and λ_{θ,2}(x) := x_0 − cot θ ‖x̄‖ are the eigenvalues of x, c_{θ,1}(x) and c_{θ,2}(x) are the corresponding eigenvectors, and, when x̄ = 0, any unit vector may replace x̄/‖x̄‖. The spectral decomposition (7) is associated with the circular cone, and it is viewed as a generalization of the spectral decomposition in [3, Section 4] associated with the second-order cone. Under the circular spectral decomposition (7), we have that

    trace_θ(x) := λ_{θ,1}(x) + λ_{θ,2}(x) = 2x_0,    det_θ(x) := λ_{θ,1}(x) λ_{θ,2}(x) = x_0² − cot²θ ‖x̄‖²,

and that c_{θ,1} + c_{θ,2} = e := (1; 0), which is the identity element of E^n. It is quite easy to see that trace_θ(e) = 2, det_θ(e) = 1, λ_{θ,1}(c_{θ,1}) = λ_{θ,1}(c_{θ,2}) = 1, λ_{θ,2}(c_{θ,1}) = λ_{θ,2}(c_{θ,2}) = 0, c_{θ,1} = R c_{θ,2}, and c_{θ,2} = R c_{θ,1}, where R is the reflection matrix

    R := [ 1      0^T     ]                                                                              (8)
         [ 0   −I_{n−1}   ].

For any real-valued continuous function f, we define the image of x under f with respect to θ as

    f_θ(x) := f(λ_{θ,1}(x)) c_{θ,1}(x) + f(λ_{θ,2}(x)) c_{θ,2}(x).

For instance, for p ∈ ℝ, x_θ^p := λ_{θ,1}(x)^p c_{θ,1}(x) + λ_{θ,2}(x)^p c_{θ,2}(x). In particular, with a little calculation, we can obtain

    x_θ^{−1} := (1/λ_{θ,1}(x)) c_{θ,1}(x) + (1/λ_{θ,2}(x)) c_{θ,2}(x) = (x_0; −x̄)/det_θ(x) = R x / det_θ(x),

which is called the inverse of x (provided that x is invertible, i.e., det_θ(x) ≠ 0). The Frobenius norm of x with respect to θ is defined as

    ‖x‖_{θ,F} := √( λ_{θ,1}(x)² + λ_{θ,2}(x)² ),

and the 2-norm of x with respect to θ is defined as

    ‖x‖_{θ,2} := max{ |λ_{θ,1}(x)|, |λ_{θ,2}(x)| }.

It is clear that ‖x‖_{θ,2} ≤ ‖x‖_{θ,F}.

We call x ∈ E^n positive semidefinite if x ∈ Q_θ^n (i.e., λ_{θ,2}(x) ≥ 0), and positive definite if x ∈ Int(Q_θ^n) (i.e., λ_{θ,2}(x) > 0). We write x ⪰_θ 0 to mean that x is positive semidefinite, and x ≻_θ 0 to mean that x is positive definite. We also write x ⪰_θ y or y ⪯_θ x to mean that x − y ⪰_θ 0, and x ≻_θ y or y ≺_θ x to mean that x − y ≻_θ 0. For real symmetric matrices X and Y of order n, we write X ⪰ 0 (X ≻ 0) to mean that X is positive semidefinite (positive definite), and X ⪰ Y (X ≻ Y) or Y ⪯ X (Y ≺ X) to mean that X − Y ⪰ 0 (X − Y ≻ 0).
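The next sketch (my own code, with assumed helper names) implements the spectral decomposition (7) and the quantities built on it: trace, determinant, powers, the inverse, and the two norms.

```python
import numpy as np

def spectral(x, theta):
    """Return (lam1, lam2, c1, c2) with x = lam1*c1 + lam2*c2 as in (7)."""
    nrm = np.linalg.norm(x[1:])
    lam1 = x[0] + nrm / np.tan(theta)            # x_0 + cot(theta) ||x_bar||
    lam2 = x[0] - nrm / np.tan(theta)
    u = x[1:] / nrm if nrm > 0 else np.zeros_like(x[1:])  # any unit vector if x_bar = 0
    c1 = 0.5 * np.concatenate(([1.0], np.tan(theta) * u))
    c2 = 0.5 * np.concatenate(([1.0], -np.tan(theta) * u))
    return lam1, lam2, c1, c2

def trace_theta(x, theta):
    l1, l2, _, _ = spectral(x, theta)
    return l1 + l2                               # = 2 x_0

def det_theta(x, theta):
    l1, l2, _, _ = spectral(x, theta)
    return l1 * l2                               # = x_0^2 - cot^2(theta) ||x_bar||^2

def power_theta(x, p, theta):
    """f_theta(x) for f(t) = t^p; p = -1 gives the inverse, p = 1/2 the square root."""
    l1, l2, c1, c2 = spectral(x, theta)
    return (l1 ** p) * c1 + (l2 ** p) * c2

def norms_theta(x, theta):
    l1, l2, _, _ = spectral(x, theta)
    return np.hypot(l1, l2), max(abs(l1), abs(l2))   # (theta-Frobenius norm, theta-2-norm)

theta = np.pi / 6
x = np.array([3.0, 0.4, -0.2, 0.1])
l1, l2, c1, c2 = spectral(x, theta)
assert np.allclose(l1 * c1 + l2 * c2, x)                                     # decomposition (7)
assert np.allclose(power_theta(x, -1, theta),
                   np.concatenate(([x[0]], -x[1:])) / det_theta(x, theta))   # inverse = R x / det
```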

The arrow-shaped matrix Arw_θ(x) associated with x ∈ E^n with respect to θ is defined as [10]

    Arw_θ(x) := [ x_0    cot²θ x̄^T   ]                                                                  (9)
                [ x̄      x_0 I_{n−1} ],

so that its first row is x̌^T and its first column is x. Note that Arw_θ(x) e = x, Arw_θ(x) x = x_θ², and Arw_θ(e) = I_n. Note also that x ⪰_θ 0 (x ≻_θ 0) if and only if the matrix Arw_θ(x) ⪰ 0 (Arw_θ(x) ≻ 0); here positive semidefiniteness of the nonsymmetric matrix Arw_θ(x) is understood through the similarity I_{θ,n}^{1/2} Arw_θ(x) I_{θ,n}^{−1/2} = Arw(x̂), where Arw(x̂) is the standard (symmetric) arrow-shaped matrix of x̂. This explains why circular programming is a special case of semidefinite programming.

The Jordan multiplication between two vectors x and y with respect to θ is defined as [10]

    (x ∘ y)_θ := ( x̌^T y ; x_0 ȳ + y_0 x̄ ) = ( x^T y̌ ; x_0 ȳ + y_0 x̄ ) = Arw_θ(x) y = Arw_θ(x) Arw_θ(y) e.        (10)

It is clear that the definitions of the circular arrow-shaped matrix in (9) and the circular Jordan multiplication in (10) generalize the corresponding ones in [3, Section 4] associated with the second-order cone. Observe that

    c_{θ,1}² = (c_{θ,1} ∘ c_{θ,1})_θ = c_{θ,1},   c_{θ,2}² = (c_{θ,2} ∘ c_{θ,2})_θ = c_{θ,2},   (c_{θ,1} ∘ c_{θ,2})_θ = 0.

Therefore, {c_{θ,1}, c_{θ,2}} is a Jordan frame. The vectors x and y of E^n are simultaneously decomposed if they share a Jordan frame, i.e., x = λ_{θ,1}(x) c_{θ,1} + λ_{θ,2}(x) c_{θ,2} and y = ω_{θ,1}(y) c_{θ,1} + ω_{θ,2}(y) c_{θ,2} for a Jordan frame {c_{θ,1}, c_{θ,2}}. We say that x and y operator commute with respect to θ if for all z ∈ E^n we have (x ∘ (y ∘ z)_θ)_θ = (y ∘ (x ∘ z)_θ)_θ. Two vectors in E^n operator commute if and only if they are simultaneously decomposed (see [5, Theorem 27]).

One can easily see that, for any α, β ∈ ℝ, (x ∘ (αy + βz))_θ = α(x ∘ y)_θ + β(x ∘ z)_θ and ((αy + βz) ∘ x)_θ = α(y ∘ x)_θ + β(z ∘ x)_θ. Note that (x ∘ e)_θ = x, (x ∘ x_θ^{−1})_θ = e, x_θ^p = (x^{p−1} ∘ x)_θ for any integer p ≥ 1, and (x^p ∘ x^q)_θ = x_θ^{p+q} for any integers p, q ≥ 1. Therefore, the algebra (E^n, ∘_θ) is power associative (it is not associative, though). In fact, one can also see that (x ∘ y)_θ = (y ∘ x)_θ (commutativity) and (x² ∘ (x ∘ y)_θ)_θ = (x ∘ (x² ∘ y)_θ)_θ (Jordan's axiom). This shows that the algebra (E^n, ∘_θ) is a Jordan algebra with the circular Jordan multiplication (∘)_θ defined in (10). Moreover, we can also show that the Jordan algebra (E^n, ∘_θ) is a Euclidean Jordan algebra under the circular inner product ⟨·, ·⟩_θ defined in (5). We have the following theorem [10].

Theorem 1. The cone of squares of the Euclidean Jordan algebra (E^n, ∘_θ, ⟨·, ·⟩_θ) is the circular cone Q_θ^n.

The quadratic operator Q_{θ,x,z} : E^n → E^n associated with the pair (x, z) ∈ E^n × E^n with respect to θ is given by

    Q_{θ,x,z} := Arw_θ(x) Arw_θ(z) + Arw_θ(z) Arw_θ(x) − Arw_θ((x ∘ z)_θ)
               = [ x̌^T z                   cot²θ (x_0 z̄^T + z_0 x̄^T)                                   ]
                 [ x_0 z̄ + z_0 x̄          x_0 z_0 I_{n−1} + cot²θ ( x̄ z̄^T + z̄ x̄^T − x̄^T z̄ I_{n−1} ) ].

The quadratic representation Q_{θ,x} : E^n → E^n associated with x ∈ E^n with respect to θ is given by

    Q_{θ,x} := 2 Arw_θ(x)² − Arw_θ((x ∘ x)_θ)
             = [ x_0² + cot²θ ‖x̄‖²        2 cot²θ x_0 x̄^T                   ]
               [ 2 x_0 x̄                  det_θ(x) I_{n−1} + 2 cot²θ x̄ x̄^T ]
             = 2 x x^T I_{θ,n} − det_θ(x) R,

where R is the reflection matrix defined in (8). Note that Q_{θ,x} e = x_θ², Q_{θ,x} x_θ^{−1} = x, and Q_{θ,e} = I_n.
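Below is a short numerical sketch (mine, not the paper's code) of the arrow matrix (9), the circular Jordan product (10), and the quadratic representation Q_{θ,x}, with the three identities stated above checked on an example.

```python
import numpy as np

def arw(x, theta):
    """Arw_theta(x) = [[x_0, cot^2(theta) x_bar^T], [x_bar, x_0 I]]."""
    n = x.size
    M = x[0] * np.eye(n)
    M[0, 1:] = x[1:] / np.tan(theta) ** 2
    M[1:, 0] = x[1:]
    return M

def jordan(x, y, theta):
    """(x o y)_theta = Arw_theta(x) y = (<x, y>_theta ; x_0 y_bar + y_0 x_bar)."""
    return arw(x, theta) @ y

def quad_rep(x, theta):
    """Q_{theta,x} = 2 Arw_theta(x)^2 - Arw_theta((x o x)_theta)."""
    A = arw(x, theta)
    return 2.0 * A @ A - arw(jordan(x, x, theta), theta)

theta = np.pi / 7
x = np.array([2.0, 0.3, -0.1])
e = np.array([1.0, 0.0, 0.0])
det = x[0] ** 2 - (np.linalg.norm(x[1:]) / np.tan(theta)) ** 2
x_inv = np.concatenate(([x[0]], -x[1:])) / det           # x^{-1} = R x / det_theta(x)
Q = quad_rep(x, theta)
assert np.allclose(Q @ e, jordan(x, x, theta))           # Q_{theta,x} e = x^2
assert np.allclose(Q @ x_inv, x)                         # Q_{theta,x} x^{-1} = x
assert np.allclose(quad_rep(e, theta), np.eye(3))        # Q_{theta,e} = I_n
```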

We finally present some handy tools needed for our computations.

Lemma 3. [10, Theorem 7] Let x, u ∈ E^n, and let y = y(x) be an E^n-valued differentiable function of x. Then

1. The gradient ∇_x ln det_θ(x) = 2 I_{θ,n} x_θ^{−1}, provided that det_θ(x) is positive (so x is invertible). More generally, ∇_x ln det_θ(y) = 2 (∇_x y)^T I_{θ,n} y_θ^{−1}, provided that det_θ(y) is positive.

2. The Hessian ∇²_{xx} ln det_θ(x) = −2 I_{θ,n} Q_{θ,x^{−1}}. Hence the gradient ∇_x x_θ^{−1} = −Q_{θ,x^{−1}}, provided that x is invertible. More generally, ∇_x y_θ^{−1} = −Q_{θ,y^{−1}} ∇_x y, provided that y is invertible.

3 Problem formulation and self-concordance properties

In CP problems, we minimize a linear function over the intersection of an affine linear manifold with the Cartesian product of circular cones:

    Q^{n_1,n_2,…,n_r}_{θ_1,θ_2,…,θ_r} := Q_{θ_1}^{n_1} × Q_{θ_2}^{n_2} × ⋯ × Q_{θ_r}^{n_r},

where n = n_1 + n_2 + ⋯ + n_r. In this paper, without loss of generality, we assume that r = 1, hence Q^{n_1,…,n_r}_{θ_1,…,θ_r} = Q_{θ_1}^{n_1} = Q_θ^n.

We use the Euclidean Jordan algebraic characterization of circular cones to define the primal-dual pair of CP problems. Let θ ∈ (0, π/2) be a given angle. Using the notation of Subsection 1.1, we define the CP problem and its dual as

    min  ⟨c, x⟩_θ                                     max  b^T y
    s.t. ⟨a^{(i)}, x⟩_θ = b_i,  i = 1, 2, …, m,       s.t. Σ_{i=1}^m y_i a^{(i)} + s = c,                (11)
         x ⪰_θ 0,                                          s ⪰_θ 0,  y ∈ ℝ^m,

where c, a^{(i)} ∈ E^n for i = 1, 2, …, m, b ∈ ℝ^m, x is the primal variable, and y and s are the dual variables. The pair (11) can be compactly rewritten as

    min  ⟨c, x⟩_θ                 max  b^T y
    s.t. Ǎ x = b,                 s.t. A^T y + s = c,                                                    (12)
         x ⪰_θ 0,                      s ⪰_θ 0,  y ∈ ℝ^m,

where A := (a^{(1)}; a^{(2)}; … ; a^{(m)}) is a matrix that maps E^n into ℝ^m, and A^T is its transpose. We call x ∈ E^n primal feasible if Ǎ x = b and x ⪰_θ 0. Similarly, (s, y) ∈ E^n × ℝ^m is called dual feasible if A^T y + s = c and s ⪰_θ 0.

Now, we consider the self-concordance properties for CP. First, we define the logarithmic barrier for the circular case and compute the partial derivatives of this barrier. We then state and prove our self-concordance result. We define the following feasibility sets:

    F⁰(P) := { x ∈ E^n : Ǎ x = b, x ≻_θ 0 },
    F⁰(D) := { (s, y) ∈ E^n × ℝ^m : A^T y + s = c, s ≻_θ 0 },
    F := F⁰(P) × F⁰(D).

Now we make two assumptions.

Assumption 1. The matrix A has full row rank.

8 Assumption. F is not empty. Assumption 1 is for convenience. Assumption requires that both primal and dual CP problems (1) contain strictly feasible solutions (i.e., positive definite feasible points), which guarantees strong duality for CPs and therefore implies that the CP problems (1) have unique solutions. We define the logarithmic barrier [13] on the interior of the feasible set of the dual CP problem (1): max b T y s.t. s(y) := A T y c 0. Let H := { y R m : s(y) 0, hence Int(H ) = { y R m : s(y) 0. Under Assumption, the set Int(H ) is nonempty. The circular logarithmic barrier for Int(H ) is the function f : Int(H ) R defined as f (y) := ln det (s(y)), y Int(H ). To state the result on the self-concordance of the logarithmic barrier f ( ) for CP, we need the following definition. Definition 1. [13, Definition.1.1] Let E be a finite-dimensional real vector space, G be an open nonempty convex subset of E, and let g be a C 3, convex mapping from G to R. Then g is called α-self-concordant on G with the parameter α > 0 if for every y G and h E, the following inequality holds: 3 yyyg(y)[h, h, h] α 1/ ( yyg(y)[h, h]) 3/. An α-self-concordant function g on G is called strongly α-self-concordant if g tends to infinity for any sequence approaching a boundary point of G. We note that in the above definition the set G is assumed to be open. However, relative openness would be sufficient to apply the definition. See also [13, Item A, Page 57]. We now present the following important result in the self-concordance of the circular logarithmic barrier. Theorem. The logarithmic barrier function f ( ) is a strongly 1-self-concordant barrier for H. Theorem implies the existence of polynomial-time interior-point algorithms for circular programming [13]. This is the substance of Sections 4 and 5. We prove Theorem in Section 6. Proving Theorem establishes the polynomial convergence results of the proposed algorithms. 4 Newton s method and commutative directions In the primal dual pair of CP problems (1), the matrix A is defined to map E n into R m, and its transpose, A T, is defined to map R m into E n such that x T (A T y) = (A x) T y. Indeed, we can prove weak and strong duality properties for the pair (1) as justification for referring to them as a primal dual pair. The following lemma generalizes [3, Lemma 15]. Lemma 4. (Complementarity conditions) Suppose that x, s 0. Then x T s = x T s = 0 if and only if (x s) = 0. 8

4 Newton's method and commutative directions

In the primal-dual pair of CP problems (12), the matrix A is defined to map E^n into ℝ^m, and its transpose A^T is defined to map ℝ^m into E^n in such a way that x^T(A^T y) = (Ax)^T y. Indeed, we can prove weak and strong duality properties for the pair (12) as justification for referring to them as a primal-dual pair. The following lemma generalizes [3, Lemma 15].

Lemma 4. (Complementarity conditions) Suppose that x, s ⪰_θ 0. Then ⟨x, s⟩_θ = x̌^T s = 0 if and only if (x ∘ s)_θ = 0.

Proof. Since

    (x ∘ s)_θ = ( x̌^T s ; x_0 s̄ + s_0 x̄ ) = ( x_0 s_0 + cot²θ x̄^T s̄ ; x_0 s̄ + s_0 x̄ ),

it is enough to prove that x̌^T s = x_0 s_0 + cot²θ x̄^T s̄ = 0 implies that x_0 s̄ + s_0 x̄ = 0. If x_0 = 0 or s_0 = 0, then x = 0 or s = 0 and hence the result is trivial. Therefore, we need only consider the case where x_0 and s_0 are strictly greater than zero. By the Cauchy-Schwarz inequality and the assumption that x ⪰_θ 0 and s ⪰_θ 0, we have

    cot²θ x̄^T s̄ ≥ −cot²θ |x̄^T s̄| ≥ −cot²θ ‖x̄‖ ‖s̄‖ ≥ −x_0 s_0.                               (13)

Consequently, x_0 s_0 + cot²θ x̄^T s̄ ≥ 0. Now, x_0 s_0 + cot²θ x̄^T s̄ = 0 if and only if cot²θ x̄^T s̄ = −x_0 s_0, hence if and only if the inequalities in (13) are satisfied as equalities. But if this is true, then either x̄ = 0 or s̄ = 0, in which case x_0 s̄ + s_0 x̄ = 0, or x̄ ≠ 0 and s̄ ≠ 0 with x̄ = −α s̄ for some α > 0 and x_0 = cot θ ‖x̄‖ = α cot θ ‖s̄‖ = α s_0, in which case again x_0 s̄ + s_0 x̄ = 0. The proof is complete.

As a result, the complementary slackness condition for the CP problems (1) is given by the equation (x ∘ s)_θ = 0. Thus, the corresponding system is

    Ǎ x = b,
    A^T y + s = c,
    (x ∘ s)_θ = 0,    x, s ⪰_θ 0.

The logarithmic barrier problem associated with the primal problem in (12) is the problem

    min  ⟨c, x⟩_θ − μ ln det_θ(x)
    s.t. Ǎ x = b,  x ≻_θ 0,                                                                    (14)

where the barrier parameter μ := x̌^T s = ⟨x, s⟩_θ > 0 is the normalized duality gap. The Lagrangian dual of (14) is the problem

    max  b^T y + μ ln det_θ(s)
    s.t. A^T y + s = c,  s ≻_θ 0,                                                              (15)

which is the logarithmic barrier problem associated with the dual problem in (12). As the CP problems (14) and (15) are, respectively, convex and concave, x and (y, s) are optimal solutions to (14) and (15), respectively, if and only if they satisfy the following optimality conditions:

    Ǎ x = b,
    A^T y + s = c,                                                                             (16)
    (x ∘ s)_θ = σ μ e,    x, s ≻_θ 0.

Now, we apply Newton's method to this system to get the following linear system:

    Ǎ Δx = b − Ǎ x,
    A^T Δy + Δs = c − s − A^T y,                                                               (17)
    (x ∘ Δs)_θ + (Δx ∘ s)_θ = σ μ e − (x ∘ s)_θ,

where (Δx, Δs, Δy) ∈ E^n × E^n × ℝ^m and σ ∈ [0, 1] is a centering parameter.

We study short-, semi-long-, and long-step algorithms associated with the following centrality measures, defined for (x, s) ∈ Int(Q_θ^n) × Int(Q_θ^n) with respect to θ:

    d_{θ,F}(x, s)  := ‖Q_{θ,x^{1/2}} s − μ e‖_{θ,F} = √( (λ_{θ,1}(Q_{θ,x^{1/2}} s) − μ)² + (λ_{θ,2}(Q_{θ,x^{1/2}} s) − μ)² ),
    d_{θ,2}(x, s)  := ‖Q_{θ,x^{1/2}} s − μ e‖_{θ,2} = max{ |λ_{θ,1}(Q_{θ,x^{1/2}} s) − μ|, |λ_{θ,2}(Q_{θ,x^{1/2}} s) − μ| },
    d_{θ,−∞}(x, s) := μ − min{ λ_{θ,1}(Q_{θ,x^{1/2}} s), λ_{θ,2}(Q_{θ,x^{1/2}} s) }.

Let γ ∈ (0, 1) be a given constant. With respect to θ, we define the following neighborhoods of the central path for CP:

    N_{θ,F}(γ)  := { (x, s, y) ∈ F⁰(P) × F⁰(D) : d_{θ,F}(x, s) ≤ γμ },
    N_{θ,2}(γ)  := { (x, s, y) ∈ F⁰(P) × F⁰(D) : d_{θ,2}(x, s) ≤ γμ },                            (18)
    N_{θ,−∞}(γ) := { (x, s, y) ∈ F⁰(P) × F⁰(D) : d_{θ,−∞}(x, s) ≤ γμ }.

Note that, by item (i) of [5, Proposition 1], Q_{θ,x^{1/2}} s and Q_{θ,s^{1/2}} x operator commute, and thus the centrality measures d_θ(x, s) and their corresponding neighborhoods N_θ(γ) are symmetric with respect to x and s. Note also that, by item (ii) of [5, Proposition 1], Q_{θ,x^{1/2}} s and Q_{θ,x̃^{1/2}} s̃ have the same eigenvalues, and all neighborhoods can be defined in terms of the eigenvalues of Q_{θ,x^{1/2}} s; therefore the three neighborhoods defined in (18) are scaling invariant, i.e., (x, s) is in a neighborhood if and only if the scaled pair (x̃, s̃) defined below is. Furthermore, it is easy to see that d_{θ,F}(x, s) ≥ d_{θ,2}(x, s) ≥ d_{θ,−∞}(x, s), which implies that N_{θ,F}(γ) ⊆ N_{θ,2}(γ) ⊆ N_{θ,−∞}(γ) ⊆ Q_θ^n × Q_θ^n.

One can see that the vectors x and s may not operator commute, hence the statement that (x ∘ s)_θ = σμe implies x = σμ s_θ^{−1} may not be true. Such a statement holds if x and s operator commute (see [14, Chapter II]). So, we need to scale the optimality conditions (17) so that the scaled vectors operator commute (or, equivalently, the scaled vectors are simultaneously decomposed). We use an effective way of scaling proposed originally by Monteiro [11] and Zhang [12] for semidefinite programming, and then generalized by Schmieta and Alizadeh [5] to general symmetric programming. From now on, with respect to θ and p ≻_θ 0, we define

    x̃ := Q_{θ,p} x,    s̃ := Q_{θ,p^{−1}} s,    c̃ := Q_{θ,p^{−1}} c,    Ã := Ǎ Q_{θ,p^{−1}},

together with the corresponding hatted and checked versions of these scaled quantities, formed exactly as in (3). Note that, using item 2 of [5, Lemma 8], we have Q_{θ,p} Q_{θ,p^{−1}} = Q_{θ,p} (Q_{θ,p})^{−1} = I_n for p ≻_θ 0. With this change of variables, Problem (14) becomes

    min  c̃^T x̃ − μ ln det_θ(x̃)
    s.t. Ã x̃ = b,  x̃ ≻_θ 0,                                                                       (19)

and Problem (15) becomes

    max  b^T y + μ ln det_θ(s̃)
    s.t. Ã^T y + s̃ = c̃,  s̃ ≻_θ 0.                                                                 (20)

Note that Problems (14) and (19) have the same minimizer, but their optimal objective values are equal up to a constant. Similarly, Problems (15) and (20) have the same maximizer, but their optimal objective values differ by a constant.
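The next sketch (my own code, repeating the small helpers from the earlier sketches so that it runs on its own) computes the three centrality measures above from the eigenvalues of Q_{θ,x^{1/2}} s.

```python
import numpy as np

def arw(x, theta):
    n = x.size
    M = x[0] * np.eye(n)
    M[0, 1:] = x[1:] / np.tan(theta) ** 2
    M[1:, 0] = x[1:]
    return M

def quad_rep(x, theta):
    return 2.0 * arw(x, theta) @ arw(x, theta) - arw(arw(x, theta) @ x, theta)

def sqrt_theta(x, theta):
    nrm = np.linalg.norm(x[1:])
    l1, l2 = x[0] + nrm / np.tan(theta), x[0] - nrm / np.tan(theta)
    u = x[1:] / nrm if nrm > 0 else np.zeros_like(x[1:])
    c1 = 0.5 * np.concatenate(([1.0], np.tan(theta) * u))
    c2 = 0.5 * np.concatenate(([1.0], -np.tan(theta) * u))
    return np.sqrt(l1) * c1 + np.sqrt(l2) * c2

def centrality(x, s, theta):
    w = quad_rep(sqrt_theta(x, theta), theta) @ s         # Q_{theta,x^{1/2}} s
    nrm = np.linalg.norm(w[1:])
    lam = np.array([w[0] + nrm / np.tan(theta), w[0] - nrm / np.tan(theta)])
    mu = lam.sum() / 2.0                                  # = <x, s>_theta, the normalized gap
    d_F = np.linalg.norm(lam - mu)
    d_2 = np.max(np.abs(lam - mu))
    d_minus_inf = mu - lam.min()
    return mu, d_F, d_2, d_minus_inf

theta = np.pi / 3
x = np.array([2.0, 0.3, 0.1])
s = np.array([1.5, -0.2, 0.4])
print(centrality(x, s, theta))
```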

We have the following lemma and proposition.

Lemma 5 (Lemma 8, [5]). Let p ∈ E^n be invertible. Then (x ∘ s)_θ = σμe if and only if (x̃ ∘ s̃)_θ = σμe.

Proposition 1. The point (x, s, y) satisfies the optimality conditions (16) if and only if the point (x̃, s̃, y) satisfies the relaxed optimality conditions

    Ã x̃ = b,
    Ã^T y + s̃ = c̃,
    (x̃ ∘ s̃)_θ = σμe,    x̃, s̃ ⪰_θ 0.

Proof. The proof follows from Lemma 5 and the fact that Q_{θ,p}(Q_θ^n) = Q_θ^n and, likewise, Q_{θ,p}(Int(Q_θ^n)) = Int(Q_θ^n), because Q_θ^n is a symmetric cone (see also [10, Theorem 6]).

As a result of Proposition 1, we conclude that the search directions (Δx, Δs, Δy) satisfy the optimality conditions (17) if and only if the scaled directions (Δx̃, Δs̃, Δy) satisfy the following relaxed optimality conditions:

    Ã Δx̃ = b − Ã x̃,
    Ã^T Δy + Δs̃ = c̃ − s̃ − Ã^T y,                                                              (21)
    (x̃ ∘ Δs̃)_θ + (Δx̃ ∘ s̃)_θ = σμe − (x̃ ∘ s̃)_θ.

To compute the Newton directions (Δx̃, Δs̃, Δy), we write the system of equations (21) in the following block matrix form:

    [ Ã            0      0           ] [ Δx̃ ]   [ r_{θ,p} ]
    [ 0            Ã^T    I_n         ] [ Δy  ] = [ r_d     ]                                  (22)
    [ Arw_θ(s̃)     0      Arw_θ(x̃)   ] [ Δs̃ ]   [ r_{θ,c} ]

where

    r_{θ,p} := b − Ã x̃,    r_d := c̃ − s̃ − Ã^T y,    r_{θ,c} := σμe − (x̃ ∘ s̃)_θ.

Solving (22) by applying block Gaussian elimination, we obtain

    Δy  = ( Ã Arw_θ^{−1}(s̃) Arw_θ(x̃) Ã^T )^{−1} ( r_{θ,p} + Ã Arw_θ^{−1}(s̃) ( Arw_θ(x̃) r_d − r_{θ,c} ) ),
    Δs̃ = r_d − Ã^T Δy,                                                                        (23)
    Δx̃ = Arw_θ^{−1}(s̃) ( r_{θ,c} − Arw_θ(x̃) Δs̃ ).

As we can see, each choice of p leads to a different search direction. As we mentioned earlier, we are interested in the class of p for which the scaled elements are simultaneously decomposed. Hence, it suffices to choose p so that x̃ and s̃ operator commute. That is, we restrict our attention to the following set of scalings:

    C_θ(x, s) := { p ≻_θ 0 : x̃ and s̃ operator commute }.

We introduce the following definition [3].

Definition 2. The set of directions (Δx, Δs, Δy) arising from those p ∈ C_θ(x, s) is called the commutative class of directions, and a direction in this class is called a commutative direction.
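The following sketch (illustrative data and helper names of my own) carries out the block elimination (23): it solves the Schur complement system for Δy and then back-substitutes for Δs̃ and Δx̃.

```python
import numpy as np

def arw(x, theta):
    n = x.size
    M = x[0] * np.eye(n)
    M[0, 1:] = x[1:] / np.tan(theta) ** 2
    M[1:, 0] = x[1:]
    return M

def newton_direction(A_t, xt, st, y, b, ct, sigma, mu, theta):
    """Scaled Newton direction from (23); A_t, xt, st, ct are the scaled (tilde) data."""
    n = xt.size
    e = np.zeros(n); e[0] = 1.0
    Ax, As = arw(xt, theta), arw(st, theta)
    rp = b - A_t @ xt                          # primal residual
    rd = ct - st - A_t.T @ y                   # dual residual
    rc = sigma * mu * e - Ax @ st              # centering residual; (x~ o s~)_theta = Arw(x~) s~
    As_inv = np.linalg.inv(As)
    M = A_t @ As_inv @ Ax @ A_t.T              # Schur complement
    dy = np.linalg.solve(M, rp + A_t @ As_inv @ (Ax @ rd - rc))
    ds = rd - A_t.T @ dy
    dx = As_inv @ (rc - Ax @ ds)
    return dx, dy, ds

# Tiny demonstration with x~ = s~ = e (already on the central path with mu = 1).
theta = np.pi / 4
A_t = np.array([[1.0, 0.2, 0.1]])
e3 = np.array([1.0, 0.0, 0.0])
dx, dy, ds = newton_direction(A_t, e3, e3, np.zeros(1), A_t @ e3, e3, 0.5, 1.0, theta)
print(dx, dy, ds)
```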

The following three choices of p are the most common in practice [5]:

Choice I (the HRVW/KSH/M direction): We may choose p = s_θ^{1/2} and obtain s̃ = e. This choice is the analogue of the XS direction in semidefinite programming, and is known as the HRVW/KSH/M direction (it was introduced independently by Helmberg, Rendl, Vanderbei, and Wolkowicz [15] and by Kojima, Shindoh, and Hara [16], and then rediscovered by Monteiro [11]).

Choice II (the dual HRVW/KSH/M direction): We may choose p = x_θ^{−1/2} and obtain x̃ = e. This choice of directions arises by switching the roles of X and S; it is the analogue of the SX direction in semidefinite programming, and is known as the dual HRVW/KSH/M direction.

Choice III (the NT direction): We choose p in such a way that x̃ = s̃. In this case we choose

    p = [ Q_{θ,x^{1/2}} ( Q_{θ,x^{1/2}} s )^{−1/2} ]^{−1/2} = [ Q_{θ,s^{−1/2}} ( Q_{θ,s^{1/2}} x )^{1/2} ]^{−1/2}.

This choice of directions was introduced by Nesterov and Todd [17, 18] and is known as the NT direction.

After we compute the Newton directions (Δx̃, Δs̃, Δy) using (23), we can compute the Newton directions (Δx, Δs, Δy) by applying the inverse scaling to (Δx̃, Δs̃, Δy) (see Algorithm 1). The resulting (Δx, Δs, Δy) is different from the direction obtained by solving system (17) directly. In fact, the former depends on p, while the latter is obtained as the special case p = e. It is clear that p = e may not be in C_θ(x, s).

5 The path-following algorithms for solving CPs

In this section we introduce short-, semi-long-, and long-step path-following algorithms for solving the CP problems (1). This class is stated formally in Algorithm 1.

Algorithm 1: The path-following algorithm for solving CP (1)

    Require: ε ∈ (0, 1), σ ∈ (0, 1), γ ∈ (0, 1), (x^(0), y^(0), s^(0)) ∈ N_θ(γ)
    set μ^(0) = (x̌^(0))^T s^(0) and k = 0
    while μ^(k) ≥ ε μ^(0) do
        choose a scaling vector p_k ∈ C_θ(x^(k), s^(k))
        let (x̃^(k), s̃^(k), y^(k)) = ( Q_{θ,p_k} x^(k), Q_{θ,p_k^{−1}} s^(k), y^(k) )
        compute (Δx̃^(k), Δs̃^(k), Δy^(k)) using (23)
        let (Δx^(k), Δs^(k), Δy^(k)) = ( Q_{θ,p_k^{−1}} Δx̃^(k), Q_{θ,p_k} Δs̃^(k), Δy^(k) )
        choose the largest step length α^(k) such that
            (x^(k+1), y^(k+1), s^(k+1)) = (x^(k), y^(k), s^(k)) + α^(k) (Δx^(k), Δy^(k), Δs^(k)) ∈ N_θ(γ)
        set μ^(k+1) = (x̌^(k+1))^T s^(k+1) and k = k + 1
    end while
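To illustrate the scaling step of Algorithm 1, the sketch below (my own code, repeating the earlier helpers so it is self-contained) computes the NT scaling vector p of Choice III for a sample pair (x, s) and verifies numerically that Q_{θ,p} x = Q_{θ,p^{−1}} s.

```python
import numpy as np

def arw(x, theta):
    n = x.size
    M = x[0] * np.eye(n)
    M[0, 1:] = x[1:] / np.tan(theta) ** 2
    M[1:, 0] = x[1:]
    return M

def quad_rep(x, theta):
    return 2.0 * arw(x, theta) @ arw(x, theta) - arw(arw(x, theta) @ x, theta)

def power_theta(x, p, theta):
    nrm = np.linalg.norm(x[1:])
    l1, l2 = x[0] + nrm / np.tan(theta), x[0] - nrm / np.tan(theta)
    u = x[1:] / nrm if nrm > 0 else np.zeros_like(x[1:])
    c1 = 0.5 * np.concatenate(([1.0], np.tan(theta) * u))
    c2 = 0.5 * np.concatenate(([1.0], -np.tan(theta) * u))
    return (l1 ** p) * c1 + (l2 ** p) * c2

theta = np.pi / 5
x = np.array([2.0, 0.4, -0.3])
s = np.array([1.0, 0.1, 0.2])

x_half = power_theta(x, 0.5, theta)                      # x^{1/2}
w = quad_rep(x_half, theta) @ s                          # Q_{x^{1/2}} s
nt_point = quad_rep(x_half, theta) @ power_theta(w, -0.5, theta)   # Q_{x^{1/2}} (Q_{x^{1/2}} s)^{-1/2}
p = power_theta(nt_point, -0.5, theta)                   # p from Choice III

x_tilde = quad_rep(p, theta) @ x                         # Q_{theta,p} x
s_tilde = quad_rep(power_theta(p, -1.0, theta), theta) @ s   # Q_{theta,p^{-1}} s
assert np.allclose(x_tilde, s_tilde)                     # the NT scaling gives x~ = s~
print(x_tilde)
```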

In Algorithm 1, ε is the desired accuracy of the solution. The particular variant of the algorithm is determined by the choice of σ and of the neighborhood, as follows:

- The short-step algorithm is obtained by choosing σ = 1 − δ/√r, with δ ∈ (0, 1), and N_{θ,F}(γ) as the neighborhood.
- The semi-long-step algorithm is obtained by choosing σ ∈ (0, 1) and N_{θ,2}(γ) as the neighborhood.
- The long-step algorithm is obtained by choosing σ ∈ (0, 1) and N_{θ,−∞}(γ) as the neighborhood.

The following theorem gives polynomial convergence results for Algorithm 1.

Theorem 3. Consider Algorithm 1. If the NT direction is used at every iteration, then the short-step algorithm terminates in O(√r log ε^{−1}) iterations, and the semi-long- and long-step algorithms terminate in O(r log ε^{−1}) iterations. If the HRVW/KSH/M direction or the dual HRVW/KSH/M direction is used at every iteration, then the short-step algorithm terminates in O(√r log ε^{−1}) iterations, the semi-long-step algorithm terminates in O(r log ε^{−1}) iterations, and the long-step algorithm terminates in O(r^{3/2} log ε^{−1}) iterations.

Theorem 3 is a consequence of Theorem 2 (which is proved in the remaining part of this paper) and of [5, Theorem 37], where the underlying symmetric cone is taken to be the circular cone.

6 Proving self-concordance for the circular logarithmic barrier

In this section, we prove Theorem 2. To do so, we first obtain representations of the gradient, the Hessian and the third directional derivative of the logarithmic barrier in the circular case. Recall that the circular logarithmic barrier is defined as f_θ(y) = −ln det_θ(s(y)) for all y ∈ Int(H_θ), where s(y) = c − A^T y and H_θ = { y ∈ ℝ^m : s(y) ⪰_θ 0 }. Throughout the following proof, we let y ∈ ℝ^m be such that s(y) ≻_θ 0.

Proof of Theorem 2. Note that ∇_y s(y) = ∇_y (c − A^T y) = −A^T. Then, by using item 1 of Lemma 3 and applying the chain rule, we have

    ∇_y f_θ(y) = −∇_y ln det_θ(s(y)) = −2 (∇_y s)^T I_{θ,n} s_θ^{−1} = 2 Ǎ s_θ^{−1},

and consequently, using item 2 of Lemma 3 and applying the chain rule again, we also have

    ∇²_{yy} f_θ(y) = 2 Ǎ ∇_y s_θ^{−1} = −2 Ǎ Q_{θ,s^{−1}} ∇_y s = 2 Ǎ Q_{θ,s^{−1}} A^T.

Let

    H = H(y) := ∇²_{yy} f_θ(y) = 2 Ǎ Q_{θ,s^{−1}} A^T.                                         (24)

Note that the Hessian matrix H is positive definite under Assumption 1 and the assumption that s = s(y) ≻_θ 0. Let h ∈ ℝ^m with h ≠ 0, let ā^{(i)} := Q_{θ,s^{−1/2}} a^{(i)} and Ā := Q_{θ,s^{−1/2}} A^T (so that the columns of Ā are the vectors ā^{(i)}), and let p := Āh be a vector in E^n with eigenvalues λ_{θ,1}(p) and λ_{θ,2}(p). Using (24) and the fact that I_{θ,n} Q_{θ,z} = Q_{θ,z}^T I_{θ,n} for every z ∈ E^n (so that Q_{θ,s^{−1/2}} is self-adjoint with respect to ⟨·, ·⟩_θ), we have

    h^T H h = 2 ⟨Āh, Āh⟩_θ = ‖Āh‖²_{θ,F} = ‖p‖²_{θ,F} = λ_{θ,1}(p)² + λ_{θ,2}(p)².              (25)

Let u and v be two vectors in E^n such that u is invertible. We have

    ∇_u Q_{θ,u^{−1}}[v] = ∇_u ( 2 Arw_θ(u^{−1})² − Arw_θ(u_θ^{−2}) )[v]
                        = −2 [ Arw_θ(Q_{θ,u^{−1}} v) Arw_θ(u^{−1}) + Arw_θ(u^{−1}) Arw_θ(Q_{θ,u^{−1}} v) − Arw_θ((Q_{θ,u^{−1}} v ∘ u^{−1})_θ) ]
                        = −2 Q_{θ,(Q_{θ,u^{−1}} v), u^{−1}}.

Note that s_θ^{−1/2} is the unique positive definite vector whose square is s_θ^{−1}. Plugging s(y) in for u in the previous equation and applying the chain rule (with ∇_y s = −A^T) yields, for u ∈ ℝ^m,

    ∇_y Q_{θ,s(y)^{−1}}[u] = 2 Q_{θ,(Q_{θ,s^{−1}} A^T u), s^{−1}} = 2 Q_{θ,s^{−1/2}} Arw_θ(Āu) Q_{θ,s^{−1/2}}.        (26)

Using (24) and (26), we have

    ∇_y H[u] = 4 Ǎ Q_{θ,s^{−1/2}} Arw_θ(Āu) Q_{θ,s^{−1/2}} A^T.

Then

    ∇_y (h^T H h)[u] = 4 (Āh)^T I_{θ,n} Arw_θ(Āu) (Āh) = 4 p̌^T ( Arw_θ(Āu) p ).

Therefore,

    ∇³_{yyy} f_θ[h, h, h] = ∇_y (h^T H h)[h] = 4 p̌^T ( Arw_θ(p) p ).                             (27)

It is immediate from the fact that ‖Arw_θ(p)‖_{θ,2} = ‖p‖_{θ,2}, the fact that ‖p‖_{θ,2} ≤ ‖p‖_{θ,F}, and (27) that

    |∇³_{yyy} f_θ[h, h, h]| = 4 |p̌^T ( Arw_θ(p) p )|
                            ≤ 4 ‖Arw_θ(p)‖_{θ,2} ( p̌^T p )
                            = 2 ‖p‖_{θ,2} ( λ_{θ,1}(p)² + λ_{θ,2}(p)² )
                            = 2 ‖p‖_{θ,2} h^T H h = 2 ‖p‖_{θ,2} ∇²_{yy} f_θ[h, h]
                            ≤ 2 ‖p‖_{θ,F} ∇²_{yy} f_θ[h, h] = 2 ( ∇²_{yy} f_θ[h, h] )^{3/2},      (28)

where we used (25) and (24) to obtain the last three equalities. It follows from (28) that f_θ satisfies the inequality of Definition 1 with α = 1 on Int(H_θ). Moreover,

    ∇_y f_θ(y)^T ( ∇²_{yy} f_θ(y) )^{−1} ∇_y f_θ(y) = 2 e^T Ā ( Ā^T I_{θ,n} Ā )^{−1} Ā^T e ≤ 2 e^T e = 2,

where the inequality holds because I_{θ,n}^{1/2} Ā ( Ā^T I_{θ,n} Ā )^{−1} Ā^T I_{θ,n}^{1/2} is an orthogonal projection and I_{θ,n}^{±1/2} e = e; hence the barrier parameter of f_θ is bounded by 2. In addition, it is clear that f_θ tends to infinity for any sequence approaching a boundary point of H_θ. Thus, the logarithmic barrier f_θ is a strongly 1-self-concordant barrier for H_θ. The result is established.

7 Concluding remarks

Contrary to earlier beliefs in the optimization community, it has been seen that circular programming is symmetric programming with a special structure. There is a particular Euclidean Jordan algebra that underlies the analysis of interior point algorithms for circular programming. In this paper, we have used the machinery of this particular Euclidean Jordan algebra to derive polynomial-time path-following algorithms for circular programming problems. We have also proved the polynomial convergence results of the proposed algorithms by showing that the circular logarithmic barrier is a strongly self-concordant barrier.

References

[1] J.C. Zhou, J.S. Chen, Properties of circular cone and spectral factorization associated with circular cone, J. Nonlinear Convex Anal. 14 (2013).

[2] J. Zhou, J. Chen, H. Hung, Circular cone convexity and some inequalities associated with circular cones, J. Inequal. Appl. 2013:571 (2013).

[3] F. Alizadeh, D. Goldfarb, Second-order cone programming, Math. Program. Ser. B 95 (2003) 3-51.

[4] M.J. Todd, Semidefinite optimization, Acta Numer. 10 (2001) 515-560.

[5] S.H. Schmieta, F. Alizadeh, Extension of primal-dual interior point methods to symmetric cones, Math. Program. Ser. A 96 (2003) 409-438.

[6] C.H. Ko, J.S. Chen, Optimal grasping manipulation for multifingered robots using semismooth Newton method, Math. Probl. Eng. 2013 (2013) 1-9.

[7] S. Boyd, B. Wegbreit, Fast computation of optimal contact forces, IEEE Trans. Robot. 23 (2007).

[8] B. Leon, A. Morales, J. Sancho-Bru, Robot grasping foundations, in: From Robot to Human Grasping Simulation, Springer International Publishing, 2014.

[9] J.F. Bonnans, H.R. Cabrera, Perturbation analysis of second-order cone programming problems, Math. Program. Ser. B 104 (2005).

[10] B. Alzalg, The circular cone: A new symmetric cone. Submitted for publication.

[11] R.D. Monteiro, Primal-dual path-following algorithms for semidefinite programming, SIAM J. Optim. 7 (1997) 663-678.

[12] Y. Zhang, On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming, SIAM J. Optim. 8 (1998) 365-386.

[13] Yu.E. Nesterov, A.S. Nemirovskii, Interior Point Polynomial Algorithms in Convex Programming, SIAM Publications, Philadelphia, PA, 1994.

[14] J. Faraut, A. Korányi, Analysis on Symmetric Cones, Oxford University Press, Oxford, UK, 1994.

[15] C. Helmberg, F. Rendl, R.J. Vanderbei, H. Wolkowicz, An interior-point method for semidefinite programming, SIAM J. Optim. 6 (1996) 342-361.

[16] M. Kojima, S. Shindoh, S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim. 7 (1997) 86-125.

[17] Yu.E. Nesterov, M.J. Todd, Self-scaled barriers and interior-point methods for convex programming, Math. Oper. Res. 22 (1997) 1-42.

[18] Yu.E. Nesterov, M.J. Todd, Primal-dual interior-point methods for self-scaled cones, SIAM J. Optim. 8 (1998) 324-364.


More information

The eigenvalue map and spectral sets

The eigenvalue map and spectral sets The eigenvalue map and spectral sets p. 1/30 The eigenvalue map and spectral sets M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

Second-order cone programming

Second-order cone programming Math. Program., Ser. B 95: 3 51 (2003) Digital Object Identifier (DOI) 10.1007/s10107-002-0339-5 F. Alizadeh D. Goldfarb Second-order cone programming Received: August 18, 2001 / Accepted: February 27,

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

An E cient A ne-scaling Algorithm for Hyperbolic Programming

An E cient A ne-scaling Algorithm for Hyperbolic Programming An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point

More information

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

The Q Method for Second-Order Cone Programming

The Q Method for Second-Order Cone Programming The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method

More information

Strict diagonal dominance and a Geršgorin type theorem in Euclidean

Strict diagonal dominance and a Geršgorin type theorem in Euclidean Strict diagonal dominance and a Geršgorin type theorem in Euclidean Jordan algebras Melania Moldovan Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

Introduction to Nonlinear Stochastic Programming

Introduction to Nonlinear Stochastic Programming School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Conic Linear Programming. Yinyu Ye

Conic Linear Programming. Yinyu Ye Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture

More information

The Q Method for Symmetric Cone Programming

The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programming Farid Alizadeh Yu Xia October 5, 010 Communicated by F. Potra Abstract The Q method of semidefinite programming, developed by Alizadeh, Haeberly and Overton,

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions

Local Self-concordance of Barrier Functions Based on Kernel-functions Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi

More information