On polyhedral approximations of the second-order cone
Aharon Ben-Tal and Arkadi Nemirovski
May 30, 1999


Abstract

We demonstrate that a conic quadratic problem

    min_x { e^T x : Ax >= b, ||A_l x - b_l||_2 <= c_l^T x - d_l, l = 1,...,m },    ||y||_2 = sqrt(y^T y),    (CQP)

is polynomially reducible to Linear Programming. We demonstrate this by constructing, for every ε in (0, 1/2], an LP program (explicitly given in terms of ε and the data of (CQP))

    min_{x,u} { e^T x : P (x; u) + p >= 0 }    (LP)

with the following properties:

(i) the number dim x + dim u of variables and the number dim p of constraints in (LP) do not exceed

    O(1) [ dim x + dim b + Σ_{l=1}^m dim b_l ] ln(1/ε);

(ii) every feasible solution x to (CQP) can be extended to a feasible solution (x, u) to (LP);

(iii) if (x, u) is feasible for (LP), then x satisfies the ε-relaxed constraints of (CQP), namely

    Ax >= b,  ||A_l x - b_l||_2 <= (1 + ε)[c_l^T x - d_l],  l = 1,...,m.

1 Introduction

The initial motivation for the question posed and resolved in this paper originated from the practical need to solve numerically a conic quadratic problem

    min_x { e^T x : Ax >= b, ||A_l x - b_l||_2 <= c_l^T x - d_l, l = 1,...,m },    (CQP)

||y||_2 = sqrt(y^T y) being the Euclidean norm. There are two major sources of recent interest in these problems:

- (CQP) is a natural form of several important applied problems, e.g., problems with Coulomb friction in contact mechanics, see [3, 5, 6, 10];
- Conic quadratic constraints have very powerful expressive abilities, which allows one to cast a wide variety of nonlinear convex optimization problems in the form of (CQP), see, e.g., [6, 7] and Examples 1-5 below.

1) Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel. Funded by the Germany-Israeli Foundation for R&D, contract No. I /95, and the Israel Ministry of Science, grant #

As a matter of fact, the case of (CQP) is covered by the existing general theory of polynomial-time interior-point (IP) methods (e.g., the general theory for well-structured convex programs [7] and the Nesterov-Todd theory of IP algorithms for conic problems on self-scaled cones [8, 9]). These theories yield both algorithms and complexity bounds; in view of these bounds, (CQP) with m constraints and n variables is not more difficult than a linear programming problem of similar sizes (in both cases, the problem can be solved within an accuracy δ in O(1) sqrt(m) ln(O(1/δ)) steps of a polynomial-time IP method, with no more than O(n^2 (m + n)) arithmetic operations per step). Moreover, there exists software implementing the theoretical schemes. However, to the best of our knowledge, the IP software for (CQP) available at the moment, although capable of handling problems with tens of thousands of conic quadratic constraints, imposes severe restrictions on the design dimension of the problem (a few thousand variables). In this respect the state of the art in numerical processing of CQP's is incomparable to that in LP, where we routinely solve problems with even hundreds of thousands of variables and constraints. Given this huge difference, the question arises: can we process CQP's via LP techniques? Mathematically, the question is whether we can approximate a CQP problem by an LP one without increasing dramatically the sizes of the problem, and this is the question we address in this paper.

We start with a precise formulation of the question. Let ε > 0, and let

    L^k = { (y, t) in R^k × R : t >= ||y||_2 }

be the (k + 1)-dimensional Lorentz cone. We define a polyhedral ε-approximation of L^k as a linear mapping Π_k(y, t, u) : R^k × R × R^{p_k} → R^{q_k} such that

(i) if (y, t) in L^k, then there exists u in R^{p_k} such that Π_k(y, t, u) >= 0;

(ii) if (y, t) in R^k × R is such that Π_k(y, t, u) >= 0 for some u, then ||y||_2 <= (1 + ε) t.
Geometrically: the system of homogeneous linear constraints Π_k(y, t, u) >= 0 defines a polyhedral cone K in the space of (y, t, u)-variables; we say that Π_k(·) is a polyhedral ε-approximation of L^k if the projection of K onto the space of (y, t)-variables contains L^k and is contained in the (1 + ε)-extension of L^k, i.e., in the image of L^k under the linear mapping (y, t) → (y, (1 + ε) t) (note that for ε close to 0 this mapping is nearly the identity).

Let k_l, l = 1,...,m, be the row sizes of the matrices A_l in (CQP). Given polyhedral ε-approximations Π_{k_l}(·) of the Lorentz cones L^{k_l}, l = 1,...,m, we can approximate (CQP) by the linear programming problem

    min_{x, {u^l}} { e^T x : Ax >= b, Π_{k_l}( A_l x - b_l, c_l^T x - d_l, u^l ) >= 0, l = 1,...,m }.    (LP)

From the definition of a polyhedral approximation it follows that (CQP) and (LP) are linked as follows: both problems have the same objectives, and the projection X̂ of the feasible set of (LP) onto the x-space is in between the feasible set of (CQP) and that of its ε-relaxation

    min { e^T x : Ax >= b, ||A_l x - b_l||_2 <= (1 + ε)[c_l^T x - d_l], l = 1,...,m }.    (CQP_ε)

We see that if ε is close to 0, then (LP) is a good approximation of (CQP).

Note that the sizes of (LP) are larger than those of (CQP); specifically, the design dimension of (LP) is

    N = n + Σ_{l=1}^m p_{k_l}    [n = dim x],

and the total number of linear constraints in (LP) is

    M = k_0 + Σ_{l=1}^m q_{k_l},

where k_0 is the row size of A; recall that p_k and q_k are the dimension of the u-vector and the image dimension of the polyhedral approximation Π_k(·,·,·), respectively. The weakest necessary requirement for the proposed scheme to be computationally meaningful is that the size M + N of (LP) should be polynomial in the size n + Σ_{l=0}^m k_l of (CQP) and should grow moderately as ε approaches 0. We arrive at the following question:

(?) Can we construct a polyhedral ε-approximation of L^k with the sizes p_k + q_k growing moderately as k grows and as ε approaches 0?

The straightforward approach, when L^k is approximated by a circumscribed polyhedral cone with a sufficiently large number of facets, does not work: the required number of facets blows up exponentially as k grows (even with ε = 1, a rough lower bound on the number of required facets is exp{k/8}). Surprisingly, the situation is not as bad as suggested by the latter observation. We prove the following result:

Theorem 1.1 For every k and every ε in (0, 1], L^k admits a polyhedral ε-approximation with

    p_k + q_k <= O(1) k ln(2/ε)    (1)

(from now on, all O(1)'s are absolute constants).

Theorem 1.1 establishes an interesting (and, to the best of our knowledge, new) geometric fact and as such seems to be important in its own right. As far as the computational consequences are concerned, the situation is as follows. It was already mentioned that theoretically problem (CQP) with n variables and m constraints is not more difficult than a linear programming problem of the same sizes. When replacing (CQP) with its polyhedral approximation, we increase both m and n, thus losing in the complexity.
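For a feel of the gap between the two counts just mentioned, the toy sketch below (illustrative, not from the paper) contrasts the exp{k/8} lower bound on the facet count of a directly circumscribed polyhedral cone with the O(1) k ln(2/ε) size guaranteed by Theorem 1.1; the constant C = 10 is a hypothetical stand-in for the unspecified absolute constant O(1).

```python
import math

# Crude lower bound on the number of facets of a circumscribed
# polyhedral cone approximating L^k, valid already at eps = 1.
def naive_facets_lower_bound(k: int) -> float:
    return math.exp(k / 8.0)

# Size p_k + q_k from Theorem 1.1; C is a hypothetical absolute constant.
def ben_tal_nemirovski_size(k: int, eps: float, C: float = 10.0) -> float:
    return C * k * math.log(2.0 / eps)

for k in (16, 64, 256, 1024):
    print(k, naive_facets_lower_bound(k), ben_tal_nemirovski_size(k, 1e-6))
```

Even at the very demanding accuracy ε = 10^{-6}, the Theorem 1.1 size stays modest while the naive facet count explodes with k.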
Note, however, that theoretical complexity analysis and computational practice are not the same: the performance and reliability of modern commercial LP software are much better than those of the CQP solvers available at the moment, and this (hopefully temporary) difference can make the use of polyhedral approximation practically meaningful, especially when the approximation does not increase the sizes of the problem too dramatically. The latter happens, e.g., when the row sizes k_l of the conic quadratic constraints in (CQP) are small (recall that we need O(k_l ln(1/ε)) linear constraints and extra variables to approximate the constraint ||A_l x - b_l||_2 <= c_l^T x - d_l). In this respect it should be mentioned that there are important sources of conic quadratic problems with fairly small k_l's, e.g., problems with Coulomb friction (k_l <= 3) [3, 5, 6, 10] and truss topology design problems (k_l <= 2) [1]. We list below several other examples where the polyhedral approximation may be of practical use.

Example 1: Quadratically constrained convex quadratic programs. The problem

    min_x { x^T Q_0^T Q_0 x - 2 b_0^T x : x^T Q_i^T Q_i x - 2 b_i^T x <= c_i, i = 1,...,m }

can be posed equivalently as the conic quadratic program

    min_{x,t} { t : ||( Q_0 x ; (t + 2 b_0^T x - 4)/4 )||_2 <= (t + 2 b_0^T x + 4)/4,
                ||( Q_i x ; (2 b_i^T x + c_i - 4)/4 )||_2 <= (2 b_i^T x + c_i + 4)/4, i = 1,...,m }.

If the number of truly quadratic (i.e., with Q_i ≠ 0) constraints is relatively small, a polyhedral approximation of these constraints does not increase significantly the sizes of the problem. We see that linear programming software can be easily adapted to handle mixed linearly-quadratically constrained convex quadratic problems with few quadratic constraints. The accuracy of the resulting approximation, as well as those of the approximations in Examples 2-4 below, will be discussed in Section 4.

Example 2: Geometric means. Let π_i = p_i/p, p_i, p in N_+, be positive rationals, i = 1,...,n, with Σ_{i=1}^n π_i <= 1. The hypograph

    H = { (x, t) in R^{n+1} : x >= 0, t <= Π_{i=1}^n x_i^{π_i} }

of the concave function f(x) = Π_{i=1}^n x_i^{π_i} : R^n_+ → R can be represented via conic quadratic inequalities. Indeed, let us choose the smallest integer k such that 2^k >= p and consider the following system of constraints in the variables y = (x, t, τ, {y_j^l}):

    y^0 ≡ (y_1^0,..., y_{2^k}^0) = ( x_1,...,x_1 [p_1 times], ..., x_n,...,x_n [p_n times], τ,...,τ [2^k - p times], 1,...,1 [p - Σ_i p_i times] ),
    y^0 >= 0,
    0 <= y_j^l,  (y_j^l)^2 <= y_{2j-1}^{l-1} y_{2j}^{l-1},  j = 1,...,2^{k-l},  l = 1,...,k-1,
    0 <= τ,  τ^2 <= y_1^{k-1} y_2^{k-1},
    t <= τ.    (2)

Note that (2) is in fact a system of conic quadratic inequalities, since

    { (u, v, w) : v, w >= 0, u^2 <= v w } = { (u, v, w) : ||( u ; (v - w)/2 )||_2 <= (v + w)/2 }.

Moreover, it is immediately seen that the projection of the solution set X of (2) onto the (x, t)-space is exactly the set H. Replacing every conic quadratic constraint in (2) by its tight polyhedral approximation, let us denote by Y the solution set of the resulting system of linear inequalities. This set lives
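The set identity used above to show that (2) is a system of conic quadratic inequalities can be sanity-checked numerically. The sketch below (illustrative, not from the paper) tests both membership conditions in squared form on random points, skipping a thin band around the common boundary where floating-point rounding could flip either test.

```python
import math
import random

# {(u,v,w): v,w >= 0, u^2 <= vw}  vs  {(u,v,w): ||(u,(v-w)/2)||_2 <= (v+w)/2}
def in_rotated_cone(u, v, w):
    return v >= 0 and w >= 0 and u * u <= v * w + 1e-12

def in_lorentz_form(u, v, w):
    # squared form of the Lorentz-cone inequality, plus v + w >= 0
    return v + w >= 0 and u * u + ((v - w) / 2.0) ** 2 <= ((v + w) / 2.0) ** 2 + 1e-12

random.seed(0)
for _ in range(10000):
    u, v, w = (random.uniform(-2, 2) for _ in range(3))
    if abs(u * u - v * w) > 1e-9:  # stay away from the shared boundary
        assert in_rotated_cone(u, v, w) == in_lorentz_form(u, v, w)
print("identity verified on random samples")
```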

in a certain R^N which contains, as a subspace, the space of y-variables, and the projection of Y onto this subspace is a tight outer approximation of X. It follows that the projection of Y onto the subspace of (x, t)-variables is a tight approximation of the hypograph H of f. As a result, we get the possibility to process via LP optimization programs with linear objectives and constraints of the form

    Σ_j a_j Π_l x_l^{π_lj} >= b    [a_j >= 0, Σ_l π_lj <= 1, π_lj positive rationals].

Example 3: Inverse geometric means. Let π_i = p_i/p, p_i, p in N_+, be positive rationals, i = 1,...,n. The epigraph

    E = { (x, t) in R^{n+1} : x > 0, t >= Π_{i=1}^n x_i^{-π_i} }

of the function f(x) = Π_{i=1}^n x_i^{-π_i} can be represented via conic quadratic inequalities. Indeed, choosing the smallest integer k such that 2^k >= p + Σ_i p_i, consider the following system of constraints in the variables y = (x, t, {y_j^l}):

    y^0 ≡ (y_1^0,..., y_{2^k}^0) = ( x_1,...,x_1 [p_1 times], ..., x_n,...,x_n [p_n times], t,...,t [p times], 1,...,1 [2^k - p - Σ_i p_i times] ),
    y^0 >= 0,
    0 <= y_j^l,  (y_j^l)^2 <= y_{2j-1}^{l-1} y_{2j}^{l-1},  j = 1,...,2^{k-l},  l = 1,...,k-1,
    1 <= y_1^{k-1} y_2^{k-1}.    (3)

As in (2), the system (3) is in fact a system of conic quadratic inequalities; it is immediately seen that the projection of the solution set X of this system onto the space of (x, t)-variables is exactly the epigraph E of f. As in Example 2, we can use tight polyhedral approximations of the conic quadratic inequalities in (3) to get a tight polyhedral approximation of the epigraph of f. Thus, we get the possibility to process via linear programming optimization programs with linear objectives and constraints of the form

    Σ_j a_j Π_l x_l^{-π_lj} <= b    [a_j >= 0, π_lj positive rationals].

Example 4: Geometric programming. To process geometric programming problems via LP, it suffices to build a tight polyhedral approximation of the epigraph of the exponential function. Assume that the exponents a_i^T x of the exponential monomials exp{a_i^T x} arising in the problem vary in a given segment: -R <= a_i^T x <= R (R < ∞).
Note that one can enforce this assumption by adding to the original problem the constraints |a_i^T x| <= R, and that from the computational viewpoint these constraints are indeed a must: e.g., a SUN computer believes that exp{750} = ∞ and exp{-750} = 0, so that in actual computations one can safely set R = 750. We come to the following problem:

Given the set E_R = { (x, t) in R^2 : |x| <= R, exp{x} <= t } and an ε > 0, find a (1 + ε)-polyhedral approximation of the set, i.e., a system P(x, t, u) >= 0 of affine constraints in the variables x, t and additional variables u such that

(i) if (x, t) in E_R, then there exists u such that P(x, t, u) >= 0;

(ii) if P(x, t, u) >= 0 for some u, then |x| <= R and exp{x} <= (1 + ε) t.

A reasonably good solution to this problem is given by Theorem 1.1. First we observe that E_R can be approximated by a system of quadratic constraints in x, t and additional variables u_1,...,u_{3+q}, with properly chosen q = O(ln(R/ε)). Indeed, the system in question, after eliminating the variables u_1 and u_2, reduces to

    |x| <= R,
    g_q(x) ≡ 1 + (2^{-q} x) + (2^{-q} x)^2/2 + (2^{-q} x)^3/6 + (2^{-q} x)^4/24 <= u_3,
    u_{2+i}^2 <= u_{3+i},  i = 1,...,q,
    u_{3+q} <= t,    (4)

where each of the constraints u_{2+i}^2 <= u_{3+i} is a conic quadratic inequality:

    ||( 2 u_{2+i} ; u_{3+i} - 1 )||_2 <= u_{3+i} + 1  ⇔  u_{2+i}^2 <= u_{3+i}.

The projection of the solution set of the latter system onto the plane of (x, t)-variables is the set

    E_R^q = { (x, t) in R^2 : |x| <= R, (g_q(x))^{2^q} <= t },

which, for large q, is a good approximation of E_R due to

    (g_q(x))^{2^q} ≈ (exp{2^{-q} x})^{2^q} = exp{x}.

Replacing the conic quadratic constraints in the system (4) by their tight polyhedral approximations, given by Theorem 1.1, one obtains a tight polyhedral approximation of E_R.

Example 5: Robust counterparts of uncertain LP programs with ellipsoidal uncertainty (see [2]). This is a conic quadratic program obtained from an original LP program by replacing every inequality constraint affected by data uncertainty with a conic quadratic constraint. The motivation behind this procedure implies that the stabilizing effect of the procedure remains essentially the same when the conic quadratic constraints are further replaced with their polyhedral ε-approximations with a moderate (not necessarily close to 0) value of ε,
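The approximation behind (4) is plain repeated squaring: g_q is the fourth-order Taylor polynomial of the exponential evaluated at 2^{-q} x, and raising it to the power 2^q recovers exp{x}. A minimal numerical sketch (the values of q below are illustrative; the text only prescribes q = O(ln(R/ε))):

```python
import math

# Fourth-order Taylor polynomial of exp at 0.
def g(z):
    return 1.0 + z + z ** 2 / 2.0 + z ** 3 / 6.0 + z ** 4 / 24.0

# exp(x) ~ g(x / 2^q) ^ (2^q): repeated squaring of a local approximation.
def exp_approx(x, q):
    return g(x / 2 ** q) ** (2 ** q)

for q in (4, 8, 12):
    print(q, exp_approx(5.0, q), math.exp(5.0))
```

Increasing q shrinks the Taylor error faster than the 2^q-fold powering amplifies it, so the approximation tightens rapidly.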

say, ε = 1, thus allowing a moderate growth of the number of constraints as compared to the original uncertain LP problem.

The rest of the paper is organized as follows. The simple construction of a concrete polyhedral approximation leading to Theorem 1.1 is presented in Section 2. In Section 3 we demonstrate that the upper bound in (1) is, in a sense, the best possible. Section 4 contains some concluding remarks.

2 The construction

Let ε in (0, 1] and a positive integer k be given. We intend to build a polyhedral ε-approximation of the Lorentz cone L^k. Without loss of generality we may assume that k is an integer power of 2: k = 2^θ, θ in N.

Tower of variables. The first step of our construction is to represent a conic quadratic constraint

    sqrt( y_1^2 + ... + y_k^2 ) <= t    (CQI)

of dimension k + 1 by a system of conic quadratic constraints of dimension 3 each. Namely, let us call the original y-variables variables of generation 0 and let us split them into pairs (y_1, y_2),..., (y_{k-1}, y_k). We associate with each pair its successor, an additional variable of generation 1. We split the resulting 2^{θ-1} variables of generation 1 into pairs and associate with every pair its successor, an additional variable of generation 2, and so on; after θ - 1 steps we end up with two variables of generation θ - 1. Finally, the only variable of generation θ is the variable t from (CQI). To introduce convenient notation, let us denote by y_i^l the i-th variable of generation l, so that y_1^0,..., y_k^0 are our original variables y_1,..., y_k, y_1^θ ≡ t is the original t-variable, and the parents of y_i^l are the two variables y_{2i-1}^{l-1}, y_{2i}^{l-1}. Note that the total number of variables in this tower of variables is 2k - 1.
It is clear that the system of constraints

    sqrt( [y_{2i-1}^{l-1}]^2 + [y_{2i}^{l-1}]^2 ) <= y_i^l,  i = 1,...,2^{θ-l},  l = 1,...,θ,    (5)

is a representation of (CQI) in the sense that a collection (y_1^0 ≡ y_1,..., y_k^0 ≡ y_k, y_1^θ ≡ t) can be extended to a solution of (5) if and only if (y, t) solves (CQI). Moreover, let Π^l(x_1, x_2, x_3, u^l), l = 1,...,θ, be polyhedral ε_l-approximations of the cone

    L^2 = { (x_1, x_2, x_3) : sqrt(x_1^2 + x_2^2) <= x_3 }.

Consider the system of linear constraints in the variables y_i^l, u_i^l:

    Π^l( y_{2i-1}^{l-1}, y_{2i}^{l-1}, y_i^l, u_i^l ) >= 0,  i = 1,...,2^{θ-l},  l = 1,...,θ.    (6)

Writing down this system of linear constraints as Π(y, t, u) >= 0, where Π is linear in its arguments, y = (y_1^0,..., y_k^0), t = y_1^θ, and u is the collection of all u_i^l, l = 1,...,θ, and all y_i^l, l = 1,...,θ - 1, we immediately conclude that Π is a polyhedral ε-approximation of L^k with

    ε = Π_{l=1}^θ (1 + ε_l) - 1.    (7)
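A minimal sketch of the tower (illustrative, not from the paper): taking every inequality in (5) tight, the single generation-θ variable equals ||y||_2, and the total number of variables is k + k/2 + ... + 1 = 2k - 1.

```python
import math

def tower(y):
    # k must be a power of 2, as assumed in the text.
    assert (len(y) & (len(y) - 1)) == 0 and len(y) > 0
    cur, total = list(y), len(y)
    while len(cur) > 1:
        # each generation: pairwise Euclidean norms of the previous one
        cur = [math.hypot(cur[2 * i], cur[2 * i + 1]) for i in range(len(cur) // 2)]
        total += len(cur)
    return cur[0], total  # (root value = ||y||_2, total variable count)

root, nvars = tower([3.0, 4.0, 0.0, 0.0])
print(root, nvars)  # root = 5.0, nvars = 4 + 2 + 1 = 7 = 2*4 - 1
```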

In view of this observation, we may focus on building polyhedral approximations of the Lorentz cone L^2.

A polyhedral approximation of L^2. We are about to show that such an approximation is given by the following system of linear inequalities (the positive integer ν is a parameter of the construction):

    (a)  ξ^0 >= |x_1|,  η^0 >= |x_2|;
    (b)  ξ^j = cos(π/2^{j+1}) ξ^{j-1} + sin(π/2^{j+1}) η^{j-1},
         η^j >= | -sin(π/2^{j+1}) ξ^{j-1} + cos(π/2^{j+1}) η^{j-1} |,  j = 1,...,ν;    (8)
    (c)  ξ^ν <= x_3,  η^ν <= tan(π/2^{ν+1}) ξ^ν.

Note that (8) can be straightforwardly written as a system of linear homogeneous inequalities Π^{(ν)}(x_1, x_2, x_3, u) >= 0, where u is the collection of the 2(ν + 1) variables ξ^j, η^j, j = 0,...,ν.

Proposition 2.1 Π^{(ν)} is a polyhedral δ(ν)-approximation of L^2 = { (x_1, x_2, x_3) : sqrt(x_1^2 + x_2^2) <= x_3 } with

    δ(ν) = 1/cos(π/2^{ν+1}) - 1 = O(4^{-ν}).    (9)

Proof. We should prove that

(i) if (x_1, x_2, x_3) in L^2, then the triple (x_1, x_2, x_3) can be extended to a solution of (8);

(ii) if a triple (x_1, x_2, x_3) can be extended to a solution of (8), then ||(x_1, x_2)||_2 <= (1 + δ(ν)) x_3.

Proof of (i): Given (x_1, x_2, x_3) in L^2, let us set ξ^0 = |x_1|, η^0 = |x_2|, thus ensuring (8.a). Note that ||(ξ^0, η^0)||_2 = ||(x_1, x_2)||_2 and that the point P_0 = (ξ^0, η^0) belongs to the first quadrant. Now, for j = 1,...,ν let us set

    ξ^j = cos(π/2^{j+1}) ξ^{j-1} + sin(π/2^{j+1}) η^{j-1},
    η^j = | -sin(π/2^{j+1}) ξ^{j-1} + cos(π/2^{j+1}) η^{j-1} |,

thus ensuring (8.b), and let P_j = (ξ^j, η^j). The point P_j is obtained from P_{j-1} by the following construction: we rotate P_{j-1} clockwise by the angle φ_j = π/2^{j+1}, thus getting a point Q_{j-1}; if this point is in the upper half-plane, we set P_j = Q_{j-1}; otherwise P_j is the reflection of Q_{j-1} with respect to the x-axis.
From this description it is clear that

(I) ||P_j||_2 = ||P_{j-1}||_2, so that all vectors P_j are of the same Euclidean norm as P_0, i.e., of the norm ||(x_1, x_2)||_2;

(II) since the point P_0 is in the first quadrant, the point Q_0 is in the angle -π/4 <= arg(P) <= π/4, so that P_1 is in the angle 0 <= arg(P) <= π/4. The latter relation, in turn, implies that Q_1 is in the angle -π/8 <= arg(P) <= π/8, whence P_2 is in the angle 0 <= arg(P) <= π/8. Similarly, P_3 is in the angle 0 <= arg(P) <= π/16, and so on: P_j is in the angle 0 <= arg(P) <= π/2^{j+1}.

By (I), ξ^ν <= ||P_ν||_2 = ||(x_1, x_2)||_2 <= x_3, so that the first inequality in (8.c) is satisfied. By (II), P_ν is in the angle 0 <= arg(P) <= π/2^{ν+1}, so that the second inequality in (8.c) is also satisfied. We have extended a point from L^2 to a solution of (8).
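The rotate-and-reflect recursion from the proof of (i) is easy to simulate. The sketch below (illustrative only) runs the recursion with (8.b) taken tight and checks the three conclusions: ξ^ν <= ||(x_1, x_2)||_2, the norm is recovered up to the factor 1 + δ(ν) of (9), and the final point lies in the angle 0 <= arg <= π/2^{ν+1}.

```python
import math
import random

def run_tower_of_rotations(x1, x2, nu):
    # P_0 = (|x_1|, |x_2|); rotate by phi_j = pi/2^(j+1), reflect upward.
    xi, eta = abs(x1), abs(x2)
    for j in range(1, nu + 1):
        phi = math.pi / 2 ** (j + 1)
        xi, eta = (math.cos(phi) * xi + math.sin(phi) * eta,
                   abs(-math.sin(phi) * xi + math.cos(phi) * eta))
    return xi, eta

random.seed(2)
nu = 6
delta = 1.0 / math.cos(math.pi / 2 ** (nu + 1)) - 1.0
for _ in range(1000):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    norm = math.hypot(x1, x2)
    xi, eta = run_tower_of_rotations(x1, x2, nu)
    assert xi <= norm + 1e-12                          # feasible for (8.c) with x3 = norm
    assert norm <= (1.0 + delta) * xi + 1e-12          # quality bound (9)
    assert eta <= math.tan(math.pi / 2 ** (nu + 1)) * xi + 1e-12
print("delta(nu) =", delta)
```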

Proof of (ii): Assume that (x_1, x_2, x_3) can be extended to a solution (x_1, x_2, x_3, {ξ^j, η^j}_{j=0}^ν) of (8). Let us set P_j = (ξ^j, η^j). From (8.a, b) it follows that all vectors P_j are nonnegative. We have ||P_0||_2 >= ||(x_1, x_2)||_2 by (8.a), and (8.b) implies ||P_j||_2 >= ||P_{j-1}||_2. Thus, ||P_ν||_2 >= ||(x_1, x_2)||_2. On the other hand, by (8.c) one has

    ||P_ν||_2 <= (1/cos(π/2^{ν+1})) ξ^ν <= (1/cos(π/2^{ν+1})) x_3,

so that ||(x_1, x_2)||_2 <= (1 + δ(ν)) x_3, as claimed.

Proof of Theorem 1.1: Specifying in (6) the mappings Π^l(·) as Π^{(ν_l)}(·), we conclude that for every collection of positive integers ν_1,...,ν_θ the following system of linear inequalities describes a polyhedral approximation Π_{ν_1,...,ν_θ}(y, t, u) of L^k, k = 2^θ:

    (a_{l,i})  ξ^0_{l,i} >= |y_{2i-1}^{l-1}|,  η^0_{l,i} >= |y_{2i}^{l-1}|;
    (b_{l,i})  ξ^j_{l,i} = cos(π/2^{j+1}) ξ^{j-1}_{l,i} + sin(π/2^{j+1}) η^{j-1}_{l,i},
               η^j_{l,i} >= | -sin(π/2^{j+1}) ξ^{j-1}_{l,i} + cos(π/2^{j+1}) η^{j-1}_{l,i} |,  j = 1,...,ν_l;    (10)
    (c_{l,i})  ξ^{ν_l}_{l,i} <= y_i^l,  η^{ν_l}_{l,i} <= tan(π/2^{ν_l+1}) ξ^{ν_l}_{l,i};
               i = 1,...,2^{θ-l},  l = 1,...,θ.

The approximation possesses the following properties:

1. The dimension of the u-vector (comprised of all variables in (10) except y = {y_i^0}_{i=1}^k and t = y_1^θ) is

    p(k, ν_1,...,ν_θ) <= k + O(1) Σ_{l=1}^θ 2^{θ-l} ν_l;

2. The image dimension of Π_{ν_1,...,ν_θ}(·) (i.e., the number of linear inequalities plus twice the number of linear equations in (10)) is

    q(k, ν_1,...,ν_θ) <= O(1) Σ_{l=1}^θ 2^{θ-l} ν_l;

3. The quality β of the approximation is

    β = β(ν_1,...,ν_θ) = Π_{l=1}^θ ( 1/cos(π/2^{ν_l+1}) ) - 1.

Given ε in (0, 1] and setting

    ν_l = O(1) l ln(2/ε),  l = 1,...,θ,

with a properly chosen absolute constant O(1), we ensure that

    β(ν_1,...,ν_θ) <= ε,  p(k, ν_1,...,ν_θ) <= O(1) k ln(2/ε),  q(k, ν_1,...,ν_θ) <= O(1) k ln(2/ε),

as claimed in Theorem 1.1.
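One can get rough numbers out of this construction directly. The sketch below is a simplification: it uses the same ν at every level, rather than the level-dependent ν_l chosen in the proof, so its counts carry an extra logarithmic factor; it picks the smallest uniform ν with β <= ε and counts the extra variables of the unsimplified system (10).

```python
import math

# Quality of one 3-dimensional block, as in (9).
def delta(nu):
    return 1.0 / math.cos(math.pi / 2 ** (nu + 1)) - 1.0

def sizes(k, eps):
    theta = k.bit_length() - 1            # k = 2**theta assumed
    nu = 1
    # uniform nu across levels: beta = (1 + delta(nu))^theta - 1 <= eps
    while (1.0 + delta(nu)) ** theta - 1.0 > eps:
        nu += 1
    blocks = k - 1                        # 2^(theta-1) + ... + 1 blocks in the tower
    extra_vars = blocks * 2 * (nu + 1)    # 2(nu+1) variables xi^j, eta^j per block
    return nu, extra_vars

for k in (64, 1024):
    for eps in (1e-1, 1e-6):
        print(k, eps, sizes(k, eps))
```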

Simplifying the polyhedral approximation. It is easily seen that the approximation (10) can be economized, reducing the image dimension q_k = q(k, ν_1,...,ν_θ) and the number p_k = p(k, ν_1,...,ν_θ) of extra variables without influencing the quality of the approximation. Specifically,

(A) The variables ξ^1_{l,i},...,ξ^{ν_l}_{l,i} in (10) are given by linear equations; one can use these equations to represent ξ^j_{l,i}, j >= 1, as linear combinations of ξ^0_{l,i} and the η_{l,i}-variables, thus eliminating the variables ξ^1_{l,i},...,ξ^{ν_l}_{l,i} along with the corresponding linear equations;

(B) For l = 2,...,θ, one can replace the inequalities (10.a_{l,i}) with

    ξ^0_{l,i} >= y_{2i-1}^{l-1},  η^0_{l,i} >= y_{2i}^{l-1},    (11)

since the system we end up with after this modification ensures the nonnegativity of the variables y_i^{l-1}, l = 2,...,θ;

(C) In the system of linear inequalities obtained from (10) after modifications (A) and (B), every variable y_i^l, l = 1,...,θ - 1, enters the inequalities

    (c_{l,i}):  ξ^{ν_l}_{l,i} <= y_i^l,    (11):  y_i^l <= ξ^0_{l+1,(i+1)/2} (i odd),  y_i^l <= η^0_{l+1,i/2} (i even);    (12)

it is immediately seen that we preserve the approximation property and the quality of the approximation by converting these inequalities into equations. After this modification, we can eliminate from the system of constraints the inequalities (11), and eliminate the variables ξ^0_{l,i} and η^0_{l,i}, l = 2,...,θ (these variables become linear forms of the remaining ones).

To get an impression of the complexity of our polyhedral approximation scheme, here are the resulting numbers of extra variables p_k and linear constraints q_k in a polyhedral ε-approximation of a conic quadratic constraint ||y||_2 <= t with dim y = k: p_k ≈ 5.2 k ln(1/ε) for ε = 10^{-1} and ≈ 4.7 k ln(1/ε) for ε = 10^{-3}; for ε = 10^{-6}, p_k ≈ 4.1 k ln(1/ε) with q_k ≈ 9.0 k ln(1/ε); and for still smaller ε, p_k ≈ 4.0 k ln(1/ε) with q_k ≈ 8.3 k ln(1/ε).

3 A lower bound

We have demonstrated that for every ε in (0, 1], the Lorentz cone L^k admits a polyhedral ε-approximation of size p_k + q_k <= O(1) k ln(2/ε).
We are about to prove that the order of this upper bound in fact cannot be improved:

Proposition 3.1 Let k be a positive integer, ε in (0, 0.5], and let

    Π(y, t, u) : R^k × R × R^p → R^q

be a polyhedral ε-approximation of L^k. Then

    q >= O(1) k ln(1/ε)    (13)

with a positive absolute constant O(1).

Proof. The projection L̂^k of the polyhedral cone K = { (y, t, u) : Π(y, t, u) >= 0 } onto the plane of (y, t)-variables does not contain lines; replacing, if necessary, u with its projection on a properly chosen subspace of R^p, we may assume that the cone K itself does not contain lines, so that K is the conic hull of its extreme rays. Let N be the number of extreme rays of K; one clearly has N <= 2^q. Now, since K is the conic hull of N rays, L̂^k is the conic hull of N rays R_1,...,R_N, and every one of these rays intersects the hyperplane H = { t = 1 } in the (y, t)-space. Now let us look at the set G = { y : (y, 1) in L̂^k }. Since Π(·) is a polyhedral ε-approximation of L^k, the set G contains the unit k-dimensional Euclidean ball B and is contained in the ball (1 + ε)B. On the other hand, G is the convex hull of the N points y_1,...,y_N, the y-components of the intersections R_i ∩ H. It is well known (see, e.g., [4]) that in the situation in question one has

    N >= exp{ O(1) k ln(1/ε) },  0 < ε <= 0.5,    (14)

where O(1) is a positive absolute constant.

To make the presentation self-contained, here is a derivation of (14). We first observe that the N Euclidean balls B_i centered at the points y_i, of radius ρ̄ = sqrt(2ε(1 + ε)), cover the boundary S of (1 + ε)B. Indeed, let y in S and let ρ be the distance from y to the nearest of the y_i, i = 1,...,N. The convex hull of y_1,...,y_N is contained in the convex hull of the set { z : ||z||_2 <= 1 + ε, ||z - y||_2 >= ρ }, so that the latter convex hull contains B; via elementary geometry, this implies that ρ <= ρ̄. Comparing the (k - 1)-dimensional areas of the sphere S and of the spherical caps B_i ∩ S, we conclude that the inclusion S ⊆ ∪_i B_i implies (14). Combining (14) with the inequality N <= 2^q, we deduce that q >= O(1) k ln(1/ε), as claimed.
4 Concluding remarks

With our approach, we are able to associate with a given conic quadratic problem (CQP) and a given ε in (0, 1] an LP program (LP_ε) of the same (up to a factor O(ln(1/ε))) size such that the projection of the feasible set of the latter problem onto the x-space is in between the feasible set of (CQP) and the feasible set of the ε-relaxation (CQP_ε) of (CQP). Thus, (LP_ε) is a good approximation of (CQP), provided that the feasible sets of (CQP) and (CQP_ε) are close to each

other. Note that the latter is not necessarily the case; one can easily find an example where (CQP) is infeasible, while all problems (CQP_ε), ε > 0, are feasible. There is, however, a simple sufficient condition ensuring that the feasible sets of (CQP) and (CQP_ε) are O(ε)-close to each other:

Proposition 4.1 Assume that (CQP) is

(i) strictly feasible: there exist x̄ and r > 0 such that

    A x̄ >= b,  ||A_l x̄ - b_l||_2 <= [c_l^T x̄ - d_l] - r,  l = 1,...,m;

(ii) semi-bounded: there exists R such that

    Ax >= b,  ||A_l x - b_l||_2 <= c_l^T x - d_l, l = 1,...,m  ⇒  c_l^T x - d_l <= R,  l = 1,...,m.

Then for every ε > 0 such that γ(ε) ≡ Rε/r < 1 one has

    γ(ε) x̄ + (1 - γ(ε)) Feas((CQP_ε)) ⊆ Feas((CQP)) ⊆ Feas((CQP_ε)),    (15)

where Feas((P)) denotes the feasible set of a problem (P).

Proof. We already know that the right inclusion in (15) holds true. In order to prove the left inclusion, let y be a feasible solution to (CQP_ε); we should prove that the point x = (1 - γ(ε)) y + γ(ε) x̄ is feasible for (CQP). Since both x̄ and y satisfy the linear constraints Ax >= b, all we should prove is that

    ||A_l x - b_l||_2 <= c_l^T x - d_l,  l = 1,...,m.    (16)

For δ in [0, 1], let x_δ = (1 - δ) y + δ x̄. By (i) and since y is feasible for (CQP_ε), we have

    ||A_l x̄ - b_l||_2 <= [c_l^T x̄ - d_l] - r,
    ||A_l y - b_l||_2 <= (1 + ε) [c_l^T y - d_l] ≡ (1 + ε) t_l,  l = 1,...,m.

Setting δ = max_l ε t_l / (r + ε t_l), we conclude that

    ||A_l x_δ - b_l||_2 <= [c_l^T x_δ - d_l] + [ (1 - δ) ε t_l - δ r ] <= [c_l^T x_δ - d_l],  l = 1,...,m,

and clearly A x_δ >= b. Thus, x_δ is feasible for (CQP). It follows from (ii) that

    [c_l^T x_δ - d_l] = δ [c_l^T x̄ - d_l] + (1 - δ) t_l <= R,  l = 1,...,m;

since x̄ is feasible for (CQP), we have [c_l^T x̄ - d_l] >= 0, whence (1 - δ) t_l <= R for all l. Recalling the origin of δ, we conclude that δ = ε t / (r + ε t) for some t satisfying (1 - δ) t <= R. From these relations, and in view of γ(ε) ≡ Rε/r < 1, we conclude that t <= R / (1 - γ(ε)), whence δ <= γ(ε). Since δ <= γ(ε) <= 1 and both x_δ and x_1 = x̄ are feasible for (CQP), so is x = x_{γ(ε)}.
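The elementary step at the end of this proof, that δ = εt/(r + εt) together with (1 - δ)t <= R forces δ <= γ(ε), can be sanity-checked numerically; the sketch below is illustrative only, and the values of r, R, ε are arbitrary.

```python
# If delta = eps*t/(r + eps*t) and (1 - delta)*t <= R, then delta <= gamma,
# where gamma = R*eps/r < 1, as in the proof of Proposition 4.1.
def check(r, R, eps, samples=100000):
    gamma = R * eps / r
    assert gamma < 1
    t_max = R / (1.0 - gamma)          # the bound on t derived in the proof
    for i in range(samples + 1):
        t = t_max * i / samples
        delta = eps * t / (r + eps * t)
        if (1 - delta) * t <= R + 1e-12:
            assert delta <= gamma + 1e-12
    return True

print(check(r=1.0, R=5.0, eps=0.1))
```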
Proposition 4.1 allows us to quantify the actual quality of polyhedral approximations of conic quadratic problems, in particular those mentioned in Examples 1-4 of the Introduction. In these examples, we are in fact interested in building a polyhedral approximation of the epigraph

    Epi(f) = { (x, t) : x in Dom f, t >= f(x) }

of a particular convex function f(x), i.e., in building a system of linear inequalities

    P x + t p + Q u + q >= 0    (S)

in such a way that the projection of the solution set of the system onto the (x, t)-space is the epigraph of a certain function f̂ close to f. Specifically, Examples 1-4 deal with the following functions f:

Example 1: f(x) = x^T Q^T Q x : R^n → R;
Example 2: f(x) = Π_{i=1}^n x_i^{π_i} : R^n_+ → R, with π_i = p_i/p, p_i, p in N_+, Σ_i π_i <= 1;
Example 3: f(x) = Π_{i=1}^n x_i^{-π_i} : int R^n_+ → R, with π_i = p_i/p, p_i, p in N_+;
Example 4: f(x) = exp{x}.

In all these examples, the closeness between f̂ and f that we can efficiently ensure is uniform closeness on compact subsets of the domain of the target function f. Combining straightforwardly Theorem 1.1, Proposition 4.1 and the approximation schemes mentioned in the Introduction in connection with Examples 1-4, we come to the following results:

Example 1: for every ε in (0, 1], R > 0 one can explicitly construct a system (S) of the size

    Size(S) ≡ dim x + dim u + dim q <= O(n) ln( (1 + R n)(1 + n max_{i,j} |Q_{ij}|) / ε )    [n = dim x]

in such a way that the associated function f̂ satisfies

    |f̂(x) - f(x)| <= ε    (x : ||x|| <= R);

Example 2: for every ε in (0, 1], R > 0 one can explicitly construct a system (S) of the size

    Size(S) <= O(p) ln( p(1 + R) / ε )

in such a way that the associated function f̂ satisfies

    |f̂(x) - f(x)| <= ε    (x >= 0 : ||x|| <= R);

Example 3: for every ε in (0, 1], R > 0 one can explicitly construct a system (S) of the size

    Size(S) <= O( p + Σ_i p_i ) ln( (p + Σ_i p_i)(1 + R) / ε )

in such a way that the associated function f̂ satisfies

    |f̂(x) - f(x)| <= ε    (x : R^{-1} <= x_i <= R, i = 1,...,n);

Example 4: for every ε in (0, 1], R > 0 one can explicitly construct a system (S) of the size

    Size(S) <= O(1) [ R + ln(2/ε) ] ln( (1 + R) / ε )

in such a way that the associated function f̂ satisfies

    f̂(x) <= exp{x} <= (1 + ε) f̂(x)    (x : |x| <= R).

References

[1] Ben-Tal, A., A. Nemirovski (1994). Potential reduction polynomial time method for truss topology design. SIAM J. on Optimization.
[2] Ben-Tal, A., A. Nemirovski (1998). Robust convex optimization. Mathematics of Operations Research.
[3] Christensen, P.W., J.-S. Pang (1999). Frictional contact algorithms based on semismooth Newton methods. In: M. Fukushima and L. Qi, eds., Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers.
[4] Gruber, P.M. (1993). Aspects of approximation of convex bodies. In: P.M. Gruber and J.M. Wills, eds., Handbook of Convex Geometry, Volume A, Elsevier Science.
[5] Lo, G. (1996). Complementarity Problems in Robotics. Ph.D. Dissertation, Department of Mathematical Sciences, The Johns Hopkins University.
[6] Lobo, M., L. Vandenberghe, S. Boyd, H. Lebret (1998). Applications of second-order cone programming. Linear Algebra and its Applications.
[7] Nesterov, Yu., A. Nemirovski (1994). Interior-Point Polynomial Algorithms in Convex Programming. SIAM Series in Applied Mathematics, SIAM, Philadelphia.
[8] Nesterov, Yu., M.J. Todd (1997). Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research.
[9] Nesterov, Yu., M.J. Todd (1998). Primal-dual interior-point methods for self-scaled cones. SIAM J. on Optimization.
[10] Pang, J.-S., D.E. Stewart (1999). A unified approach to frictional contact problems. International Journal of Engineering Science.


More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

ADVANCES IN CONVEX OPTIMIZATION: CONIC PROGRAMMING. Arkadi Nemirovski

ADVANCES IN CONVEX OPTIMIZATION: CONIC PROGRAMMING. Arkadi Nemirovski ADVANCES IN CONVEX OPTIMIZATION: CONIC PROGRAMMING Arkadi Nemirovski Georgia Institute of Technology and Technion Israel Institute of Technology Convex Programming solvable case in Optimization Revealing

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

BCOL RESEARCH REPORT 07.04

BCOL RESEARCH REPORT 07.04 BCOL RESEARCH REPORT 07.04 Industrial Engineering & Operations Research University of California, Berkeley, CA 94720-1777 LIFTING FOR CONIC MIXED-INTEGER PROGRAMMING ALPER ATAMTÜRK AND VISHNU NARAYANAN

More information

Robust conic quadratic programming with ellipsoidal uncertainties

Robust conic quadratic programming with ellipsoidal uncertainties Robust conic quadratic programming with ellipsoidal uncertainties Roland Hildebrand (LJK Grenoble 1 / CNRS, Grenoble) KTH, Stockholm; November 13, 2008 1 Uncertain conic programs min x c, x : Ax + b K

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 18

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 18 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 18 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 31, 2012 Andre Tkacenko

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO QUESTION BOOKLET EECS 227A Fall 2009 Midterm Tuesday, Ocotober 20, 11:10-12:30pm DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO You have 80 minutes to complete the midterm. The midterm consists

More information

4. Convex optimization problems

4. Convex optimization problems Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Supplement: Universal Self-Concordant Barrier Functions

Supplement: Universal Self-Concordant Barrier Functions IE 8534 1 Supplement: Universal Self-Concordant Barrier Functions IE 8534 2 Recall that a self-concordant barrier function for K is a barrier function satisfying 3 F (x)[h, h, h] 2( 2 F (x)[h, h]) 3/2,

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Exercises: Brunn, Minkowski and convex pie

Exercises: Brunn, Minkowski and convex pie Lecture 1 Exercises: Brunn, Minkowski and convex pie Consider the following problem: 1.1 Playing a convex pie Consider the following game with two players - you and me. I am cooking a pie, which should

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Constructing self-concordant barriers for convex cones

Constructing self-concordant barriers for convex cones CORE DISCUSSION PAPER 2006/30 Constructing self-concordant barriers for convex cones Yu. Nesterov March 21, 2006 Abstract In this paper we develop a technique for constructing self-concordant barriers

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

Handout 6: Some Applications of Conic Linear Programming

Handout 6: Some Applications of Conic Linear Programming ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

IE 521 Convex Optimization

IE 521 Convex Optimization Lecture 1: 16th January 2019 Outline 1 / 20 Which set is different from others? Figure: Four sets 2 / 20 Which set is different from others? Figure: Four sets 3 / 20 Interior, Closure, Boundary Definition.

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Mathematics 530. Practice Problems. n + 1 }

Mathematics 530. Practice Problems. n + 1 } Department of Mathematical Sciences University of Delaware Prof. T. Angell October 19, 2015 Mathematics 530 Practice Problems 1. Recall that an indifference relation on a partially ordered set is defined

More information

Lecture: Convex Optimization Problems

Lecture: Convex Optimization Problems 1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

Notation and Prerequisites

Notation and Prerequisites Appendix A Notation and Prerequisites A.1 Notation Z, R, C stand for the sets of all integers, reals, and complex numbers, respectively. C m n, R m n stand for the spaces of complex, respectively, real

More information

The Split Closure of a Strictly Convex Body

The Split Closure of a Strictly Convex Body The Split Closure of a Strictly Convex Body D. Dadush a, S. S. Dey a, J. P. Vielma b,c, a H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive

More information

ROBUST CONVEX OPTIMIZATION

ROBUST CONVEX OPTIMIZATION ROBUST CONVEX OPTIMIZATION A. BEN-TAL AND A. NEMIROVSKI We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet

More information

Barrier Method. Javier Peña Convex Optimization /36-725

Barrier Method. Javier Peña Convex Optimization /36-725 Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Lifting for conic mixed-integer programming

Lifting for conic mixed-integer programming Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

Convex Optimization in Classification Problems

Convex Optimization in Classification Problems New Trends in Optimization and Computational Algorithms December 9 13, 2001 Convex Optimization in Classification Problems Laurent El Ghaoui Department of EECS, UC Berkeley elghaoui@eecs.berkeley.edu 1

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Convex optimization problems. Optimization problem in standard form

Convex optimization problems. Optimization problem in standard form Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane

Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane Alberto Del Pia Department of Industrial and Systems Engineering & Wisconsin Institutes for Discovery, University of Wisconsin-Madison

More information

The nonsmooth Newton method on Riemannian manifolds

The nonsmooth Newton method on Riemannian manifolds The nonsmooth Newton method on Riemannian manifolds C. Lageman, U. Helmke, J.H. Manton 1 Introduction Solving nonlinear equations in Euclidean space is a frequently occurring problem in optimization and

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Robust Farkas Lemma for Uncertain Linear Systems with Applications

Robust Farkas Lemma for Uncertain Linear Systems with Applications Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Shaowei Lin 9 Dec 2010 Abstract Recently, Helton and Nie [3] showed that a compact convex semialgebraic set S is a spectrahedral shadow if the

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lecture 1: Convex Sets January 23

Lecture 1: Convex Sets January 23 IE 521: Convex Optimization Instructor: Niao He Lecture 1: Convex Sets January 23 Spring 2017, UIUC Scribe: Niao He Courtesy warning: These notes do not necessarily cover everything discussed in the class.

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

Robust linear optimization under general norms

Robust linear optimization under general norms Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn

More information

8. Geometric problems

8. Geometric problems 8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set Löwner-John ellipsoid

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Convex Optimization & Parsimony of L p-balls representation

Convex Optimization & Parsimony of L p-balls representation Convex Optimization & Parsimony of L p -balls representation LAAS-CNRS and Institute of Mathematics, Toulouse, France IMA, January 2016 Motivation Unit balls associated with nonnegative homogeneous polynomials

More information

Convex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs)

Convex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs) ORF 523 Lecture 8 Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Any typos should be emailed to a a a@princeton.edu. 1 Outline Convexity-preserving operations Convex envelopes, cardinality

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

The Split Closure of a Strictly Convex Body

The Split Closure of a Strictly Convex Body The Split Closure of a Strictly Convex Body D. Dadush a, S. S. Dey a, J. P. Vielma b,c, a H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Lecture 24: August 28

Lecture 24: August 28 10-725: Optimization Fall 2012 Lecture 24: August 28 Lecturer: Geoff Gordon/Ryan Tibshirani Scribes: Jiaji Zhou,Tinghui Zhou,Kawa Cheung Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems
