Strong formulations for convex functions over nonconvex sets. Daniel Bienstock and Alexander Michalka, Columbia University, New York.
November 2011 version.

1 Introduction

In this paper we derive strong linear inequalities for systems representing convex quadratics over nonconvex sets, and we present, in several cases, convex hull characterizations by polynomially separable linear inequalities in the original space. A class of examples we consider is of the form

{ (x, q) ∈ R^d × R : q ≥ Q(x), x ∈ R^d \ int(P) },

where Q(x) : R^d → R is a positive definite quadratic function, P ⊆ R^d is full-dimensional and convex, and int denotes interior. Particular cases we consider are those where P is a polyhedron or an ellipsoid. We similarly characterize sets of the form

{ (x, w, z) ∈ R^d × R × R : z ≥ F(x), w ≤ G(x) },

where both F and G are positive definite quadratics.

1.1 Preliminaries

Several important classes of optimization problems include nonlinearities in the objective or constraints. Often this results in nonconvexities, and a current research thrust addresses the computation of global bounds and exact solution techniques for such problems. The field is not new; one of the earliest results is the characterization of the convex hull of a box-constrained bilinear form x_1 x_2 [21], [2]. Recently, some interesting new results in this direction have been obtained [20]; [9] contains a survey. Also see [10], [5], [26], [27]. A frequently used approach has been to borrow ideas from the field of mixed-integer programming, even when no binary variables are present. The concept of lifting arose in (linear) mixed-integer programming [22]. It has also been extended to the continuous setting [11], [15], [19]. Lifting techniques are compelling in that, when applicable, they provide a computationally practicable way to strengthen valid inequalities. An interesting use of this idea appears in [25], which approximates, using lifted linear inequalities, SDP relaxations of quadratically constrained sets; [7] lifts tangent
inequalities to approximate multilinear functions. Our main approach also makes use of lifting. Our contributions are that in each case we characterize the set of nondominated valid linear inequalities for the appropriate region, and that we show that these are lifted linear inequalities, which furthermore are efficiently separable, and in the original space of variables. In some cases we obtain closed-form expressions for the lifting coefficients.

Our results focus on quadratics. A great deal of attention has, in fact, recently been focused on problems involving quadratics, and a number of deep results have followed, which provide alternative (but related) methodologies for addressing the problems we consider; see, e.g., [3], [8], [4]. A frequently applied technique is the Reformulation-Linearization Technique (RLT) and its semidefinite programming extensions; see, e.g., [28], [29].

This paper is organized as follows. The polyhedral case is considered in Section 2.3; Section 2.4 addresses the ellipsoidal case. Sections 3 and 3.1 present results for indefinite quadratics. Sections 2, 2.1 and 2.2 introduce some of our general ideas.
2 The positive-definite case

We consider sets of the form

S = { (x, q) ∈ R^d × R : q ≥ Q(x), x ∈ R^d \ int(P) },   (1)

where Q(x) : R^d → R is a positive-definite quadratic function, and each connected component of P ⊆ R^d is a homeomorph of either a half-plane or a ball. Thus, each connected component of P is a closed set with nonempty interior. Since Q(x) is positive definite, we may assume without loss of generality that Q(x) = ||x||² (achieved via a linear transformation). For any y ∈ R^d, the linearization inequality

q ≥ 2yᵀ(x − y) + ||y||² = 2yᵀx − ||y||²   (2)

is valid for S. We seek ways of making this inequality stronger.

Definition 2.1. Given µ ∈ R^d and R ≥ 0, we write B(µ, R) = { x ∈ R^d : ||x − µ||² ≤ R } (so the second argument is the squared radius).

2.1 Geometric characterization

Let x ∈ R^d. Then x ∈ R^d \ int(P) if and only if

||x − µ||² ≥ ρ  for each ball B(µ, ρ) ⊆ P.   (3)

In terms of our set S, we can rewrite (3) as

q ≥ 2µᵀx − ||µ||² + ρ  for each ball B(µ, ρ) ⊆ P.   (4)

On the other hand, suppose

δq ≥ 2βᵀx + β₀   (5)

is valid for S. Since R^d \ P contains points with arbitrarily large norm, it follows that δ ≥ 0. Suppose that δ > 0: then without loss of generality δ = 1. Further, given x ∈ R^d, (5) is satisfied by (x, q) with q ≥ ||x||² if and only if it is satisfied by (x, ||x||²), and so if and only if

||x − β||² ≥ ||β||² + β₀.   (6)

Since (5) is valid for S, we have that (6) holds for each x ∈ R^d \ int(P). Assuming further that (5) is not trivial, that is to say, that it is violated by some (z, ||z||²) with z ∈ int(P), we must therefore have ||β||² + β₀ > 0 and B(β, ||β||² + β₀) ⊆ P; i.e., statement (6) is an example of (3). Below we discuss several ways of sharpening these observations.

2.2 Lifted first-order cuts

Let y ∈ P. Then we can always find a ball B(µ, ρ) ⊆ P such that ||µ − y||² = ρ, possibly by setting µ = y and ρ = 0.

Definition 2.2. Given y ∈ P, we say P is locally flat at y if there is a ball B(µ, ρ) ⊆ P with ||µ − y||² = ρ and ρ > 0.

Suppose P is locally flat at y and let B(µ, ρ) be as in the definition. Let aᵀx ≥ a₀ be a supporting hyperplane for B(µ, ρ) at y, i.e. aᵀy = a₀ and aᵀx ≥ a₀ for all x ∈ B(µ, ρ). We claim that

q ≥ 2yᵀx − ||y||² + 2α(aᵀx − a₀)   (7)

is valid for S if α ≥ 0 is small enough. To see this, note that since aᵀx ≥ a₀ supports B(µ, ρ) at y, it follows that µ − y = ᾱa for some positive ᾱ, i.e.,

B(y + ᾱa, ᾱ²||a||²) = B(µ, ρ).   (8)
Now, assume α ≤ ᾱ. Then (v, ||v||²) violates (7) iff

||v||² < 2yᵀv − ||y||² + 2α(aᵀv − a₀)   (9)
      = 2(y + αa)ᵀv − ||y + αa||² + α²||a||² + 2α(yᵀa − a₀)   (10)
      = 2(y + αa)ᵀv − ||y + αa||² + α²||a||²,  that is,   (11)

v ∈ int B(y + αa, α²||a||²) ⊆ B(µ, ρ)   (12)

since α ≤ ᾱ. In other words, for small enough, but positive, α, (7) is valid for S. In fact, the above derivation implies a stronger statement: since aᵀx ≥ a₀ supports B(y + αa, α²||a||²) at y for any α > 0, it follows that (7) is valid for S iff B(y + αa, α²||a||²) ⊆ P. Define

α̂ = α̂(P, y) = sup{ α : (7) is valid }.   (13)

If there exists v ∉ P such that aᵀv > a₀, then the assumptions on P imply that α̂ < +∞ and the sup is a max. If, on the other hand, aᵀv ≤ a₀ for all v ∉ P, then α̂ = +∞ (and, of course, aᵀx ≤ a₀ is valid for S). In the former case, we call

q ≥ 2yᵀx − ||y||² + 2α̂(aᵀx − a₀)   (14)

a lifted first-order inequality.

Theorem 2.3. Any linear inequality

δq ≥ βᵀx + β₀   (15)

valid for S either has δ = 0 (in which case the inequality is valid for R^d \ P), or δ > 0 and (15) is dominated by a lifted first-order inequality or by a linearization inequality (2).

Proof. Consider a valid inequality (15). As above we either have δ = 0, in which case we are done, or without loss of generality δ = 1, and by increasing β₀ if necessary we have that (15) is tight at some point (y, ||y||²) ∈ R^d × R. Write

βᵀx + β₀ = 2yᵀx − ||y||² + 2γᵀx + γ₀   (16)

for appropriate γ and γ₀. Suppose first that y ∈ int(R^d \ P). Then (γ, γ₀) = (0, 0), or else (15) would not be valid in a neighborhood of y. Thus, (15) is a linearization inequality. Suppose next that y ∈ P, and that (15) is not a linearization inequality, i.e. (γ, γ₀) ≠ (0, 0). We can write (15) as

q ≥ 2yᵀx − ||y||² + 2γᵀx + γ₀ = 2(y + γ)ᵀx − ||y + γ||² + 2γᵀy + ||γ||² + γ₀.   (17)

Since (15) is not a linearization inequality, and is tight at (y, ||y||²), there exist points (v, ||v||²) (with v near y) which do not satisfy it. Necessarily, any such v must not lie in R^d \ P (since (15) is valid for S). Using (17), this happens iff

||v||² < 2(y + γ)ᵀv − ||y + γ||² + 2γᵀy + ||γ||² + γ₀,  that is,   (18)

v ∈ int B( y + γ, 2γᵀy + ||γ||² + γ₀ ).   (19)

In other words, the set of points that violate (15) is the interior of some ball B with positive radius, which necessarily must be contained in P. Since (y, ||y||²) satisfies (15) with equality, y is on the boundary of B. Thus, P is locally flat at y; writing aᵀx = a₀ for the hyperplane orthogonal to γ through y, we have that (15) is dominated by the resulting lifted first-order inequality. ∎
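The lifting coefficient α̂ of (13) has a concrete geometric meaning: it is determined by the largest ball inside P tangent at y. The sketch below (not from the paper; the helper `alpha_hat` and the two-facet slab instance are illustrative assumptions) computes α̂ for a polyhedron P = {x : Ax ≤ b} by checking ball containment facet by facet, and spot-checks the validity claim around (7):

```python
import numpy as np

def alpha_hat(A, b, y, a):
    """Largest alpha such that the ball B(y + alpha*a, alpha^2*||a||^2)
    is contained in P = {x : A x <= b}; cf. definition (13).
    The ball fits inside facet j iff a_j^T(y + alpha*a) + alpha*||a||*||a_j|| <= b_j."""
    na = np.linalg.norm(a)
    best = np.inf
    for aj, bj in zip(A, b):
        denom = aj @ a + na * np.linalg.norm(aj)
        if denom > 1e-12:              # facets "parallel" to the tangent never bind
            best = min(best, (bj - aj @ y) / denom)
    return best

# Toy slab P = {x in R^2 : 0 <= x1 <= 1}; y lies on the facet x1 = 0,
# with inward normal a, so a^T y = a0 = 0.
A = np.array([[-1.0, 0.0], [1.0, 0.0]])
b = np.array([0.0, 1.0])
y, a = np.array([0.0, 0.3]), np.array([1.0, 0.0])
ah = alpha_hat(A, b, y, a)             # half the slab width: 1/2

# Cut (7) at the infeasible-side point x = (1.1, 0), with q = ||x||^2:
x = np.array([1.1, 0.0])
q = x @ x
rhs = lambda al: 2 * y @ x - y @ y + 2 * al * (a @ x)
assert abs(ah - 0.5) < 1e-9
assert q >= rhs(ah)                    # valid at alpha-hat ...
assert q < rhs(0.6)                    # ... violated for a larger coefficient
```

The containment test used here is just "distance from ball center to each facet hyperplane is at least the radius", which is what makes α̂ easy to evaluate in the polyhedral case discussed next.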
2.3 The polyhedral case

Here we discuss an efficient separation procedure for lifted first-order inequalities in the case that P is a polyhedron. Further properties of these inequalities are discussed in [23]. Suppose that P = { x ∈ R^d : a_iᵀx ≤ b_i, 1 ≤ i ≤ m } is a full-dimensional polyhedron, where each inequality is facet-defining and the representation of P is minimal. For 1 ≤ i ≤ m let H_i = { x ∈ R^d : a_iᵀx = b_i }. For i ≠ j let H_{i,j} = { x ∈ R^d : a_iᵀx = b_i, a_jᵀx = b_j }. Assuming H_{i,j} ≠ ∅ (i.e. H_i and H_j are not parallel), H_{i,j} is (d − 2)-dimensional; in that case we denote by ω_ij the unit-norm vector orthogonal to both H_{i,j} and a_i (unique up to reversal).

Consider a fixed pair of indices i ≠ j with H_{i,j} ≠ ∅, and let µ ∈ int(P). Let Ω_ij be the 2-dimensional plane through µ generated by a_i and ω_ij. By construction, therefore, Ω_ij is orthogonal to H_{i,j} and is thus the orthogonal complement to H_{i,j} through µ. It follows that Ω_ij = Ω_ji and that this plane contains the orthogonal projection of µ onto H_i (which we denote by π_i(µ)) and the orthogonal projection of µ onto H_j (π_j(µ)), respectively. Further, Ω_ij ∩ H_{i,j} consists of a single point k_{i,j}(µ) satisfying

||µ − k_{i,j}(µ)||² = ||µ − π_i(µ)||² + ||π_i(µ) − k_{i,j}(µ)||² = ||µ − π_j(µ)||² + ||π_j(µ) − k_{i,j}(µ)||².   (20)

Now we return to the question of separating lifted first-order inequalities. Note that P is locally flat at a point y if and only if y is in the relative interior of one of the facets. Suppose that y is in the relative interior of the i-th facet. Denoting, for j ≠ i,

P_{i,j} = { x ∈ R^d : a_iᵀx ≤ b_i, a_jᵀx ≤ b_j },   (21)

we clearly have (see (13))

α̂ = min_{j ≠ i} α̂(P_{i,j}, y).

We will argue that for j ≠ i, α̂(P_{i,j}, y) is an affine function of y, i.e.

α̂(P_{i,j}, y) = p_ijᵀy + q_ij   (22)

for an appropriate vector p_ij and constant q_ij. Assume first that H_{i,j} = ∅, i.e. H_i and H_j are parallel, and thus without loss of generality a_j = −a_i and −b_j < b_i. But, as per (for example) equation (12), the lifting
coefficient at y is proportional to the largest radius of a ball that can be inscribed in the region delimited by H_i and H_j, i.e. { x ∈ R^d : −b_j ≤ a_iᵀx ≤ b_i }. This largest radius equals exactly half the distance between H_i and H_j, and is therefore independent of y; i.e., it is trivially an affine function of y.

Thus we assume that H_{i,j} ≠ ∅. Then choose µ (and ρ) so that

y = π_i(µ) and ŷ = π_j(µ),   (23)
y − k_{i,j}(µ) is parallel to ω_ij and ŷ − k_{i,j}(µ) is parallel to ω_ji,   (24)
||µ − y||² = ||µ − ŷ||² = ρ, and, by (20),   (25)
||y − k_{i,j}(µ)|| = ||ŷ − k_{i,j}(µ)||, and   (26)
||µ − y|| = tan(φ) ||y − k_{i,j}(µ)||,   (27)

where 2φ is the angle formed by ω_ij and ω_ji. By the preceding discussion, ρ = ( α̂(P_{i,j}, y) ||a_i|| )²; using (25) and (27) we will complete the argument that α̂(P_{i,j}, y) is an affine function of y.

Let h^g_{i,j} (1 ≤ g ≤ d − 2) be a basis for { x ∈ R^d : a_iᵀx = a_jᵀx = 0 }. Then a_i, together with ω_ij and the h^g_{i,j}, form a basis for R^d. Let O_i be the projection of the origin onto H_i (hence O_i is a multiple of a_i), and let N_i be the projection of O_i onto H_{i,j}.
We have

y = O_i + (N_i − O_i) + (k_{i,j}(µ) − N_i) + (y − k_{i,j}(µ)),   (28)

and thus, since N_i − O_i and y − k_{i,j}(µ) are parallel to ω_ij, and k_{i,j}(µ) − N_i and O_i are orthogonal to ω_ij,

ω_ijᵀy = ω_ijᵀ(N_i − O_i) + ω_ijᵀ(y − k_{i,j}(µ)) = ω_ijᵀ(N_i − O_i) + ||ω_ij|| · ||y − k_{i,j}(µ)||,   (29)

or

||y − k_{i,j}(µ)|| = ||ω_ij||⁻¹ ω_ijᵀ(y − N_i + O_i).   (30)

Consequently,

α̂(P_{i,j}, y) = √ρ / ||a_i|| = ( tan(φ)/||a_i|| ) ||y − k_{i,j}(µ)||   (31)
            = ( tan(φ)/||a_i|| ) ||ω_ij||⁻¹ ω_ijᵀ(y − N_i + O_i),   (32)

which is affine in y, as desired.

Now let x* ∈ R^d. The problem of finding the strongest possible lifted first-order inequality at x*, chosen from among those obtained by starting from a point on facet i, can thus be written as follows (here a, a₀ are as in (7), for the facet H_i):

min  −2yᵀx* + ||y||² − 2α(aᵀx* − a₀)   (33)
s.t.  y ∈ P   (34)
      a_iᵀy = b_i   (35)
      0 ≤ α ≤ p_ijᵀy + q_ij,  j ≠ i.   (36)

[Here, (36) is valid because for y ∈ H_{i,j} expression (32) yields α̂ = 0, since ω_ij is orthogonal to both a_i and H_{i,j}.] This is a linearly constrained, convex quadratic program with d + 1 variables and 2m + 1 constraints. By solving this problem for each choice of 1 ≤ i ≤ m we obtain the strongest inequality overall.

2.3.1 The Disjunctive Approach

For 1 ≤ i ≤ m let P_i = { x ∈ R^d : a_iᵀx ≥ b_i }; thus R^d \ int(P) = ∪_i P_i. Further, for 1 ≤ i ≤ m write

Q_i = { (x, q) ∈ R^d × R : a_iᵀx ≥ b_i, q ≥ ||x||² }.

Thus, (x*, q*) ∈ conv(S) if and only if (x*, q*) can be written as a convex combination of points in the sets Q_i. This is the approach pioneered by Ceria and Soares [14] (also see [30]). The resulting separation problem is carried out by solving a second-order cone program with m conic constraints and md variables, and then using second-order cone duality in order to obtain a linear inequality (details in [23]). Thus, the derivation we presented above amounts to a possibly simpler alternative to the Ceria–Soares approach, which also makes explicit the geometric nature of the resulting cuts.

2.4 The ellipsoidal case

In this section we discuss an efficient separation procedure for lifted first-order inequalities in the case that P is a convex
ellipsoid with nonempty interior. Write P = { x ∈ R^d : xᵀAx − 2bᵀx + c ≤ 0 } for appropriate A ≻ 0, b and c. Suppose we are given a point x̄ ∈ int(P). The problem of finding the strongest inequality at x̄ is:

min_{µ,ρ}  ||µ||² − ρ − 2x̄ᵀµ   (37)
subject to:  { x : ||x − µ||² ≤ ρ } ⊆ P.   (38)
Constraint (38) forces the ball of excluded points to be contained in the ellipsoid P. The S-Lemma [31], [24], [6] tells us that (µ, ρ) is feasible for (38) if and only if there is some nonnegative θ = θ(µ, ρ) such that

||x − µ||² − ρ − θ( xᵀAx − 2bᵀx + c ) ≥ 0  for all x ∈ R^d,

or equivalently min_x { ||x − µ||² − ρ − θ(xᵀAx − 2bᵀx + c) } ≥ 0. This is equivalent to saying that there is θ ≥ 0 with

min_x { xᵀ(I − θA)x − 2(µ − θb)ᵀx + ( ||µ||² − ρ − θc ) } ≥ 0.   (39)

Clearly, we must have θ ≤ λ_max⁻¹ for this to hold, where λ_max denotes the largest eigenvalue of A. Now consider an optimal pair (µ̂, ρ̂) for problem (37)-(38), and the corresponding value θ̂. We will show next that θ̂ = λ_max⁻¹. Aiming for a contradiction, assume θ̂ < λ_max⁻¹. Then (I − θ̂A) is invertible, and the optimal solution to the minimization problem in (39) is given by x = (I − θ̂A)⁻¹(µ̂ − θ̂b). Substituting this expression in (39), we obtain

−[ µ̂ᵀ(I − θ̂A)⁻¹µ̂ − 2θ̂µ̂ᵀ(I − θ̂A)⁻¹b + θ̂²bᵀ(I − θ̂A)⁻¹b ] + ||µ̂||² − ρ̂ − θ̂c ≥ 0.

Thus (via another application of the S-Lemma) problem (37)-(38) can be rewritten as:

min_{µ,ρ}  ||µ||² − ρ − 2x̄ᵀµ
subject to:  −[ µᵀ(I − θ̂A)⁻¹µ − 2θ̂µᵀ(I − θ̂A)⁻¹b + θ̂²bᵀ(I − θ̂A)⁻¹b ] + ||µ||² − ρ − θ̂c ≥ 0.

This is a convex QCQP. Notice the term ||µ||² − ρ, which appears both in the objective and the constraint. From this we can see that the constraint will hold with equality at the optimal (µ, ρ), so we can substitute into the objective to get the unconstrained separation problem:

min_µ  θ̂c + µᵀ(I − θ̂A)⁻¹µ − 2θ̂bᵀ(I − θ̂A)⁻¹µ + θ̂²bᵀ(I − θ̂A)⁻¹b − 2x̄ᵀµ.

This is a convex QP; using the KKT conditions we get that its optimal solution is given by

µ̂ = θ̂b + (I − θ̂A)x̄,

and plugging this into the objective gives a value of

−||x̄||² + θ̂( x̄ᵀAx̄ − 2bᵀx̄ + c ).

Since x̄ ∈ int(P) we have x̄ᵀAx̄ − 2bᵀx̄ + c < 0, so this objective value is decreasing linearly in θ̂. Since our objective in problem (37)-(38) is to minimize, the optimal θ̂ will be as large as possible: λ_max⁻¹, as desired. [Note that we can then determine the optimal squared radius ρ̂ from

||µ̂||² − ρ̂ − 2x̄ᵀµ̂ = −||x̄||² + θ̂( x̄ᵀAx̄ − 2bᵀx̄ + c ).

This again shows that any θ̂ < λ_max⁻¹
is not optimal: we always get a better cut by slightly increasing θ̂.]

Assuming now that θ̂ = λ_max⁻¹, the following approach is almost identical to the above. Write the separation problem as:

min_{µ,ρ}  ||µ||² − ρ − 2x̄ᵀµ   (40)
subject to:  min_x { xᵀ(I − θ̂A)x − 2(µ − θ̂b)ᵀx + ( ||µ||² − ρ − θ̂c ) } ≥ 0   (41)
or equivalently, pulling out a few terms in the constraint which don't depend on x:

min_{µ,ρ}  ||µ||² − ρ − 2x̄ᵀµ   (42)
subject to:  ||µ||² − ρ + min_x { xᵀ(I − θ̂A)x − 2(µ − θ̂b)ᵀx − θ̂c } ≥ 0.   (43)

Clearly the constraint will hold with equality, so we can transform the constrained problem into an unconstrained one:

min_µ [ −2x̄ᵀµ − min_x { xᵀ(I − θ̂A)x − 2(µ − θ̂b)ᵀx − θ̂c } ].

The optimal µ must be such that the optimal value of the inner minimization problem (the one over x) is finite. That is, for any δ ∈ R^d, (I − θ̂A)δ = 0 implies (µ − θ̂b)ᵀδ = 0. By the Farkas Lemma, this is equivalent to µ being of the form µ = θ̂b + (I − θ̂A)π for some π ∈ R^d. Then the optimal solution to the inner minimization is any x satisfying (I − θ̂A)x = µ − θ̂b = (I − θ̂A)π. Clearly π is a minimizer, and the resulting optimal value is −πᵀ(I − θ̂A)π − θ̂c. We can then rewrite the separation problem again as:

min_π [ −2x̄ᵀ( θ̂b + (I − θ̂A)π ) + πᵀ(I − θ̂A)π + θ̂c ].

This is an unconstrained convex QP; its optimal solution is π̂ = x̄, which means the optimal center µ̂ is

µ̂ = θ̂b + (I − θ̂A)x̄,

and the optimal squared radius ρ̂ is

ρ̂ = ||µ̂||² − 2x̄ᵀµ̂ + ||x̄||² − θ̂( x̄ᵀAx̄ − 2bᵀx̄ + c ).

3 Indefinite Quadratics

The general case of a set { (x, q) ∈ R^d × R : q ≥ Q(x), x ∈ R^d \ int(P) }, where Q(x) is a semidefinite quadratic, can be approached in much the same way as that employed above, but with some important differences. We first consider the case where P is a polyhedron. Let P = { (x, w) ∈ R^{d+1} : a_iᵀx − w ≤ b_i, 1 ≤ i ≤ m } (here, w is a scalar). Consider a set of the form

S = { (x, w, q) ∈ R^{d+2} : q ≥ ||x||², (x, w) ∈ R^{d+1} \ int(P) }.   (44)

Many examples can be brought into this form, or similar, by an appropriate affine transformation. Consider a point (x̄, w̄) in the relative interior of the i-th facet of P. We seek a lifted first-order inequality of the form

(2x̄ − αa_i)ᵀx + αw + αb_i − ||x̄||² ≤ q,
for appropriate α ≥ 0. If we are lifting to the j-th facet, then we must have v_ij = αb_i − ||x̄||², where

v_ij = min  ||x||² − (2x̄ − αa_i)ᵀx − αw   (45)
       s.t.  a_jᵀx − w = b_j.   (46)

To solve this optimization problem, consider its Lagrangian:

L(x, w, ν) = ||x||² − (2x̄ − αa_i)ᵀx − αw − ν( a_jᵀx − w − b_j ).

Taking the gradient in x and setting it to 0:

∇_x L = 0  ⟹  2x − 2x̄ + αa_i − νa_j = 0  ⟹  x = x̄ − (α/2)a_i + (ν/2)a_j.

Now doing the same for w:

∂L/∂w = 0  ⟹  −α + ν = 0  ⟹  ν = α.

Combining these two gives

x = x̄ − (α/2)a_i + (α/2)a_j,

and then using the constraint a_jᵀx − w = b_j gives

w = a_jᵀx̄ − b_j − (α/2)a_jᵀa_i + (α/2)a_jᵀa_j.

Next we expand the objective value using the expressions we have derived for x and w, and set the result equal to αb_i − ||x̄||². Omitting the intermediate algebra, the result is the quadratic equation

α( a_iᵀx̄ − b_i − (a_jᵀx̄ − b_j) ) − (1/4)α²( a_iᵀa_i − 2a_iᵀa_j + a_jᵀa_j ) = 0.

One root of this equation is α = 0. The other root is

α̂ = 4( a_iᵀx̄ − b_i − (a_jᵀx̄ − b_j) ) / ( a_iᵀa_i − 2a_iᵀa_j + a_jᵀa_j ).   (47)

Since a_iᵀx̄ − w̄ = b_i and a_jᵀx̄ − w̄ < b_j, we have

a_iᵀx̄ − b_i − (a_jᵀx̄ − b_j) > 0,

so α̂ > 0 (the denominator is the squared norm ||a_i − a_j||², so it is nonnegative). Moreover, the expression for α̂ is an affine function of x̄. Thus, as in Section 2.3, the computation of a maximally violated lifted first-order inequality is a convex optimization problem. In this case there is an additional detail of interest: note that the points (x, w, ||x||²) cut off by the inequality are precisely those such that

(2x̄ − α̂a_i)ᵀx + α̂w + α̂b_i − ||x̄||² > ||x||².   (48)

This condition defines the interior of a paraboloid; this is the proper generalization of condition (3) to the indefinite case.
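The closed form (47) is easy to sanity-check numerically against the lifting problem (45)-(46). The instance below is hypothetical (chosen only so that (x̄, w̄) lies on facet i and strictly inside facet j); the check confirms that α̂ from (47) makes the optimum of (45)-(46) equal αb_i − ||x̄||², using the KKT solution derived in the text:

```python
import numpy as np

# Facets a_i^T x - w <= b_i of P; (xbar, wbar) lies on facet i.
ai, bi = np.array([1.0, 0.0]), 0.5
aj, bj = np.array([0.0, 1.0]), 2.0
xbar, wbar = np.array([1.0, 2.0]), 0.5
assert abs(ai @ xbar - wbar - bi) < 1e-12    # on facet i
assert aj @ xbar - wbar < bj                 # strictly inside facet j

# Closed-form lifting coefficient, eq. (47).
alpha = 4 * (ai @ xbar - bi - (aj @ xbar - bj)) / ((ai - aj) @ (ai - aj))

# KKT minimizer of (45)-(46) as derived in the text:
x = xbar - 0.5 * alpha * ai + 0.5 * alpha * aj
w = aj @ x - bj
v = x @ x - (2 * xbar - alpha * ai) @ x - alpha * w

# The lifting condition v_ij = alpha*b_i - ||xbar||^2 holds:
assert abs(v - (alpha * bi - xbar @ xbar)) < 1e-9
print(alpha)   # 1.0 for this instance
```

Since (47) is affine in x̄, embedding it as a constraint in a separation problem keeps that problem convex, as the text notes.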
3.1 Tightening a general quadratic expression

Consider a set of the form

Π = { (x, w, z) ∈ R^d × R × R : z ≥ xᵀQx + qᵀx, w ≤ xᵀAx },   (49)

where both A and Q are symmetric positive definite d × d matrices. We will show below that this system can be characterized through a family of polynomially separable linear inequalities in x, w and z; we develop a streamlined construction in Section 3.1.1. An application is described later.

Let Q = LLᵀ, where L is lower triangular and invertible, and let VΛVᵀ be the spectral decomposition of L⁻¹AL⁻ᵀ. Writing p = VᵀLᵀx, and so x = L⁻ᵀVp, we therefore have:

xᵀQx = pᵀVᵀL⁻¹ LLᵀ L⁻ᵀVp = pᵀp,  and  xᵀAx = pᵀVᵀL⁻¹AL⁻ᵀVp = pᵀΛp.

Thus, without loss of generality, Π is described by the system

z ≥ ||x||² + qᵀx   (50)
w ≤ xᵀΛx,   (51)

where Λ ≻ 0 is diagonal. Define

P = { (x, w) ∈ R^d × R : xᵀΛx − w ≤ 0 }.

This is a paraboloid in (x, w)-space whose interior is the set of points in (x, w)-space that are cut off by (51). Write

λ_max = max_i λ_i,   (52)

and, given µ ∈ R^d and ν ∈ R,

M(µ, ν) = { (x, w) ∈ R^d × R : λ_max ||x − µ||² − w + ν ≤ 0 }.

Then it is seen that

(x, w) ∉ int(P)  iff  (x, w) ∉ int(M(µ, ν))  for all µ, ν such that M(µ, ν) ⊆ P.   (53)

Using this characterization together with (50), we have that for each pair (µ, ν) ∈ R^d × R with M(µ, ν) ⊆ P the following inequality is valid for the set described by (50)-(51):

λ_max ||µ||² − λ_max (2µ + q)ᵀx + (ν − w) + λ_max z ≥ 0,   (54)

which precisely cuts off int(M(µ, ν)), in the sense that, given (x̂, ŵ) ∈ int(M(µ, ν)), if ẑ ≤ ||x̂||² + qᵀx̂ then (x̂, ŵ, ẑ) violates (54). By definition, for (µ, ν) ∈ R^d × R we have M(µ, ν) ⊄ P iff there exists (x̂, ŵ) ∈ M(µ, ν) with ŵ < x̂ᵀΛx̂, and therefore iff there exists x̂ such that (x̂, x̂ᵀΛx̂) ∈ int(M(µ, ν)). Consequently, M(µ, ν) ⊆ P iff

ν + min_x { λ_max ||x − µ||² − xᵀΛx } ≥ 0.

Therefore, the maximum violation of an inequality (54) at a point (x̄, w̄) ∈ int(P) (taking z̄ = ||x̄||² + qᵀx̄) is obtained by solving the problem

max_{µ,ν}  w̄ − ν − λ_max ||µ||² + 2 λ_max x̄ᵀµ   (55)
subject to:  ν + min_x { λ_max ||x − µ||² − xᵀΛx } ≥ 0   (56)

(in (55) we have dropped terms that do not depend on (µ, ν)). We will show that this problem can be solved in polynomial time; in fact we will provide an
explicit expression for an optimal solution. Let (µ*, ν*) be optimal. Clearly (56) will hold with equality, otherwise we could decrease ν and obtain a better solution. Further, if x* is the minimizer in the constraint, finiteness of that minimum forces

µ*_i = 0  for all i with λ_i = λ_max,   (57)
and

x*_i = λ_max µ*_i / (λ_max − λ_i)  for all i with λ_i < λ_max.

Thus,

ν* + Σ_{i: λ_i < λ_max} ( λ_max (x*_i − µ*_i)² − λ_i (x*_i)² ) = 0,  or   (58)

ν* + Σ_{i: λ_i < λ_max} ( λ_max λ_i² − λ_i λ_max² ) (µ*_i)² / (λ_max − λ_i)² = 0,  and therefore   (59)

ν* − λ_max Σ_{i: λ_i < λ_max} λ_i (µ*_i)² / (λ_max − λ_i) = 0,  thus   (60)

ν* = λ_max Σ_{i: λ_i < λ_max} λ_i (µ*_i)² / (λ_max − λ_i).   (61)

Now we can rewrite the separation problem (55)-(56):

max_µ  w̄ + 2 λ_max x̄ᵀµ − λ_max ||µ||² − λ_max Σ_{i: λ_i < λ_max} λ_i µ_i² / (λ_max − λ_i)   (62)
subject to:  µ_i = 0  for all i with λ_i = λ_max;   (63)

dividing the objective by λ_max and ignoring constant terms we get

max_µ  Σ_{i: λ_i < λ_max} −( λ_i/(λ_max − λ_i) + 1 ) µ_i² + 2 Σ_{i: λ_i < λ_max} µ_i x̄_i.   (64)

Note that the coefficient of µ_i² is −λ_max/(λ_max − λ_i) < 0, and thus the quadratic maximized in (64) is negative definite. Setting its gradient to zero, we obtain that the optimal solution is

µ*_i = ( (λ_max − λ_i)/λ_max ) x̄_i,   (65)

which, together with (57), implies, by substituting into (61),

ν* = (1/λ_max) Σ_{i: λ_i < λ_max} λ_i (λ_max − λ_i) x̄_i².   (66)

We thus obtain an explicit solution to the separation problem (55)-(56), and therefore, using the geometrical statement (53), a characterization of (50)-(51) by polynomially separable linear inequalities.

We now prove a domination result for the cuts (54), similar to that in Theorem 2.3. We will show that these are lifted inequalities, and that they dominate all valid inequalities. Given x̄ ∈ R^d and a scalar α ≥ 0, the inequality

( (2x̄ + q, 0) + α (−2Λx̄, 1) )ᵀ ( x − x̄, w − x̄ᵀΛx̄ ) + ||x̄||² + qᵀx̄ ≤ z   (67)

is termed a lifted inequality at (x̄, x̄ᵀΛx̄) with lifting coefficient α. We will also equivalently rewrite (67) as

z ≥ ||x̄||² + qᵀx̄ + (2x̄ + q)ᵀ(x − x̄) + α( w − x̄ᵀΛx̄ − 2x̄ᵀΛ(x − x̄) );   (68)

thus (67) strengthens the valid inequality z ≥ ||x̄||² + qᵀx̄ + (2x̄ + q)ᵀ(x − x̄) in the (infeasible) region where w > xᵀΛx.
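The closed forms (65) and (66) can be verified numerically. The sketch below (an illustration, not the paper's code; the diagonal Λ, q and x̄ are arbitrary test data) checks that (µ*, ν*) makes constraint (56) tight, and that the resulting cut (54) is tight at the separated point (x̄, x̄ᵀΛx̄, ||x̄||² + qᵀx̄):

```python
import numpy as np

lam = np.array([3.0, 1.0, 0.5])            # diagonal of Lambda (Lambda > 0)
q   = np.array([1.0, -2.0, 0.0])
xb  = np.array([0.7, -1.2, 2.0])           # point at which we separate
L   = lam.max()                            # lambda_max

mu = (L - lam) / L * xb                    # eq. (65)
nu = (lam * (L - lam) * xb**2).sum() / L   # eq. (66)

# (56) holds with equality: nu + min_x { L*||x - mu||^2 - x^T Lambda x } = 0,
# with coordinate-wise minimizer x*_i = L*mu_i/(L - lam_i) (0 where lam_i = L).
xs = np.where(lam < L, L * mu / np.where(lam < L, L - lam, 1.0), 0.0)
assert abs(nu + L * (xs - mu) @ (xs - mu) - lam @ xs**2) < 1e-9

# The cut (54) is tight at (xb, xb^T Lambda xb, ||xb||^2 + q^T xb):
w, z = lam @ xb**2, xb @ xb + q @ xb
assert abs(L * mu @ mu - L * (2 * mu + q) @ xb + (nu - w) + L * z) < 1e-9
```

Tightness at (x̄, x̄ᵀΛx̄) is exactly the statement, proved in Theorem 3.1 below, that the strongest cut (54) coincides with a lifted inequality at that point.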
Theorem 3.1. Given x̄ ∈ R^d, the lifted inequality (67) is valid iff α ≤ λ_max⁻¹, and when α = λ_max⁻¹ it coincides with the strongest inequality (54) at (x̄, x̄ᵀΛx̄).

Proof. Suppose without loss of generality that λ_max = λ₁. Write e₁ = (1, 0, …, 0)ᵀ ∈ R^d, and define, for δ ∈ R,

x(δ) = x̄ + δe₁,   (69)
w(δ) = x(δ)ᵀΛx(δ) = x̄ᵀΛx̄ + 2δλ_max x̄₁ + λ_max δ²,  and   (70)
z(δ) = ||x(δ)||² + qᵀx(δ) = ||x̄||² + qᵀx̄ + 2δx̄₁ + δ² + δq₁.   (71)

Then by construction (x(δ), w(δ), z(δ)) is feasible (it is contained in Π, described by (50)-(51)). Evaluating (68) at (x(δ), w(δ)), on the other hand, we obtain the requirement

z ≥ ||x̄||² + qᵀx̄ + 2δx̄₁ + δq₁ + α λ_max δ² > z(δ),  if α > λ_max⁻¹.   (72)

Thus (67) is not valid if α > λ_max⁻¹. Next, by definition, if (67) is invalid for a certain value α̂ > 0, then there is a triple (x, w, z) in Π for which the right-hand side of (68) exceeds z. This can only happen if the last term in the right-hand side of (68) is positive, i.e. w − x̄ᵀΛx̄ − 2x̄ᵀΛ(x − x̄) > 0. But in that case (67) will be invalid for any lifting coefficient α ≥ α̂. Thus, in order to complete the proof of the theorem, it suffices to show that when α = λ_max⁻¹, inequality (67) coincides with the most violated inequality (54) at (x̄, x̄ᵀΛx̄). To do so, first write the lifted cut at (x̄, x̄ᵀΛx̄) with coefficient 1/λ_max as:

(2x̄ − 2λ_max⁻¹Λx̄ + q)ᵀx + λ_max⁻¹ w − z ≤ −λ_max⁻¹ x̄ᵀΛx̄ + ||x̄||².

Now we just show that the coefficients of our separating cut (54) match the coefficients of the lifted cut. The coefficients of x match if and only if

2x̄ − 2λ_max⁻¹Λx̄ = 2µ  ⟺  µ = x̄ − λ_max⁻¹Λx̄  ⟺  µ_i = ( (λ_max − λ_i)/λ_max ) x̄_i  for all i.

The constant terms (right-hand sides of the two cuts) match if and only if

ν = x̄ᵀ( λ_max I − Λ )x̄ − λ_max ||µ||²
  = Σ_{i=1}^d (λ_max − λ_i) x̄_i² − Σ_{i=1}^d ( (λ_max − λ_i)²/λ_max ) x̄_i²
  = (1/λ_max) Σ_{i=1}^d x̄_i² λ_i (λ_max − λ_i);

this matches expressions (65), (66) for the optimal µ* and ν* we get when computing the strongest cut (54) at (x̄, x̄ᵀΛx̄). ∎

Theorem 3.2. Any inequality

cᵀx + γw − τz ≤ d   (73)

valid for Π with γ ≥ 0 and τ ≥ 0 is dominated by a lifted inequality.
Proof. Clearly γ > 0 and τ > 0; without loss of generality τ = 1; for convenience we restate the inequality as

cᵀx + γw − z ≤ d.   (74)

Without loss of generality, assume (74) is not dominated by another valid inequality. Since (74) is valid, we have

0 ≤ ||x||² + (q − c)ᵀx − γxᵀΛx + d,  for all x ∈ R^d,   (75)

and in consequence I − γΛ ⪰ 0. Thus the expression on the right-hand side of (75) attains its minimum at some point x̄ ∈ R^d. Writing z̄ = ||x̄||² + qᵀx̄ and w̄ = x̄ᵀΛx̄, and since by assumption (74) is not dominated, we therefore have that (74) is tight at (x̄, w̄, z̄). The set of points (x, w, ||x||² + qᵀx) violating (74) is

{ (x, w) : cᵀx + γw − (||x||² + qᵀx) > d }
  = { (x, w) : (c − q)ᵀx − ||x||² + γw > d }
  = { (x, w) : ||x||² + (q − c)ᵀx + d < γw }
  = { (x, w) : (1/γ)||x||² + (1/γ)(q − c)ᵀx + (1/γ)d < w },

which is the interior of a paraboloid in (x, w)-space. Since (74) is tight at (x̄, w̄), we know that (x̄, w̄) must be on the boundary of this paraboloid, and we have

(1/γ)||x̄||² + (1/γ)(q − c)ᵀx̄ + (1/γ)d = w̄ = x̄ᵀΛx̄.

Given these facts, we can determine what the vector c must be. We want to show that (74) must have the form of the lifted first-order cut (borrowing terminology from Section 2.2):

( (2x̄ + q, 0) + α (−2Λx̄, 1) )ᵀ ( x − x̄, w − w̄ ) + z̄ ≤ z,   (76)

where α ≥ 0 is the lifting coefficient. Note that (76) is equivalent to:

(2x̄ + q − 2αΛx̄)ᵀx + αw − z ≤ −α w̄ + ||x̄||².   (77)

We will show that c = 2x̄ + q − 2γΛx̄, that is to say, inequality (74) is a lifted inequality with lifting coefficient α = γ. Suppose c ≠ 2x̄ + q − 2γΛx̄. This means that the system α = γ, 2αΛx̄ = 2x̄ + q − c (in the unknown α) is infeasible, and by the Farkas Lemma there exists a vector (π, ρ) with

πᵀ(2x̄ + q − c) + γρ = −1  and  πᵀ(2Λx̄) + ρ = 0.

At (x̄, w̄), the inequality (2Λx̄)ᵀ(x − x̄) ≤ w − w̄ supports P, where as before P = { (x, w) ∈ R^d × R : xᵀΛx − w ≤ 0 }. Thus the opposite inequality, (2Λx̄)ᵀ(x − x̄) ≥ w − w̄, gives a sufficient condition to guarantee (x, w) ∉ int(P). Now consider the point (x̄ + ɛπ, w̄ − ɛρ), where ɛ is a scalar. For this point to be feasible, it is therefore enough that

(2Λx̄)ᵀ(ɛπ) ≥ −ɛρ  ⟺  ɛ( 2πᵀΛx̄ + ρ ) ≥ 0,

which holds for all ɛ ∈ R, since 2πᵀΛx̄ + ρ = 0. Then, since inequality (74) is valid, it must hold at all points (x̄ + ɛπ, w̄ − ɛρ, ||x̄ + ɛπ||² + qᵀ(x̄ + ɛπ)). This requires in particular that for all ɛ:

cᵀ(x̄ + ɛπ) + γ(w̄ − ɛρ) − ||x̄ + ɛπ||² − qᵀ(x̄ + ɛπ) ≤ d.

Since (74) is tight at (x̄, w̄, z̄), expanding and cancelling gives

ɛ( cᵀπ − γρ − 2πᵀx̄ − qᵀπ ) − ɛ²||π||² ≤ 0  for all ɛ,

that is, −ɛ( πᵀ(2x̄ + q − c) + γρ ) ≤ ɛ²||π||², i.e. ɛ ≤ ɛ²||π||². However, this fails to hold for small ɛ > 0, a contradiction. ∎

As a summary of the above we have:

Theorem 3.3. Let A, Q be positive definite d × d matrices. Any nondominated valid inequality for the set { (x, w, z) ∈ R^d × R × R : z ≥ xᵀQx + qᵀx, w ≤ xᵀAx } is a lifted inequality, and given a point (x̄, w̄) in the interior of { (x, w) ∈ R^d × R : xᵀΛx − w ≤ 0 } we can compute a strongest lifted inequality at (x̄, w̄) in polynomial time.

3.1.1 No-spectrum implementation

The construction above requires the computation of the spectrum of the d × d matrix A found in the initial description of the set Π (eq. (49)). This step was needed in order to derive the various relationships obtained above, but might prove expensive if d is large. Here we describe an equivalent construction that avoids the computation of eigenvalues, other than the largest. We assume, therefore, that we have a system of the form

z ≥ xᵀx + qᵀx,
w ≤ xᵀAx,

where A is positive definite. Suppose we have a point (x̄, w̄, z̄), with x̄ᵀAx̄ < w̄, which we want to separate. We have shown that valid tight cuts must be of the form

(2µ + q, α, −1)ᵀ (x, w, z) ≤ αν + ||µ||²,

where α ≥ 0. Rearranging terms, we write the constraint as

z ≥ (2µ + q)ᵀx + αw − αν − ||µ||².

A point (x, w, xᵀx + qᵀx) violates such a cut if and only if

xᵀx + qᵀx < 2µᵀx + qᵀx + αw − αν − ||µ||²
  ⟺  xᵀx − 2µᵀx + µᵀµ − αw + αν < 0
  ⟺  β||x − µ||² − w + ν < 0,

where in the last line we define β = α⁻¹ (and divide through by α). The excluded region is a paraboloid in (x, w)-space. For the cut to be valid, we need this paraboloid to be contained in the infeasible region. That is, we need

{ (x, w) : β||x − µ||² − w + ν < 0 } ⊆ { (x, w) : xᵀAx − w < 0 }.

By the S-Lemma, this is equivalent to the existence of some θ ≥ 0 with

β||x − µ||² − w + ν − θ( xᵀAx − w ) ≥ 0  for all (x, w) ∈ R^{d+1}.

Clearly we must have θ = 1, or else we could fix x and send w to ±∞. So the validity of the separating cut is equivalent to

ν + β||µ||² ≥ xᵀ(A − βI)x + 2βµᵀx  for all x ∈ R^d,

or

ν + β||µ||² ≥ max_x { xᵀ(A − βI)x + 2βµᵀx }.
The objective of the separation problem is

max_{µ,ν,α}  (2µ + q)ᵀx̄ + α w̄ − αν − ||µ||²,

or, using the definition β = α⁻¹:

max_{µ,ν,β}  (2µ + q)ᵀx̄ + (1/β) w̄ − (1/β) ν − ||µ||².

Adding in the validity constraint for our separating cut, the separation problem is:

maximize:  (2µ + q)ᵀx̄ + (1/β) w̄ − (1/β) ν − ||µ||²
subject to:  ν + β||µ||² ≥ max_x { xᵀ(A − βI)x + 2βµᵀx }.

Clearly the constraint will hold with equality at the optimum; we can move it into the objective to get the equivalent unconstrained problem:

max_{µ,β}  (2µ + q)ᵀx̄ + (1/β) w̄ − (1/β) max_x { xᵀ(A − βI)x + 2βµᵀx }.

In the optimal solution, the value of the inner maximization must be finite. This implies that we must have β ≥ λ_max(A) and βµ = (βI − A)π for some π. Suppose first that we have fixed β > λ_max(A), so that (A − βI) is negative definite and invertible. The optimal x for the inner maximization is given by x = −β(A − βI)⁻¹µ and results in an optimal value of

−β² µᵀ(A − βI)⁻¹µ.

We can then rewrite the separation problem as:

max_µ  (2µ + q)ᵀx̄ + (1/β) w̄ + (1/β)( β² µᵀ(A − βI)⁻¹µ )
  =  max_µ  β µᵀ(A − βI)⁻¹µ + 2x̄ᵀµ + qᵀx̄ + (1/β) w̄.

Since (A − βI)⁻¹ ≺ 0, this is a concave QP; its optimal solution is

µ = −(1/β)(A − βI)x̄ = (1/β)(βI − A)x̄.

The resulting objective value is

(1/β) x̄ᵀ(A − βI)x̄ − (2/β) x̄ᵀ(A − βI)x̄ + qᵀx̄ + (1/β) w̄
  = −(1/β) x̄ᵀ(A − βI)x̄ + qᵀx̄ + (1/β) w̄
  = −(1/β) x̄ᵀAx̄ + x̄ᵀx̄ + qᵀx̄ + (1/β) w̄
  = x̄ᵀx̄ + qᵀx̄ + (1/β)( w̄ − x̄ᵀAx̄ ),

which is decreasing in β, since w̄ > x̄ᵀAx̄. So we want β at its lower bound of λ_max(A). Define Δ = λ_max(A). We can restate the separation problem as:

max_µ  (2µ + q)ᵀx̄ + (1/Δ) w̄ − (1/Δ) max_x { xᵀ(A − ΔI)x + 2Δµᵀx }.

The optimal solution of the inner maximization is any x satisfying (A − ΔI)x = −Δµ.
Since we had the condition Δµ = (ΔI − A)π, we have that π is such a maximizer. The resulting maximum value is πᵀ(ΔI − A)π, and the separation problem becomes:

max_π  −(1/Δ) πᵀ(ΔI − A)π + (2/Δ) x̄ᵀ(ΔI − A)π + qᵀx̄ + (1/Δ) w̄.

Again, the separation problem is a concave QP. Its optimal solution is any π satisfying

(1/Δ)(ΔI − A)π = (1/Δ)(ΔI − A)x̄,

so setting π = x̄ gives a maximizer. The resulting optimal µ is

µ = (1/Δ)(ΔI − A)π = (1/Δ)(ΔI − A)x̄.

Using this and the constraint from the first formulation of the separation problem (which we know will hold with equality) we can get the optimal ν:

ν = −Δ||µ||² + max_x { xᵀ(A − ΔI)x + 2Δµᵀx }
  = −Δ||µ||² + x̄ᵀ(A − ΔI)x̄ + 2Δµᵀx̄
  = −Δ||µ||² + x̄ᵀ(A − ΔI)x̄ + 2x̄ᵀ(ΔI − A)x̄
  = −Δ||µ||² + x̄ᵀ(ΔI − A)x̄.

The reader may verify that these expressions for µ and ν coincide with (65) and (66) (respectively) when A is diagonal.

3.1.2 Application

Consider an optimization problem with an objective function of the form

min  xᵀMx + vᵀx + c,   (78)

or a constraint of the form

xᵀMx + vᵀx + c ≤ 0,   (79)

where M ∈ R^{d×d} is symmetric. By using the spectral decomposition of M to change coordinates, if necessary adding and subtracting terms of the form x_i², and finally scaling, without loss of generality we obtain an expression of the form

Σ_{i=1}^d x_i² − Σ_{i=1}^d λ_i x_i² + vᵀx + c,  where λ_i > 0 for all i.

In the case of an optimization problem with objective (78), we can lift to an equivalent system of the form

min { z − w + c : z ≥ Σ_{i=1}^d x_i² + vᵀx,  w ≤ Σ_{i=1}^d λ_i x_i² },

whose constraint set is exactly of the form (50)-(51).
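Returning to the no-spectrum construction of Section 3.1.1, its closed-form center and offset can be sanity-checked numerically. The sketch below (illustrative, not from the paper) builds µ = Δ⁻¹(ΔI − A)x̄ and ν = −Δ||µ||² + x̄ᵀ(ΔI − A)x̄ for a random positive definite A, then verifies that the excluded paraboloid is valid (never cuts a feasible point) and is tight exactly at x̄. Here `eigvalsh` stands in for whatever routine supplies the largest eigenvalue (e.g., power iteration), which is the only spectral information the construction needs:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)                 # positive definite test matrix
xb = rng.standard_normal(4)             # point (xbar) at which we separate

D = np.linalg.eigvalsh(A)[-1]           # Delta = largest eigenvalue of A
mu = (D * np.eye(4) - A) @ xb / D
nu = -D * mu @ mu + xb @ (D * np.eye(4) - A) @ xb

# Validity amounts to g(x) = D*||x - mu||^2 - x^T A x + nu >= 0 for all x,
# i.e. the excluded paraboloid lies inside {(x, w) : x^T A x - w < 0}.
def g(x):
    return D * (x - mu) @ (x - mu) - x @ A @ x + nu

assert abs(g(xb)) < 1e-8                # tight at xbar: the cut is strongest there
for _ in range(1000):                   # spot-check nonnegativity by sampling
    assert g(xb + rng.standard_normal(4)) >= -1e-8
```

Nonnegativity of g holds because its Hessian is 2(ΔI − A) ⪰ 0 and its minimum value, attained at x̄, is zero by construction.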
3.1.3 Example

Consider the bilinear form f(x) = 2(x₁x₂ + x₁x₃ + x₂x₃) over the unit cube [0, 1]³. Writing, for 1 ≤ i < j ≤ 3, f_ij = x_i x_j, the McCormick relaxation for f_ij amounts to:

    f_ij ≥ x_i + x_j − 1,   0 ≤ f_ij ≤ min{ x_i, x_j }.

At x̄ = (1/2, 1/2, 1/2)^T, the lower bound on f(x̄) produced by the McCormick relaxation is zero (for more complex examples see [18]). We show next how our procedures may be used to generate a formulation that proves a positive lower bound on f(x̄). We stress that what we have here is an ad hoc construction; we plan to return to this topic in a future work.

We have f(x) = U(x) − L(x), where

    U(x) = (x₁ + x₂)² + (x₁ + x₃)² + (x₂ + x₃)²,
    L(x) = 2(x₁² + x₂² + x₃²).   (80)

Now we apply the techniques from Section 3.1. We have U(x) = x^T Q x and L(x) = x^T A x, where

    Q = ( 2 1 1 ; 1 2 1 ; 1 1 2 ),   A = 2I.

The Cholesky decomposition of Q is Q = LL^T, where

    L = ( √2, 0, 0 ; 1/√2, √6/2, 0 ; 1/√2, 1/√6, 2/√3 ).

Let VΛV^T be the eigendecomposition of L⁻¹AL⁻ᵀ = 2L⁻¹L⁻ᵀ = 2(L^T L)⁻¹; here

    Λ = diag(2, 2, 1/2),

the first two columns of V may be taken to be any orthonormal basis of the eigenspace for the repeated eigenvalue 2, and the third column of L⁻ᵀV equals (√3/6)(1, 1, 1)^T.   (81)

The transformation we use is p = V^T L^T x, or x = L⁻ᵀVp. Thus, we have x ∈ [0, 1]³ if and only if 0 ≤ L⁻ᵀVp ≤ 1 (componentwise). Let H be the image of [0, 1]³ under this mapping. It can be seen that for any x we have p₃(x) = (2/√3)(x₁ + x₂ + x₃), and thus our point of interest, x̄, is mapped to p̄ = (0, 0, √3)^T.
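The numerical ingredients of the example are easy to verify. The following sketch (ours, not part of the paper) checks the decomposition f = U − L on sample points, checks the Cholesky factor of Q, and checks that p₃(x̄) = √3.

```python
import math
import itertools

# f, U, L as in the example: f(x) = U(x) - L(x) on the unit cube.
def f(x):
    x1, x2, x3 = x
    return 2 * (x1 * x2 + x1 * x3 + x2 * x3)

def U(x):
    x1, x2, x3 = x
    return (x1 + x2) ** 2 + (x1 + x3) ** 2 + (x2 + x3) ** 2

def L_(x):
    return 2 * sum(xi * xi for xi in x)

for x in itertools.product([0.0, 0.3, 1.0], repeat=3):
    assert abs(f(x) - (U(x) - L_(x))) < 1e-12

# Q and its lower-triangular Cholesky factor, entry by entry
Q = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
L = [[s2, 0, 0],
     [1 / s2, s6 / 2, 0],
     [1 / s2, 1 / s6, 2 / s3]]
for i in range(3):
    for j in range(3):
        assert abs(sum(L[i][k] * L[j][k] for k in range(3)) - Q[i][j]) < 1e-12

# p3(x) = (2/sqrt(3))(x1 + x2 + x3): the point (1/2, 1/2, 1/2) maps to sqrt(3)
p3 = (2 / s3) * (0.5 + 0.5 + 0.5)
assert abs(p3 - s3) < 1e-12
```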
Further, in p-space, f(x) is represented as

    F(p) = (p₁² + p₂² + p₃²) − (2p₁² + 2p₂² + ½p₃²).

Consider the paraboloid cut

    p₁² + p₂² + (p₃ − 2√3 α)² + ε ≥ 2p₁² + 2p₂² + ½p₃².   (82)

For α = ε = 1/10, a calculation shows that (82) is valid for all p ∈ H with p₃ ≥ √3 (or, informally, it is valid for all x ∈ [0, 1]³ with Σᵢ xᵢ ≥ 3/2). In the region of validity, we therefore have

    F(p) ≥ 4√3 α p₃ − 12α² − ε = (2√3/5) p₃ − 11/50.

In other words, for x ∈ [0, 1]³ with Σᵢ xᵢ ≥ 3/2,

    f(x) ≥ (4/5)(x₁ + x₂ + x₃) − 11/50.

Consider now the paraboloid cut (82) with α = 1/2 and ε = 2. A calculation shows that in that case (82) is valid for all p ∈ H with p₃ ≤ √3. Where it is valid we get

    F(p) ≥ 4√3 α p₃ − 12α² − ε = 2√3 p₃ − 5,

and thus, for x ∈ [0, 1]³ with Σᵢ xᵢ ≤ 3/2,

    f(x) ≥ 4(x₁ + x₂ + x₃) − 5.

We now have a disjunction between two polyhedra:

    Θ = { (x, f) : x ∈ [0, 1]³, Σⱼ xⱼ ≥ 3/2, f ≥ (4/5)(x₁ + x₂ + x₃) − 11/50 }

and

    Π = { (x, f) : x ∈ [0, 1]³, Σⱼ xⱼ ≤ 3/2, f ≥ 4(x₁ + x₂ + x₃) − 5 }.

Thus, solving the linear program

    min f   s.t.   (x, f) ∈ conv(Θ ∪ Π),   x = x̄

yields a valid lower bound on f(x̄); its value is positive. Using LP duality, one also obtains a valid linear cut of the form f(x) ≥ γ(x₁ + x₂ + x₃) − δ, which implies the same lower bound on f(x̄).

As the example makes clear, issues of numerical precision are of paramount importance in this context. We plan to return to these questions in a future work.
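The two x-space consequences of the paraboloid cuts can be spot-checked on a grid. The sketch below is our own verification, assuming the first bound applies where Σᵢ xᵢ ≥ 3/2 and the second where Σᵢ xᵢ ≤ 3/2.

```python
# Grid spot-check (ours, not from the paper) of the two linear lower
# bounds on f(x) = 2(x1 x2 + x1 x3 + x2 x3) over [0,1]^3.
def f(x):
    x1, x2, x3 = x
    return 2 * (x1 * x2 + x1 * x3 + x2 * x3)

n = 10
grid = [i / n for i in range(n + 1)]
tol = 1e-9
for x1 in grid:
    for x2 in grid:
        for x3 in grid:
            s = x1 + x2 + x3
            val = f((x1, x2, x3))
            if s >= 1.5:
                # first cut: f >= (4/5)(x1+x2+x3) - 11/50
                assert val >= 0.8 * s - 0.22 - tol
            if s <= 1.5:
                # second cut: f >= 4(x1+x2+x3) - 5
                assert val >= 4 * s - 5 - tol

# At xbar = (1/2,1/2,1/2) (where s = 3/2) the bounds evaluate to
# 0.98 and 1.0, both positive, while the McCormick bound is zero.
assert abs((0.8 * 1.5 - 0.22) - 0.98) < 1e-12
assert abs((4 * 1.5 - 5) - 1.0) < 1e-12
```

Both bounds are tight along the edge x = (1, t, 0) at t = 1/2, which is where the grid check comes closest to equality.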
References

[1] F. Alizadeh and D. Goldfarb, Second-order cone programming, Mathematical Programming 95 (2001), 3-51.
[2] F. Al-Khayyal and J. Falk, Jointly constrained biconvex programming, Math. Oper. Res. 8 (1983).
[3] K.M. Anstreicher, Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming, J. Global Optimization 43 (2009).
[4] K.M. Anstreicher and S. Burer, Computable representations for convex hulls of low-dimensional quadratic forms, Mathematical Programming (Series B) 124 (2010).
[5] X. Bao, N.V. Sahinidis, and M. Tawarmalani, Multiterm polyhedral relaxations for nonconvex, quadratically constrained quadratic programs, Optimization Methods and Software 24 (2009).
[6] A. Ben-Tal and A. Nemirovsky, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia, PA (2001).
[7] P. Belotti, A.J. Miller and M. Namazifar, Valid inequalities and convex hulls for multilinear functions, Electronic Notes in Discrete Mathematics 36 (2010).
[8] S. Burer and A.N. Letchford, On non-convex quadratic programming with box constraints, SIAM Journal on Optimization 20 (2009).
[9] S. Burer and A.N. Letchford, Non-convex mixed-integer nonlinear programming: a survey, Optimization Online, February 2012.
[10] S. Cafieri, J. Lee, and L. Liberti, On convex relaxations of quadrilinear terms, Journal of Global Optimization 47 (2010).
[11] D. Bienstock, Computational study of a family of mixed-integer quadratic programming problems, Math. Programming 74 (1996).
[12] D. Bienstock, Eigenvalue techniques for proving bounds for convex objective, nonconvex programs, Proc. IPCO 2010.
[13] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press (2004).
[14] S. Ceria and J. Soares, Convex programming for disjunctive convex optimization, Mathematical Programming 86 (1999).
[15] I.R. de Farias Jr., E.L. Johnson and G.L. Nemhauser, Facets of the complementarity knapsack polytope, Mathematics of Operations Research 27 (2002).
[16] D. Goldfarb and G. Iyengar, Robust portfolio selection problems, Mathematics of Operations Research 28 (2002), 1-38.
[17] G.H. Golub, Some modified matrix eigenvalue problems, SIAM Review 15 (1973).
[18] M. Kilinc, J. Linderoth and J. Luedtke, Effective separation of disjunctive cuts for convex mixed integer nonlinear programs, Optimization Online (2010).
[19] A.B. Keha, I.R. de Farias Jr. and G.L. Nemhauser, A branch-and-cut algorithm without binary variables for nonconvex piecewise linear optimization, Operations Research 54 (2006).
[20] J. Luedtke, M. Namazifar and J. Linderoth, Some results on the strength of relaxations of multilinear functions, Optimization Online, August 2010.
[21] G.P. McCormick, Computability of global solutions to factorable nonconvex programs: Part I - Convex underestimating problems, Math. Program. 10 (1976).
[22] G.L. Nemhauser and L.A. Wolsey, Integer and Combinatorial Optimization, Wiley, New York (1988).
[23] A. Michalka, PhD Dissertation, Columbia University (in preparation).
[24] I. Pólik and T. Terlaky, A survey of the S-lemma, SIAM Review 49 (2007).
[25] A. Qualizza, P. Belotti and F. Margot, Linear programming relaxations of quadratically constrained quadratic programs, manuscript, 2011.
[26] A. Saxena, P. Bonami and J. Lee, Convex relaxations of non-convex mixed integer quadratically constrained programs: extended formulations, Mathematical Programming B 124 (2010).
[27] A. Saxena, P. Bonami and J. Lee, Convex relaxations of non-convex mixed integer quadratically constrained programs: projected formulations, to appear, Mathematical Programming.
[28] H.D. Sherali and W.P. Adams, A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems, Kluwer, Dordrecht (1998).
[29] H.D. Sherali and W.P. Adams, A reformulation-linearization technique (RLT) for semi-infinite and convex programs under mixed 0-1 and general discrete restrictions, Discrete Applied Mathematics 157 (2009).
[30] R.A. Stubbs and S. Mehrotra, A branch-and-cut method for 0-1 mixed convex programming, Mathematical Programming 86 (1999).
[31] V.A. Yakubovich, S-procedure in nonlinear control theory, Vestnik Leningrad University 1 (1971).
More informationConvex Optimization M2
Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationLecture 1 Introduction
L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation
More information