Comparing Convex Relaxations for Quadratically Constrained Quadratic Programming


1 Comparing Convex Relaxations for Quadratically Constrained Quadratic Programming. Kurt M. Anstreicher, Dept. of Management Sciences, University of Iowa. European Workshop on MINLP, Marseille, April 2010.

2 The QCQP problem. Consider a quadratically constrained quadratic program:

(QCQP)  z = min f_0(x)
        s.t. f_i(x) ≤ d_i, i = 1,...,q
             x ≥ 0, Ax ≤ b,

where f_i(x) = xᵀQ_i x + c_iᵀx, i = 0, 1,...,q, each Q_i is an n×n symmetric matrix, and A is an m×n matrix. Let F = {x ≥ 0 : Ax ≤ b}; assume throughout that F is bounded. If Q_i ⪰ 0 for each i, QCQP is a convex programming problem that can be solved in polynomial time, but in general the problem is NP-hard. QCQP is a fundamental global optimization problem.

5 Two Convexifications of QCQP. A common approach to obtaining a lower bound on z is to somehow convexify the problem. We consider two different approaches. For the first, let f̂_i(·) be the convex lower envelope of f_i(·) on F, the pointwise maximum of the affine underestimators of f_i(·):

f̂_i(x) = max{vᵀx + t : vᵀx̂ + t ≤ f_i(x̂) for all x̂ ∈ F}.

Let QCQP-hat be the problem where f̂_i(·) replaces f_i(·), i = 0,...,q, and let ẑ be the solution value of QCQP-hat.

8 The second approach to convexifying QCQP is based on linearizing the problem by adding additional variables. Let X denote a symmetric n×n matrix. Then QCQP can be written

(QCQP)  z = min Q_0·X + c_0ᵀx
        s.t. Q_i·X + c_iᵀx ≤ d_i, i = 1,...,q
             x ≥ 0, Ax ≤ b, X = xxᵀ.

A convexification of QCQP can then be given in terms of the set

C = Conv{ (1; x)(1; x)ᵀ : x ∈ F }.

Let QCQP-bar denote the problem where the constraint X = xxᵀ is replaced by

Y(x,X) := [ 1  xᵀ ]
          [ x  X  ]  ∈ C.
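The identity behind the lifting, xᵀQx = Q·(xxᵀ) with "·" the Frobenius (trace) inner product, can be checked numerically; a minimal sketch with arbitrary random data, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                        # symmetric, as in QCQP
c = rng.standard_normal(n)
x = rng.random(n)

X = np.outer(x, x)                       # the rank-one lift X = x x^T
quadratic = x @ Q @ x + c @ x            # f(x) = x^T Q x + c^T x
linearized = np.sum(Q * X) + c @ x       # Q . X + c^T x  (Frobenius inner product)

ok = bool(np.isclose(quadratic, linearized))
```

Replacing X = xxᵀ by Y(x,X) ∈ C drops the nonconvex rank-one condition while keeping the objective and constraints linear in (x, X).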

11 We now have two convexifications:

(QCQP-hat)  ẑ = min f̂_0(x)
            s.t. f̂_i(x) ≤ d_i, i = 1,...,q
                 x ≥ 0, Ax ≤ b,

(QCQP-bar)  z̄ = min Q_0·X + c_0ᵀx
            s.t. Q_i·X + c_iᵀx ≤ d_i, i = 1,...,q
                 Y(x,X) ∈ C.

Claim: ẑ ≤ z̄ ≤ z. To prove the claim, we must relate the different convexifications used to construct QCQP-hat and QCQP-bar.

13 Theorem 1. For x ∈ F, let f(x) = xᵀQx + cᵀx, and let f̂(·) be the convex lower envelope of f(·) on F. Then

f̂(x) = min{Q·X + cᵀx : Y(x,X) ∈ C}.

Proof: For x ∈ F, let g(x) = min{Q·X + cᵀx : Y(x,X) ∈ C}. We must show that f̂(x) = g(x). First we show that g(·) is a convex function with g(x) ≤ f(x), x ∈ F, implying that g(x) ≤ f̂(x). Assume that for i ∈ {1, 2}, x^i ∈ F and g(x^i) = Q·X^i + cᵀx^i, where Y(x^i, X^i) ∈ C. For 0 ≤ λ ≤ 1, let x(λ) = λx^1 + (1−λ)x^2, X(λ) = λX^1 + (1−λ)X^2. Then Y(x(λ), X(λ)) = λY(x^1, X^1) + (1−λ)Y(x^2, X^2) ∈ C, since C is convex. It follows that g(x(λ)) ≤ Q·X(λ) + cᵀx(λ) = λg(x^1) + (1−λ)g(x^2), proving that g(·) is convex on F. That g(x) ≤ f(x) follows immediately from Y(x, xxᵀ) ∈ C and Q·xxᵀ + cᵀx = f(x).

16 It remains to show that f̂(x) ≥ g(x). Assume that g(x) = Q·X + cᵀx, where Y(x,X) ∈ C. From the definition of C, there exist x^i ∈ F and λ_i ≥ 0, i = 1,...,k, with Σ_{i=1}^k λ_i = 1, such that Σ_{i=1}^k λ_i x^i = x and Σ_{i=1}^k λ_i x^i (x^i)ᵀ = X. It follows that

g(x) = Q·X + cᵀx = Q·Σ_{i=1}^k λ_i x^i (x^i)ᵀ + cᵀΣ_{i=1}^k λ_i x^i = Σ_{i=1}^k λ_i f(x^i).

But f̂(·) is convex on F, and f̂(x) ≤ f(x) for all x ∈ F, so

f̂(x) = f̂(Σ_{i=1}^k λ_i x^i) ≤ Σ_{i=1}^k λ_i f̂(x^i) ≤ Σ_{i=1}^k λ_i f(x^i) = g(x).
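Theorem 1 can be illustrated numerically in one dimension, where the value min{Q·X + cᵀx : Y(x,X) ∈ C} reduces to minimizing over convex combinations of points (t, t²) with t ∈ F. A sketch, assuming scipy, that approximates this minimum on a grid for the concave quadratic f(t) = −t² on F = [0,1], whose envelope is the secant −t:

```python
import numpy as np
from scipy.optimize import linprog

# f(t) = -t^2 on F = [0, 1]; its convex lower envelope is -t (the secant).
f = lambda t: -t ** 2
grid = np.linspace(0.0, 1.0, 201)   # discretize F to approximate Conv{(t, t^2)}

def g(x):
    # min sum_i lam_i f(t_i)  s.t.  sum_i lam_i t_i = x,  sum_i lam_i = 1,  lam >= 0
    res = linprog(c=f(grid),
                  A_eq=np.vstack([grid, np.ones_like(grid)]),
                  b_eq=[x, 1.0],
                  bounds=(0, None))
    return res.fun

x0 = 0.3
envelope_val = g(x0)                # matches the secant value -x0
```

The LP optimum puts all weight on the endpoints t = 0 and t = 1, exactly as the convex-combination argument in the proof suggests.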

19 The claimed relationship between QCQP-hat and QCQP-bar is an immediate consequence of Theorem 1. In particular, using Theorem 1, QCQP-hat could be rewritten in the form

(QCQP-hat)  ẑ = min Q_0·X_0 + c_0ᵀx
            s.t. Q_i·X_i + c_iᵀx ≤ d_i, i = 1,...,q
                 Y(x, X_i) ∈ C, i = 0, 1,...,q,

so that QCQP-bar corresponds to QCQP-hat with the added constraints X_0 = X_1 = ... = X_q.

Corollary 1. Let ẑ and z̄ denote the solution values of the convex relaxations QCQP-hat and QCQP-bar, respectively. Then ẑ ≤ z̄.

21 Note that both the problem of computing an exact convex lower envelope f̂(·) and the problem of characterizing C may be intractable. However, C can be represented using the cone of completely positive matrices. Define

Y⁺(x,X) = [ 1      xᵀ        s(x)ᵀ  ]
          [ x      X         Z(x,X) ]
          [ s(x)   Z(x,X)ᵀ   S(x,X) ],

where s(x) = b − Ax, S(x,X) = bbᵀ − Axbᵀ − bxᵀAᵀ + AXAᵀ, and Z(x,X) = xbᵀ − XAᵀ. The matrices S(x,X) and Z(x,X) relax s(x)s(x)ᵀ and x s(x)ᵀ, respectively. Then (Burer 2009)

C = {Y(x,X) : Y⁺(x,X) ∈ CP_{m+n+1}},

where CP_k is the cone of k×k completely positive matrices.
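For any feasible x, the rank-one lifting Y⁺(x, xxᵀ) = (1; x; s(x))(1; x; s(x))ᵀ is completely positive by construction. A sketch, assuming numpy and random illustrative data, that builds Y⁺ and verifies the two easy necessary conditions for complete positivity (entrywise nonnegativity and positive semidefiniteness, i.e. double nonnegativity):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.random((m, n))
x = rng.random(n) * 0.1          # small x, kept feasible by the choice of b below
b = A @ x + rng.random(m)        # ensures s(x) = b - Ax > 0

X = np.outer(x, x)               # rank-one lift
s = b - A @ x
S = np.outer(b, b) - A @ np.outer(x, b) - np.outer(b, x) @ A.T + A @ X @ A.T
Z = np.outer(x, b) - X @ A.T

Yplus = np.block([[np.ones((1, 1)), x[None, :], s[None, :]],
                  [x[:, None],      X,          Z],
                  [s[:, None],      Z.T,        S]])

nonneg = bool(np.all(Yplus >= -1e-12))
psd = bool(np.linalg.eigvalsh(Yplus).min() >= -1e-9)
```

At X = xxᵀ the blocks S and Z collapse to s(x)s(x)ᵀ and x s(x)ᵀ exactly, which is what the relaxation replaces for general X.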


24 The distinction between QCQP-hat and QCQP-bar is already sharp for m = n = q = 1. Consider

z = min x_1²  s.t.  x_1² ≥ 1/2,  0 ≤ x_1 ≤ 1.

Written in the form of QCQP-bar, x_1² becomes x_11, and the convex lower envelope of −x_1² on [0,1] is −x_1. The relaxation QCQP-hat is then

ẑ = min x_1²  s.t.  x_1 ≥ 1/2,  x_1 ≤ 1.

Its solution value is ẑ = 1/4. The solution value for QCQP-bar is z̄ = z = 1/2. For x_1 = 1/2, Y(x_1, x_11) ∈ C for x_11 ∈ [1/4, 1/2]. The solution of QCQP-hat corresponds to using x_1 = 1/2 along with x_11 = 1/4 for the objective, and x_11 = 1/2 for the single nonlinear constraint.
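A brute-force numeric check of the one-variable example min x_1² s.t. x_1² ≥ 1/2 on [0,1]; in one dimension, Y(x_1, x_11) ∈ C reduces to x_1² ≤ x_11 ≤ x_1. A sketch assuming numpy:

```python
import numpy as np

x1 = np.linspace(0.0, 1.0, 100001)

# Original problem: z = min x1^2  s.t.  x1^2 >= 1/2, 0 <= x1 <= 1.
z = np.min(x1[x1 ** 2 >= 0.5] ** 2)

# Envelope relaxation: the envelope of -x1^2 on [0,1] is -x1, so the
# constraint becomes x1 >= 1/2 and  zhat = min x1^2  s.t.  x1 >= 1/2.
zhat = np.min(x1[x1 >= 0.5] ** 2)

# Lifted relaxation: zbar = min x11  s.t.  x11 >= 1/2,  x1^2 <= x11 <= x1.
lo = np.maximum(x1 ** 2, 0.5)    # smallest x11 meeting x11 >= x1^2 and x11 >= 1/2
ok = lo <= x1                    # need x11 <= x1 for (x1, x11) in C
zbar = np.min(lo[ok])
```

The grid search recovers the gap described above: the envelope relaxation is strictly weaker than the lifted one, which is tight here.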

27 Figure 1: The set C for the example.

28 Two computable relaxations. For a quadratic function f(x) = xᵀQx + cᵀx defined on F = {x : 0 ≤ x ≤ e}, the well-known αBB underestimator is

f_α(x) = xᵀ(Q + Diag(α))x + (c − α)ᵀx,

where α ∈ Rⁿ₊ has Q + Diag(α) ⪰ 0. Since f_α(·) is convex, f_α(x) ≤ f̂(x), 0 ≤ x ≤ e. A further relaxation of QCQP-hat is then given by:

(QCQP_αBB)  z_αBB = min xᵀ(Q_0 + Diag(α_0))x + (c_0 − α_0)ᵀx
            s.t. xᵀ(Q_i + Diag(α_i))x + (c_i − α_i)ᵀx ≤ d_i, i = 1,...,q
                 0 ≤ x ≤ e,

where each α_i is chosen so that Q_i + Diag(α_i) ⪰ 0.
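One standard way to pick α is the uniform shift α_j = max(0, −λ_min(Q)) for all j, which makes Q + Diag(α) positive semidefinite by construction. A minimal sketch, assuming numpy and this uniform choice, that verifies f_α underestimates f on random points of [0,1]ⁿ:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # indefinite in general
c = rng.standard_normal(n)

# Uniform alpha: shift the diagonal by -lambda_min(Q) when Q is not PSD.
lam_min = np.linalg.eigvalsh(Q)[0]
alpha = np.full(n, max(0.0, -lam_min))
Qa = Q + np.diag(alpha)                              # Q + Diag(alpha), PSD

f  = lambda x: x @ Q  @ x + c @ x
fa = lambda x: x @ Qa @ x + (c - alpha) @ x          # alpha-BB underestimator

pts = rng.random((1000, n))                          # samples in [0, 1]^n
gap = np.array([f(x) - fa(x) for x in pts])
always_below = bool(np.all(gap >= -1e-9))
psd_shift = bool(np.linalg.eigvalsh(Qa)[0] >= -1e-9)
```

The gap f − f_α equals Σ_j α_j x_j(1 − x_j), which is nonnegative exactly on the box 0 ≤ x ≤ e.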

30 For F = {x : 0 ≤ x ≤ e}, there are a variety of known constraints that are valid for Y(x,X) ∈ C:

1. The constraints from the Reformulation-Linearization Technique (RLT): max{0, x_i + x_j − 1} ≤ x_ij ≤ min{x_i, x_j}.

2. The semidefinite programming (SDP) constraint Y(x,X) ⪰ 0.

3. Constraints on the off-diagonal components of Y(x,X) coming from the Boolean Quadric Polytope (BQP), for example the triangle inequalities for i < j < k:

x_i + x_j + x_k ≤ x_ij + x_ik + x_jk + 1,
x_ij + x_ik ≤ x_i + x_jk,
x_ij + x_jk ≤ x_j + x_ik,
x_ik + x_jk ≤ x_k + x_ij.
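These inequalities are valid for Y(x,X) ∈ C because they hold at every generator X = xxᵀ with x ∈ [0,1]ⁿ, and C is the convex hull of such points. A quick randomized sanity check at rank-one points, assuming numpy (n = 3 suffices to exercise one triangle inequality):

```python
import numpy as np

rng = np.random.default_rng(3)
ok = True
for _ in range(1000):
    x = rng.random(3)                # x in [0, 1]^3
    X = np.outer(x, x)               # a generator of C
    i, j, k = 0, 1, 2
    # RLT: max(0, x_i + x_j - 1) <= X_ij <= min(x_i, x_j)
    ok &= max(0.0, x[i] + x[j] - 1.0) - 1e-12 <= X[i, j] <= min(x[i], x[j]) + 1e-12
    # One triangle inequality: x_i + x_j + x_k <= X_ij + X_ik + X_jk + 1
    ok &= x[i] + x[j] + x[k] <= X[i, j] + X[i, k] + X[j, k] + 1.0 + 1e-12
ok = bool(ok)
```

Each expression is linear in every single variable with the others fixed, so it suffices that the inequalities hold at the binary vertices, which is the BQP validity argument.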

34 Consider the relaxation that imposes Y(x,X) ⪰ 0 together with the diagonal RLT constraints diag(X) ≤ x. Note that these conditions together imply the original bound constraints 0 ≤ x ≤ e. The result is:

(QCQP_SDP)  z_SDP = min Q_0·X + c_0ᵀx
            s.t. Q_i·X + c_iᵀx ≤ d_i, i = 1,...,q
                 Y(x,X) ⪰ 0, diag(X) ≤ x.

The goal is to relate QCQP_αBB and QCQP_SDP. The following theorem shows that there is a simple relationship between the convexifications used to construct these problems.

36 Theorem 2. For 0 ≤ x ≤ e, let f_α(x) = xᵀ(Q + Diag(α))x + (c − α)ᵀx, where α ≥ 0 and Q + Diag(α) ⪰ 0. Assume that Y(x,X) ⪰ 0 and diag(X) ≤ x. Then f_α(x) ≤ Q·X + cᵀx.

Proof: Let Q(α) = Q + Diag(α). Since Q(α) ⪰ 0,

f_α(x) = (c − α)ᵀx + min{Q(α)·X : X ⪰ xxᵀ}
       = (c − α)ᵀx + min{Q(α)·X : X ⪰ xxᵀ, diag(X) ≤ x},

the last because diag(X) ≤ x holds automatically for X = xxᵀ, 0 ≤ x ≤ e. But then Y(x,X) ⪰ 0 and diag(X) ≤ x imply that

f_α(x) ≤ Q(α)·X + (c − α)ᵀx = Q·X + cᵀx + αᵀ(diag(X) − x) ≤ Q·X + cᵀx.
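A randomized sanity check of Theorem 2, assuming numpy: points (x, X) with Y(x,X) ⪰ 0 and diag(X) ≤ x can be built as X = xxᵀ + D with D diagonal and 0 ≤ D_jj ≤ x_j(1 − x_j) (so X − xxᵀ ⪰ 0 and X_jj ≤ x_j), and the inequality f_α(x) ≤ Q·X + cᵀx is then verified directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
c = rng.standard_normal(n)
alpha = np.full(n, max(0.0, -np.linalg.eigvalsh(Q)[0]))   # Q + Diag(alpha) PSD
Qa = Q + np.diag(alpha)

ok = True
for _ in range(500):
    x = rng.uniform(0.05, 0.95, n)
    # X = x x^T + D with 0 <= D_jj <= x_j(1 - x_j): Y(x,X) PSD and diag(X) <= x.
    D = rng.random(n) * x * (1.0 - x)
    X = np.outer(x, x) + np.diag(D)
    fa = x @ Qa @ x + (c - alpha) @ x                     # alpha-BB underestimator
    ok &= fa <= np.sum(Q * X) + c @ x + 1e-9              # Theorem 2 inequality
ok = bool(ok)
```

The slack decomposes exactly as in the proof: Q(α)·(X − xxᵀ) ≥ 0 plus αᵀ(x − diag(X)) ≥ 0.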

39 An immediate corollary proves a relationship between QCQP_αBB and QCQP_SDP first conjectured by Jeff Linderoth.

Corollary 2. Let z_αBB and z_SDP denote the solution values of the convex relaxations QCQP_αBB and QCQP_SDP, respectively. Then z_αBB ≤ z_SDP.

42 More general convexifications. Consider a quadratic function f(x) = xᵀQx + cᵀx, and vectors v_j ∈ Rⁿ, j = 1,...,k. Assume that for x ∈ F, l_j ≤ v_jᵀx ≤ u_j. It follows that (v_jᵀx − l_j)(v_jᵀx − u_j) ≤ 0, or (v_jᵀx)² − (l_j + u_j)v_jᵀx + l_ju_j ≤ 0. For α ∈ Rᵏ₊, define

Q(α) = Q + Σ_{j=1}^k α_j v_j v_jᵀ,
c(α) = c − Σ_{j=1}^k α_j (l_j + u_j) v_j,
p(α) = Σ_{j=1}^k α_j l_j u_j,

and let f_α(x) = xᵀQ(α)x + c(α)ᵀx + p(α).
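A sketch, assuming numpy, that builds Q(α), c(α), p(α) from illustrative directions v_j with valid box bounds (l_j, u_j) and checks that f_α underestimates f wherever l_j ≤ v_jᵀx ≤ u_j holds (underestimation needs only α ≥ 0; Q(α) ⪰ 0 is required separately for convexity):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 4, 6
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
c = rng.standard_normal(n)

# Illustrative directions and bounds: v_j random, (l_j, u_j) valid on [0,1]^n.
V = rng.standard_normal((k, n))                      # rows are v_j^T
l = np.array([np.minimum(v, 0.0).sum() for v in V])  # min of v^T x over [0,1]^n
u = np.array([np.maximum(v, 0.0).sum() for v in V])  # max of v^T x over [0,1]^n
alpha = rng.random(k)

Q_a = Q + sum(a * np.outer(v, v) for a, v in zip(alpha, V))
c_a = c - sum(a * (lj + uj) * v for a, lj, uj, v in zip(alpha, l, u, V))
p_a = float(alpha @ (l * u))

f  = lambda x: x @ Q @ x + c @ x
fa = lambda x: x @ Q_a @ x + c_a @ x + p_a

pts = rng.random((1000, n))                          # x in [0,1]^n, so l <= Vx <= u
under = bool(np.all([fa(x) <= f(x) + 1e-9 for x in pts]))
```

Each added term α_j[(v_jᵀx)² − (l_j + u_j)v_jᵀx + l_ju_j] is nonpositive on F, which is exactly the product inequality above.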

44 If Q(α) ⪰ 0, f_α(·) is a convex underestimator for f(·) on F. Zheng, Sun and Li (2009) refer to functions of the form f_α(·) as DC underestimators, and apply them to convexify the objective in QCQP problems with only linear constraints (q = 0). Note that the αBB underestimator on 0 ≤ x ≤ e corresponds to v_j = e_j, l_j = 0, u_j = 1, j = 1,...,n. Additional possibilities for the v_j include eigenvectors corresponding to negative eigenvalues of Q, and transposed rows of the constraint matrix A.

48 Using underestimators of the form f_α(·), we obtain the convex relaxation

(QCQP_DC)  z_DC = min xᵀQ_0(α_0)x + c_0(α_0)ᵀx + p(α_0)
           s.t. xᵀQ_i(α_i)x + c_i(α_i)ᵀx + p(α_i) ≤ d_i, i = 1,...,q
                x ≥ 0, Ax ≤ b,

where each α_i ∈ Rᵏ₊ is chosen so that Q_i(α_i) ⪰ 0. We will compare QCQP_DC to a relaxation of QCQP that combines the semidefiniteness condition Y(x,X) ⪰ 0 with the RLT constraints on (x,X) that can be obtained from the original linear constraints x ≥ 0, Ax ≤ b.

50 The RLT constraints can be described very succinctly using the matrix Y⁺(x,X); in fact these constraints correspond exactly to X ≥ 0, S(x,X) ≥ 0, Z(x,X) ≥ 0. Therefore the RLT constraints and Y(x,X) ⪰ 0 together are equivalent to Y⁺(x,X) being a doubly nonnegative matrix. We define the relaxation

(QCQP_DNN)  z_DNN = min Q_0·X + c_0ᵀx
            s.t. Q_i·X + c_iᵀx ≤ d_i, i = 1,...,q
                 Y⁺(x,X) ∈ DNN_{m+n+1},

where DNN_k is the cone of k×k doubly nonnegative matrices. Note that the relaxation QCQP_DNN is entirely determined by the original data of the problem QCQP; in particular, QCQP_DNN does not involve the vectors v_j and bounds (l_j, u_j) used to construct the convexifications in QCQP_DC.

52 To compare QCQP_DC and QCQP_DNN we require a generalization of Theorem 2 that applies to the more general convexification f_α(·). This result naturally involves the RLT constraints

v_jᵀX v_j ≤ (l_j + u_j)v_jᵀx − l_ju_j, j = 1,...,k,  (1)

that are obtained from l_j ≤ v_jᵀx ≤ u_j, j = 1,...,k.

Theorem 3. For x ∈ F, let f_α(x) = xᵀQ(α)x + c(α)ᵀx + p(α), where α ≥ 0 and Q(α) ⪰ 0. Assume that Y(x,X) ⪰ 0 and (x,X) satisfies (1). Then f_α(x) ≤ Q·X + cᵀx.

Proof: The proof is very similar to that of Theorem 2.

55 Theorem 4. Let z_DC and z_DNN denote the solution values of the convex relaxations QCQP_DC and QCQP_DNN, respectively. Then z_DC ≤ z_DNN.

Proof: Consider the convex relaxation

z_V = min Q_0·X + c_0ᵀx
      s.t. Q_i·X + c_iᵀx ≤ d_i, i = 1,...,q
           v_jᵀX v_j ≤ (l_j + u_j)v_jᵀx − l_ju_j, j = 1,...,k
           x ≥ 0, Ax ≤ b, Y(x,X) ⪰ 0.

By Theorem 3 we immediately have z_DC ≤ z_V. However, the constraints l_j ≤ v_jᵀx ≤ u_j are implied by the original constraints x ≥ 0, Ax ≤ b, and therefore (Sherali and Adams 1998, Proposition 8.2) the constraints (1) are implied by the RLT constraints X ≥ 0, S(x,X) ≥ 0, Z(x,X) ≥ 0. It follows that z_V ≤ z_DNN.

58 Conclusion. The approach of approximating C has substantial advantages over the common methodology of approximating convex lower envelopes, provided the solver can handle the constraint Y(x,X) ⪰ 0.


More information

Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams

Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams Convex Quadratic Relaxations of Nonconvex Quadratically Constrained Quadratic Progams John E. Mitchell, Jong-Shi Pang, and Bin Yu Original: June 10, 2011 Abstract Nonconvex quadratic constraints can be

More information

A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones

A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

COM Optimization for Communications 8. Semidefinite Programming

COM Optimization for Communications 8. Semidefinite Programming COM524500 Optimization for Communications 8. Semidefinite Programming Institute Comm. Eng. & Dept. Elect. Eng., National Tsing Hua University 1 Semidefinite Programming () Inequality form: min c T x s.t.

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

Symmetric Matrices and Eigendecomposition

Symmetric Matrices and Eigendecomposition Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions

More information

New Global Algorithms for Quadratic Programming with A Few Negative Eigenvalues Based on Alternative Direction Method and Convex Relaxation

New Global Algorithms for Quadratic Programming with A Few Negative Eigenvalues Based on Alternative Direction Method and Convex Relaxation New Global Algorithms for Quadratic Programming with A Few Negative Eigenvalues Based on Alternative Direction Method and Convex Relaxation Hezhi Luo Xiaodi Bai Gino Lim Jiming Peng December 3, 07 Abstract

More information

What can be expressed via Conic Quadratic and Semidefinite Programming?

What can be expressed via Conic Quadratic and Semidefinite Programming? What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent

More information

A RLT relaxation via Semidefinite cut for the detection of QPSK signaling in MIMO channels

A RLT relaxation via Semidefinite cut for the detection of QPSK signaling in MIMO channels A RLT relaxation via Semidefinite cut for the detection of QPSK signaling in MIMO channels Rupaj Kumar Nayak Harish Kumar Sahoo Abstract A detection technique of quadrature phase shift keying (QPSK) signaling

More information

Lecture 5. 1 Goermans-Williamson Algorithm for the maxcut problem

Lecture 5. 1 Goermans-Williamson Algorithm for the maxcut problem Math 280 Geometric and Algebraic Ideas in Optimization April 26, 2010 Lecture 5 Lecturer: Jesús A De Loera Scribe: Huy-Dung Han, Fabio Lapiccirella 1 Goermans-Williamson Algorithm for the maxcut problem

More information

Continuous Optimisation, Chpt 9: Semidefinite Problems

Continuous Optimisation, Chpt 9: Semidefinite Problems Continuous Optimisation, Chpt 9: Semidefinite Problems Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2016co.html version: 21/11/16 Monday

More information

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Author: Faizan Ahmed Supervisor: Dr. Georg Still Master Thesis University of Twente the Netherlands May 2010 Relations

More information

Globally Solving Nonconvex Quadratic Programming Problems via Completely Positive Programming

Globally Solving Nonconvex Quadratic Programming Problems via Completely Positive Programming ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 Globally Solving Nonconvex Quadratic Programming Problems via Completely Positive Programming Jieqiu Chen and Samuel Burer Mathematics

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

FINDING LOW-RANK SOLUTIONS OF SPARSE LINEAR MATRIX INEQUALITIES USING CONVEX OPTIMIZATION

FINDING LOW-RANK SOLUTIONS OF SPARSE LINEAR MATRIX INEQUALITIES USING CONVEX OPTIMIZATION FINDING LOW-RANK SOLUTIONS OF SPARSE LINEAR MATRIX INEQUALITIES USING CONVEX OPTIMIZATION RAMTIN MADANI, GHAZAL FAZELNIA, SOMAYEH SOJOUDI AND JAVAD LAVAEI Abstract. This paper is concerned with the problem

More information

Optimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints

Optimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints Optimization using Calculus Optimization of Functions of Multiple Variables subject to Equality Constraints 1 Objectives Optimization of functions of multiple variables subjected to equality constraints

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

RLT-POS: Reformulation-Linearization Technique (RLT)-based Optimization Software for Polynomial Programming Problems

RLT-POS: Reformulation-Linearization Technique (RLT)-based Optimization Software for Polynomial Programming Problems RLT-POS: Reformulation-Linearization Technique (RLT)-based Optimization Software for Polynomial Programming Problems E. Dalkiran 1 H.D. Sherali 2 1 The Department of Industrial and Systems Engineering,

More information

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study

More information

Decomposition Methods for Quadratic Zero-One Programming

Decomposition Methods for Quadratic Zero-One Programming Decomposition Methods for Quadratic Zero-One Programming Borzou Rostami This dissertation is submitted for the degree of Doctor of Philosophy in Information Technology Advisor: Prof. Federico Malucelli

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Convex Optimization Fourth lecture, 05.05.2010 Jun.-Prof. Matthias Hein Reminder from last time Convex functions: first-order condition: f(y) f(x) + f x,y x, second-order

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

A Geometrical Analysis of a Class of Nonconvex Conic Programs for Convex Conic Reformulations of Quadratic and Polynomial Optimization Problems

A Geometrical Analysis of a Class of Nonconvex Conic Programs for Convex Conic Reformulations of Quadratic and Polynomial Optimization Problems A Geometrical Analysis of a Class of Nonconvex Conic Programs for Convex Conic Reformulations of Quadratic and Polynomial Optimization Problems Sunyoung Kim, Masakazu Kojima, Kim-Chuan Toh arxiv:1901.02179v1

More information

Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming

Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming Exact Augmented Lagrangian Functions for Nonlinear Semidefinite Programming Ellen H. Fukuda Bruno F. Lourenço June 0, 018 Abstract In this paper, we study augmented Lagrangian functions for nonlinear semidefinite

More information

1. Introduction. Consider the following quadratically constrained quadratic optimization problem:

1. Introduction. Consider the following quadratically constrained quadratic optimization problem: ON LOCAL NON-GLOBAL MINIMIZERS OF QUADRATIC OPTIMIZATION PROBLEM WITH A SINGLE QUADRATIC CONSTRAINT A. TAATI AND M. SALAHI Abstract. In this paper, we consider the nonconvex quadratic optimization problem

More information

1 Positive definiteness and semidefiniteness

1 Positive definiteness and semidefiniteness Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

Lecture: Convex Optimization Problems

Lecture: Convex Optimization Problems 1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns L. Vandenberghe ECE133A (Winter 2018) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Bilinear and quadratic forms

Bilinear and quadratic forms Bilinear and quadratic forms Zdeněk Dvořák April 8, 015 1 Bilinear forms Definition 1. Let V be a vector space over a field F. A function b : V V F is a bilinear form if b(u + v, w) = b(u, w) + b(v, w)

More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 02: Optimization (Convex and Otherwise) What is Optimization? An Optimization Problem has 3 parts. x F f(x) :

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

A Binarisation Approach to Non-Convex Quadratically Constrained Quadratic Programs

A Binarisation Approach to Non-Convex Quadratically Constrained Quadratic Programs A Binarisation Approach to Non-Convex Quadratically Constrained Quadratic Programs Laura Galli Adam N. Letchford February 2015 Abstract The global optimisation of non-convex quadratically constrained quadratic

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

Globally solving nonconvex quadratic programming problems via completely positive programming

Globally solving nonconvex quadratic programming problems via completely positive programming Math. Prog. Comp. (2012) 4:33 52 DOI 10.1007/s12532-011-0033-9 FULL LENGTH PAPER Globally solving nonconvex quadratic programming problems via completely positive programming Jieqiu Chen Samuel Burer Received:

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs

More information

Ma/CS 6b Class 20: Spectral Graph Theory

Ma/CS 6b Class 20: Spectral Graph Theory Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Eigenvalues and Eigenvectors A an n n matrix of real numbers. The eigenvalues of A are the numbers λ such that Ax = λx for some nonzero vector x

More information

Preliminaries Overview OPF and Extensions. Convex Optimization. Lecture 8 - Applications in Smart Grids. Instructor: Yuanzhang Xiao

Preliminaries Overview OPF and Extensions. Convex Optimization. Lecture 8 - Applications in Smart Grids. Instructor: Yuanzhang Xiao Convex Optimization Lecture 8 - Applications in Smart Grids Instructor: Yuanzhang Xiao University of Hawaii at Manoa Fall 2017 1 / 32 Today s Lecture 1 Generalized Inequalities and Semidefinite Programming

More information

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22.

6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22. 61 Matrices Definition: A Matrix A is a rectangular array of the form A 11 A 12 A 1n A 21 A 22 A 2n A m1 A m2 A mn The size of A is m n, where m is the number of rows and n is the number of columns The

More information

Symmetric matrices and dot products

Symmetric matrices and dot products Symmetric matrices and dot products Proposition An n n matrix A is symmetric iff, for all x, y in R n, (Ax) y = x (Ay). Proof. If A is symmetric, then (Ax) y = x T A T y = x T Ay = x (Ay). If equality

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

MINI-TUTORIAL ON SEMI-ALGEBRAIC PROOF SYSTEMS. Albert Atserias Universitat Politècnica de Catalunya Barcelona

MINI-TUTORIAL ON SEMI-ALGEBRAIC PROOF SYSTEMS. Albert Atserias Universitat Politècnica de Catalunya Barcelona MINI-TUTORIAL ON SEMI-ALGEBRAIC PROOF SYSTEMS Albert Atserias Universitat Politècnica de Catalunya Barcelona Part I CONVEX POLYTOPES Convex polytopes as linear inequalities Polytope: P = {x R n : Ax b}

More information

Continuous Optimisation, Chpt 9: Semidefinite Optimisation

Continuous Optimisation, Chpt 9: Semidefinite Optimisation Continuous Optimisation, Chpt 9: Semidefinite Optimisation Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2017co.html version: 28/11/17 Monday

More information

EE Applications of Convex Optimization in Signal Processing and Communications Dr. Andre Tkacenko, JPL Third Term

EE Applications of Convex Optimization in Signal Processing and Communications Dr. Andre Tkacenko, JPL Third Term EE 150 - Applications of Convex Optimization in Signal Processing and Communications Dr. Andre Tkacenko JPL Third Term 2011-2012 Due on Thursday May 3 in class. Homework Set #4 1. (10 points) (Adapted

More information

Separation Techniques for Constrained Nonlinear 0 1 Programming

Separation Techniques for Constrained Nonlinear 0 1 Programming Separation Techniques for Constrained Nonlinear 0 1 Programming Christoph Buchheim Computer Science Department, University of Cologne and DEIS, University of Bologna MIP 2008, Columbia University, New

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming

Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Christoph Buchheim 1 and Angelika Wiegele 2 1 Fakultät für Mathematik, Technische Universität Dortmund christoph.buchheim@tu-dortmund.de

More information

Lift-and-Project Techniques and SDP Hierarchies

Lift-and-Project Techniques and SDP Hierarchies MFO seminar on Semidefinite Programming May 30, 2010 Typical combinatorial optimization problem: max c T x s.t. Ax b, x {0, 1} n P := {x R n Ax b} P I := conv(k {0, 1} n ) LP relaxation Integral polytope

More information

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX Math. Operationsforsch. u. Statist. 5 974, Heft 2. pp. 09 6. PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX András Prékopa Technological University of Budapest and Computer

More information

ROBUST STATE FEEDBACK CONTROL OF UNCERTAIN POLYNOMIAL DISCRETE-TIME SYSTEMS: AN INTEGRAL ACTION APPROACH

ROBUST STATE FEEDBACK CONTROL OF UNCERTAIN POLYNOMIAL DISCRETE-TIME SYSTEMS: AN INTEGRAL ACTION APPROACH International Journal of Innovative Computing, Information and Control ICIC International c 2013 ISSN 1349-4198 Volume 9, Number 3, March 2013 pp. 1233 1244 ROBUST STATE FEEDBACK CONTROL OF UNCERTAIN POLYNOMIAL

More information

Moments and Positive Polynomials for Optimization II: LP- VERSUS SDP-relaxations

Moments and Positive Polynomials for Optimization II: LP- VERSUS SDP-relaxations Moments and Positive Polynomials for Optimization II: LP- VERSUS SDP-relaxations LAAS-CNRS and Institute of Mathematics, Toulouse, France Tutorial, IMS, Singapore 2012 LP-relaxations LP- VERSUS SDP-relaxations

More information