On Equivalence of Semidefinite Relaxations for Quadratic Matrix Programming


Yichuan Ding, Dongdong Ge, Henry Wolkowicz

April 28, 2010

Contents

1 Introduction
  1.1 Preliminaries
  1.2 Outline
2 Quadratic Matrix Programming: Case I
  2.1 Lagrangian Relaxation
  2.2 Equivalence of Vector and Matrix Lifting for QMP$_1$
3 Quadratic Matrix Programming: Case II
  3.1 Equivalence of Vector and Matrix Lifting for QMP$_2$
    3.1.1 Proof of (Main) Theorem 3.1
    3.1.2 Unbalanced Orthogonal Procrustes Problem
  3.2 An Extension to QMP with Conic Constraints
    3.2.1 Graph Partition Problem
4 Conclusion
References
Index

Abstract

In this paper, we analyze two popular semidefinite programming (SDP) relaxations for quadratically constrained quadratic programs (QCQP) with matrix variables. These are based on vector lifting and on matrix lifting, and they are of different size and expense. We prove, under mild assumptions, that the two relaxations provide equivalent bounds. Thus, our results provide a theoretical guideline for choosing an inexpensive SDP relaxation that still yields a strong bound. Our results also shed light on how to simplify large-scale SDP constraints by exploiting the particular sparsity pattern.

Technical report CORR; URL: orion.math.uwaterloo.ca/~hwolkowi/reports/abstracts.html
Yichuan Ding: Department of Management Science & Engineering, Stanford University; y7ding@stanford.edu
Dongdong Ge: Antai College of Economics and Management, Shanghai Jiao Tong University; ddge@sjtu.edu.cn
Henry Wolkowicz: Research supported by the Natural Sciences and Engineering Research Council of Canada; hwolkowicz@uwaterloo.ca

Keywords: Semidefinite Programming Relaxations, Quadratic Matrix Programming, Quadratically Constrained Quadratic Programming, Hard Combinatorial Problems.

1 Introduction

In this paper, we aim to provide theoretical insights on how to compare different semidefinite programming (SDP) relaxations. In particular, we study a vector-lifting relaxation, compare it to a significantly smaller matrix-lifting relaxation, and show that the two resulting bounds are equal.

Many hard combinatorial problems can be formulated as quadratically constrained quadratic programs (QCQP) with matrix variables. If the resulting formulation is nonconvex, then SDP relaxations provide an efficient and successful approach for computing approximate solutions and strong bounds. Finding strong and inexpensive bounds is essential for branch and bound algorithms for solving large, hard combinatorial problems. However, there can be many different SDP relaxations of the same problem, and it is usually not obvious which relaxation is best overall with regard to both computational efficiency and bound quality, e.g., [12]. For examples of SDP relaxations of QCQPs arising from hard problems, see e.g., the quadratic assignment problem (QAP) [11, 12, 23, 26, 35], graph partitioning (GPP) [34], sensor network localization (SNL) [9, 10, 21], and the more general Euclidean distance matrix completion problem [1].

1.1 Preliminaries

The concept of quadratic matrix programming (QMP) was introduced by Beck in [6], where it refers to a special instance of QCQP with matrix variables. Because we include the study of more general problems, we denote the model discussed in [6] as the first case of quadratic matrix programming, (QMP$_1$):

\[
(\text{QMP}_1)\qquad
\begin{array}{rl}
\mu_{P_1} := \min & \operatorname{trace}(X^T Q_0 X) + 2\operatorname{trace}(C_0^T X) + \beta_0 \\
\text{s.t.} & \operatorname{trace}(X^T Q_j X) + 2\operatorname{trace}(C_j^T X) + \beta_j \le 0, \quad j = 1,2,\dots,m \\
& X \in M_{nr},
\end{array}
\]

where $M_{nr}$ denotes the set of $n \times r$ matrices, $Q_j \in S^n$, $j = 0,1,\dots,m$, $S^n$ is the space of $n \times n$ symmetric matrices, and $C_j \in M_{nr}$. Throughout this paper, we use the trace inner product (dot product) $C \cdot X := \operatorname{trace} C^T X$.

The applicability of QMP$_1$ is limited when compared to the more general class QCQP. However, many applications use QCQP models in the form of QMP$_1$, e.g., robust optimization [7] and sensor network localization [1]. In addition, many combinatorial problems are formulated with orthogonality constraints in one of the two forms

\[
X X^T = I, \qquad X^T X = I. \tag{1}
\]

When $X$ is square, the two constraints in (1) are equivalent to each other in theory. However, relaxations that include both forms of the constraints, rather than just one, can be expected to yield stronger bounds. For example, Anstreicher and Wolkowicz [4] proved that strong duality holds for a certain relaxation of the QAP when both forms of the orthogonality constraints in (1) are included, while there can be a duality gap if only one of the forms is used. Motivated by this result, we extend our scope of problems so that the objective and constraint functions can include both forms of quadratic terms, $X^T Q_j X$ and $X P_j X^T$.
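
For readers who wish to experiment, the following minimal NumPy sketch (our illustration, not part of the original development) shows why the two forms in (1) carry different information when $X$ is rectangular: $X^T X = I_r$ can hold while $X X^T$ is only a rank-$r$ projector.

```python
import numpy as np

# Minimal numeric illustration (ours): for rectangular X the two
# orthogonality forms in (1) are NOT equivalent.
rng = np.random.default_rng(0)
n, r = 5, 2

# X with orthonormal columns, via (reduced) QR of a random n-by-r matrix.
X, _ = np.linalg.qr(rng.standard_normal((n, r)))

print(np.allclose(X.T @ X, np.eye(r)))   # True:  X^T X = I_r
print(np.allclose(X @ X.T, np.eye(n)))   # False: X X^T is a rank-r projector
print(np.linalg.matrix_rank(X @ X.T))    # r

# For square X (n = r), the two forms coincide.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
print(np.allclose(Q.T @ Q, Q @ Q.T))     # True
```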

We now define the second case of quadratic matrix programming, (QMP$_2$):

\[
(\text{QMP}_2)\qquad
\begin{array}{rl}
\min & \operatorname{trace}(X^T Q_0 X) + \operatorname{trace}(X P_0 X^T) + 2\operatorname{trace}(C_0^T X) + \beta_0 \\
\text{s.t.} & \operatorname{trace}(X^T Q_j X) + \operatorname{trace}(X P_j X^T) + 2\operatorname{trace}(C_j^T X) + \beta_j \le 0, \quad j = 1,\dots,m \\
& X \in M_{nr},
\end{array} \tag{2}
\]

where $Q_j$, $P_j$ are symmetric matrices of appropriate sizes.

Both QMP$_1$ and QMP$_2$ can be vectorized into QCQP form using

\[
\operatorname{trace}(X^T Q X) = \operatorname{vec}(X)^T (I_r \otimes Q) \operatorname{vec}(X), \qquad
\operatorname{trace}(X P X^T) = \operatorname{vec}(X)^T (P \otimes I_n) \operatorname{vec}(X), \tag{3}
\]

where $\otimes$ denotes the Kronecker product and $\operatorname{vec}(X)$ vectorizes $X$ by stacking the columns of $X$ on top of each other. The difference between the Kronecker products $(I_r \otimes Q)$ and $(P \otimes I_n)$ shows that there is a difference in the corresponding Lagrange multipliers, and illustrates why the bounds from Lagrangian relaxation differ for these two sets of constraints.

The SDP relaxation of the vectorized QCQP is called the vector-lifting semidefinite relaxation (VSDR). Under a constraint qualification assumption, VSDR for QCQP is equivalent to the dual of the classical Lagrangian relaxation; see, e.g., [24, 33, 5]. From (3), we get

\[
\operatorname{trace}(X^T Q X) = \operatorname{trace}\big((I_r \otimes Q) Y\big), \quad
\operatorname{trace}(X P X^T) = \operatorname{trace}\big((P \otimes I_n) Y\big), \quad
\text{if } Y = \operatorname{vec}(X)\operatorname{vec}(X)^T. \tag{4}
\]

VSDR is derived using (4) with the relaxation $Y \succeq \operatorname{vec}(X)\operatorname{vec}(X)^T$. A Schur complement argument, e.g., [25, 22], implies the equivalence of this relaxation to the large matrix variable constraint

\[
\begin{bmatrix} 1 & \operatorname{vec}(X)^T \\ \operatorname{vec}(X) & Y \end{bmatrix} \succeq 0.
\]

A similar result holds for $\operatorname{trace}(X P X^T) = \operatorname{vec}(X)^T (P \otimes I_n) \operatorname{vec}(X)$.

Alternatively, from (3), we get the smaller system

\[
\operatorname{trace}(X^T Q X) = \operatorname{trace}(Q Y), \ \text{if } Y = X X^T; \qquad
\operatorname{trace}(X P X^T) = \operatorname{trace}(P Y), \ \text{if } Y = X^T X. \tag{5}
\]

The matrix-lifting semidefinite relaxation (MSDR) is derived using (5) with the relaxation $Y \succeq X X^T$. A Schur complement argument now implies the equivalence of this relaxation to the smaller matrix variable constraint

\[
\begin{bmatrix} I_r & X^T \\ X & Y \end{bmatrix} \succeq 0.
\]

Again, a similar result holds for $\operatorname{trace}(X P X^T)$.

Intuitively, one expects VSDR to provide stronger bounds than MSDR. Beck [6] proved that VSDR is actually equivalent to MSDR for QMP$_1$ if both SDP relaxations attain optimality and have a zero duality gap, e.g., when a constraint qualification, such as the Slater condition, holds for the dual program. In this paper we strengthen this result by dropping the constraint qualification assumption. We then present our main contribution: we show the equivalence between MSDR and VSDR for the more general problem QMP$_2$ under a constraint qualification. This result is of greater interest because QMP$_2$ does not possess the same nice structure (chordal pattern) as QMP$_1$. Moreover, QMP$_2$ encompasses a much richer class of problems and therefore has more significant applications.
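
The identities (3)-(5) are easy to verify numerically. The following NumPy sketch (ours, not from the paper) checks them on random data; note that $\operatorname{vec}(\cdot)$ stacks columns, which corresponds to `flatten(order='F')`.

```python
import numpy as np

# Numeric check of identities (3)-(5); an illustrative sketch only.
rng = np.random.default_rng(1)
n, r = 4, 3
X = rng.standard_normal((n, r))
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # Q in S^n
P = rng.standard_normal((r, r)); P = (P + P.T) / 2   # P in S^r

x = X.flatten(order='F')                 # vec(X): stack columns

# (3): quadratic forms via Kronecker products
assert np.isclose(np.trace(X.T @ Q @ X), x @ np.kron(np.eye(r), Q) @ x)
assert np.isclose(np.trace(X @ P @ X.T), x @ np.kron(P, np.eye(n)) @ x)

# (4): vector lifting, Y = vec(X) vec(X)^T  (size nr x nr)
Yv = np.outer(x, x)
assert np.isclose(np.trace(X.T @ Q @ X), np.trace(np.kron(np.eye(r), Q) @ Yv))

# (5): matrix lifting, Y = X X^T (n x n) or Y = X^T X (r x r)
assert np.isclose(np.trace(X.T @ Q @ X), np.trace(Q @ (X @ X.T)))
assert np.isclose(np.trace(X @ P @ X.T), np.trace(P @ (X.T @ X)))
print("identities (3)-(5) verified")
```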

1.2 Outline

In Section 2 we present the equivalence of the corresponding VSDR and MSDR formulations for QMP$_1$ and prove Beck's result without the constraint qualification assumption; see Theorem 2.1. Section 3 proves the main result, i.e., that VSDR and MSDR generate equivalent lower bounds for QMP$_2$ under a constraint qualification assumption; see Theorem 3.1. Numerical tests are included. Section 4 provides concluding remarks.

2 Quadratic Matrix Programming: Case I

We first discuss the two relaxations for QMP$_1$. We denote the matrices in the relaxations obtained from vector and matrix lifting by

\[
M(q_j^V(\cdot)) := \begin{bmatrix} \beta_j & \operatorname{vec}(C_j)^T \\ \operatorname{vec}(C_j) & I_r \otimes Q_j \end{bmatrix}, \qquad
M(q_j^M(\cdot)) := \begin{bmatrix} \frac{\beta_j}{r} I_r & C_j^T \\ C_j & Q_j \end{bmatrix}.
\]

We let

\[
y = \begin{bmatrix} x_0 \\ \operatorname{vec}(X) \end{bmatrix} \in \mathbb{R}^{nr+1}, \qquad
Y = \begin{bmatrix} X_0 \\ X \end{bmatrix} \in M_{(r+n)r},
\]

and denote the quadratic and homogenized quadratic functions

\[
\begin{aligned}
q_j(X) &:= \operatorname{trace}(X^T Q_j X) + 2\operatorname{trace}(C_j^T X) + \beta_j, \\
q_j^V(X, x_0) &:= \operatorname{trace}(X^T Q_j X) + 2\operatorname{trace}(C_j^T X x_0) + \beta_j x_0^2 = y^T M(q_j^V(\cdot))\, y, \\
q_j^M(X, X_0) &:= \operatorname{trace}(X^T Q_j X) + 2\operatorname{trace}(X_0^T C_j^T X) + \tfrac{\beta_j}{r}\operatorname{trace}(X_0^T I_r X_0)
= \operatorname{trace}\big(Y^T M(q_j^M(\cdot))\, Y\big).
\end{aligned}
\]

2.1 Lagrangian Relaxation

As mentioned above, under a constraint qualification, VSDR for QCQP is equivalent to the dual of the classical Lagrangian relaxation. We include this result for completeness and to illustrate the role of a constraint qualification in the relaxation. We follow the approach in [24, Pg. 403] and use the strong duality of the trust region subproblem [31] to obtain the Lagrangian relaxation (or dual) of QMP$_1$ as an SDP:

\[
\begin{aligned}
\mu_L &:= \max_{\lambda \ge 0}\ \min_{X}\ q_0(X) + \sum_{j=1}^m \lambda_j q_j(X) \\
&= \max_{\lambda \ge 0}\ \min_{X,\, x_0^2 = 1}\ q_0^V(X, x_0) + \sum_{j=1}^m \lambda_j q_j^V(X, x_0) \\
&= \max_{\lambda \ge 0,\, t}\ \min_{y}\ y^T \Big( M(q_0^V(\cdot)) + \sum_{j=1}^m \lambda_j M(q_j^V(\cdot)) \Big) y + t(1 - x_0^2) \\
&= \max_{\lambda \ge 0,\, t}\ \min_{y}\ \operatorname{trace}\Big( M(q_0^V(\cdot)) + \sum_{j=1}^m \lambda_j M(q_j^V(\cdot)) \Big) y y^T + t(1 - x_0^2) \\
&= (\text{DVSDR}_1)\quad
\begin{array}{rl}
\max & t \\
\text{s.t.} & \begin{bmatrix} t & 0 \\ 0 & 0 \end{bmatrix} \preceq M(q_0^V(\cdot)) + \sum_{j=1}^m \lambda_j M(q_j^V(\cdot)) \\
& \lambda \in \mathbb{R}^m_+,\ t \in \mathbb{R}.
\end{array}
\end{aligned} \tag{6}
\]

As illustrated in (6), the Lagrangian relaxation is the dual program (denoted DVSDR$_1$) of the vector-lifting relaxation VSDR$_1$ given below. Hence, under a constraint qualification, the Lagrangian relaxation is equivalent to the vector-lifting semidefinite relaxation. The usual constraint qualification is the Slater condition, i.e.,

\[
\exists\, \lambda \in \mathbb{R}^m_+ \quad \text{s.t.} \quad M(q_0^V(\cdot)) + \sum_{j=1}^m \lambda_j M(q_j^V(\cdot)) \succ 0. \tag{7}
\]

2.2 Equivalence of Vector and Matrix Lifting for QMP$_1$

Recall that the dot product refers to the trace inner product, $C \cdot X = \operatorname{trace} C^T X$. The vector-lifting relaxation is

\[
(\text{VSDR}_1)\qquad
\begin{array}{rl}
\mu_{V_1} := \min & M(q_0^V(\cdot)) \cdot Z_V \\
\text{s.t.} & M(q_j^V(\cdot)) \cdot Z_V \le 0, \quad j = 1,2,\dots,m \\
& (Z_V)_{1,1} = 1 \\
& Z_V \succeq 0.
\end{array}
\]

Thus the constraint matrix is blocked as $Z_V = \begin{bmatrix} 1 & \operatorname{vec}(X)^T \\ \operatorname{vec}(X) & Y_V \end{bmatrix}$. The matrix-lifting relaxation is

\[
(\text{MSDR}_1)\qquad
\begin{array}{rl}
\mu_{M_1} := \min & M(q_0^M(\cdot)) \cdot Z_M \\
\text{s.t.} & M(q_j^M(\cdot)) \cdot Z_M \le 0, \quad j = 1,2,\dots,m \\
& (Z_M)_{1:r,1:r} = I_r \\
& Z_M \succeq 0.
\end{array}
\]

Thus the constraint matrix is blocked as $Z_M = \begin{bmatrix} I_r & X^T \\ X & Y_M \end{bmatrix}$.

VSDR$_1$ is obtained by relaxing the quadratic equality constraint $Y_V = \operatorname{vec}(X)\operatorname{vec}(X)^T$ to $Y_V \succeq \operatorname{vec}(X)\operatorname{vec}(X)^T$, and then formulating this as $Z_V = \begin{bmatrix} 1 & \operatorname{vec}(X)^T \\ \operatorname{vec}(X) & Y_V \end{bmatrix} \succeq 0$. MSDR$_1$ is obtained by relaxing the quadratic equality constraint $Y_M = X X^T$ to $Y_M \succeq X X^T$ and then reformulating this as the linear conic constraint $Z_M = \begin{bmatrix} I_r & X^T \\ X & Y_M \end{bmatrix} \succeq 0$. VSDR$_1$ involves $O((nr)^2)$ variables and $O(m)$ constraints, where $m$ is often of order $nr$, whereas the smaller problem MSDR$_1$ has only $O((n+r)^2)$ variables.
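
For concreteness, here is a small MSDR$_1$ sketch in CVXPY (our illustration; the random data, variable names, and default solver are our own choices, not from the paper). The lifted matrix $Z_M$ is declared PSD and its leading block is pinned to $I_r$, exactly as in the formulation above.

```python
import numpy as np
import cvxpy as cp

# A minimal MSDR_1 sketch (ours): random data, CVXPY default solver.
rng = np.random.default_rng(2)
n, r, m = 6, 2, 3
Qs = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m + 1)]
Cs = [rng.standard_normal((n, r)) for _ in range(m + 1)]
beta = rng.standard_normal(m + 1)

# Z_M = [[I_r, X^T], [X, Y_M]] >= 0 with (Z_M)_{1:r,1:r} = I_r, as in MSDR_1.
Z = cp.Variable((r + n, r + n), PSD=True)
X, Y = Z[r:, :r], Z[r:, r:]

def q(j):  # matrix-lifted value of q_j:  Q_j . Y + 2 C_j . X + beta_j
    return cp.trace(Qs[j] @ Y) + 2 * cp.trace(Cs[j].T @ X) + beta[j]

prob = cp.Problem(cp.Minimize(q(0)),
                  [Z[:r, :r] == np.eye(r)] + [q(j) <= 0 for j in range(1, m + 1)])
prob.solve()
print(prob.status, prob.value)  # mu_{M_1}; a random instance without a CQ
                                # may of course be unbounded or infeasible
```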

The equivalence of the relaxations obtained from vector and matrix lifting is proved in [6, Theorem 4.3] under a constraint qualification for the dual programs. We now present our first main result and prove this equivalence without any constraint qualification assumption. The proof is of interest in itself, in that we use the chordal property and matrix completions to connect the two relaxations.

Theorem 2.1. As numbers in the extended real line $(-\infty, +\infty]$, the optimal values of the two relaxations obtained using vector and matrix lifting are equal, i.e., $\mu_{V_1} = \mu_{M_1}$.

Proof. The proof follows by showing that both VSDR$_1$ and MSDR$_1$ generate the same optimal value as the following program:

\[
(\overline{\text{VSDR}}_1)\qquad
\begin{array}{rl}
\bar\mu_{V_1} := \min & Q_0 \cdot \sum_{j=1}^r Y_{jj} + 2 C_0 \cdot X + \beta_0 \\
\text{s.t.} & Q_i \cdot \sum_{j=1}^r Y_{jj} + 2 C_i \cdot X + \beta_i \le 0, \quad i = 1,2,\dots,m \\
& Z_{jj} = \begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix} \succeq 0, \quad j = 1,2,\dots,r,
\end{array}
\]

where $x_j$, $j = 1,2,\dots,r$, are the columns of the matrix $X$, and $Y_{jj}$, $j = 1,2,\dots,r$, represent the corresponding quadratic parts $x_j x_j^T$. We first show that the optimal values of VSDR$_1$ and $\overline{\text{VSDR}}_1$ are equal, i.e., that

\[
\mu_{V_1} = \bar\mu_{V_1}. \tag{8}
\]

The equivalence of the two optimal values is established by showing that, for each feasible solution of either program, one can construct a corresponding feasible solution of the other program with the same objective value.

First, suppose $\overline{\text{VSDR}}_1$ has a feasible solution $Z_{jj} = \begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix}$, $j = 1,2,\dots,r$. Construct the partial symmetric matrix

\[
Z_V = \begin{bmatrix}
1 & x_1^T & x_2^T & \dots & x_r^T \\
x_1 & Y_{11} & ? & \dots & ? \\
x_2 & ? & Y_{22} & \dots & ? \\
\vdots & \vdots & & \ddots & ? \\
x_r & ? & ? & \dots & Y_{rr}
\end{bmatrix},
\]

where the entries denoted by $?$ are unknown/unspecified. By observation, the unspecified entries of $Z_V$ are not involved in the constraints or in the objective function of VSDR$_1$; in other words, assigning values to the unspecified positions changes neither the constraint function values nor the objective value. Therefore, any positive semidefinite completion of the partial matrix $Z_V$ is feasible for VSDR$_1$ and has the same objective value. The feasibility of $Z_{jj}$ ($j = 1,2,\dots,r$) for $\overline{\text{VSDR}}_1$ implies $\begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix} \succeq 0$ for each $j = 1,2,\dots,r$. So all the specified principal submatrices of $Z_V$ are positive semidefinite, and hence $Z_V$ is a partial positive semidefinite matrix. (See [2, 15, 16, 18, 32] for the definitions of partial positive semidefinite matrix, chordal graph, and semidefinite completion.) It is not difficult to verify the chordal graph property for the sparsity pattern of $Z_V$.

Therefore $Z_V$ has a positive semidefinite completion by the classical completion result [15, Theorem 7]. Thus we have constructed a feasible solution of VSDR$_1$ with the same objective value as the feasible solution of $\overline{\text{VSDR}}_1$; this shows that $\mu_{V_1} \le \bar\mu_{V_1}$.

Conversely, suppose VSDR$_1$ has a feasible solution

\[
Z_V = \begin{bmatrix}
1 & x_1^T & x_2^T & \dots & x_r^T \\
x_1 & Y_{11} & Y_{12} & \dots & Y_{1r} \\
x_2 & Y_{21} & Y_{22} & \dots & Y_{2r} \\
\vdots & \vdots & & \ddots & \vdots \\
x_r & Y_{r1} & Y_{r2} & \dots & Y_{rr}
\end{bmatrix}.
\]

Now construct $Z_{jj} := \begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix}$, $j = 1,2,\dots,r$. Because each $Z_{jj}$ is a principal submatrix of the positive semidefinite matrix $Z_V$, we have $Z_{jj} \succeq 0$. The feasibility of $Z_V$ for VSDR$_1$ also implies

\[
M(q_i^V(\cdot)) \cdot Z_V \le 0, \quad i = 1,2,\dots,m. \tag{9}
\]

It is easy to check that

\[
Q_i \cdot \sum_{j=1}^r Y_{jj} + 2 C_i \cdot X + \beta_i = M(q_i^V(\cdot)) \cdot Z_V \le 0, \quad i = 1,2,\dots,m, \tag{10}
\]

where $X = [x_1\ x_2\ \cdots\ x_r]$. Therefore $Z_{jj}$, $j = 1,2,\dots,r$, is feasible for $\overline{\text{VSDR}}_1$ and, by (10), generates the same objective value for $\overline{\text{VSDR}}_1$ as $Z_V$ does for VSDR$_1$; this shows that $\bar\mu_{V_1} \le \mu_{V_1}$, which completes the proof of (8).

Next we prove that the optimal values of MSDR$_1$ and $\overline{\text{VSDR}}_1$ are equal, i.e., that

\[
\mu_{M_1} = \bar\mu_{V_1}. \tag{11}
\]

The proof is similar to that of (8). First suppose $\overline{\text{VSDR}}_1$ has a feasible solution $Z_{jj} = \begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix}$, $j = 1,2,\dots,r$. Let $X = [x_1\ x_2\ \cdots\ x_r]$ and $Y_M = \sum_{j=1}^r Y_{jj}$, and construct $Z_M := \begin{bmatrix} I_r & X^T \\ X & Y_M \end{bmatrix}$. Then, by $Z_{jj} \succeq 0$, $j = 1,2,\dots,r$, we have $Y_M = \sum_{j=1}^r Y_{jj} \succeq \sum_{j=1}^r x_j x_j^T = X X^T$, which implies $Z_M \succeq 0$ by the Schur complement [22, 25]. And, because

\[
M(q_j^M(\cdot)) \cdot Z_M = Q_j \cdot \sum_{i=1}^r Y_{ii} + 2 C_j \cdot X + \beta_j, \quad j = 1,\dots,m, \tag{12}
\]

we get that $Z_M$ is feasible for MSDR$_1$ and generates the same objective value as $Z_{jj}$, $j = 1,2,\dots,r$, does for $\overline{\text{VSDR}}_1$, i.e., $\mu_{M_1} \le \bar\mu_{V_1}$.

Conversely, suppose $Z_M = \begin{bmatrix} I_r & X^T \\ X & Y_M \end{bmatrix} \succeq 0$ is feasible for MSDR$_1$, with $X = [x_1\ x_2\ \cdots\ x_r]$. Let $Y_{ii} = x_i x_i^T$ for $i = 1,2,\dots,r-1$, and let $Y_{rr} = x_r x_r^T + (Y_M - X X^T)$. As a result, $Y_{ii} \succeq x_i x_i^T$ for $i = 1,2,\dots,r$, and $\sum_{i=1}^r Y_{ii} = Y_M$. So, constructing $Z_{jj} = \begin{bmatrix} 1 & x_j^T \\ x_j & Y_{jj} \end{bmatrix}$, $j = 1,2,\dots,r$, it is easy to show that $Z_{jj}$ is feasible for $\overline{\text{VSDR}}_1$ and generates an objective value equal to that of MSDR$_1$ with $Z_M$, i.e., $\bar\mu_{V_1} \le \mu_{M_1}$. This completes the proof of (11). Combining (11) with (8) completes the proof of the theorem. $\Box$
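
The second half of the proof splits a feasible $Z_M$ into the $r$ small blocks $Z_{jj}$ by assigning all of the surplus $Y_M - X X^T$ to $Y_{rr}$. The following NumPy sketch (ours, not from the paper) checks that construction on random data.

```python
import numpy as np

# Check of the splitting used in the proof of Theorem 2.1 (our sketch):
# given Y_M >= X X^T, set Y_ii = x_i x_i^T (i < r) and
# Y_rr = x_r x_r^T + (Y_M - X X^T); then each Z_jj >= 0 and sum_i Y_ii = Y_M.
rng = np.random.default_rng(3)
n, r = 5, 3
X = rng.standard_normal((n, r))
E = rng.standard_normal((n, n))
YM = X @ X.T + E @ E.T            # feasible Y_M (the surplus E E^T is PSD)

Y = [np.outer(X[:, i], X[:, i]) for i in range(r - 1)]
Y.append(np.outer(X[:, r - 1], X[:, r - 1]) + (YM - X @ X.T))

assert np.allclose(sum(Y), YM)
for j in range(r):
    Zjj = np.block([[np.ones((1, 1)), X[:, [j]].T],
                    [X[:, [j]],       Y[j]]])
    # PSD check via the smallest eigenvalue of the (n+1)x(n+1) block
    assert np.linalg.eigvalsh(Zjj).min() >= -1e-10
print("splitting construction verified")
```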

Though MSDR$_1$ is significantly less expensive, Theorem 2.1 implies that the quality of the MSDR$_1$ bound is no weaker than that of VSDR$_1$. This tells us to always choose MSDR$_1$ when the problem can be formulated as a QMP$_1$.

Example 2.1 (Sensor Network Localization Problem). The sensor network localization (SNL) problem is one of the most studied problems in graph realization, e.g., [20, 21, 30]. In this problem one is given a graph with $m$ known points (anchors) $a_k \in \mathbb{R}^d$, $k = 1,2,\dots,m$, and $n$ unknown points (sensors) $x_j \in \mathbb{R}^d$, $j = 1,2,\dots,n$, where $d$ is the embedding dimension. A Euclidean distance $\hat d_{kj}$ between $a_k$ and $x_j$, or $\hat d_{ij}$ between $x_i$ and $x_j$, is also given for some pairs of points. The goal is to estimate the positions of all the unknown points. One possible formulation of the problem is

\[
\begin{array}{rl}
\min & 0 \\
\text{s.t.} & \operatorname{trace}\big(X^T (E_{ii} + E_{jj} - 2E_{ij}) X\big) = \hat d_{ij}^2, \quad (i,j) \in N_x \\
& \operatorname{trace}(X^T E_{ii} X) - 2\operatorname{trace}\big((e_i a_j^T)^T X\big) + a_j^T a_j = \hat d_{ij}^2, \quad (i,j) \in N_a \\
& X \in M_{nd},
\end{array} \tag{13}
\]

where $N_x$, $N_a$ denote the sets of known sensor-sensor and sensor-anchor distances, respectively, and the rows of $X$ are the sensor positions. This formulation is a QMP$_1$, so we can develop both its VSDR$_1$ and MSDR$_1$ relaxations:

\[
\begin{array}{rl}
\min & 0 \\
\text{s.t.} & \big(I \otimes (E_{ii} + E_{jj} - 2E_{ij})\big) \cdot Y = \hat d_{ij}^2, \quad (i,j) \in N_x \\
& (I \otimes E_{ii}) \cdot Y - 2 a_j^T x_i + a_j^T a_j = \hat d_{ij}^2, \quad (i,j) \in N_a \\
& \begin{bmatrix} 1 & x^T \\ x & Y \end{bmatrix} \succeq 0,
\end{array} \tag{14}
\]

with $x = \operatorname{vec}(X)$, and

\[
\begin{array}{rl}
\min & 0 \\
\text{s.t.} & (E_{ii} + E_{jj} - 2E_{ij}) \cdot Y = \hat d_{ij}^2, \quad (i,j) \in N_x \\
& E_{ii} \cdot Y - 2\operatorname{trace}\big((e_i a_j^T)^T X\big) + a_j^T a_j = \hat d_{ij}^2, \quad (i,j) \in N_a \\
& \begin{bmatrix} I & X^T \\ X & Y \end{bmatrix} \succeq 0.
\end{array} \tag{15}
\]

Theorem 2.1 implies that the MSDR$_1$ relaxation always provides the same lower bound as the VSDR$_1$ one, while the number of variables for MSDR$_1$, $O((n+d)^2)$, is significantly smaller than the number for VSDR$_1$, $O(n^2 d^2)$. The quality of the bounds, combined with the lower computational complexity, explains why MSDR$_1$ is the favoured relaxation among researchers.

3 Quadratic Matrix Programming: Case II

In this section, we move to the main topic of our paper, i.e., the equivalence of the vector- and matrix-lifting relaxations for the more general QMP$_2$.

3.1 Equivalence of Vector and Matrix Lifting for QMP$_2$

We first propose the vector-lifting semidefinite relaxation VSDR$_2$ for QMP$_2$. Applying both equations in (4), we get

\[
(\text{VSDR}_2)\qquad
\begin{array}{rl}
\mu_{V_2} := \min & \begin{bmatrix} \beta_0 & \operatorname{vec}(C_0)^T \\ \operatorname{vec}(C_0) & I_r \otimes Q_0 + P_0 \otimes I_n \end{bmatrix} \cdot Z_V \\
\text{s.t.} & \begin{bmatrix} \beta_j & \operatorname{vec}(C_j)^T \\ \operatorname{vec}(C_j) & I_r \otimes Q_j + P_j \otimes I_n \end{bmatrix} \cdot Z_V \le 0, \quad j = 1,2,\dots,m \\
& (Z_V)_{1,1} = 1 \\
& Z_V \in S^{rn+1}_+ \qquad \Big( Z_V = \begin{bmatrix} 1 & \operatorname{vec}(X)^T \\ \operatorname{vec}(X) & Y_V \end{bmatrix} \Big).
\end{array} \tag{16}
\]

The matrix $Y_V$ is $nr \times nr$ and can be partitioned into exactly $r^2$ blocks $Y_V^{ij}$, $i,j = 1,2,\dots,r$, where each block is $n \times n$. Applying both equations in (5), we get the smaller matrix-lifting semidefinite relaxation MSDR$_2$ for QMP$_2$. (We add the additional constraint $\operatorname{trace} Y_1 = \operatorname{trace} Y_2$, since $\operatorname{trace} X X^T = \operatorname{trace} X^T X$.)

\[
(\text{MSDR}_2)\qquad
\begin{array}{rl}
\mu_{M_2} := \min & Q_0 \cdot Y_1 + P_0 \cdot Y_2 + 2 C_0 \cdot X + \beta_0 \\
\text{s.t.} & Q_j \cdot Y_1 + P_j \cdot Y_2 + 2 C_j \cdot X + \beta_j \le 0, \quad j = 1,2,\dots,m \\
& \Big( Y_1 - X X^T \in S^n_+ \iff \Big)\ Z_1 := \begin{bmatrix} I_r & X^T \\ X & Y_1 \end{bmatrix} \succeq 0 \\
& \Big( Y_2 - X^T X \in S^r_+ \iff \Big)\ Z_2 := \begin{bmatrix} I_n & X \\ X^T & Y_2 \end{bmatrix} \succeq 0 \\
& \operatorname{trace} Y_1 = \operatorname{trace} Y_2.
\end{array}
\]

VSDR$_2$ has $O((nr)^2)$ variables, whereas MSDR$_2$ has only $O((n+r)^2)$ variables. The computational advantage of using the smaller problem MSDR$_2$ motivates the comparison of the corresponding bounds. The main result is interesting and surprising: VSDR$_2$ and MSDR$_2$ actually generate the same bound under a constraint qualification assumption. In general, the bound from VSDR$_2$ is at least as strong as the bound from MSDR$_2$.

Define the block-diag and block-offdiag transformations, respectively,

\[
B^0\mathrm{Diag}(Q) : S^n \to S^{rn+1}, \qquad O^0\mathrm{Diag}(P) : S^r \to S^{rn+1},
\]
\[
B^0\mathrm{Diag}(Q) := \begin{bmatrix} 0 & 0 \\ 0 & I_r \otimes Q \end{bmatrix}, \qquad
O^0\mathrm{Diag}(P) := \begin{bmatrix} 0 & 0 \\ 0 & P \otimes I_n \end{bmatrix}.
\]

(See [35] for the $r = n$ case.) It is clear that $Q, P \succeq 0$ implies that both $B^0\mathrm{Diag}(Q) \succeq 0$ and $O^0\mathrm{Diag}(P) \succeq 0$. The adjoints $b^0\mathrm{diag}$, $o^0\mathrm{diag}$ are, respectively,

\[
\begin{aligned}
Y_1 &= B^0\mathrm{Diag}^*(Z_V) = b^0\mathrm{diag}(Z_V) := \sum_{j=1}^r Y_V^{jj}, \\
Y_2 &= O^0\mathrm{Diag}^*(Z_V) = o^0\mathrm{diag}(Z_V) := \big(\operatorname{trace} Y_V^{ij}\big)_{i,j=1,2,\dots,r}.
\end{aligned}
\]

Lemma 3.1. Let $X \in M_{nr}$ be given, and suppose that one of the following two conditions holds.

1. Let $Y_V$ be given and $Z_V$ be defined as in VSDR$_2$, and let the pair $Z_1, Z_2$ in MSDR$_2$ be constructed using

\[
Y_1 = b^0\mathrm{diag}(Z_V), \qquad Y_2 = o^0\mathrm{diag}(Z_V). \tag{17}
\]

2. Let $Y_1, Y_2$ be given with $\operatorname{trace} Y_1 = \operatorname{trace} Y_2$, and let $Z_1, Z_2$ be defined as in MSDR$_2$. Let $Y_V, Z_V$ for VSDR$_2$ be constructed from $Y_1, Y_2$ as

\[
Y_V = \begin{bmatrix}
V_1 & \frac{1}{n}(Y_2)_{12} I_n & \dots & \frac{1}{n}(Y_2)_{1r} I_n \\
\frac{1}{n}(Y_2)_{12} I_n & V_2 & \dots & \frac{1}{n}(Y_2)_{2r} I_n \\
\vdots & & \ddots & \frac{1}{n}(Y_2)_{(r-1)r} I_n \\
\frac{1}{n}(Y_2)_{1r} I_n & \dots & \frac{1}{n}(Y_2)_{(r-1)r} I_n & V_r
\end{bmatrix}, \tag{18}
\]

with

\[
\sum_{i=1}^r V_i = Y_1, \qquad \operatorname{trace} V_i = (Y_2)_{ii}, \quad i = 1,\dots,r. \tag{19}
\]

Then $Z_V$ satisfies the linear inequality constraints in VSDR$_2$ if, and only if, $Z_1, Z_2$ satisfy the linear inequality constraints in MSDR$_2$. Moreover, the values of the objective functions at the corresponding variables are equal.

Proof. 1. Note that

\[
\begin{bmatrix} \beta & \operatorname{vec}(C)^T \\ \operatorname{vec}(C) & I_r \otimes Q + P \otimes I_n \end{bmatrix} \cdot Z_V
= \beta + 2 C \cdot X + \big( B^0\mathrm{Diag}(Q) + O^0\mathrm{Diag}(P) \big) \cdot Z_V
= \beta + 2 C \cdot X + Q \cdot b^0\mathrm{diag}(Z_V) + P \cdot o^0\mathrm{diag}(Z_V)
= \beta + 2 C \cdot X + Q \cdot Y_1 + P \cdot Y_2, \quad \text{by (17)}. \tag{20}
\]

2. Conversely, note that $\operatorname{trace} Y_1 = \operatorname{trace} Y_2$ is a constraint in MSDR$_2$, and $Z_V$ as constructed using (18) satisfies (17). In addition, the $n + r$ assignment-type constraints in (19) on the $rn$ variables in the diagonals of the $V_i$, $i = 1,\dots,r$, can always be solved. We can now apply the argument in (20) again. $\Box$

Lemma 3.1 guarantees the equivalence of the feasible sets of the two relaxations with respect to the linear inequality constraints and the objective function. However, it ignores the semidefinite constraints. The following result partially addresses this deficiency.

Corollary 3.1. If the feasible set of VSDR$_2$ is nonempty, then the feasible set of MSDR$_2$ is also nonempty, and

\[
\mu_{M_2} \le \mu_{V_2}. \tag{21}
\]

Proof. Suppose $Z_V = \begin{bmatrix} 1 & \operatorname{vec}(X)^T \\ \operatorname{vec}(X) & Y_V \end{bmatrix}$ is feasible for VSDR$_2$. Recall that the matrix $Y_V$ is $nr \times nr$ and can be partitioned into exactly $r^2$ blocks $Y_V^{ij}$, $i,j = 1,2,\dots,r$. As above, we set $Y_1, Y_2$ following (17), and we set $Z_1 = \begin{bmatrix} I_r & X^T \\ X & Y_1 \end{bmatrix}$, $Z_2 = \begin{bmatrix} I_n & X \\ X^T & Y_2 \end{bmatrix}$.

Denote the $j$-th column of $X$ by $X_{:j}$, $j = 1,2,\dots,r$. Now $Z_V \succeq 0$ implies $Y_V^{jj} - X_{:j} X_{:j}^T \succeq 0$. Therefore, $\sum_{j=1}^r Y_V^{jj} - \sum_{j=1}^r X_{:j} X_{:j}^T = Y_1 - X X^T \succeq 0$, i.e., $Z_1 \succeq 0$. Similarly, denote the $k$-th row of $X$ by $X_{k:}$, $k = 1,2,\dots,n$. Let $(Y_V^{ij})_{kk}$ denote the $k$-th diagonal entry of $Y_V^{ij}$, and define the $r \times r$ matrix $Y^k := \big((Y_V^{ij})_{kk}\big)_{i,j=1,2,\dots,r}$. Then $Z_V \succeq 0$ implies $Y^k - X_{k:}^T X_{k:} \succeq 0$. Therefore, $\sum_{k=1}^n Y^k - \sum_{k=1}^n X_{k:}^T X_{k:} = Y_2 - X^T X \succeq 0$, i.e., $Z_2 \succeq 0$. The proof now follows from Lemma 3.1. $\Box$
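
The adjoint maps $b^0\mathrm{diag}$, $o^0\mathrm{diag}$ and the key identity (20) are easy to verify numerically. A NumPy sketch (ours, not from the paper):

```python
import numpy as np

# Numeric check (our sketch) of b0diag, o0diag and identity (20):
# (I_r x Q + P x I_n) . Y_V  =  Q . b0diag(Y_V) + P . o0diag(Y_V).
rng = np.random.default_rng(4)
n, r = 3, 4
YV = rng.standard_normal((n * r, n * r)); YV = (YV + YV.T) / 2
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
P = rng.standard_normal((r, r)); P = (P + P.T) / 2

blocks = [[YV[i*n:(i+1)*n, j*n:(j+1)*n] for j in range(r)] for i in range(r)]
Y1 = sum(blocks[j][j] for j in range(r))                  # b0diag: sum of diagonal blocks
Y2 = np.array([[np.trace(blocks[i][j]) for j in range(r)]
               for i in range(r)])                        # o0diag: block traces

lhs = np.trace((np.kron(np.eye(r), Q) + np.kron(P, np.eye(n))) @ YV)
rhs = np.trace(Q @ Y1) + np.trace(P @ Y2)
assert np.isclose(lhs, rhs)
print("identity (20) verified")
```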

Corollary 3.1 holds because MSDR$_2$ only restricts the sums of certain principal submatrices of $Z_V$ (i.e., $b^0\mathrm{diag}(Z_V)$, $o^0\mathrm{diag}(Z_V)$) to be positive semidefinite, while VSDR$_2$ restricts the whole matrix $Z_V$ to be positive semidefinite. So the semidefinite constraints in MSDR$_2$ are not as strong as those in VSDR$_2$. Moreover, the entries of $Y_V$ involved in $b^0\mathrm{diag}(\cdot)$, $o^0\mathrm{diag}(\cdot)$ form a partial semidefinite matrix whose pattern is not chordal, and which therefore does not necessarily have a semidefinite completion. Hence the semidefinite completion technique we used to prove the equivalence of VSDR$_1$ and MSDR$_1$ is not applicable here. Instead, we prove the equivalence of the dual programs. It is well known that the primal optimal value equals the dual optimal value when the generalized Slater condition holds [17, 28]; in this case we can then conclude that VSDR$_2$ and MSDR$_2$ generate the same bound.

Definition 3.1. For $\lambda \in \mathbb{R}^m$, let

\[
\beta_\lambda := \beta_0 + \sum_{j=1}^m \lambda_j \beta_j,
\]

and let $C_\lambda$, $Q_\lambda$, $P_\lambda$ be defined similarly.

After substituting $\alpha \leftarrow \beta_\lambda - \alpha$, we see that the dual of VSDR$_2$ is equivalent to

\[
(\text{DVSDR}_2)\qquad
\begin{array}{rl}
\max & \beta_\lambda - \alpha \\
\text{s.t.} & \begin{bmatrix} \alpha & \operatorname{vec}(C_\lambda)^T \\ \operatorname{vec}(C_\lambda) & I_r \otimes Q_\lambda + P_\lambda \otimes I_n \end{bmatrix} \succeq 0 \\
& \alpha \in \mathbb{R},\ \lambda \in \mathbb{R}^m_+.
\end{array}
\]

And the dual of MSDR$_2$ is

\[
(\text{DMSDR}_2)\qquad
\begin{array}{rl}
\max & \beta_\lambda - \operatorname{trace} S_1 - \operatorname{trace} S_2 \\
\text{s.t.} & \begin{bmatrix} S_1 & R_1^T \\ R_1 & Q_\lambda - t I_n \end{bmatrix} \succeq 0 \\
& \begin{bmatrix} S_2 & R_2 \\ R_2^T & P_\lambda + t I_r \end{bmatrix} \succeq 0 \\
& R_1 + R_2 = C_\lambda \\
& \lambda \in \mathbb{R}^m_+,\ S_1 \in S^r,\ S_2 \in S^n,\ R_1, R_2 \in M_{nr},\ t \in \mathbb{R}.
\end{array}
\]

The Slater condition for DVSDR$_2$ is equivalent to the following:

\[
\exists\, \lambda \in \mathbb{R}^m_+ \quad \text{s.t.} \quad I_r \otimes Q_\lambda + P_\lambda \otimes I_n \succ 0. \tag{22}
\]

The corresponding constraint qualification for DMSDR$_2$ is

\[
\exists\, t \in \mathbb{R},\ \lambda \in \mathbb{R}^m_+ \quad \text{s.t.} \quad Q_\lambda - t I_n \succ 0, \quad P_\lambda + t I_r \succ 0. \tag{23}
\]

These two conditions are equivalent by the following lemma, which will also be used in our subsequent analysis.

Lemma 3.2. Let $Q \in S^n$, $P \in S^r$. Then $I_r \otimes Q + P \otimes I_n \succeq 0$ (resp. $\succ 0$) if, and only if, there exists $t \in \mathbb{R}$ such that $Q - t I_n \succeq 0$ and $P + t I_r \succeq 0$ (resp. $\succ 0$).

Proof. Suppose the symmetric matrices $Q$, $P$ have spectral decompositions $Q = V \Lambda_Q V^T$, $P = U \Lambda_P U^T$, where $V$, $U$ are orthogonal and the eigenvalues are in the diagonal matrices $\Lambda_Q = \mathrm{Diag}(\lambda_1(Q), \lambda_2(Q), \dots, \lambda_n(Q))$, $\Lambda_P = \mathrm{Diag}(\lambda_1(P), \lambda_2(P), \dots, \lambda_r(P))$. Therefore, we get the equivalences:

\[
\begin{aligned}
I_r \otimes Q + P \otimes I_n \succ 0
&\iff (U \otimes V)(I_r \otimes \Lambda_Q + \Lambda_P \otimes I_n)(U \otimes V)^T \succ 0 \\
&\iff I_r \otimes \Lambda_Q + \Lambda_P \otimes I_n \succ 0 \\
&\iff \min_i \lambda_i(Q) + \min_j \lambda_j(P) > 0 \\
&\iff \min_i \lambda_i(Q) - t > 0 \ \text{and}\ \min_j \lambda_j(P) + t > 0, \ \text{for some } t \in \mathbb{R}.
\end{aligned}
\]

The equivalences hold if the strict inequalities $\succ 0$ and $>$ are replaced by $\succeq 0$ and $\ge$, respectively. $\Box$

We now state the main theorem of this paper on the equivalence of the two SDP relaxations for QMP$_2$.

Theorem 3.1. Suppose that DVSDR$_2$ is strictly feasible. Then, as numbers in the extended real line $(-\infty, +\infty]$, the optimal values of the two relaxations VSDR$_2$, MSDR$_2$, obtained using vector and matrix lifting, are equal, i.e., $\mu_{V_2} = \mu_{M_2}$.

3.1.1 Proof of (Main) Theorem 3.1

Since DVSDR$_2$ is strictly feasible, Lemma 3.2 implies that both dual programs satisfy constraint qualifications. Therefore, strong duality holds for both programs, see e.g., [28]; i.e., both have zero duality gaps, and the optimal values of DVSDR$_2$, DMSDR$_2$ are $\mu_{V_2}$, $\mu_{M_2}$, respectively. Now assume that

\[
\lambda \ \text{is feasible for DVSDR}_2. \tag{24}
\]

Lemma 3.2 implies that $\lambda$ is also feasible for DMSDR$_2$, i.e., there exists $t \in \mathbb{R}$ such that

\[
Q := Q_\lambda - t I_n \succeq 0, \qquad P := P_\lambda + t I_r \succeq 0. \tag{25}
\]

(To simplify notation, we use $Q$, $P$ to denote these dual slack matrices.) The spectral decompositions of $Q$, $P$ can be expressed as

\[
Q = V \Lambda_Q V^T = [V_1\ V_2] \begin{bmatrix} \Lambda^Q_+ & 0 \\ 0 & 0 \end{bmatrix} [V_1\ V_2]^T, \qquad
P = U \Lambda_P U^T = [U_1\ U_2] \begin{bmatrix} \Lambda^P_+ & 0 \\ 0 & 0 \end{bmatrix} [U_1\ U_2]^T,
\]

where the columns of the submatrices $U_1$, $V_1$ form orthonormal bases that span the range spaces $\mathcal{R}(P)$ and $\mathcal{R}(Q)$, respectively, while the columns of $U_2$, $V_2$ span the orthogonal complements $\mathcal{R}(P)^\perp$ and $\mathcal{R}(Q)^\perp$, respectively. $\Lambda^Q_+$ is a diagonal matrix whose diagonal entries are the nonzero eigenvalues of $Q$, and $\Lambda^P_+$ is defined similarly. Let $\{\sigma_i\}$, $\{\theta_j\}$ denote the eigenvalues of $P$, $Q$, respectively. We similarly simplify the notation

\[
C := C_\lambda, \qquad c := \operatorname{vec}(C), \qquad \beta := \beta_\lambda. \tag{26}
\]

Let $A^\dagger$ denote the Moore-Penrose pseudoinverse of a matrix $A$. The following lemma allows us to express $\mu_{V_2}$ as a function of $Q$, $P$, $c$ and $\beta$.

Lemma 3.3. Let $\lambda, P, Q, c, \beta$ be defined as above in (24), (25), (26), and let

\[
\alpha^* := c^T \big( (I_r \otimes Q + P \otimes I_n)^\dagger \big) c. \tag{27}
\]

Then $(\alpha^*, \lambda)$ is a feasible pair for DVSDR$_2$. And, for any pair $(\alpha, \lambda)$ feasible for DVSDR$_2$, we have $-\alpha + \beta \le -\alpha^* + \beta$.

Proof. A general quadratic function $f(x) = x^T \bar Q x + 2 \bar c^T x + \bar\beta$ is nonnegative for all $x \in \mathbb{R}^n$ if, and only if, the matrix $\begin{bmatrix} \bar\beta & \bar c^T \\ \bar c & \bar Q \end{bmatrix} \succeq 0$; see, e.g., [8, Pg. 163]. Therefore,

\[
\begin{bmatrix} \alpha & c^T \\ c & I_r \otimes Q_\lambda + P_\lambda \otimes I_n \end{bmatrix}
= \begin{bmatrix} \alpha & c^T \\ c & I_r \otimes Q + P \otimes I_n \end{bmatrix} \succeq 0 \tag{28}
\]

if, and only if,

\[
x^T (I_r \otimes Q + P \otimes I_n) x + 2 c^T x + \alpha \ge 0, \quad \forall x \in \mathbb{R}^{nr}.
\]

For a fixed $\alpha$, this is further equivalent to

\[
\alpha \ge -\min_x\ x^T (I_r \otimes Q + P \otimes I_n) x + 2 c^T x = c^T \big( (I_r \otimes Q + P \otimes I_n)^\dagger \big) c.
\]

Therefore, we can choose $\alpha^*$ as in (27). $\Box$

To further explore the structure of (27), we note that $c$ can be decomposed as

\[
c = (U_1 \otimes V_1) r_{11} + (U_1 \otimes V_2) r_{12} + (U_2 \otimes V_1) r_{21} + (U_2 \otimes V_2) r_{22}. \tag{29}
\]

The validity of such an expression follows from the fact that the columns of $[U_1\ U_2] \otimes [V_1\ V_2]$ form an orthonormal basis of $\mathbb{R}^{nr}$. Furthermore, the dual feasibility for DVSDR$_2$ includes the constraint $\begin{bmatrix} \alpha & c^T \\ c & I_r \otimes Q + P \otimes I_n \end{bmatrix} \succeq 0$, which implies $c \in \mathcal{R}(I_r \otimes Q + P \otimes I_n)$. This range space is spanned by the columns of the matrices $U_1 \otimes V_1$, $U_2 \otimes V_1$ and $U_1 \otimes V_2$, which implies that $c$ has no component in $\mathcal{R}(U_2 \otimes V_2)$, i.e., $r_{22} = 0$ in (29).

The following lemma provides a key observation for the connection between the two dual programs. It shows that if $c$ lies in $\mathcal{R}(U_1 \otimes V_1)$, then the $\alpha$ component of the objective value of DVSDR$_2$ in Lemma 3.3 has a specific representation.

Lemma 3.4. If $c \in \mathcal{R}(U_1 \otimes V_1)$, then the $\alpha$ component of the objective value of DVSDR$_2$ in Lemma 3.3 satisfies

\[
\begin{aligned}
-\alpha^* = -c^T \big( (I_r \otimes Q + P \otimes I_n)^\dagger \big) c
= \max\ & -\operatorname{vec}(R_1)^T (I_r \otimes Q)^\dagger \operatorname{vec}(R_1) - \operatorname{vec}(R_2)^T (P \otimes I_n)^\dagger \operatorname{vec}(R_2) \\
\text{s.t.}\ & R_1 + R_2 = C \\
& R_1, R_2 \in M_{nr}.
\end{aligned} \tag{30}
\]

Proof. We can eliminate $R_2$ and express the maximization problem on the right-hand side of (30) as $\max_{R_1} \phi(R_1)$, where

\[
\phi(R_1) := -\operatorname{vec}(R_1)^T \big( (I_r \otimes Q)^\dagger + (P \otimes I_n)^\dagger \big) \operatorname{vec}(R_1)
+ 2 c^T (P \otimes I_n)^\dagger \operatorname{vec}(R_1) - c^T (P \otimes I_n)^\dagger c. \tag{31}
\]

Since $P$, $Q$ are both positive semidefinite, we know $(I_r \otimes Q)^\dagger + (P \otimes I_n)^\dagger \succeq 0$; hence $\phi$ is concave. It is not difficult to verify that $(P \otimes I_n)^\dagger c \in \mathcal{R}\big((I_r \otimes Q)^\dagger + (P \otimes I_n)^\dagger\big)$. Therefore, the maximum of the concave quadratic function $\phi(R_1)$ is finite and attained at $R_1^*$ with

\[
\operatorname{vec}(R_1^*) = \big( (I_r \otimes Q)^\dagger + (P \otimes I_n)^\dagger \big)^\dagger (P \otimes I_n)^\dagger c
= (P \otimes Q^\dagger + P P^\dagger \otimes I_n)^\dagger c, \tag{32}
\]

and this corresponds to the value

\[
\phi(R_1^*) = c^T \big( (P \otimes Q^\dagger + P P^\dagger \otimes I_n)^\dagger (P \otimes I_n)^\dagger - (P \otimes I_n)^\dagger \big) c
= -c^T (U \otimes V) \hat\Lambda (U \otimes V)^T c,
\]

where $\hat\Lambda$ is diagonal, with diagonal entries

\[
\hat\Lambda(i,j) = \begin{cases} \frac{1}{\sigma_i + \theta_j} & \text{if } \sigma_i > 0,\ \theta_j > 0, \\ 0 & \text{if } \sigma_i = 0 \text{ or } \theta_j = 0. \end{cases}
\]

We now compare $\phi(R_1^*)$ with $c^T (I_r \otimes Q + P \otimes I_n)^\dagger c$. Let

\[
\bar\Lambda := (I_r \otimes \Lambda_Q + \Lambda_P \otimes I_n)^\dagger
= (I_{\sigma+} \otimes \Lambda_Q + \Lambda_P \otimes I_{\theta+})^\dagger + (I_{\sigma 0} \otimes \Lambda_Q + \Lambda_P \otimes I_{\theta 0})^\dagger, \tag{33}
\]

so that $(I_r \otimes Q + P \otimes I_n)^\dagger = (U \otimes V) \bar\Lambda (U \otimes V)^T$, where the matrix $I_{\sigma+}$ (resp. $I_{\sigma 0}$) is $r \times r$, diagonal, and zero except for the $i$-th diagonal entries, which equal one if $\sigma_i > 0$ (resp. $\sigma_i = 0$); the matrices $I_{\theta+}$, $I_{\theta 0}$ ($n \times n$) are defined in the same way. Hence the matrix $\bar\Lambda$ is also diagonal, with diagonal entries

\[
\bar\Lambda(i,j) = \begin{cases}
\frac{1}{\sigma_i + \theta_j} & \text{if } \sigma_i > 0,\ \theta_j > 0, \\
\frac{1}{\sigma_i} & \text{if } \sigma_i > 0,\ \theta_j = 0, \\
\frac{1}{\theta_j} & \text{if } \sigma_i = 0,\ \theta_j > 0, \\
0 & \text{if } \sigma_i = 0,\ \theta_j = 0.
\end{cases}
\]

By assumption, $c = (U_1 \otimes V_1) r_{11}$ for some $r_{11}$ of appropriate size. Note that $(U_1 \otimes V_1) r_{11}$ is orthogonal to the columns of $U_2 \otimes V_1$ and $U_1 \otimes V_2$. Thus only the part $(I_{\sigma+} \otimes \Lambda_Q + \Lambda_P \otimes I_{\theta+})^\dagger$ of the diagonal matrix is involved in the computation, i.e.,

\[
\begin{aligned}
c^T (I_r \otimes Q + P \otimes I_n)^\dagger c
&= r_{11}^T (U_1 \otimes V_1)^T (U \otimes V) \bar\Lambda (U \otimes V)^T (U_1 \otimes V_1) r_{11} \\
&= r_{11}^T (U_1 \otimes V_1)^T (U \otimes V) (I_{\sigma+} \otimes \Lambda_Q + \Lambda_P \otimes I_{\theta+})^\dagger (U \otimes V)^T (U_1 \otimes V_1) r_{11} \\
&= r_{11}^T (U_1 \otimes V_1)^T (U \otimes V) \hat\Lambda (U \otimes V)^T (U_1 \otimes V_1) r_{11} \\
&= -\phi(R_1^*). \qquad \Box
\end{aligned} \tag{34}
\]

Now, for the given feasible pair $(\alpha, \lambda)$ of Lemma 3.3, we will construct a feasible solution for DMSDR$_2$ that generates the same objective value. Using Lemma 3.2, we choose $t \in \mathbb{R}$ satisfying $Q = Q_\lambda - t I_n \succeq 0$, $P = P_\lambda + t I_r \succeq 0$. We can now derive a lower bound on the optimal value of DMSDR$_2$.

Proposition 3.1. Let $\lambda, t, P, Q, c, C, \beta$ be as above. Let $R_1^*$ denote the maximizer of $\phi(R_1)$ in the proof of Lemma 3.4, applied to the $(U_1 \otimes V_1)$-component of $c$, and let $R_2^*$ be the corresponding complement, so that $\operatorname{vec}(R_1^*) + \operatorname{vec}(R_2^*) = (U_1 \otimes V_1) r_{11}$. Construct $\bar R_1$, $\bar R_2$ as follows:

\[
\operatorname{vec}(\bar R_1) = \operatorname{vec}(R_1^*) + (U_2 \otimes V_1) r_{21}, \qquad
\operatorname{vec}(\bar R_2) = \operatorname{vec}(R_2^*) + (U_1 \otimes V_2) r_{12}. \tag{35}
\]

Then we obtain a lower bound on the optimal value of DMSDR$_2$:

\[
\mu_{M_2} \ge -\operatorname{vec}(\bar R_1)^T (I_r \otimes Q)^\dagger \operatorname{vec}(\bar R_1)
- \operatorname{vec}(\bar R_2)^T (P \otimes I_n)^\dagger \operatorname{vec}(\bar R_2) + \beta. \tag{36}
\]

Proof. Consider the subproblem that maximizes the objective with $\lambda$, $t$, $\bar R_1$ and $\bar R_2$ fixed as above:

\[
\begin{array}{rl}
\max & \beta - \operatorname{trace} S_1 - \operatorname{trace} S_2 \\
\text{s.t.} & \begin{bmatrix} S_1 & \bar R_1^T \\ \bar R_1 & Q \end{bmatrix} \succeq 0, \quad
\begin{bmatrix} S_2 & \bar R_2 \\ \bar R_2^T & P \end{bmatrix} \succeq 0, \quad
S_1 \in S^r,\ S_2 \in S^n.
\end{array} \tag{37}
\]

Because $\lambda$, $t$, $\bar R_1$, and $\bar R_2$ are all feasible for DMSDR$_2$, this subproblem generates a lower bound for $\mu_{M_2}$. We now invoke a result from [6]: there exists a symmetric matrix $S$ with $\operatorname{trace} S \le \gamma$ and $\begin{bmatrix} S & C^T \\ C & Q \end{bmatrix} \succeq 0$ if, and only if, $f(X) = \operatorname{trace}(X^T Q X + 2 C^T X) + \gamma \ge 0$ for all $X \in M_{nr}$. This is equivalent to

\[
\gamma \ge -\min_{X \in M_{nr}} \operatorname{trace}(X^T Q X + 2 C^T X) = \operatorname{trace}(C^T Q^\dagger C).
\]

Therefore, the subproblem (37) can be reformulated as

\[
\begin{array}{rl}
\max & \beta - \operatorname{trace} S_1 - \operatorname{trace} S_2 \\
\text{s.t.} & \operatorname{trace} S_1 \ge \operatorname{trace}(\bar R_1^T Q^\dagger \bar R_1) \\
& \operatorname{trace} S_2 \ge \operatorname{trace}(\bar R_2 P^\dagger \bar R_2^T) \\
& S_1 \in S^r,\ S_2 \in S^n.
\end{array} \tag{38}
\]

Hence the optimal value of (38) has an explicit expression and provides a lower bound for DMSDR$_2$:

\[
\mu_{M_2} \ge -\operatorname{vec}(\bar R_1)^T (I_r \otimes Q)^\dagger \operatorname{vec}(\bar R_1)
- \operatorname{vec}(\bar R_2)^T (P \otimes I_n)^\dagger \operatorname{vec}(\bar R_2) + \beta. \qquad \Box
\]

With all of the above preparations, we can now complete the proof of (main) Theorem 3.1.

Proof (of Theorem 3.1). We now compare $\mu_{V_2}$ from the expression (27) with the lower bound on $\mu_{M_2}$ in (36). Writing $c$ in the form (29), we get

\[
\mu_{V_2} = -\big[ r_{11}^T (U_1 \otimes V_1)^T + r_{12}^T (U_1 \otimes V_2)^T + r_{21}^T (U_2 \otimes V_1)^T \big]
(I_r \otimes Q + P \otimes I_n)^\dagger
\big[ (U_1 \otimes V_1) r_{11} + (U_1 \otimes V_2) r_{12} + (U_2 \otimes V_1) r_{21} \big] + \beta. \tag{39}
\]

Consider a cross term such as $r_{11}^T (U_1 \otimes V_1)^T (I_r \otimes Q + P \otimes I_n)^\dagger (U_1 \otimes V_2) r_{12}$. Since $(U_1 \otimes V_1) r_{11}$ is orthogonal to $(U_1 \otimes V_2) r_{12}$, and $(I_r \otimes Q + P \otimes I_n)^\dagger$ is diagonalized by $[U_1\ U_2] \otimes [V_1\ V_2]$, this term is actually zero. Similarly, we can verify that the other cross terms equal zero. As a result, only the following sum of three quadratic terms remains, which we label C1, C2, C3, respectively:

\[
\begin{aligned}
\mu_{V_2} = &- r_{11}^T (U_1 \otimes V_1)^T (I_r \otimes Q + P \otimes I_n)^\dagger (U_1 \otimes V_1) r_{11} \\
&- r_{21}^T (U_2 \otimes V_1)^T (I_r \otimes Q + P \otimes I_n)^\dagger (U_2 \otimes V_1) r_{21} \\
&- r_{12}^T (U_1 \otimes V_2)^T (I_r \otimes Q + P \otimes I_n)^\dagger (U_1 \otimes V_2) r_{12} + \beta \\
=: &\ \text{C1} + \text{C2} + \text{C3} + \beta.
\end{aligned} \tag{40}
\]

We can also reformulate the lower bound on $\mu_{M_2}$ from (36):

\[
\begin{aligned}
\mu_{M_2} &\ge -\operatorname{vec}(\bar R_1)^T (I_r \otimes Q)^\dagger \operatorname{vec}(\bar R_1) - \operatorname{vec}(\bar R_2)^T (P \otimes I_n)^\dagger \operatorname{vec}(\bar R_2) + \beta \\
&= -\big( \operatorname{vec}(R_1^*) + (U_2 \otimes V_1) r_{21} \big)^T (I_r \otimes Q)^\dagger \big( \operatorname{vec}(R_1^*) + (U_2 \otimes V_1) r_{21} \big) \\
&\quad\ - \big( \operatorname{vec}(R_2^*) + (U_1 \otimes V_2) r_{12} \big)^T (P \otimes I_n)^\dagger \big( \operatorname{vec}(R_2^*) + (U_1 \otimes V_2) r_{12} \big) + \beta.
\end{aligned} \tag{41}
\]

Since $\operatorname{vec}(R_1^*)$ and $\operatorname{vec}(R_2^*)$ are both in $\mathcal{R}(U_1 \otimes V_1)$, which is orthogonal to both $(U_1 \otimes V_2) r_{12}$ and $(U_2 \otimes V_1) r_{21}$, and both matrices $(I_r \otimes Q)^\dagger$ and $(P \otimes I_n)^\dagger$ are diagonalized by $[U_1\ U_2] \otimes [V_1\ V_2]$, we conclude that the cross terms, such as $\operatorname{vec}(R_1^*)^T (I_r \otimes Q)^\dagger (U_2 \otimes V_1) r_{21}$, all equal zero. Therefore, the lower bound on $\mu_{M_2}$ can be reformulated as

\[
\begin{aligned}
\mu_{M_2} \ge &-\big( \operatorname{vec}(R_1^*)^T (I_r \otimes Q)^\dagger \operatorname{vec}(R_1^*) + \operatorname{vec}(R_2^*)^T (P \otimes I_n)^\dagger \operatorname{vec}(R_2^*) \big) \\
&- r_{21}^T (U_2 \otimes V_1)^T (I_r \otimes Q)^\dagger (U_2 \otimes V_1) r_{21} \\
&- r_{12}^T (U_1 \otimes V_2)^T (P \otimes I_n)^\dagger (U_1 \otimes V_2) r_{12} + \beta \\
=: &\ \text{T1} + \text{T2} + \text{T3} + \beta.
\end{aligned} \tag{42}
\]

As above, we denote the first three quadratic terms by T1, T2, T3, respectively. We will show that the terms C1, C2 and C3 equal T1, T2 and T3, respectively. The equality of C1 and T1 follows from Lemma 3.4. For the other terms, consider C2 first. Write $(I_r \otimes Q + P \otimes I_n)^\dagger$ via the diagonal matrix $\bar\Lambda$ in (33). Note that $(U_2 \otimes V_1) r_{21}$ is orthogonal to the columns of $U_1 \otimes V_1$ and $U_1 \otimes V_2$. Thus only the part $(I_{\sigma 0} \otimes \Lambda_Q)^\dagger$ of the diagonal matrix $\bar\Lambda$ is involved in computing the term C2, i.e.,

\[
r_{21}^T (U_2 \otimes V_1)^T (I_r \otimes Q + P \otimes I_n)^\dagger (U_2 \otimes V_1) r_{21}
= r_{21}^T (U_2 \otimes V_1)^T (U \otimes V) (I_{\sigma 0} \otimes \Lambda_Q^\dagger) (U \otimes V)^T (U_2 \otimes V_1) r_{21}. \tag{43}
\]

Similarly, since $(U_2 \otimes V_1) r_{21}$ is orthogonal to the eigenvectors in $U_1 \otimes V_1$ and $U_1 \otimes V_2$, we have

\[
\begin{aligned}
r_{21}^T (U_2 \otimes V_1)^T (I_r \otimes Q)^\dagger (U_2 \otimes V_1) r_{21}
&= r_{21}^T (U_2 \otimes V_1)^T (U \otimes V) \big( I_{\sigma+} \otimes \Lambda_Q^\dagger + I_{\sigma 0} \otimes \Lambda_Q^\dagger \big) (U \otimes V)^T (U_2 \otimes V_1) r_{21} \\
&= r_{21}^T (U_2 \otimes V_1)^T (U \otimes V) \big( I_{\sigma 0} \otimes \Lambda_Q^\dagger \big) (U \otimes V)^T (U_2 \otimes V_1) r_{21}.
\end{aligned} \tag{44}
\]

By (43) and (44), we conclude that the term C2 equals T2. The same argument proves the equality of C3 and T3. Therefore,

\[
-c^T \big( (I_r \otimes Q + P \otimes I_n)^\dagger \big) c + \beta
= -\operatorname{vec}(\bar R_1)^T \big( (I_r \otimes Q)^\dagger \big) \operatorname{vec}(\bar R_1)
- \operatorname{vec}(\bar R_2)^T \big( (P \otimes I_n)^\dagger \big) \operatorname{vec}(\bar R_2) + \beta. \tag{45}
\]

Then, by (27) and (36), we have established $\mu_{V_2} \le \mu_{M_2}$. The other direction was proved in Corollary 3.1. This completes the proof of the theorem. $\Box$

3.1.2 Unbalanced Orthogonal Procrustes Problem

Example 3.1. In the unbalanced orthogonal Procrustes problem [14], one seeks to solve the minimization problem

\[
\begin{array}{rl}
\min & \| A X - B \|_F^2 \\
\text{s.t.} & X^T X = I_r, \quad X \in M_{nr},
\end{array} \tag{46}
\]

where $A \in M_{nn}$, $B \in M_{nr}$, and $n \ge r$. The balanced case $n = r$ can be solved efficiently [29], and this special case also admits a QMP$_1$ relaxation [6]. Note that the unbalanced case is a typical QMP$_2$. Its VSDR$_2$ can be written as

\[
\begin{array}{rl}
\min & \operatorname{trace}\big((I_r \otimes A^T A) Y\big) - 2\operatorname{trace}(B^T A X) \\
\text{s.t.} & \operatorname{trace}\big((E_{ij} \otimes I_n) Y\big) = \delta_{ij}, \quad i,j = 1,2,\dots,r, \\
& \begin{bmatrix} 1 & x^T \\ x & Y \end{bmatrix} \succeq 0,
\end{array} \tag{47}
\]

with $x = \operatorname{vec}(X)$. It is easy to check that the SDP in (47) is feasible and that its dual is strictly feasible, which implies the equivalence of MSDR$_2$ and VSDR$_2$. Thus we can obtain a nontrivial lower bound from the MSDR$_2$ relaxation:

\[
\begin{array}{rl}
\min & \operatorname{trace}\big(A^T A Y - 2 B^T A X\big) \\
\text{s.t.} & \begin{bmatrix} I_r & X^T \\ X & Y \end{bmatrix} \succeq 0, \quad
\begin{bmatrix} I_n & X \\ X^T & I_r \end{bmatrix} \succeq 0, \\
& \operatorname{trace} Y = r.
\end{array} \tag{48}
\]

We also ran preliminary computational experiments to compare the efficiency of the two SDP relaxations; see Table 1. The matrices in all 5 instances are randomly generated, and the problems are solved with SeDuMi 1.1 on a notebook computer (CPU: Intel Core 2 Duo, 2.53 GHz; memory: 3 GB). Table 1 exhibits the computational advantage of MSDR$_2$ over VSDR$_2$.

Table 1: The solution times (CPU seconds) of the two SDP relaxations on the orthogonal Procrustes problem.

Problem scale (n,r)  | (15,5) | (20,10) | (30,10) | (30,15) | (40,20)
VSDR_2 (CPU time)    |        |         |         |         |
MSDR_2 (CPU time)    |        |         |         |         |
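
A CVXPY sketch of the matrix-lifting bound (48) is given below (our implementation; the random data and CVXPY's default SDP solver are our own choices, whereas the paper's experiments used SeDuMi 1.1).

```python
import numpy as np
import cvxpy as cp

# MSDR_2 bound (48) for the unbalanced Procrustes problem (our sketch).
rng = np.random.default_rng(5)
n, r = 15, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, r))

Z1 = cp.Variable((r + n, r + n), PSD=True)   # [[I_r, X^T], [X, Y]]
X, Y = Z1[r:, :r], Z1[r:, r:]
Z2 = cp.Variable((n + r, n + r), PSD=True)   # [[I_n, X], [X^T, I_r]]

constraints = [
    Z1[:r, :r] == np.eye(r),
    Z2[:n, :n] == np.eye(n), Z2[n:, n:] == np.eye(r), Z2[:n, n:] == X,
    cp.trace(Y) == r,                        # trace Y_1 = trace Y_2 = r
]
obj = cp.Minimize(cp.trace(A.T @ A @ Y) - 2 * cp.trace(B.T @ A @ X))
prob = cp.Problem(obj, constraints)
prob.solve()

# Adding back the constant ||B||_F^2 (dropped in (47)-(48)) gives a
# lower bound on the optimal value of (46).
print("MSDR_2 lower bound:", prob.value + np.linalg.norm(B, 'fro') ** 2)
```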

3.2 An Extension to QMP with Conic Constraints

Some QMP problems include conic constraints such as $X^T X \preceq (\succeq)\, S$, where $S$ is a given positive semidefinite matrix. We can prove that the corresponding MSDR$_2$ and VSDR$_2$ are still equivalent for such problems. Consider the following general form of QMP$_2$ with conic constraints:

\[
(\text{QMP}_3)\qquad
\begin{array}{rl}
\min & \operatorname{trace}(X^T Q_0 X) + \operatorname{trace}(X P_0 X^T) + 2\operatorname{trace}(C_0^T X) + \operatorname{trace}(H_0^T Z) \\
\text{s.t.} & \operatorname{trace}(X^T Q_j X) + \operatorname{trace}(X P_j X^T) + 2\operatorname{trace}(C_j^T X) + \operatorname{trace}(H_j^T Z) + \beta_j \le 0, \\
& \hspace{6cm} j = 1,2,\dots,m \\
& X \in M_{nr}, \quad Z \in K,
\end{array}
\]

where $K$ can be a direct sum of convex cones (e.g., second-order cones, semidefinite cones). Note that the constraint $X^T X \preceq (\succeq)\, S$ can be formulated as

\[
\operatorname{trace}(X^T X E_{ij}) + (-)\,\operatorname{trace}(Z E_{ij}) = S_{ij}, \qquad Z \succeq 0, \tag{49}
\]

with the sign $+$ for $\preceq$ and $-$ for $\succeq$. The formulations of VSDR$_2$ and MSDR$_2$ for QMP$_3$ are the same as for QMP$_2$, except for the additional term $H_j \cdot Z$ and the conic constraint $Z \in K$. Correspondingly, the dual programs DVSDR$_2$ and DMSDR$_2$ for QMP$_3$ both gain the additional constraint

\[
H_0 + \sum_{j=1}^m \lambda_j H_j \in K^*. \tag{50}
\]

If a dual solution $\lambda$ is feasible for DVSDR$_2$, then it satisfies the constraint (50) in both DVSDR$_2$ and DMSDR$_2$. Therefore, we can follow the proof of Theorem 3.1 and construct a feasible solution for DMSDR$_2$ with $\lambda$ that generates the same objective value $\mu_{V_2}$. This yields the following.

Corollary 3.2. Assume VSDR$_2$ for QMP$_3$ is strictly feasible and its dual DVSDR$_2$ is feasible. Then DVSDR$_2$ and DMSDR$_2$ both attain their optimum at the same $\lambda$ and generate the same optimal value $\mu_{V_2} = \mu_{M_2}$.

3.2.1 Graph Partition Problem

Example 3.2 (Graph Partition Problem, (GPP)). The GPP is an important combinatorial optimization problem with broad applications in network design and floor planning [3, 27]. Given a graph with $n$ vertices, the problem is to find an $r$-partition $S_1, S_2, \dots, S_r$ of the vertex set, with $|S_i| = m_i$ for given cardinalities $m := (m_i)_{i=1,\dots,r}$, such that the total number of edges across different subsets is minimized. Define the matrix $X \in M_{nr}$ to be the assignment of vertices, i.e., $X_{ij} = 1$ if vertex $i$ is assigned to subset $j$, and $X_{ij} = 0$ otherwise. With $L$ the Laplacian matrix of the graph, the GPP can be formulated as the optimization problem

\[
\begin{array}{rl}
\mu_{GPP} = \min & \tfrac{1}{2} \operatorname{trace}(X^T L X) \\
\text{s.t.} & X^T X = \mathrm{Diag}(m) \\
& \operatorname{diag}(X X^T) = e_n \\
& X \ge 0.
\end{array} \tag{51}
\]

This formulation involves quadratic matrix constraints of both types: $\operatorname{trace}(X^T E_{ii} X) = 1$, $i = 1,\dots,n$, and $\operatorname{trace}(X E_{ij} X^T) = m_i \delta_{ij}$, $i,j = 1,\dots,r$. Thus it can be formulated as a QMP$_2$ but not as a QMP$_1$. Anstreicher and Wolkowicz [5] proposed a semidefinite programming relaxation with $O(n^4)$ variables and proved that its optimal value equals the so-called Donath-Hoffman lower bound [13]. This SDP formulation can be written in a more compact way, as Povh [27] suggested:

\[
\begin{array}{rl}
\mu_{DH} = \min & \tfrac{1}{2} \operatorname{trace}\big((I_r \otimes L) V\big) \\
\text{s.t.} & \sum_{i=1}^r \tfrac{1}{m_i} V^{ii} + W = I_n \\
& \operatorname{trace}(V^{ij}) = m_i \delta_{ij}, \quad i,j = 1,\dots,r \\
& \operatorname{trace}\big((I \otimes E_{ii}) V\big) = 1, \quad i = 1,\dots,n \\
& V \in S^{rn}_+, \quad W \in S^n_+,
\end{array} \tag{52}
\]

where $V$ is partitioned into $r^2$ square blocks, each of size $n \times n$, and $V^{ij}$ is the $(i,j)$-th block of $V$. Note that formulation (52) reduces the number of variables to $O(n^2 r^2)$.

An interesting application is the graph equipartition problem, in which the $m_i$ ($= m_1$) are all equal. Povh's SDP formulation is actually a VSDR$_2$ for a QMP$_3$:

\[
\begin{array}{rl}
\min & \tfrac{1}{2} \operatorname{trace}(X^T L X) \\
\text{s.t.} & \operatorname{trace}\big(\tfrac{1}{m_1} X^T E_{ij} X + E_{ij} W\big) = \delta_{ij}, \quad i,j = 1,\dots,n \\
& \operatorname{trace}(X E_{ij} X^T) = m_1 \delta_{ij}, \quad i,j = 1,\dots,r \\
& \operatorname{trace}(X^T E_{ii} X) = 1, \quad i = 1,\dots,n \\
& W \in S^n_+.
\end{array} \tag{53}
\]

It is easy to check that (52) is feasible and that its dual is strictly feasible. Hence, by Corollary 3.2, the equivalence of MSDR$_2$ and VSDR$_2$ for QMP$_3$ implies that the Donath-Hoffman bound can be computed by solving a small MSDR$_2$:

\[
\begin{array}{rl}
\mu_{DH} = \min & \tfrac{1}{2}\, L \cdot Y_1 \\
\text{s.t.} & Y_1 \preceq m_1 I_n, \quad Y_2 = m_1 I_r, \quad \operatorname{diag}(Y_1) = e_n, \\
& \begin{bmatrix} I_r & X^T \\ X & Y_1 \end{bmatrix} \succeq 0, \quad
\begin{bmatrix} I_n & X \\ X^T & Y_2 \end{bmatrix} \succeq 0.
\end{array} \tag{54}
\]

Because $X$ and $Y_2$ do not appear in the objective, formulation (54) can be reduced to the very simple form

\[
\begin{array}{rl}
\mu_{DH} = \min & \tfrac{1}{2}\, L \cdot Y_1 \\
\text{s.t.} & \operatorname{diag}(Y_1) = e_n \\
& 0 \preceq Y_1 \preceq m_1 I_n.
\end{array} \tag{55}
\]

This MSDR formulation has only $O(n^2)$ variables, a significant reduction from $O(n^2 r^2)$. This result coincides with Karisch and Rendl's result [19] for graph equipartition. Their proof derives from the particular problem structure, while our result is based on the general equivalence of VSDR$_2$ and MSDR$_2$.
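
A CVXPY sketch of the reduced formulation (55) follows (our illustration; the 6-cycle test graph, the random-access variable names, and the default solver are our own choices, not from the paper).

```python
import numpy as np
import cvxpy as cp

# Donath-Hoffman bound via the reduced MSDR formulation (55) (our sketch).
# Test instance: the cycle graph C_6, equipartitioned into r = 2 sets of m1 = 3.
n, r = 6, 2
m1 = n // r
Adj = np.zeros((n, n))
for i in range(n):                            # adjacency matrix of C_6
    Adj[i, (i + 1) % n] = Adj[(i + 1) % n, i] = 1
L = np.diag(Adj.sum(axis=1)) - Adj            # graph Laplacian

Y1 = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(0.5 * cp.trace(L @ Y1)),
                  [cp.diag(Y1) == 1,          # diag(Y_1) = e_n
                   Y1 >> 0,                   # 0 <= Y_1
                   m1 * np.eye(n) - Y1 >> 0]) # Y_1 <= m_1 I_n
prob.solve()
print("Donath-Hoffman lower bound:", prob.value)   # lower bound on mu_GPP
```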

4 Conclusion

This paper proves the equivalence of two SDP bounds for the hard QCQP in QMP$_2$. Thus, it is clear that a user should use the smaller, inexpensive MSDR bound from matrix lifting rather than the more expensive VSDR bound from vector lifting. In particular, our results show that the large VSDR$_2$ relaxation for the unbalanced orthogonal Procrustes problem can be replaced by the smaller MSDR$_2$. And, with an extension of the main theorem, we proved the Karisch and Rendl result [19] that the Donath-Hoffman bound for graph equipartition can be computed with a small SDP. The matrix completion and semidefinite inequality techniques used in our proofs are of independent interest.

Unfortunately, it is not clear how to formulate a general QCQP as an MSDR. In particular, the objective function of the QAP, $\operatorname{trace}(A X B X^T)$, does not immediately admit an MSDR representation. (Though Ding and Wolkowicz [12] provide a different matrix-lifting SDP relaxation, its bound is generally strictly weaker, i.e., lower, than that of the vectorized SDP relaxation proposed in [35].) The above motivates the search for efficient matrix-lifting representations of hard QCQPs that allow efficient semidefinite relaxations.

References

[1] A. ALFAKIH, M.F. ANJOS, V. PICCIALLI, and H. WOLKOWICZ. Euclidean distance matrices, semidefinite programming, and sensor network localization. Portugaliae Mathematica, to appear.

[2] A. ALFAKIH and H. WOLKOWICZ. Matrix completion problems. In Handbook of Semidefinite Programming, volume 27 of Internat. Ser. Oper. Res. Management Sci. Kluwer Acad. Publ., Boston, MA, 2000.

[3] C.J. ALPERT and A.B. KAHNG. Recent directions in netlist partitioning: a survey. Integr. VLSI J., 19(1-2):1-81, 1995.

[4] K.M. ANSTREICHER, X. CHEN, H. WOLKOWICZ, and Y. YUAN. Strong duality for a trust-region type relaxation of the quadratic assignment problem. Linear Algebra Appl., 301(1-3), 1999.

[5] K.M. ANSTREICHER and H. WOLKOWICZ. On Lagrangian relaxation of quadratic matrix constraints. SIAM J. Matrix Anal. Appl., 22(1):41-55, 2000.

[6] A. BECK. Quadratic matrix programming. SIAM J. Optim., 17(4):1224-1238, 2007.

[7] A. BEN-TAL, L. EL GHAOUI, and A. NEMIROVSKI. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2009.

[8] A. BEN-TAL and A.S. NEMIROVSKI. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS/SIAM Series on Optimization. SIAM, Philadelphia, PA, 2001.

[9] P. BISWAS and Y. YE. Semidefinite programming for ad hoc wireless sensor network localization. In Proceedings of the Third International Symposium on Information Processing in Sensor Networks, pages 46-54, Berkeley, Calif., 2004.

[10] M.W. CARTER, H.H. JIN, M.A. SAUNDERS, and Y. YE. SpaseLoc: an adaptive subproblem algorithm for scalable wireless sensor network localization. SIAM J. Optim., 17(4), 2006.

[11] E. de KLERK and R. SOTIROV. Exploiting group symmetry in semidefinite programming relaxations of the quadratic assignment problem. Math. Program., 122(2, Ser. A):225-246, 2010.

[12] Y. DING and H. WOLKOWICZ. A low-dimensional semidefinite relaxation for the quadratic assignment problem. Math. Oper. Res., 34(4):1008-1022, 2009.

[13] W.E. DONATH and A.J. HOFFMAN. Lower bounds for the partitioning of graphs. IBM J. Res. Develop., 17:420-425, 1973.

[14] L. ELDÉN and H. PARK. A Procrustes problem on the Stiefel manifold. Numer. Math., 82(4), 1999.

[15] B. GRONE, C.R. JOHNSON, E. MARQUES de SÁ, and H. WOLKOWICZ. Positive definite completions of partial Hermitian matrices. Linear Algebra Appl., 58:109-124, 1984.

[16] L. HOGBEN. Graph theoretic methods for matrix completion problems. Linear Algebra Appl., 328(1-3), 2001.

[17] V. JEYAKUMAR and H. WOLKOWICZ. Generalizations of Slater's constraint qualification for infinite convex programs. Math. Programming, 57(1, Ser. B):85-101, 1992.

[18] C.R. JOHNSON. Matrix completion problems: a survey. Proc. Sympos. Appl. Math., 40:171-198, 1990.

[19] S.E. KARISCH and F. RENDL. Semidefinite programming and graph equipartition. In Topics in Semidefinite and Interior-Point Methods, volume 18 of Fields Institute Communications. American Mathematical Society, Providence, Rhode Island, 1998.

[20] N. KRISLOCK. Efficient algorithms for large-scale Euclidean distance matrix problems with applications to wireless sensor network localization and molecular conformation. PhD thesis, University of Waterloo, 2010.

[21] N. KRISLOCK and H. WOLKOWICZ. Explicit sensor network localization using semidefinite representations and clique reductions. Technical Report CORR, University of Waterloo, Waterloo, Ontario, 2009. Available at URL: HTML/2009/05/2297.html.

[22] J. LIU. Eigenvalue and singular value inequalities of Schur complements. In F. ZHANG, editor, The Schur Complement and Its Applications, volume 4 of Numerical Methods and Algorithms. Springer-Verlag, New York, 2005.

[23] H.D. MITTELMANN and J. PENG. Estimating bounds for quadratic assignment problems associated with the Hamming and Manhattan distance matrices based on semidefinite programming. SIAM J. Optim., to appear.

[24] Y.E. NESTEROV, H. WOLKOWICZ, and Y. YE. Semidefinite programming relaxations of nonconvex quadratic optimization. In Handbook of Semidefinite Programming, volume 27 of Internat. Ser. Oper. Res. Management Sci., pages 361-419. Kluwer Acad. Publ., Boston, MA, 2000.

[25] D. OUELLETTE. Schur complements and statistics. Linear Algebra Appl., 36:187-295, 1981.

[26] J. PENG and H.D. MITTELMANN. Estimating bounds for quadratic assignment problems associated with the Hamming and Manhattan distance matrices based on semidefinite programming. SIAM J. Optim., to appear.

[27] J. POVH. Semidefinite approximations for quadratic programs over orthogonal matrices. Journal of Global Optimization, Springer Netherlands.

[28] R.T. ROCKAFELLAR. Convex Analysis. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original, Princeton Paperbacks.

[29] P.H. SCHÖNEMANN. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1-10, March 1966.

[30] A.M-C. SO and Y. YE. Theory of semidefinite programming for sensor network localization. Math. Program., 109(2-3, Ser. B):367-384, 2007.

[31] R. STERN and H. WOLKOWICZ. Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. Optim., 5(2):286-313, 1995.

[32] Z. WANG, S. ZHENG, S. BOYD, and Y. YE. Further relaxations of the semidefinite programming approach to sensor network localization. SIAM J. Optim., 19(2):655-673, 2008.

[33] H. WOLKOWICZ. Semidefinite and Lagrangian relaxations for hard combinatorial problems. In M.J.D. POWELL, editor, Proceedings of the 19th IFIP TC7 Conference on System Modelling and Optimization, July 1999, Cambridge. Kluwer Academic Publishers, Boston, MA, 2000.

[34] H. WOLKOWICZ and Q. ZHAO. Semidefinite programming relaxations for the graph partitioning problem. Discrete Appl. Math., 96/97:461-479, 1999. Selected for the special Editors' Choice Edition.

[35] Q. ZHAO, S.E. KARISCH, F. RENDL, and H. WOLKOWICZ. Semidefinite programming relaxations for the quadratic assignment problem. J. Comb. Optim., 2(1):71-109, 1998. Semidefinite programming and interior-point approaches for combinatorial optimization problems (Fields Institute, Toronto, ON, 1996).

Index

$A^\dagger$, Moore-Penrose pseudoinverse
$B^0\mathrm{Diag}$, block-diag operator
$b^0\mathrm{diag}$, block-diag adjoint
$\beta_\lambda$, $C_\lambda$, $P_\lambda$, $Q_\lambda$, aggregated dual data
chordal graph
homogenized quadratic matrix function, $q_j^M(X, X_0)$
homogenized quadratic vector function, $q_j^V(X, x_0)$
Kronecker product, $\otimes$
lifted matrices for QMP$_1$, $M(q_j^V(\cdot))$, $M(q_j^M(\cdot))$
$M_{nr}$, set of $n \times r$ matrices
matrix-lifting relaxations, MSDR$_1$, MSDR$_2$
matrix-lifting semidefinite relaxation, MSDR
$O^0\mathrm{Diag}$, block-offdiag operator
$o^0\mathrm{diag}$, block-offdiag adjoint
partial symmetric matrix
QMP, quadratic matrix programming
QMP$_1$, first case of quadratic matrix programming
QMP$_2$, second case of quadratic matrix programming
quadratic function, $q_j(X)$
quadratically constrained quadratic program, QCQP
$\mathcal{R}(\cdot)$, range space
$S^n$, space of $n \times n$ symmetric matrices
Schur complement
sensor network localization problem, SNL
trace inner product (dot product)
$\operatorname{vec}(X)$
vector-lifting relaxations, VSDR$_1$, VSDR$_2$
vector-lifting semidefinite relaxation, VSDR
$Y_V^{ij}$, blocks of $Y_V$


More information

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Houduo Qi February 1, 008 and Revised October 8, 008 Abstract Let G = (V, E) be a graph

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

Facial Reduction for Cone Optimization

Facial Reduction for Cone Optimization Facial Reduction for Cone Optimization Henry Wolkowicz Dept. Combinatorics and Optimization University of Waterloo Tuesday Apr. 21, 2015 (with: Drusvyatskiy, Krislock, (Cheung) Voronin) Motivation: Loss

More information

An Interior-Point Method for Approximate Positive Semidefinite Completions*

An Interior-Point Method for Approximate Positive Semidefinite Completions* Computational Optimization and Applications 9, 175 190 (1998) c 1998 Kluwer Academic Publishers. Manufactured in The Netherlands. An Interior-Point Method for Approximate Positive Semidefinite Completions*

More information

Robust Farkas Lemma for Uncertain Linear Systems with Applications

Robust Farkas Lemma for Uncertain Linear Systems with Applications Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Semidefinite programming and eigenvalue bounds for the graph partition problem

Semidefinite programming and eigenvalue bounds for the graph partition problem Semidefinite programming and eigenvalue bounds for the graph partition problem E.R. van Dam R. Sotirov Abstract The graph partition problem is the problem of partitioning the vertex set of a graph into

More information

Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry

Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry assoc. prof., Ph.D. 1 1 UNM - Faculty of information studies Edinburgh, 16. September 2014 Outline Introduction Definition

More information

The min-cut and vertex separator problem

The min-cut and vertex separator problem manuscript No. (will be inserted by the editor) The min-cut and vertex separator problem Fanz Rendl Renata Sotirov the date of receipt and acceptance should be inserted later Abstract We consider graph

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite

More information

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin Identifying Redundant Linear Constraints in Systems of Linear Matrix Inequality Constraints Shafiu Jibrin (shafiu.jibrin@nau.edu) Department of Mathematics and Statistics Northern Arizona University, Flagstaff

More information

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs Adam N. Letchford Daniel J. Grainger To appear in Operations Research Letters Abstract In the literature

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Optimization for Machine Learning

Optimization for Machine Learning Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

w Kluwer Academic Publishers Boston/Dordrecht/London HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications

w Kluwer Academic Publishers Boston/Dordrecht/London HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications Edited by Henry Wolkowicz Department of Combinatorics and Optimization Faculty of Mathematics University of Waterloo Waterloo,

More information

Operations Research Letters

Operations Research Letters Operations Research Letters 37 (2009) 1 6 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Duality in robust optimization: Primal worst

More information

Modeling with semidefinite and copositive matrices

Modeling with semidefinite and copositive matrices Modeling with semidefinite and copositive matrices Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/24 Overview Node and Edge relaxations

More information

A Continuation Approach Using NCP Function for Solving Max-Cut Problem

A Continuation Approach Using NCP Function for Solving Max-Cut Problem A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

Deterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming

Deterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming Deterministic Methods for Detecting Redundant Linear Constraints in Semidefinite Programming Daniel Stover Department of Mathematics and Statistics Northen Arizona University,Flagstaff, AZ 86001. July

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems

A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems Gábor Pataki gabor@unc.edu Dept. of Statistics and OR University of North Carolina at Chapel Hill Abstract The Facial Reduction

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints Journal of Global Optimization 21: 445 455, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 445 Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

III. Applications in convex optimization

III. Applications in convex optimization III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

New Lower Bounds on the Stability Number of a Graph

New Lower Bounds on the Stability Number of a Graph New Lower Bounds on the Stability Number of a Graph E. Alper Yıldırım June 27, 2007 Abstract Given a simple, undirected graph G, Motzkin and Straus [Canadian Journal of Mathematics, 17 (1965), 533 540]

More information

An Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones

An Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones An Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones Bryan Karlovitz July 19, 2012 West Chester University of Pennsylvania

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information

Strong duality in Lasserre s hierarchy for polynomial optimization

Strong duality in Lasserre s hierarchy for polynomial optimization Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization

More information

Tutorials in Optimization. Richard Socher

Tutorials in Optimization. Richard Socher Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2

More information

Polynomial Solvability of Variants of the Trust-Region Subproblem

Polynomial Solvability of Variants of the Trust-Region Subproblem Polynomial Solvability of Variants of the Trust-Region Subproblem Daniel Bienstock Alexander Michalka July, 2013 Abstract We consider an optimization problem of the form x T Qx + c T x s.t. x µ h r h,

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

A Unified Theorem on SDP Rank Reduction. yyye

A Unified Theorem on SDP Rank Reduction.   yyye SDP Rank Reduction Yinyu Ye, EURO XXII 1 A Unified Theorem on SDP Rank Reduction Yinyu Ye Department of Management Science and Engineering and Institute of Computational and Mathematical Engineering Stanford

More information

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Author: Faizan Ahmed Supervisor: Dr. Georg Still Master Thesis University of Twente the Netherlands May 2010 Relations

More information

1. Introduction. Consider the following quadratic binary optimization problem,

1. Introduction. Consider the following quadratic binary optimization problem, ON DUALITY GAP IN BINARY QUADRATIC PROGRAMMING XIAOLING SUN, CHUNLI LIU, DUAN LI, AND JIANJUN GAO Abstract. We present in this paper new results on the duality gap between the binary quadratic optimization

More information

Projection methods to solve SDP

Projection methods to solve SDP Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

arxiv: v1 [math.oc] 14 Oct 2014

arxiv: v1 [math.oc] 14 Oct 2014 arxiv:110.3571v1 [math.oc] 1 Oct 01 An Improved Analysis of Semidefinite Approximation Bound for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints Yong Hsia a, Shu Wang a, Zi Xu

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and inyu e September 27, 2006 Abstract Recently, a semidefinite programming (SDP) relaxation

More information

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs 2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

On the eigenvalues of Euclidean distance matrices

On the eigenvalues of Euclidean distance matrices Volume 27, N. 3, pp. 237 250, 2008 Copyright 2008 SBMAC ISSN 00-8205 www.scielo.br/cam On the eigenvalues of Euclidean distance matrices A.Y. ALFAKIH Department of Mathematics and Statistics University

More information