On Equivalence of Semidefinite Relaxations for Quadratic Matrix Programming


MATHEMATICS OF OPERATIONS RESEARCH
Vol. 36, No. 1, February 2011, pp. 88–104
issn 0364-765X | eissn 1526-5471 | doi 10.1287/moor | © 2011 INFORMS

On Equivalence of Semidefinite Relaxations for Quadratic Matrix Programming

Yichuan Ding, Department of Management Science and Engineering, Stanford University, Stanford, California 94305
Dongdong Ge, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, P. R. China
Henry Wolkowicz, Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

We analyze two popular semidefinite programming relaxations for quadratically constrained quadratic programs with matrix variables. These relaxations are based on vector lifting and on matrix lifting; they are of different size and expense. We prove, under mild assumptions, that these two relaxations provide equivalent bounds. Thus, our results provide a theoretical guideline for how to choose a less expensive semidefinite programming relaxation and still obtain a strong bound. The main technique used to show the equivalence, and that allows for the simplified constraints, is the recognition of a class of nonchordal sparsity patterns that admit a smaller representation of the positive semidefinite constraint.

Key words: semidefinite programming relaxations; quadratic matrix programming; quadratically constrained quadratic programming; hard combinatorial problems; sparsity patterns
MSC2000 subject classification: Primary: 90C20, 90C22; secondary: 90C26, 90C46
OR/MS subject classification: Primary: programming; secondary: quadratic
History: Received April 27, 2010; revised October 26, 2010. Published online in Articles in Advance February 2011.

1. Introduction. We provide theoretical insights on how to compare different semidefinite programming (SDP) relaxations for quadratically constrained quadratic programs (QCQP) with matrix variables. In particular, we study a vector lifting relaxation and compare it to a significantly smaller matrix lifting relaxation to show that the resulting two bounds are equal.

Many hard combinatorial problems can be formulated as QCQPs with matrix variables. If the resulting formulated problem is nonconvex, then SDP relaxations provide an efficient and successful approach for computing approximate solutions and strong bounds. Finding strong and inexpensive bounds is essential for branch and bound algorithms for solving large hard combinatorial problems. However, there can be many different SDP relaxations for the same problem, and it is usually not obvious which relaxation is overall optimal with regard to both computational efficiency and bound quality (Ding and Wolkowicz [13]). For examples of SDP relaxations for QCQPs arising from hard problems, see, e.g., quadratic assignment (QAP) (de Klerk and Sotirov [12], Ding and Wolkowicz [13], Mittelmann and Peng [25], Zhao et al. [36]), graph partitioning (GPP) (Wolkowicz and Zhao [35]), sensor network localization (SNL) (Biswas and Ye [10], Carter et al. [11], Krislock and Wolkowicz [23]), and the more general Euclidean distance matrix completions (Alfakih et al. [2]).

1.1. Preliminaries. The concept of quadratic matrix programming (QMP) was introduced by Beck [6], where it refers to a special instance of QCQP with matrix variables. Because we include the study of more general problems, we denote the model discussed in Beck [6] as the first case of QMP, denoted (QMP_1):

(QMP_1)   μ_P = min   trace X^T Q_0 X + 2 trace C_0^T X + β_0
          s.t.
                trace X^T Q_j X + 2 trace C_j^T X + β_j ≤ 0,  j = 1, 2, ..., m,
                X ∈ M^{n×r},

where M^{n×r} denotes the set of n × r real matrices, Q_j ∈ S^n, j = 0, 1, ..., m, with S^n the space of n × n symmetric matrices, and C_j ∈ M^{n×r}. Throughout this paper, we use the trace inner product (dot product), ⟨C, X⟩ = trace C^T X. The applicability of QMP_1 is limited when compared to the more general class QCQP. However, many applications use QCQP models in the form of QMP_1, e.g., robust optimization (Ben-Tal et al. [9]) and SNL

(Alfakih et al. [2]). In addition, many combinatorial problems are formulated with orthogonality constraints in one of the two forms

    X X^T = I,    X^T X = I.   (1)

When X is square, the pair of constraints in (1) are, in theory, equivalent to each other. However, relaxations that include both forms of the constraints, rather than just one, can be expected to obtain stronger bounds. For example, Anstreicher et al. [5] proved that strong duality holds for a certain relaxation of QAP when both forms of the orthogonality constraints in (1) are included; however, there can be a duality gap if only one of the forms is used. Motivated by this result, we extend our scope of problems so that the objective and constraint functions can include both forms of quadratic terms, X^T Q_j X and X P_j X^T. We now define the second case of QMP problems, (QMP_2), as

(QMP_2)   min   trace X^T Q_0 X + trace X P_0 X^T + 2 trace C_0^T X + β_0
          s.t.  trace X^T Q_j X + trace X P_j X^T + 2 trace C_j^T X + β_j ≤ 0,  j = 1, ..., m,   (2)
                X ∈ M^{n×r},

where Q_j and P_j are symmetric matrices of appropriate sizes. Both QMP_1 and QMP_2 can be vectorized into the QCQP form using

    trace X^T Q X = vec(X)^T ( I_r ⊗ Q ) vec(X),    trace X P X^T = vec(X)^T ( P ⊗ I_n ) vec(X),   (3)

where ⊗ denotes the Kronecker product (e.g., Graham [16]) and vec(X) vectorizes X by stacking the columns of X on top of each other. The difference between the Kronecker products I_r ⊗ Q and P ⊗ I_n shows that there is a difference in the corresponding Lagrange multipliers and illustrates why the bounds from Lagrangian relaxation will be different for these two sets of constraints. The SDP relaxation for the vectorized QCQP is called the vector-lifting semidefinite relaxation (VSDR). Under a constraint qualification assumption, VSDR for QCQP is equivalent to the dual of the classical Lagrangian relaxation (see, e.g., Anstreicher and Wolkowicz [4], Nesterov et al. [26], Wolkowicz [34]). From (3), we get

    trace X^T Q X = trace( (I_r ⊗ Q) Y ),    trace X P X^T = trace( (P ⊗ I_n) Y ),    if Y = vec(X) vec(X)^T.   (4)

VSDR is derived using (4) with the relaxation Y ⪰ vec(X) vec(X)^T. A Schur complement argument (e.g., Liu [24], Ouellette [27]) implies the equivalence of this relaxation to the large matrix variable constraint

    [ 1        vec(X)^T ]
    [ vec(X)   Y        ]  ⪰ 0.

Alternatively, from (3), we get the smaller system

    trace X^T Q X = trace Q Y_1,  if Y_1 = X X^T;    trace X P X^T = trace P Y_2,  if Y_2 = X^T X.   (5)

The matrix-lifting semidefinite relaxation (MSDR) is derived using (5) with the relaxation Y_1 ⪰ X X^T. A Schur complement argument now implies the equivalence of this relaxation to the smaller matrix variable constraint

    [ I_r   X^T ]
    [ X     Y_1 ]  ⪰ 0.

Again, a similar result holds for trace X P X^T. Intuitively, one expects that VSDR should provide stronger bounds than MSDR. Beck [6] proved that VSDR is actually equivalent to MSDR for QMP_1 if both SDP relaxations attain optimality and have a zero duality gap, e.g., when a constraint qualification, such as the Slater condition, holds for the dual program. In this paper we strengthen the above result by dropping the constraint qualification assumption. Then we present our main contribution, i.e., we show the equivalence between MSDR and VSDR for the more general problem QMP_2 under a constraint qualification. This result is of more interest because QMP_2 does not possess the same nice structure (chordal pattern) as QMP_1.
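The identities (3)–(5) and the Schur-complement reformulation are easy to verify numerically. The following sketch (Python/NumPy, with random placeholder data; our illustration, not part of the paper) checks both vectorizations, both matrix liftings, and one instance of the Schur-complement constraint:

```python
import numpy as np

# A minimal numerical check of the lifting identities (3)-(5); the sizes
# and matrices below are arbitrary placeholders.
rng = np.random.default_rng(0)
n, r = 5, 3
X = rng.standard_normal((n, r))
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
P = rng.standard_normal((r, r)); P = (P + P.T) / 2

x = X.flatten(order="F")                      # vec(X): stack columns
# (3): the two vectorized quadratic forms
assert np.isclose(np.trace(X.T @ Q @ X), x @ np.kron(np.eye(r), Q) @ x)
assert np.isclose(np.trace(X @ P @ X.T), x @ np.kron(P, np.eye(n)) @ x)
# (5): the two matrix liftings Y1 = X X^T and Y2 = X^T X
assert np.isclose(np.trace(X.T @ Q @ X), np.trace(Q @ X @ X.T))
assert np.isclose(np.trace(X @ P @ X.T), np.trace(P @ X.T @ X))
# Schur complement: [I_r, X^T; X, Y1] is PSD exactly when Y1 - X X^T is PSD
Z = np.block([[np.eye(r), X.T], [X, X @ X.T + np.eye(n)]])
assert np.linalg.eigvalsh(Z).min() > -1e-10
print("lifting identities (3)-(5) verified")
```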
Moreover, QMP_2 encompasses a much richer class of problems and therefore has more significant applications; see, for example, the unbalanced orthogonal Procrustes problem (Eldén and Park [15]) discussed in §3.1.2 and the graph partition problem (Alpert and Kahng [3], Povh [28]) discussed in §3.3.

1.2. Outline. In §2 we present the equivalence of the corresponding VSDR and MSDR formulations for QMP_1 and prove Beck's result without the constraint qualification assumption (see Theorem 2.1). Section 3 proves the main result, that VSDR and MSDR generate equivalent lower bounds for QMP_2 under a constraint qualification assumption (see Theorem 3.1). Numerical tests are included in §3.1.2; Section 4 provides concluding remarks.

2. Quadratic matrix programming: Case I. We first discuss the two relaxations for QMP_1. We denote the matrices in the relaxations obtained from vector and matrix lifting by

    M^V_{q_j} = [ β_j        vec(C_j)^T ],    M^M_{q_j} = [ (β_j/r) I_r   C_j^T ].
                [ vec(C_j)   I_r ⊗ Q_j  ]                 [ C_j           Q_j   ]

We let

    y = ( x_0 ; vec(X) ) ∈ R^{nr+1},    Y = ( X_0 ; X ) ∈ M^{(r+n)×r},

and we denote the quadratic and homogenized quadratic functions

    q_j(X)        = trace X^T Q_j X + 2 trace C_j^T X + β_j,
    q^V_j(X, x_0) = trace X^T Q_j X + 2 (trace C_j^T X) x_0 + β_j x_0^2 = y^T M^V_{q_j} y,
    q^M_j(X, X_0) = trace X^T Q_j X + 2 trace X_0^T C_j^T X + trace( (β_j/r) X_0^T X_0 ) = trace( Y^T M^M_{q_j} Y );

note that q^V_j(X, 1) = q^M_j(X, I_r) = q_j(X).

2.1. Lagrangian relaxation. As mentioned above, under a constraint qualification, VSDR for QCQP is equivalent to the dual of the classical Lagrangian relaxation. We include this result for completeness and to illustrate the role of a constraint qualification in the relaxation. We follow the approach in Nesterov et al. [26, p. 403] and use the strong duality of the trust region subproblem (Stern and Wolkowicz [32]) to obtain the Lagrangian relaxation (or dual) of QMP_1 as an SDP:

    μ_L = max_{λ≥0} min_X  q_0(X) + Σ_{j=1}^m λ_j q_j(X)

        = max_{λ≥0} min_{x_0^2=1, X}  q^V_0(X, x_0) + Σ_{j=1}^m λ_j q^V_j(X, x_0)

        = max_{λ≥0, t} min_y  y^T ( M^V_{q_0} + Σ_{j=1}^m λ_j M^V_{q_j} ) y + t (1 − x_0^2)

        = max_{λ≥0, t} min_y  trace( ( M^V_{q_0} + Σ_{j=1}^m λ_j M^V_{q_j} ) y y^T ) + t (1 − x_0^2)

        = max { t :  [ t  0 ; 0  0 ] ⪯ M^V_{q_0} + Σ_{j=1}^m λ_j M^V_{q_j},  λ ∈ R^m_+,  t ∈ R }

        = μ_DVSDR.   (6)

As illustrated in (6), the Lagrangian relaxation is the dual program (denoted DVSDR) of the vector-lifting relaxation VSDR given below. Hence, under a constraint qualification, the Lagrangian relaxation is equivalent to VSDR. The usual constraint qualification is the Slater condition, i.e.,

    ∃ λ ∈ R^m_+  such that  M^V_{q_0} + Σ_{j=1}^m λ_j M^V_{q_j} ≻ 0.   (7)
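As a concrete check of the two homogenizations, the sketch below (NumPy, with randomly generated Q, C, β as placeholders; our illustration) builds M^V_{q_j} and M^M_{q_j} and confirms that q^V_j(X, 1) = q^M_j(X, I_r) = q_j(X):

```python
import numpy as np

# Build the vector- and matrix-lifting data matrices M^V and M^M of
# Section 2 and check both homogenizations against q_j(X).
rng = np.random.default_rng(1)
n, r = 4, 2
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
C = rng.standard_normal((n, r))
beta = 0.7
X = rng.standard_normal((n, r))

def MV(Q, C, beta):
    c = C.flatten(order="F")[:, None]                    # vec(C)
    return np.block([[np.array([[beta]]), c.T],
                     [c, np.kron(np.eye(r), Q)]])

def MM(Q, C, beta):
    return np.block([[beta / r * np.eye(r), C.T], [C, Q]])

q = np.trace(X.T @ Q @ X) + 2 * np.trace(C.T @ X) + beta
y = np.concatenate(([1.0], X.flatten(order="F")))        # y = (x0; vec X), x0 = 1
Y = np.vstack([np.eye(r), X])                            # Y = (X0; X), X0 = I_r
assert np.isclose(y @ MV(Q, C, beta) @ y, q)
assert np.isclose(np.trace(Y.T @ MM(Q, C, beta) @ Y), q)
print("homogenizations agree with q(X)")
```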

2.2. Equivalence of vector and matrix lifting for QMP_1. Recall that the dot product refers to the trace inner product, ⟨C, X⟩ = trace C^T X. The vector-lifting relaxation is

(VSDR)   μ_V = min  ⟨ M^V_{q_0}, Z_V ⟩
               s.t. ⟨ M^V_{q_j}, Z_V ⟩ ≤ 0,  j = 1, 2, ..., m,
                    (Z_V)_{11} = 1,  Z_V ⪰ 0;

thus, the constraint matrix is blocked as Z_V = [ 1  vec(X)^T ; vec(X)  Y_V ]. The matrix-lifting relaxation is

(MSDR)   μ_M = min  ⟨ M^M_{q_0}, Z_M ⟩
               s.t. ⟨ M^M_{q_j}, Z_M ⟩ ≤ 0,  j = 1, 2, ..., m,
                    (Z_M)_{1:r, 1:r} = I_r,  Z_M ⪰ 0;

thus, the constraint matrix is blocked as Z_M = [ I_r  X^T ; X  Y_M ]. VSDR is obtained by relaxing the quadratic equality constraint Y_V = vec(X) vec(X)^T to Y_V ⪰ vec(X) vec(X)^T and then formulating this as Z_V = [ 1  vec(X)^T ; vec(X)  Y_V ] ⪰ 0. MSDR is obtained by relaxing the quadratic equality constraint Y_M = X X^T to Y_M ⪰ X X^T and then reformulating this as the linear conic constraint Z_M = [ I_r  X^T ; X  Y_M ] ⪰ 0. VSDR involves O((nr)^2) variables and O(m) constraints, where m is often of the order O(nr), whereas the smaller problem MSDR has only O((n+r)^2) variables.
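To make the size difference concrete, here is a small illustrative computation (plain Python; the (n, r) values are arbitrary) of the lifted matrix dimensions:

```python
# The lifted matrix in VSDR is (nr + 1) x (nr + 1); in MSDR it is
# (n + r) x (n + r).  Illustrative sizes only.
for n, r in [(10, 3), (50, 5), (100, 10)]:
    print(f"n={n:3d}, r={r:2d}:  VSDR {(n*r + 1)**2:>9,} entries,"
          f"  MSDR {(n + r)**2:>6,} entries")
```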

The equivalence of the relaxations using vector and matrix liftings is proved in Beck [6, Theorem 4.3] by assuming a constraint qualification for the dual programs. We now present our first main result and prove the above-mentioned equivalence without any constraint qualification assumptions. The proof itself is of interest in that we use the chordal property and matrix completions to connect the two relaxations.

Theorem 2.1. As numbers in the extended real line [−∞, +∞], the optimal values of the two relaxations obtained using vector and matrix liftings are equal, i.e.,

    μ_V = μ_M.

Proof. The proof follows by showing that both VSDR and MSDR generate the same optimal value as the following program:

(VSDR′)  μ_V′ = min  Q_0 · ( Σ_{j=1}^r Y_jj ) + 2 ⟨C_0, X⟩ + β_0
               s.t.  Q_i · ( Σ_{j=1}^r Y_jj ) + 2 ⟨C_i, X⟩ + β_i ≤ 0,  i = 1, 2, ..., m,
                     Z_jj = [ 1  x_j^T ; x_j  Y_jj ] ⪰ 0,  j = 1, 2, ..., r,

where x_j, j = 1, 2, ..., r, are the columns of the matrix X, and Y_jj, j = 1, 2, ..., r, represent the corresponding quadratic parts x_j x_j^T. We first show that the optimal values of VSDR′ and VSDR are equal, i.e., that

    μ_V′ = μ_V.   (8)

The equivalence of the two optimal values can be established by showing that, for each program, each feasible solution yields a corresponding feasible solution of the other program with the same objective value.

First, suppose VSDR′ has a feasible solution Z_jj = [ 1  x_j^T ; x_j  Y_jj ], j = 1, 2, ..., r. Construct the partial symmetric matrix

    Z_V = [ 1     x_1^T   x_2^T   ...   x_r^T ]
          [ x_1   Y_11    ?       ...   ?     ]
          [ x_2   ?       Y_22    ...   ?     ]
          [ :     :       :       ·     :     ]
          [ x_r   ?       ?       ...   Y_rr  ],

where the entries denoted by ? are unknown/unspecified. By observation, the unspecified entries of Z_V are not involved in the constraints or in the objective function of VSDR; in other words, assigning values to the unspecified positions does not change the constraint function values or the objective value. Therefore, any positive semidefinite completion of the partial matrix Z_V is feasible for VSDR and has the same objective value. The feasibility of Z_jj, j = 1, 2, ..., r, for VSDR′ implies [ 1  x_j^T ; x_j  Y_jj ] ⪰ 0 for each j = 1, 2, ..., r. So all the specified principal submatrices of Z_V are positive semidefinite; hence, Z_V is a partial positive semidefinite matrix (see Alfakih and Wolkowicz [1], Grone et al. [17], Hogben [18], Johnson [20], Wang et al. [33] for the definitions of partial positive semidefinite matrix, chordal graph, and semidefinite completion). It is not difficult to verify the chordal graph property for the sparsity pattern of Z_V. Therefore, Z_V has a positive semidefinite completion by the classical completion result (Grone et al. [17, Theorem 7]). Thus we have constructed a feasible solution to VSDR with the same objective value as the feasible solution of VSDR′; i.e., this shows that μ_V ≤ μ_V′.

Conversely, suppose VSDR has a feasible solution

    Z_V = [ 1     x_1^T   x_2^T   ...   x_r^T ]
          [ x_1   Y_11    Y_12    ...   Y_1r  ]
          [ x_2   Y_21    Y_22    ...   Y_2r  ]
          [ :     :       :       ·     :     ]
          [ x_r   Y_r1    Y_r2    ...   Y_rr  ].

Now we construct Z_jj = [ 1  x_j^T ; x_j  Y_jj ], j = 1, 2, ..., r. Because each Z_jj is a principal submatrix of the positive semidefinite matrix Z_V, we have Z_jj ⪰ 0. The feasibility of Z_V for VSDR also implies

    ⟨ M^V_{q_i}, Z_V ⟩ ≤ 0,  i = 1, 2, ..., m.   (9)

It is easy to check that

    Q_i · ( Σ_{j=1}^r Y_jj ) + 2 ⟨C_i, X⟩ + β_i = ⟨ M^V_{q_i}, Z_V ⟩ ≤ 0,  i = 1, 2, ..., m,   (10)

where X = [ x_1, x_2, ..., x_r ]. Therefore, Z_jj, j = 1, 2, ..., r, is feasible for VSDR′ and, by (10), generates the same objective value for VSDR′ as Z_V does for VSDR; i.e., this shows that μ_V′ ≤ μ_V. This completes the proof of (8).
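For this particular sparsity pattern, the completion guaranteed by Grone et al. [17] can in fact be exhibited explicitly: filling the unspecified blocks with Y_ij = x_i x_j^T gives Z_V = v v^T + BlkDiag(0, Y_11 − x_1 x_1^T, ..., Y_rr − x_r x_r^T) ⪰ 0, where v = (1; x_1; ...; x_r). A sketch verifying this on random placeholder data (our illustration, not the paper's code):

```python
import numpy as np

# Explicit PSD completion of the arrow-plus-block-diagonal pattern in the
# proof of Theorem 2.1: set the unspecified blocks to x_i x_j^T.
rng = np.random.default_rng(2)
n, r = 3, 4
X = rng.standard_normal((n, r))
Ys = []                                   # feasible blocks Y_jj >= x_j x_j^T
for j in range(r):
    E = rng.standard_normal((n, n))
    Ys.append(np.outer(X[:, j], X[:, j]) + E @ E.T)

Z = np.empty((1 + n * r, 1 + n * r))
Z[0, 0] = 1.0
for j in range(r):
    sj = slice(1 + j * n, 1 + (j + 1) * n)
    Z[0, sj] = Z[sj, 0] = X[:, j]
    for i in range(r):
        si = slice(1 + i * n, 1 + (i + 1) * n)
        Z[si, sj] = Ys[i] if i == j else np.outer(X[:, i], X[:, j])
assert np.linalg.eigvalsh(Z).min() > -1e-9
print("explicit PSD completion verified")
```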

Next we prove that the optimal values of MSDR and VSDR′ are equal, i.e., that

    μ_M = μ_V′.   (11)

The proof is similar to the one for (8). First, suppose VSDR′ has a feasible solution Z_jj = [ 1  x_j^T ; x_j  Y_jj ], j = 1, 2, ..., r. Let X = [ x_1, x_2, ..., x_r ] and Y_M = Σ_{j=1}^r Y_jj, and construct Z_M = [ I_r  X^T ; X  Y_M ]. Then, by Z_jj ⪰ 0, j = 1, 2, ..., r, we have Y_M = Σ_{j=1}^r Y_jj ⪰ Σ_{j=1}^r x_j x_j^T = X X^T, which implies Z_M ⪰ 0 by the Schur complement (Liu [24], Ouellette [27]). Because

    ⟨ M^M_{q_j}, Z_M ⟩ = Q_j · ( Σ_{i=1}^r Y_ii ) + 2 ⟨C_j, X⟩ + β_j,  j = 1, ..., m,   (12)

we get that Z_M is feasible for MSDR and generates the same objective value for MSDR as Z_jj, j = 1, 2, ..., r, does for VSDR′; i.e., μ_M ≤ μ_V′.

Conversely, suppose Z_M = [ I_r  X^T ; X  Y_M ] ⪰ 0 is feasible for MSDR, with X = [ x_1, x_2, ..., x_r ]. Let Y_ii = x_i x_i^T for i = 1, 2, ..., r−1, and let Y_rr = x_r x_r^T + Y_M − X X^T. As a result, Y_ii ⪰ x_i x_i^T for i = 1, 2, ..., r, and Σ_{i=1}^r Y_ii = Y_M. So, constructing Z_jj = [ 1  x_j^T ; x_j  Y_jj ], j = 1, 2, ..., r, it is easy to show that Z_jj is feasible for VSDR′ and generates an objective value equal to the objective value of MSDR at Z_M; i.e., μ_V′ ≤ μ_M. This completes the proof of (11). Combining this with (8) completes the proof of the theorem. □

Remark 2.1. Though the MSDR bound is significantly less expensive, Theorem 2.1 implies that its quality is no weaker than that of VSDR. Thus MSDR is preferable as long as the problem can be formulated as a QMP_1. Moreover, a solution Z_M = [ I_r  X^T ; X  Y_M ] of MSDR can be used to construct the following corresponding solution of VSDR: Z_V(2 : nr+1, 1) = vec(X) and Z_V(2 : nr+1, 2 : nr+1) = Y_V, where Y_V is constructed by semidefinite completion, as in the proof of Theorem 2.1. In addition, the solution from MSDR can also be used in a warm-start strategy applied to a vectorized semidefinite relaxation where additional constraints that do not allow a matrix lifting have been added.

Example 2.1 (SNL problem). The SNL problem is one of the most studied problems in graph realization (e.g., Krislock [22], Krislock and Wolkowicz [23], So and Ye [31]). In this problem one is given a graph with m known points (anchors) a_k ∈ R^d, k = 1, 2, ..., m, and n unknown points (sensors) x_j ∈ R^d, j = 1, 2, ..., n, where d is the embedding dimension. A Euclidean distance d_kj between a_k and x_j, or a distance d_ij between x_i and x_j, is also given for certain pairs of points. The goal is to find estimates of the positions of all the unknown points. One possible formulation of the problem is as follows:

    min   0
    s.t.  trace X^T ( E_ii + E_jj − 2E_ij ) X = d_ij^2,  (i, j) ∈ N_x,
          trace X^T E_ii X − 2 a_j^T X^T e_i + a_j^T a_j = d_ij^2,  (i, j) ∈ N_a,
          X ∈ M^{n×d},   (13)

where N_x and N_a refer to the sets of known sensor–sensor and sensor–anchor distances, respectively. This formulation is a QMP_1, so we can develop both its VSDR and MSDR relaxations:

    (VSDR)  min   0
            s.t.  ⟨ I_d ⊗ (E_ii + E_jj − 2E_ij), Y ⟩ = d_ij^2,  (i, j) ∈ N_x,
                  ⟨ I_d ⊗ E_ii, Y ⟩ − 2 a_j^T x_i + a_j^T a_j = d_ij^2,  (i, j) ∈ N_a,
                  [ 1  vec(X)^T ; vec(X)  Y ] ⪰ 0;

    (MSDR)  min   0
            s.t.  ⟨ E_ii + E_jj − 2E_ij, Y ⟩ = d_ij^2,  (i, j) ∈ N_x,
                  ⟨ E_ii, Y ⟩ − 2 a_j^T x_i + a_j^T a_j = d_ij^2,  (i, j) ∈ N_a,
                  [ I_d  X^T ; X  Y ] ⪰ 0.   (14)

Theorem 2.1 implies that the MSDR relaxation always provides the same lower bound as the VSDR one, although the number of variables for MSDR, O((n+d)^2), is significantly smaller than the number for VSDR, O(n^2 d^2). The quality of the bounds combined with the lower computational complexity explains why MSDR is a favourite relaxation for researchers.
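A quick check of the distance encoding used in (13) (NumPy, with random placeholder positions; our illustration):

```python
import numpy as np

# With E_ij = e_i e_j^T, trace X^T (E_ii + E_jj - 2 E_ij) X = ||x_i - x_j||^2,
# where x_i is the ith row of X (the position of sensor i).
rng = np.random.default_rng(6)
n, d = 5, 2
X = rng.standard_normal((n, d))
E = lambda i, j: np.outer(np.eye(n)[i], np.eye(n)[j])
i, j = 1, 3
lhs = np.trace(X.T @ (E(i, i) + E(j, j) - 2 * E(i, j)) @ X)
assert np.isclose(lhs, np.sum((X[i] - X[j]) ** 2))
print("distance encoding verified")
```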

3. Quadratic matrix programming: Case II. In this section, we move to the main topic of our paper, i.e., the equivalence of the vector and matrix relaxations for the more general QMP_2.

3.1. Equivalence of vector and matrix lifting for QMP_2. We first give the vector-lifting relaxation VSDR_2 for QMP_2. Applying both equations in (4), we get the following:

(VSDR_2)  μ_V2 = min  ⟨ [ β_0        vec(C_0)^T               ]
                        [ vec(C_0)   I_r ⊗ Q_0 + P_0 ⊗ I_n ], Z_V ⟩
                  s.t. ⟨ [ β_j        vec(C_j)^T               ]
                         [ vec(C_j)   I_r ⊗ Q_j + P_j ⊗ I_n ], Z_V ⟩ ≤ 0,  j = 1, 2, ..., m,
                       (Z_V)_{11} = 1,  Z_V ⪰ 0
                       ( Z_V = [ 1  vec(X)^T ; vec(X)  Y_V ] ).   (15)

The matrix Y_V is nr × nr and can be partitioned into exactly r^2 block matrices Y^{ij}_V, i, j = 1, 2, ..., r, where each block is n × n. Applying both equations in (5), we get the smaller matrix-lifting relaxation MSDR_2 for QMP_2. (We add the additional constraint trace Y_1 = trace Y_2 because trace X X^T = trace X^T X.)

(MSDR_2)  μ_M2 = min   Q_0 · Y_1 + P_0 · Y_2 + 2 ⟨C_0, X⟩ + β_0
                 s.t.  Q_j · Y_1 + P_j · Y_2 + 2 ⟨C_j, X⟩ + β_j ≤ 0,  j = 1, 2, ..., m,
                       S^{n+r}_+ ∋ Z_1 = [ I_r  X^T ; X  Y_1 ]   ( ⟺ Y_1 ⪰ X X^T ),
                       S^{n+r}_+ ∋ Z_2 = [ I_n  X ; X^T  Y_2 ]   ( ⟺ Y_2 ⪰ X^T X ),
                       trace Y_1 = trace Y_2.

VSDR_2 has O((nr)^2) variables, whereas MSDR_2 has only O((n+r)^2) variables. The computational advantage of using the smaller problem MSDR_2 motivates a comparison of the corresponding bounds. The main result is interesting and surprising, i.e., that VSDR_2 and MSDR_2 actually generate the same bound under a constraint qualification assumption. In general, the bound from VSDR_2 is at least as strong as the bound from MSDR_2.

Define the block-diag and block-offdiag transformations, respectively, as

    B^0Diag: S^n → S^{rn+1},   B^0Diag(Q) = [ 0  0 ; 0  I_r ⊗ Q ],
    O^0Diag: S^r → S^{rn+1},   O^0Diag(P) = [ 0  0 ; 0  P ⊗ I_n ].

(See Zhao et al. [36] for the r = n case.) It is clear that Q, P ⪰ 0 implies both B^0Diag(Q) ⪰ 0 and O^0Diag(P) ⪰ 0. The adjoints b^0diag and o^0diag are, respectively,

    Y_1 = b^0diag(Z_V) = Σ_{j=1}^r Y^{jj}_V,
    Y_2 = o^0diag(Z_V) = ( trace Y^{ij}_V )_{i,j=1,2,...,r}.   (16)
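The adjoints in (16) are straightforward to implement; the sketch below (NumPy, with random symmetric placeholder data; our illustration) verifies the inner-product identity that is used in (19) below:

```python
import numpy as np

# b0diag sums the diagonal n x n blocks of Y_V; o0diag collects blockwise
# traces into an r x r matrix.  We verify
#   <B0Diag(Q) + O0Diag(P), Z_V> = <Q, b0diag(Z_V)> + <P, o0diag(Z_V)>
# on the Y_V part of Z_V.
rng = np.random.default_rng(3)
n, r = 3, 4
YV = rng.standard_normal((n * r, n * r)); YV = (YV + YV.T) / 2
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
P = rng.standard_normal((r, r)); P = (P + P.T) / 2

def b0diag(YV):
    return sum(YV[j*n:(j+1)*n, j*n:(j+1)*n] for j in range(r))

def o0diag(YV):
    return np.array([[np.trace(YV[i*n:(i+1)*n, j*n:(j+1)*n])
                      for j in range(r)] for i in range(r)])

lhs = np.sum((np.kron(np.eye(r), Q) + np.kron(P, np.eye(n))) * YV)
rhs = np.sum(Q * b0diag(YV)) + np.sum(P * o0diag(YV))
assert np.isclose(lhs, rhs)
print("adjoint identity for (16)/(19) verified")
```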

Lemma 3.1. Let X ∈ M^{n×r} be given, and suppose that one of the following two conditions holds.
(i) Y_V is given and Z_V is defined as in VSDR_2, and the pair (Z_1, Z_2) in MSDR_2 is constructed as in (16).
(ii) (Y_1, Y_2) is given with trace Y_1 = trace Y_2, Z_1 and Z_2 are defined as in MSDR_2, and Y_V, Z_V for VSDR_2 are constructed from (Y_1, Y_2) as

    Y_V = [ V_1                   (1/n)(Y_2)_{12} I_n   ...   (1/n)(Y_2)_{1r} I_n ]
          [ (1/n)(Y_2)_{12} I_n   V_2                   ...   (1/n)(Y_2)_{2r} I_n ]
          [ :                     :                     ·     :                   ]
          [ (1/n)(Y_2)_{1r} I_n   (1/n)(Y_2)_{2r} I_n   ...   V_r                 ]   (17)

with

    V_i ∈ S^n,   Σ_{i=1}^r V_i = Y_1,   trace V_i = (Y_2)_{ii},  i = 1, ..., r.   (18)

Then Z_V satisfies the linear inequality constraints in VSDR_2 if, and only if, (Z_1, Z_2) satisfies the linear inequality constraints in MSDR_2. Moreover, the values of the objective functions at the corresponding variables are equal.

Proof. (i) Note that

    ⟨ [ β  vec(C)^T ; vec(C)  I_r ⊗ Q + P ⊗ I_n ], Z_V ⟩
        = β + 2 ⟨C, X⟩ + ( B^0Diag(Q) + O^0Diag(P) ) · Z_V
        = β + 2 ⟨C, X⟩ + Q · b^0diag(Z_V) + P · o^0diag(Z_V)
        = β + 2 ⟨C, X⟩ + Q · Y_1 + P · Y_2,  by (16).   (19)

(ii) Conversely, we note that trace Y_1 = trace Y_2 is a constraint in MSDR_2, and Z_V constructed using (17) satisfies (16). In addition, the n + r assignment-type constraints in (18) on the rn variables in the diagonals of the V_i, i = 1, ..., r, can always be solved. We can now apply the argument in (19) again. □

Lemma 3.1 guarantees the equivalence of the feasible sets of the two relaxations with respect to the linear inequality constraints and the objective function. However, this ignores the semidefinite constraints. The following result partially addresses this deficiency.

Corollary 3.1. If the feasible set of VSDR_2 is nonempty, then the feasible set of MSDR_2 is also nonempty, and

    μ_M2 ≤ μ_V2.   (20)

Proof. Suppose Z_V = [ 1  vec(X)^T ; vec(X)  Y_V ] is feasible for VSDR_2. Recall that the matrix Y_V is nr × nr and can be partitioned into exactly r^2 block matrices Y^{ij}_V, i, j = 1, 2, ..., r. As above, we set Y_1, Y_2 following (16), and we set Z_1 = [ I_r  X^T ; X  Y_1 ], Z_2 = [ I_n  X ; X^T  Y_2 ]. Denote the jth column of X by X_j, j = 1, 2, ..., r. Now Z_V ⪰ 0 implies Y^{jj}_V − X_j X_j^T ⪰ 0. Therefore, Σ_{j=1}^r Y^{jj}_V − Σ_{j=1}^r X_j X_j^T = Y_1 − X X^T ⪰ 0, i.e., Z_1 ⪰ 0. Similarly, denote the kth row of X by X^k, k = 1, 2, ..., n. Let (Y^{ij}_V)_{kk} denote the kth diagonal entry of Y^{ij}_V, and define the r × r matrix Y^k = ( (Y^{ij}_V)_{kk} )_{i,j=1,2,...,r}. Then Z_V ⪰ 0 implies Y^k − (X^k)^T X^k ⪰ 0. Therefore, Σ_{k=1}^n Y^k − Σ_{k=1}^n (X^k)^T X^k = Y_2 − X^T X ⪰ 0, i.e., Z_2 ⪰ 0. The proof now follows from Lemma 3.1. □

Corollary 3.1 holds because MSDR_2 only restricts sums of certain principal submatrices of Z_V (namely b^0diag(Z_V) and o^0diag(Z_V)) to be positive semidefinite, whereas VSDR_2 restricts the whole matrix Z_V to be positive semidefinite. So the semidefinite constraints in MSDR_2 are not as strong as those in VSDR_2. Moreover, the entries of Y_V involved in b^0diag and o^0diag form a partial semidefinite matrix whose sparsity pattern is not chordal and that does not necessarily admit a semidefinite completion. Therefore, the semidefinite completion technique we used to prove the equivalence between VSDR and MSDR is not applicable here. Instead, we prove the equivalence of the dual programs. It is well known that the primal optimal value equals the dual optimal value when the generalized Slater condition holds (Jeyakumar and Wolkowicz [19], Rockafellar [29]), and in that case we can then conclude that VSDR_2 and MSDR_2 generate the same bound.
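The construction in the proof of Corollary 3.1 is easy to test numerically. The sketch below (NumPy, with a random feasible Z_V as placeholder data; our illustration) recovers (Y_1, Y_2) via (16) and checks that Z_1, Z_2 ⪰ 0 and trace Y_1 = trace Y_2:

```python
import numpy as np

# From a random PSD Z_V with (Z_V)_11 = 1, build Y_1 = b0diag(Y_V) and
# Y_2 = o0diag(Y_V) and check the MSDR_2 semidefinite blocks.
rng = np.random.default_rng(7)
n, r = 3, 4
G = rng.standard_normal((1 + n * r, 1 + n * r + 2))
ZV = G @ G.T
ZV /= ZV[0, 0]                                  # normalize (Z_V)_11 = 1
x, YV = ZV[1:, 0], ZV[1:, 1:]
X = x.reshape((n, r), order="F")

Y1 = sum(YV[j*n:(j+1)*n, j*n:(j+1)*n] for j in range(r))
Y2 = np.array([[np.trace(YV[i*n:(i+1)*n, j*n:(j+1)*n])
                for j in range(r)] for i in range(r)])
Z1 = np.block([[np.eye(r), X.T], [X, Y1]])
Z2 = np.block([[np.eye(n), X], [X.T, Y2]])
assert min(np.linalg.eigvalsh(Z1).min(), np.linalg.eigvalsh(Z2).min()) > -1e-9
assert np.isclose(np.trace(Y1), np.trace(Y2))
print("MSDR_2 blocks are PSD and the traces match")
```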

Definition 3.1. For λ ∈ R^m_+, let β(λ) = β_0 + Σ_{j=1}^m λ_j β_j, and let C(λ), Q(λ), P(λ) be defined similarly.

After substituting, we see that the dual of VSDR_2 is equivalent to

(DVSDR_2)  max  β(λ) + θ
           s.t. [ −θ           vec(C(λ))^T               ]
                [ vec(C(λ))    I_r ⊗ Q(λ) + P(λ) ⊗ I_n ] ⪰ 0,
                λ ∈ R^m_+,  θ ∈ R.

The dual of MSDR_2 is

(DMSDR_2)  max  β(λ) − trace S_1 − trace S_2
           s.t. [ S_1    R_1^T         ]
                [ R_1    Q(λ) − t I_n  ] ⪰ 0,
                [ S_2    R_2           ]
                [ R_2^T  P(λ) + t I_r  ] ⪰ 0,
                R_1 + R_2 = C(λ),
                λ ∈ R^m_+,  S_1 ∈ S^r,  S_2 ∈ S^n,  R_1, R_2 ∈ M^{n×r},  t ∈ R.

The Slater condition for DVSDR_2 is equivalent to the following:

    ∃ λ ∈ R^m_+  such that  I_r ⊗ Q(λ) + P(λ) ⊗ I_n ≻ 0.   (21)

The corresponding constraint qualification condition for DMSDR_2 is

    ∃ t ∈ R, λ ∈ R^m_+  such that  Q(λ) − t I_n ≻ 0,  P(λ) + t I_r ≻ 0.   (22)

These two conditions are equivalent because of the following lemma, which will also be used in our subsequent analysis.

Lemma 3.2. Let Q ∈ S^n and P ∈ S^r. Then

    I_r ⊗ Q + P ⊗ I_n ≻ 0  (resp. ⪰ 0)

if, and only if,

    there exists t such that Q − t I_n ≻ 0 and P + t I_r ≻ 0  (resp. ⪰ 0).

Proof. Let σ_i(Q), i = 1, 2, ..., n, and μ_j(P), j = 1, 2, ..., r, denote the eigenvalues of Q and P, respectively. Then we get the equivalences

    I_r ⊗ Q + P ⊗ I_n ≻ 0
        ⟺ σ_i(Q) + μ_j(P) > 0 for all i, j
        ⟺ min_i σ_i(Q) + min_j μ_j(P) > 0
        ⟺ min_i σ_i(Q) − t > 0 and min_j μ_j(P) + t > 0 for some t.

The equivalences hold if the strict inequalities ≻ 0 and > are replaced by the inequalities ⪰ 0 and ≥, respectively. □
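A numerical illustration of Lemma 3.2 (NumPy, random placeholder data; our sketch): the eigenvalues of the Kronecker sum are exactly the sums σ_i(Q) + μ_j(P), and any t strictly between −min_j μ_j and min_i σ_i is a valid splitting witness.

```python
import numpy as np

# Eigenvalue splitting for I_r (x) Q + P (x) I_n.
rng = np.random.default_rng(4)
n, r = 4, 3
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + 0.5 * np.eye(n)   # PD
P = rng.standard_normal((r, r)); P = P @ P.T - 0.1 * np.eye(r)   # maybe indefinite

K = np.kron(np.eye(r), Q) + np.kron(P, np.eye(n))
sig, mu = np.linalg.eigvalsh(Q), np.linalg.eigvalsh(P)
sums = np.add.outer(mu, sig).ravel()                   # all sigma_i + mu_j
assert np.allclose(np.sort(np.linalg.eigvalsh(K)), np.sort(sums))
if sig.min() + mu.min() > 0:                           # K is PD
    t = (sig.min() - mu.min()) / 2                     # -min mu < t < min sig
    assert np.linalg.eigvalsh(Q - t * np.eye(n)).min() > 0
    assert np.linalg.eigvalsh(P + t * np.eye(r)).min() > 0
print("Kronecker-sum eigenvalue splitting verified")
```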

Now we state the main theorem of this paper on the equivalence of the two SDP relaxations for QMP_2.

Theorem 3.1. Suppose that DVSDR_2 is strictly feasible. As numbers in the extended real line [−∞, +∞], the optimal values of the two relaxations VSDR_2 and MSDR_2, obtained using vector and matrix liftings, are equal; i.e., μ_V2 = μ_M2.

Proof of (main) Theorem 3.1. Because DVSDR_2 is strictly feasible, Lemma 3.2 implies that both dual programs satisfy constraint qualifications. Therefore, both programs satisfy strong duality (see, e.g., Rockafellar [29]); i.e., both have zero duality gaps, and the optimal values of DVSDR_2 and DMSDR_2 are μ_V2 and μ_M2, respectively. Now assume that

    λ̄ is feasible for DVSDR_2.   (23)

Lemma 3.2 implies that λ̄ is also feasible for DMSDR_2, i.e., that there exists t such that

    Q = Q(λ̄) − t I_n ⪰ 0,    P = P(λ̄) + t I_r ⪰ 0.   (24)

(To simplify notation, we use Q, P to denote these dual slack matrices.) The spectral decompositions of Q and P can be expressed as

    Q = V [ Λ_Q^+  0 ; 0  0 ] V^T,   V = [ V_1  V_2 ],
    P = U [ Λ_P^+  0 ; 0  0 ] U^T,   U = [ U_1  U_2 ],

where the columns of the submatrices U_1 and V_1 form orthonormal bases that span the range spaces R(P) and R(Q), respectively, and the columns of U_2 and V_2 span the orthogonal complements R(P)^⊥ and R(Q)^⊥, respectively. Λ_Q^+ is a diagonal matrix whose diagonal entries are the nonzero eigenvalues of Q, and Λ_P^+ is defined similarly. Let μ_i and σ_j denote the eigenvalues of P and Q, respectively. We similarly simplify the notation

    C = C(λ̄),   c = vec(C),   β = β(λ̄).   (25)

Let A† denote the Moore–Penrose pseudoinverse of a matrix A (e.g., Ben-Israel and Greville [7]). The following lemma allows us to express μ_V2 as a function of Q, P, c, and β.

Lemma 3.3. Let λ̄, P, Q, c, β be defined as above in (23), (24), (25). Let

    θ̄ = −c^T ( I_r ⊗ Q + P ⊗ I_n )† c.   (26)

Then (λ̄, θ̄) is a feasible pair for DVSDR_2, and for any pair (λ̄, θ) feasible for DVSDR_2 we have θ + β ≤ θ̄ + β.

Proof. A general quadratic function f(x) = x^T Q x + 2 c^T x + β is nonnegative for all x ∈ R^n if, and only if, the matrix [ β  c^T ; c  Q ] ⪰ 0 (e.g., Ben-Tal and Nemirovski [8, p. 163]). Therefore,

    [ −θ  c^T ; c  I_r ⊗ Q + P ⊗ I_n ] ⪰ 0   (27)

if, and only if,

    x^T ( I_r ⊗ Q + P ⊗ I_n ) x + 2 c^T x − θ ≥ 0,  for all x ∈ R^{nr}.

For a fixed λ̄, this is further equivalent to

    θ ≤ min_x  x^T ( I_r ⊗ Q + P ⊗ I_n ) x + 2 c^T x = −c^T ( I_r ⊗ Q + P ⊗ I_n )† c.

Therefore, we can choose θ̄ as in (26). □

To further explore the structure of (26), we note that c can be decomposed as

    c = (U_1 ⊗ V_1) r_11 + (U_1 ⊗ V_2) r_12 + (U_2 ⊗ V_1) r_21 + (U_2 ⊗ V_2) r_22.   (28)

The validity of such an expression follows from the fact that the columns of U_1 ⊗ V_1, U_1 ⊗ V_2, U_2 ⊗ V_1, and U_2 ⊗ V_2 together form an orthonormal basis of R^{nr}. Furthermore, dual feasibility for DVSDR_2 includes the constraint [ −θ  c^T ; c  I_r ⊗ Q + P ⊗ I_n ] ⪰ 0, which implies c ∈ R( I_r ⊗ Q + P ⊗ I_n ). This range space is spanned by the columns in the matrices U_1 ⊗ V_1, U_2 ⊗ V_1, and U_1 ⊗ V_2, which implies that c has no component in R(U_2 ⊗ V_2); i.e., r_22 = 0 in (28).

The following lemma provides a key observation for the connection between the two dual programs. It shows that if c lies in R(U_1 ⊗ V_1), then the θ̄ component of the objective value of DVSDR_2 in Lemma 3.3 has a specific representation.

Lemma 3.4. If c ∈ R(U_1 ⊗ V_1), then the θ̄ component of the objective value of DVSDR_2 in Lemma 3.3 satisfies

    θ̄ = −c^T ( I_r ⊗ Q + P ⊗ I_n )† c
       = max   −vec(R_1)^T (I_r ⊗ Q)† vec(R_1) − vec(R_2)^T (P ⊗ I_n)† vec(R_2)   (29)
         s.t.  R_1 + R_2 = C,  R_1, R_2 ∈ M^{n×r}.

Proof. We can eliminate R_2 and express the maximization problem on the right-hand side of the equality as max_{R_1} φ(R_1), where

    φ(R_1) = −vec(R_1)^T ( (I_r ⊗ Q)† + (P ⊗ I_n)† ) vec(R_1) + 2 c^T (P ⊗ I_n)† vec(R_1) − c^T (P ⊗ I_n)† c.   (30)

Because P and Q are both positive semidefinite, we get (I_r ⊗ Q)† ⪰ 0 and (P ⊗ I_n)† ⪰ 0 and, therefore, (I_r ⊗ Q)† + (P ⊗ I_n)† ⪰ 0. Hence φ is concave. It is not difficult to verify that (P ⊗ I_n)† c ∈ R( (I_r ⊗ Q)† + (P ⊗ I_n)† ). Therefore, the maximum of the concave quadratic function φ is finite and attained at R̄_1 with

    vec(R̄_1) = ( (I_r ⊗ Q)† + (P ⊗ I_n)† )† (P ⊗ I_n)† c = (I_r ⊗ Q)( I_r ⊗ Q + P ⊗ I_n )† c,   (31)

which gives

    φ(R̄_1) = c^T (P ⊗ I_n)† ( (I_r ⊗ Q)† + (P ⊗ I_n)† )† (P ⊗ I_n)† c − c^T (P ⊗ I_n)† c
            = −c^T (U_1 ⊗ V_1) Λ̂ (U_1 ⊗ V_1)^T c,

where Λ̂ is a diagonal matrix (in the basis U ⊗ V) whose diagonal entries can be calculated as

    Λ̂_(i,j) = 1/(μ_i + σ_j)   if μ_i > 0 and σ_j > 0;
    Λ̂_(i,j) = 0               if μ_i = 0 or σ_j = 0.   (32)

We now compare φ(R̄_1) with −c^T ( I_r ⊗ Q + P ⊗ I_n )† c. Let

    Λ = ( I_r ⊗ Q + P ⊗ I_n )†
      = ( (U ⊗ V) ( I^+_P ⊗ Λ_Q + Λ_P ⊗ I^+_Q + I^0_P ⊗ Λ_Q + Λ_P ⊗ I^0_Q ) (U ⊗ V)^T )†,

where the matrix I^+_P (resp. I^0_P) is r × r, diagonal, and zero except for the ith diagonal entries, which equal one if μ_i > 0 (resp. μ_i = 0); the matrices I^+_Q and I^0_Q are defined in the same way from the σ_j. Hence the matrix Λ is also diagonal in this basis. Its diagonal entries can be calculated as

    Λ_(i,j) = 1/(μ_i + σ_j)   if μ_i > 0, σ_j > 0;
    Λ_(i,j) = 1/μ_i           if μ_i > 0, σ_j = 0;
    Λ_(i,j) = 1/σ_j           if μ_i = 0, σ_j > 0;
    Λ_(i,j) = 0               if μ_i = 0, σ_j = 0.   (33)

By assumption, c = (U_1 ⊗ V_1) r_11 for some r_11 of appropriate size. Note that (U_1 ⊗ V_1) r_11 is orthogonal to the columns in U_2 ⊗ V_1 and U_1 ⊗ V_2. Thus, only the part ( I^+_P ⊗ Λ_Q + Λ_P ⊗ I^+_Q )† of the diagonal matrix Λ is involved in the computation, i.e.,

    c^T ( I_r ⊗ Q + P ⊗ I_n )† c
      = r_11^T (U_1 ⊗ V_1)^T (U ⊗ V) Λ (U ⊗ V)^T (U_1 ⊗ V_1) r_11
      = r_11^T (U_1 ⊗ V_1)^T (U ⊗ V) ( I^+_P ⊗ Λ_Q + Λ_P ⊗ I^+_Q )† (U ⊗ V)^T (U_1 ⊗ V_1) r_11
      = r_11^T (U_1 ⊗ V_1)^T (U ⊗ V) Λ̂ (U ⊗ V)^T (U_1 ⊗ V_1) r_11
      = −φ(R̄_1).   (34)

This proves (29). □
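The splitting in Lemma 3.4 can be checked numerically. The sketch below (NumPy; random placeholder data, and using the closed-form maximizer reconstructed in (31), which is our reading of the garbled original) confirms that the single pseudoinverse form equals the split form:

```python
import numpy as np

# With Q, P PSD and c in R(U1 (x) V1), check
#   -c^T (I_r (x) Q + P (x) I_n)^+ c
#     = -r1^T (I_r (x) Q)^+ r1 - r2^T (P (x) I_n)^+ r2
# at r1 = (I_r (x) Q)(I_r (x) Q + P (x) I_n)^+ c, r2 = c - r1.
rng = np.random.default_rng(8)
n, r, k = 4, 3, 2
V1 = np.linalg.qr(rng.standard_normal((n, k)))[0]      # range of Q
U1 = np.linalg.qr(rng.standard_normal((r, k)))[0]      # range of P
Q = V1 @ np.diag(rng.uniform(1, 2, k)) @ V1.T          # PSD, rank k
P = U1 @ np.diag(rng.uniform(1, 2, k)) @ U1.T          # PSD, rank k
c = np.kron(U1, V1) @ rng.standard_normal(k * k)       # c in R(U1 (x) V1)

K = np.kron(np.eye(r), Q) + np.kron(P, np.eye(n))
theta = -c @ np.linalg.pinv(K) @ c
r1 = np.kron(np.eye(r), Q) @ np.linalg.pinv(K) @ c
r2 = c - r1
split = -(r1 @ np.linalg.pinv(np.kron(np.eye(r), Q)) @ r1
          + r2 @ np.linalg.pinv(np.kron(P, np.eye(n))) @ r2)
assert np.isclose(theta, split)
print("pseudoinverse splitting (29)/(44) verified")
```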

For the given feasible λ̄ of Lemma 3.3, we will construct a feasible solution for DMSDR_2 that generates the same objective value. Using Lemma 3.2, we choose t satisfying Q = Q(λ̄) − t I_n ⪰ 0 and P = P(λ̄) + t I_r ⪰ 0. We can now find a lower bound for the optimal value of DMSDR_2.

Proposition 3.1. Let λ̄, t, P, Q, C, c, β be as above. Let R̄_1 denote the maximizer of φ(R_1) in the proof of Lemma 3.4, applied to the component c̄ = (U_1 ⊗ V_1) r_11 of c, and let R̄_2 be defined by vec(R̄_2) = c̄ − vec(R̄_1). Construct R_1*, R_2* as follows:

    vec(R_1*) = vec(R̄_1) + (U_2 ⊗ V_1) r_21,    vec(R_2*) = vec(R̄_2) + (U_1 ⊗ V_2) r_12,

so that R_1* + R_2* = C. Then we obtain a lower bound for the optimal value of DMSDR_2:

    μ_M2 ≥ −vec(R_1*)^T (I_r ⊗ Q)† vec(R_1*) − vec(R_2*)^T (P ⊗ I_n)† vec(R_2*) + β.   (35)

Proof. Consider the subproblem that maximizes the objective with λ̄, t, R_1*, and R_2* fixed as above:

    max   β − trace S_1 − trace S_2
    s.t.  [ S_1  (R_1*)^T ; R_1*  Q ] ⪰ 0,
          [ S_2  R_2* ; (R_2*)^T  P ] ⪰ 0,
          S_1 ∈ S^r,  S_2 ∈ S^n.   (36)

Because (λ̄, t, R_1*, R_2*) is feasible for DMSDR_2, this subproblem generates a lower bound for μ_M2. We now invoke a result from Beck [6]: there exists a symmetric matrix S with trace S ≤ α such that [ S  C^T ; C  Q ] ⪰ 0 if, and only if, f(X) = trace X^T Q X + 2 trace C^T X + α ≥ 0 for all X ∈ M^{n×r}; equivalently,

    α ≥ −min_{X ∈ M^{n×r}} ( trace X^T Q X + 2 trace C^T X ) = trace C^T Q† C.

Therefore, the subproblem (36) can be reformulated as

    max   β − trace S_1 − trace S_2
    s.t.  trace S_1 ≥ trace (R_1*)^T Q† R_1*,
          trace S_2 ≥ trace R_2* P† (R_2*)^T,
          S_1 ∈ S^r,  S_2 ∈ S^n.   (37)

Hence, the optimal value of (37) has an explicit expression and provides the lower bound (35) for DMSDR_2. □

With all the above preparations, we now complete the proof of (main) Theorem 3.1.

Proof of Theorem 3.1 (continued). We now compare μ_V2, obtained by using θ̄ from the expression (26), with μ_M2, via the lower bound expression (35). Writing c in the form (28) (with r_22 = 0), we get

    μ_V2 = −( (U_1⊗V_1)r_11 + (U_1⊗V_2)r_12 + (U_2⊗V_1)r_21 )^T ( I_r ⊗ Q + P ⊗ I_n )†
            ( (U_1⊗V_1)r_11 + (U_1⊗V_2)r_12 + (U_2⊗V_1)r_21 ) + β.   (38)

Consider a cross-term such as r_11^T (U_1⊗V_1)^T ( I_r ⊗ Q + P ⊗ I_n )† (U_1⊗V_2) r_12. Because (U_1⊗V_1) r_11 is orthogonal to (U_1⊗V_2) r_12 and ( I_r ⊗ Q + P ⊗ I_n )† is diagonalized by the basis built from U_1, U_2, V_1, V_2, this term is actually zero. Similarly, we can verify that the other cross-terms equal zero. As a result, only the following sum of three quadratic terms remains, which we label C1, C2, C3, respectively:

    μ_V2 = −r_11^T (U_1⊗V_1)^T ( I_r⊗Q + P⊗I_n )† (U_1⊗V_1) r_11
           −r_21^T (U_2⊗V_1)^T ( I_r⊗Q + P⊗I_n )† (U_2⊗V_1) r_21
           −r_12^T (U_1⊗V_2)^T ( I_r⊗Q + P⊗I_n )† (U_1⊗V_2) r_12 + β
         = C1 + C2 + C3 + β.   (39)

We can also expand the lower bound for μ_M2 based on (35):

    μ_M2 ≥ −( vec(R̄_1) + (U_2⊗V_1)r_21 )^T (I_r⊗Q)† ( vec(R̄_1) + (U_2⊗V_1)r_21 )
           −( vec(R̄_2) + (U_1⊗V_2)r_12 )^T (P⊗I_n)† ( vec(R̄_2) + (U_1⊗V_2)r_12 ) + β.   (40)

Because vec(R̄_1) and vec(R̄_2) are both in R(U_1⊗V_1), which is orthogonal to both (U_1⊗V_2) r_12 and (U_2⊗V_1) r_21, and both matrices (I_r⊗Q)† and (P⊗I_n)† are diagonalized by the same basis, we conclude that the cross-terms, such as vec(R̄_1)^T (I_r⊗Q)† (U_2⊗V_1) r_21, all equal zero. Therefore, the lower bound for μ_M2 can be reformulated as

    μ_M2 ≥ −( vec(R̄_1)^T (I_r⊗Q)† vec(R̄_1) + vec(R̄_2)^T (P⊗I_n)† vec(R̄_2) )
           − r_21^T (U_2⊗V_1)^T (I_r⊗Q)† (U_2⊗V_1) r_21
           − r_12^T (U_1⊗V_2)^T (P⊗I_n)† (U_1⊗V_2) r_12 + β.   (41)

As above, denote the first three quadratic terms by T1, T2, and T3, respectively. We will show that the terms C1, C2, and C3 equal T1, T2, and T3, respectively. The equality between C1 and T1 follows from Lemma 3.4. For the other terms, consider C2 first.

Write ( I_r ⊗ Q + P ⊗ I_n )† as the diagonal matrix Λ given by (33). Note that (U_2 ⊗ V_1) r_21 is orthogonal to the columns in U_1 ⊗ V_1 and U_1 ⊗ V_2. Thus, only the part ( I^0_P ⊗ Λ_Q )† of the diagonal matrix Λ is involved in computing the term C2, i.e.,

    r_21^T (U_2⊗V_1)^T ( I_r⊗Q + P⊗I_n )† (U_2⊗V_1) r_21
      = r_21^T (U_2⊗V_1)^T (U⊗V) ( I^0_P ⊗ Λ_Q )† (U⊗V)^T (U_2⊗V_1) r_21.   (42)

Similarly, because (U_2 ⊗ V_1) r_21 is orthogonal to the eigenvectors in U_1 ⊗ V_1 and U_1 ⊗ V_2, we have

    r_21^T (U_2⊗V_1)^T (I_r⊗Q)† (U_2⊗V_1) r_21
      = r_21^T (U_2⊗V_1)^T (U⊗V) ( I^+_P ⊗ Λ_Q + I^0_P ⊗ Λ_Q )† (U⊗V)^T (U_2⊗V_1) r_21
      = r_21^T (U_2⊗V_1)^T (U⊗V) ( I^0_P ⊗ Λ_Q )† (U⊗V)^T (U_2⊗V_1) r_21.   (43)

By (42) and (43), we conclude that the term C2 equals T2. The same argument proves the equality of C3 and T3. Therefore, we conclude that

    −c^T ( I_r⊗Q + P⊗I_n )† c + β = −vec(R_1*)^T (I_r⊗Q)† vec(R_1*) − vec(R_2*)^T (P⊗I_n)† vec(R_2*) + β.   (44)

Then, by (26) and (35), we have established μ_V2 ≤ μ_M2. The other direction has been proved in Corollary 3.1. □

Remark 3.1. Our result implies that the dual optimal solution λ* for DVSDR_2 coincides with that for DMSDR_2. Therefore, if a QCQP contains constraints that cannot be formulated into DMSDR_2, we can first solve the corresponding DMSDR_2 without these constraints, and then use the dual solution in a warm-start strategy and continue solving the original QCQP.

3.1.2. Unbalanced orthogonal Procrustes problem.
Example 3.1. In the unbalanced orthogonal Procrustes problem (Eldén and Park [15]) one seeks to solve the minimization problem

    min   ‖AX − B‖_F^2
    s.t.  X^T X = I_r,  X ∈ M^{n×r},   (45)

where A ∈ M^{n×n}, B ∈ M^{n×r}, and n ≥ r. The balanced case n = r can be solved efficiently (Schönemann [30]), and this special case also admits a QMP_1 relaxation (Beck [6]). Note that the unbalanced case is a typical QMP_2. Its VSDR_2 can be written as

    min   trace( (I_r ⊗ A^T A) Y ) − trace( 2 B^T A X )
    s.t.  trace( (E_ij ⊗ I_n) Y ) = δ_ij,  i, j = 1, 2, ..., r,
          [ 1  vec(X)^T ; vec(X)  Y ] ⪰ 0.   (46)

It is easy to check that the SDP (46) is feasible and its dual is strictly feasible, which implies the equivalence between MSDR_2 and VSDR_2. Thus we can obtain a nontrivial lower bound from its MSDR_2 relaxation:

    min   trace( A^T A Y_1 ) − trace( 2 B^T A X )
    s.t.  [ I_r  X^T ; X  Y_1 ] ⪰ 0,
          [ I_n  X ; X^T  I_r ] ⪰ 0,
          trace Y_1 = r.   (47)

(Here Y_2 = X^T X = I_r is fixed by the orthogonality constraint, so the trace constraint becomes trace Y_1 = r.) Preliminary computational experiments appear in Table 1. The matrices in the five instances are randomly generated, and the relaxations are solved using SeDuMi 1.1 and a 32-bit version of Matlab R2009a on a laptop running Windows XP, with a 2.53 GHz Intel Core 2 Duo processor and 3 GB of RAM. Table 1 illustrates the computational advantage of MSDR_2 over VSDR_2.
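For illustration, here is a sketch of (47) in CVXPY/Python with the SCS solver (our setup, not the SeDuMi/Matlab environment used for Table 1; A, B are random placeholders, and SCS is assumed installed):

```python
import numpy as np
import cvxpy as cp

# MSDR_2 relaxation (47) of the unbalanced orthogonal Procrustes problem.
rng = np.random.default_rng(5)
n, r = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, r))

Z1 = cp.Variable((r + n, r + n), PSD=True)   # blocked [I_r, X^T; X, Y1]
Z2 = cp.Variable((n + r, n + r), PSD=True)   # blocked [I_n, X; X^T, I_r]
X, Y1 = Z1[r:, :r], Z1[r:, r:]
cons = [Z1[:r, :r] == np.eye(r),
        Z2[:n, :n] == np.eye(n),
        Z2[n:, n:] == np.eye(r),
        Z2[:n, n:] == X,
        cp.trace(Y1) == r]
obj = cp.Minimize(cp.trace(A.T @ A @ Y1) - 2 * cp.trace(B.T @ A @ X))
val = cp.Problem(obj, cons).solve(solver=cp.SCS)
# Adding the constant ||B||_F^2 gives a bound on the original objective.
print("MSDR_2 lower bound on ||AX - B||_F^2:", val + np.trace(B.T @ B))
```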

Table 1. Solution times (CPU seconds) of the two SDP relaxations on the orthogonal Procrustes problem.

    Problem size (n, r)    VSDR_2 (CPU sec)    MSDR_2 (CPU sec)

3.2. An extension to QMP with conic constraints. Some QMP problems include conic constraints such as X^T X ⪯ S, where S is a given positive semidefinite matrix. We can prove that the corresponding MSDR_2 and VSDR_2 are still equivalent for such problems. Consider the following general form of QMP_2 with conic constraints:

(QMP_3)   min   trace X^T Q_0 X + trace X P_0 X^T + 2 trace C_0^T X + trace H_0^T Z
          s.t.  trace X^T Q_j X + trace X P_j X^T + 2 trace C_j^T X + trace H_j^T Z + β_j ≤ 0,  j = 1, 2, ..., m,
                X ∈ M^{n×r},  Z ∈ K,

where K can be a direct sum of convex cones (e.g., second-order cones, semidefinite cones). Note that the constraint X^T X ⪯ S can be formulated as

    trace( X^T X E_ij ) + trace( Z E_ij ) = S_ij,  Z ⪰ 0.   (48)

The formulations of VSDR_2 and MSDR_2 for QMP_3 are the same as for QMP_2 except for the additional term ⟨H_j, Z⟩ and the conic constraint Z ∈ K. Correspondingly, the dual programs DVSDR_2 and DMSDR_2 for QMP_3 both gain the additional constraint

    H_0 + Σ_{j=1}^m λ_j H_j ∈ K*,   (49)

where K* denotes the dual cone of K. If a dual solution λ̄ is feasible for DVSDR_2, then it satisfies the constraint (49) in both DVSDR_2 and DMSDR_2. Therefore, we can follow the proof of Theorem 3.1 and construct a feasible solution for DMSDR_2 with λ̄ that generates the same objective value as μ_V2. This yields the following.

Corollary 3.2. Assume that VSDR_2 for QMP_3 is strictly feasible and its dual DVSDR_2 is feasible. Then DVSDR_2 and DMSDR_2 both attain their optimum at the same λ* and generate the same optimal value μ_V2 = μ_M2.

3.3. Graph partition problem.
Example 3.2 (GPP). GPP is an important combinatorial optimization problem with broad applications in network design and floor planning (Alpert and Kahng [3], Povh [28]). Given a graph with n vertices, the problem is to find an r-partition (S_1, S_2, ..., S_r) of the vertex set such that |S_i| = m_i, with m = (m_i), i = 1, ..., r, the given cardinalities of the subsets, and such that the total number of edges across different subsets is minimized. Define the matrix X ∈ M^{n×r} to be the assignment of vertices; i.e., X_ij = 1 if vertex i is assigned to subset j, and X_ij = 0 otherwise. With L the Laplacian matrix of the graph, the GPP can be formulated as the optimization problem

    μ_GPP = min   (1/2) trace X^T L X
            s.t.  X^T X = Diag(m),
                  diag( X X^T ) = e_n,
                  X ≥ 0.   (50)

This formulation involves quadratic matrix constraints of both types: trace X^T E_ii X = 1, i = 1, ..., n, and trace X E_ij X^T = m_i δ_ij, i, j = 1, ..., r. Thus it can be formulated as a QMP_2 but not as a QMP_1. Anstreicher and Wolkowicz [4] proposed a semidefinite programming relaxation with O(n^4) variables and proved that its optimal value equals the so-called Donath–Hoffman lower bound (Donath and Hoffman [14]).
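A quick check of the quadratic-constraint encodings in (50) for a concrete assignment (NumPy; the graph sizes are an illustrative example, not from the paper):

```python
import numpy as np

# n = 6 vertices into parts of sizes m = (3, 2, 1); X is the 0-1 assignment.
n, r = 6, 3
m = np.array([3, 2, 1])
X = np.zeros((n, r))
X[[0, 1, 2], 0] = X[[3, 4], 1] = X[5, 2] = 1

E = lambda i, j, k: np.outer(np.eye(k)[i], np.eye(k)[j])
# diag(X X^T) = e_n  <=>  trace X^T E_ii X = 1 for all i
assert all(np.isclose(np.trace(X.T @ E(i, i, n) @ X), 1) for i in range(n))
# X^T X = Diag(m)  <=>  trace X E_ij X^T = m_i * delta_ij for all i, j
assert all(np.isclose(np.trace(X @ E(i, j, r) @ X.T), m[i] * (i == j))
           for i in range(r) for j in range(r))
print("GPP constraint encodings in (50) verified")
```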

This SDP formulation can be written in a more compact way, as Povh [28] suggested:

    μ_DH = min   trace( (I_r ⊗ (1/2) L) V )
           s.t.  Σ_{i=1}^r (1/m_i) V_ii + W = I_n,
                 trace V_ij = m_i δ_ij,  i, j = 1, ..., r,
                 trace( (I_r ⊗ E_ii) V ) = 1,  i = 1, ..., n,
                 V ∈ S^{rn}_+,  W ∈ S^n_+,   (51)

where V has been partitioned into r^2 square blocks, each of size n × n, and V_ij is the (i, j)th block of V. Note that formulation (51) reduces the number of variables to O(n^2 r^2). An interesting application is the graph equipartition problem, in which the cardinalities m_i = m_s are all the same. Povh's SDP formulation is then actually a VSDR_2 for QMP_3:

    min   (1/2) trace X^T L X
    s.t.  trace( X^T (E_ij / m_s) X + E_ij^T W ) = δ_ij,  i, j = 1, 2, ..., n,
          trace X E_ij X^T = m_s δ_ij,  i, j = 1, 2, ..., r,
          trace X^T E_ii X = 1,  i = 1, ..., n,
          W ∈ S^n_+.   (52)

It is easy to check that (52) is feasible and its dual is strictly feasible. Hence, by Corollary 3.2, the equivalence between MSDR_2 and VSDR_2 for QMP_3 implies that the Donath–Hoffman bound can be computed by solving a small MSDR_2:

    μ_DH = min   (1/2) L · Y_1
           s.t.  Y_1 ⪯ m_s I_n,
                 Y_2 = m_s I_r,
                 diag(Y_1) = e_n,
                 [ I_r  X^T ; X  Y_1 ] ⪰ 0,   [ I_n  X ; X^T  Y_2 ] ⪰ 0.   (53)

Because X and Y_2 do not appear in the objective, formulation (53) can be reduced to a very simple form:

    μ_DH = min   (1/2) L · Y_1
           s.t.  diag(Y_1) = e_n,
                 0 ⪯ Y_1 ⪯ m_s I_n.   (54)

This MSDR formulation has only O(n^2) variables, a significant reduction from O(n^2 r^2). This result coincides with Karisch and Rendl's result (Karisch and Rendl [21]) for graph equipartition. Their proof derives from the particular problem structure, while our result is based on the general equivalence of VSDR_2 and MSDR_2.
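The reduced formulation (54) is tiny and easy to solve directly. A sketch (CVXPY/Python with the SCS solver; the 6-cycle graph and solver choice are our illustrative assumptions):

```python
import numpy as np
import cvxpy as cp

# Donath-Hoffman bound (54) for equipartition: n = 6 vertices, r = 3 parts
# of size m_s = 2, on the 6-cycle graph.
n, r = 6, 3
m_s = n // r
A = np.zeros((n, n))
for i in range(n):                       # adjacency matrix of the 6-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

Y = cp.Variable((n, n), symmetric=True)
cons = [cp.diag(Y) == 1, Y >> 0, m_s * np.eye(n) - Y >> 0]
dh = cp.Problem(cp.Minimize(0.5 * cp.trace(L @ Y)), cons).solve(solver=cp.SCS)
print("Donath-Hoffman lower bound on the cut size:", dh)
```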

4. Conclusion. This paper proves the equivalence of two SDP bounds for the hard QCQP in QMP_2. Thus, it is clear that a user should use the smaller, inexpensive MSDR bound from matrix lifting rather than the more expensive VSDR bound from vector lifting. In particular, our results show that the large VSDR_2 relaxation for the unbalanced orthogonal Procrustes problem can be replaced by the smaller MSDR_2. And, with an extension of the main theorem, we proved the Karisch and Rendl result (Karisch and Rendl [21]) that the Donath–Hoffman bound for graph equipartition can be computed with a small SDP.

The key idea of the paper is to simplify the semidefinite constraint using a sparse completion technique. Most existing literature on this topic requires the matrix to have a chordal structure (Grone et al. [17], Beck [6], Wang et al. [33]); whereas in our case, the dual matrix of QMP_2 is not chordal, but it can be decomposed as a sum of two matrices, each of which admits a chordal structure. This idea, we hope, will lead to further studies identifying other nonchordal sparsity patterns that can be used to simplify semidefinite constraints. The sparse matrix completion results and semidefinite inequality techniques used in our proofs are of independent interest.

Unfortunately, it is not clear how to formulate a general QCQP as an MSDR. For example, the objective function for the QAP, trace A X B X^T, does not immediately admit an MSDR representation (though a relaxed MSDR is presented in Ding and Wolkowicz [13] that generally gives a strictly lower bound than the vectorized SDP relaxation proposed in Zhao et al. [36]). The above motivates the need for finding efficient matrix-lifting representations for hard QCQP problems.

Acknowledgments. Research by the first and third authors was supported by the Natural Sciences and Engineering Research Council of Canada and a grant from AFOSR. Research by the second author was supported by the National Natural Science Foundation of China. The authors express their gratitude to the editor and anonymous referees for their careful reading of the manuscript.

References
[1] Alfakih, A., H. Wolkowicz. Matrix completion problems. H. Wolkowicz, R. Saigal, L. Vandenberghe, eds. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. International Series in Operations Research and Management Science, Vol. 27. Kluwer Academic Publishers, Boston.
[2] Alfakih, A., M. F. Anjos, V. Piccialli, H. Wolkowicz. Euclidean distance matrices, semidefinite programming, and sensor network localization. Portugal. Math. Forthcoming.
[3] Alpert, C. J., A. B. Kahng. Recent directions in netlist partitioning: A survey. Integration: The VLSI J. 19(1–2).
[4] Anstreicher, K. M., H. Wolkowicz. On Lagrangian relaxation of quadratic matrix constraints. SIAM J. Matrix Anal. Appl. 22(1).
[5] Anstreicher, K. M., X. Chen, H. Wolkowicz, Y. Yuan. Strong duality for a trust-region type relaxation of the quadratic assignment problem. Linear Algebra Appl. 301(1–3).
[6] Beck, A. Quadratic matrix programming. SIAM J. Optim. 17(4).
[7] Ben-Israel, A., T. N. E. Greville. Generalized Inverses: Theory and Applications. Wiley-Interscience, New York.
[8] Ben-Tal, A., A. S. Nemirovski. Lectures on Modern Convex Optimization. MPS/SIAM Series on Optimization. SIAM, Philadelphia.
[9] Ben-Tal, A., L. El Ghaoui, A. Nemirovski. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ.
[10] Biswas, P., Y. Ye. Semidefinite programming for ad hoc wireless sensor network localization. Proc. Third Internat. Sympos. Inform. Processing in Sensor Networks, Berkeley, CA.
[11] Carter, M. W., H. H. Jin, M. A. Saunders, Y. Ye. SpaseLoc: An adaptive subproblem algorithm for scalable wireless sensor network localization. SIAM J. Optim. 17(4).
[12] de Klerk, E., R. Sotirov. Exploiting group symmetry in semidefinite programming relaxations of the quadratic assignment problem. Math. Programming 122(2, Ser. A).
[13] Ding, Y., H. Wolkowicz. A low-dimensional semidefinite relaxation for the quadratic assignment problem. Math. Oper. Res. 34(4).
[14] Donath, W. E., A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Development 17(5).
[15] Eldén, L., H. Park. A Procrustes problem on the Stiefel manifold. Numer. Math. 82(4).
[16] Graham, A. 1981. Kronecker Products and Matrix Calculus: With Applications. Halsted Press, Toronto.
[17] Grone, B., C. R. Johnson, E. Marques de Sá, H. Wolkowicz. Positive definite completions of partial Hermitian matrices. Linear Algebra Appl.
[18] Hogben, L. Graph theoretic methods for matrix completion problems. Linear Algebra Appl. 328(1–3).
[19] Jeyakumar, V., H. Wolkowicz. Generalizations of Slater's constraint qualification for infinite convex programs. Math. Programming 57(1, Ser. B).
[20] Johnson, C. R. Matrix completion problems: A survey. Proc. Sympos. Appl. Math., Vol. 40. American Mathematical Society, Providence, RI.
[21] Karisch, S. E., F. Rendl. Semidefinite programming and graph equipartition. P. M. Pardalos, H. Wolkowicz, eds. Topics in Semidefinite and Interior-Point Methods. Fields Institute Communications Series, Vol. 18. American Mathematical Society, Providence, RI.
[22] Krislock, N. Semidefinite facial reduction for low-rank Euclidean distance matrix completion. Doctoral dissertation, University of Waterloo, Waterloo, Ontario.
[23] Krislock, N., H. Wolkowicz. Explicit sensor network localization using semidefinite representations and facial reductions. SIAM J. Optim. 20(5).
[24] Liu, J. Eigenvalue and singular value inequalities of Schur complements. C. Brezinski, F. Zhang, eds. The Schur Complement and Its Applications. Numerical Methods and Algorithms, Vol. 4. Springer-Verlag, New York.
[25] Mittelmann, H. D., J. Peng. Estimating bounds for quadratic assignment problems associated with the Hamming and Manhattan distance matrices based on semidefinite programming. SIAM J. Optim. 20(6).
[26] Nesterov, Y. E., H. Wolkowicz, Y. Ye. Semidefinite programming relaxations of nonconvex quadratic optimization. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. International Series in Operations Research and Management Science, Vol. 27. Kluwer Academic Publishers, Boston.

[27] Ouellette, D. V. 1981. Schur complements and statistics. Linear Algebra Appl.
[28] Povh, J. Semidefinite approximations for quadratic programs over orthogonal matrices. J. Global Optim. 48(3).
[29] Rockafellar, R. T. Convex Analysis. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ.
[30] Schönemann, P. H. A generalized solution of the orthogonal Procrustes problem. Psychometrika 31(1) 1–10.
[31] So, A. M.-C., Y. Ye. Theory of semidefinite programming for sensor network localization. Math. Programming 109(2–3, Ser. B).
[32] Stern, R., H. Wolkowicz. Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. Optim. 5(2).
[33] Wang, Z., S. Zheng, S. Boyd, Y. Ye. Further relaxations of the semidefinite programming approach to sensor network localization. SIAM J. Optim. 19(2).
[34] Wolkowicz, H. Semidefinite and Lagrangian relaxations for hard combinatorial problems. M. J. D. Powell, S. Scholtes, eds. Proc. 19th IFIP TC7 Conf. System Modelling and Optimization. Kluwer Academic Publishers, Boston.
[35] Wolkowicz, H., Q. Zhao. Semidefinite programming relaxations for the graph partitioning problem. Discrete Appl. Math. 96/97(1).
[36] Zhao, Q., S. E. Karisch, F. Rendl, H. Wolkowicz. Semidefinite programming relaxations for the quadratic assignment problem. J. Combin. Optim. 2(1) 71–109.


On the eigenvalues of Euclidean distance matrices Volume 27, N. 3, pp. 237 250, 2008 Copyright 2008 SBMAC ISSN 00-8205 www.scielo.br/cam On the eigenvalues of Euclidean distance matrices A.Y. ALFAKIH Department of Mathematics and Statistics University

More information

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems V. Jeyakumar and G. Li Revised Version:August 31, 2012 Abstract An exact semidefinite linear programming (SDP) relaxation

More information

1. Introduction and background. Consider the primal-dual linear programs (LPs)

1. Introduction and background. Consider the primal-dual linear programs (LPs) SIAM J. OPIM. Vol. 9, No. 1, pp. 207 216 c 1998 Society for Industrial and Applied Mathematics ON HE DIMENSION OF HE SE OF RIM PERURBAIONS FOR OPIMAL PARIION INVARIANCE HARVEY J. REENBER, ALLEN. HOLDER,

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Deterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming

Deterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming Deterministic Methods for Detecting Redundant Linear Constraints in Semidefinite Programming Daniel Stover Department of Mathematics and Statistics Northen Arizona University,Flagstaff, AZ 86001. July

More information

An Interior-Point Method for Approximate Positive Semidefinite Completions*

An Interior-Point Method for Approximate Positive Semidefinite Completions* Computational Optimization and Applications 9, 175 190 (1998) c 1998 Kluwer Academic Publishers. Manufactured in The Netherlands. An Interior-Point Method for Approximate Positive Semidefinite Completions*

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Houduo Qi February 1, 008 and Revised October 8, 008 Abstract Let G = (V, E) be a graph

More information

Facial Reduction for Cone Optimization

Facial Reduction for Cone Optimization Facial Reduction for Cone Optimization Henry Wolkowicz Dept. Combinatorics and Optimization University of Waterloo Tuesday Apr. 21, 2015 (with: Drusvyatskiy, Krislock, (Cheung) Voronin) Motivation: Loss

More information

Further Relaxations of the SDP Approach to Sensor Network Localization

Further Relaxations of the SDP Approach to Sensor Network Localization Further Relaxations of the SDP Approach to Sensor Network Localization Zizhuo Wang, Song Zheng, Stephen Boyd, and Yinyu Ye September 27, 2006; Revised May 2, 2007 Abstract Recently, a semi-definite programming

More information

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás

More information

An Even Order Symmetric B Tensor is Positive Definite

An Even Order Symmetric B Tensor is Positive Definite An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Lecture 14: Optimality Conditions for Conic Problems

Lecture 14: Optimality Conditions for Conic Problems EE 227A: Conve Optimization and Applications March 6, 2012 Lecture 14: Optimality Conditions for Conic Problems Lecturer: Laurent El Ghaoui Reading assignment: 5.5 of BV. 14.1 Optimality for Conic Problems

More information

Four new upper bounds for the stability number of a graph

Four new upper bounds for the stability number of a graph Four new upper bounds for the stability number of a graph Miklós Ujvári Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

III. Applications in convex optimization

III. Applications in convex optimization III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization

More information

arxiv: v1 [math.oc] 14 Oct 2014

arxiv: v1 [math.oc] 14 Oct 2014 arxiv:110.3571v1 [math.oc] 1 Oct 01 An Improved Analysis of Semidefinite Approximation Bound for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints Yong Hsia a, Shu Wang a, Zi Xu

More information

Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms

Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms DOI: 10.1515/auom-2017-0004 An. Şt. Univ. Ovidius Constanţa Vol. 25(1),2017, 49 60 Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms Doina Carp, Ioana Pomparău,

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

The Newton Bracketing method for the minimization of convex functions subject to affine constraints

The Newton Bracketing method for the minimization of convex functions subject to affine constraints Discrete Applied Mathematics 156 (2008) 1977 1987 www.elsevier.com/locate/dam The Newton Bracketing method for the minimization of convex functions subject to affine constraints Adi Ben-Israel a, Yuri

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

1. Introduction. Consider the following quadratic binary optimization problem,

1. Introduction. Consider the following quadratic binary optimization problem, ON DUALITY GAP IN BINARY QUADRATIC PROGRAMMING XIAOLING SUN, CHUNLI LIU, DUAN LI, AND JIANJUN GAO Abstract. We present in this paper new results on the duality gap between the binary quadratic optimization

More information

Strong duality in Lasserre s hierarchy for polynomial optimization

Strong duality in Lasserre s hierarchy for polynomial optimization Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization

More information

Strong Duality for a Trust-Region Type Relaxation. of the Quadratic Assignment Problem. July 15, University of Waterloo

Strong Duality for a Trust-Region Type Relaxation. of the Quadratic Assignment Problem. July 15, University of Waterloo Strong Duality for a Trust-Region Type Relaxation of the Quadratic Assignment Problem Kurt Anstreicher Xin Chen y Henry Wolkowicz z Ya-Xiang Yuan x July 15, 1998 University of Waterloo Department of Combinatorics

More information

Sensor Network Localization, Euclidean Distance Matrix Completions, and Graph Realization

Sensor Network Localization, Euclidean Distance Matrix Completions, and Graph Realization Sensor Network Localization, Euclidean Distance Matrix Completions, and Graph Realization Yichuan Ding Nathan Krislock Jiawei Qian Henry Wolkowicz October 27, 2008 University of Waterloo Department of

More information

Mustafa Ç. Pinar 1 and Marc Teboulle 2

Mustafa Ç. Pinar 1 and Marc Teboulle 2 RAIRO Operations Research RAIRO Oper. Res. 40 (2006) 253 265 DOI: 10.1051/ro:2006023 ON SEMIDEFINITE BOUNDS FOR MAXIMIZATION OF A NON-CONVEX QUADRATIC OBJECTIVE OVER THE l 1 UNIT BALL Mustafa Ç. Pinar

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

New Lower Bounds on the Stability Number of a Graph

New Lower Bounds on the Stability Number of a Graph New Lower Bounds on the Stability Number of a Graph E. Alper Yıldırım June 27, 2007 Abstract Given a simple, undirected graph G, Motzkin and Straus [Canadian Journal of Mathematics, 17 (1965), 533 540]

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

TRUST REGION SUBPROBLEM WITH A FIXED NUMBER OF ADDITIONAL LINEAR INEQUALITY CONSTRAINTS HAS POLYNOMIAL COMPLEXITY

TRUST REGION SUBPROBLEM WITH A FIXED NUMBER OF ADDITIONAL LINEAR INEQUALITY CONSTRAINTS HAS POLYNOMIAL COMPLEXITY TRUST REGION SUBPROBLEM WITH A FIXED NUMBER OF ADDITIONAL LINEAR INEQUALITY CONSTRAINTS HAS POLYNOMIAL COMPLEXITY YONG HSIA AND RUEY-LIN SHEU Abstract. The trust region subproblem with a fixed number m

More information

Optimization for Machine Learning

Optimization for Machine Learning Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

2nd Symposium on System, Structure and Control, Oaxaca, 2004

2nd Symposium on System, Structure and Control, Oaxaca, 2004 263 2nd Symposium on System, Structure and Control, Oaxaca, 2004 A PROJECTIVE ALGORITHM FOR STATIC OUTPUT FEEDBACK STABILIZATION Kaiyang Yang, Robert Orsi and John B. Moore Department of Systems Engineering,

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations. HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Robust linear optimization under general norms

Robust linear optimization under general norms Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i

More information

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA BULL. AUSRAL. MAH. SOC. VOL. 24 (1981), 357-366. 9C3 INVEX FUNCIONS AND CONSRAINED LOCAL MINIMA B.D. CRAVEN If a certain weakening of convexity holds for the objective and all constraint functions in a

More information

The min-cut and vertex separator problem

The min-cut and vertex separator problem manuscript No. (will be inserted by the editor) The min-cut and vertex separator problem Fanz Rendl Renata Sotirov the date of receipt and acceptance should be inserted later Abstract We consider graph

More information

Research Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming

Research Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming Mathematical Problems in Engineering Volume 2010, Article ID 346965, 12 pages doi:10.1155/2010/346965 Research Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming

More information

Projection methods to solve SDP

Projection methods to solve SDP Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

Quadratic Optimization over a Second-Order Cone with Linear Equality Constraints

Quadratic Optimization over a Second-Order Cone with Linear Equality Constraints JORC (2014) 2:17 38 DOI 10.1007/s40305-013-0035-6 Quadratic Optimization over a Second-Order Cone with Linear Equality Constraints Xiao-ling Guo Zhi-bin Deng Shu-Cherng Fang Zhen-bo Wang Wen-xun Xing Received:

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

1. Introduction. To motivate the discussion, consider an undirected graph G

1. Introduction. To motivate the discussion, consider an undirected graph G COORDINATE SHADOWS OF SEMI-DEFINITE AND EUCLIDEAN DISTANCE MATRICES DMITRIY DRUSVYATSKIY, GÁBOR PATAKI, AND HENRY WOLKOWICZ Abstract. We consider the projected semi-definite and Euclidean distance cones

More information

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about

Rank-one LMIs and Lyapunov's Inequality. Gjerrit Meinsma 4. Abstract. We describe a new proof of the well-known Lyapunov's matrix inequality about Rank-one LMIs and Lyapunov's Inequality Didier Henrion 1;; Gjerrit Meinsma Abstract We describe a new proof of the well-known Lyapunov's matrix inequality about the location of the eigenvalues of a matrix

More information

Duality. Geoff Gordon & Ryan Tibshirani Optimization /

Duality. Geoff Gordon & Ryan Tibshirani Optimization / Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider

More information

WHEN DOES THE POSITIVE SEMIDEFINITENESS CONSTRAINT HELP IN LIFTING PROCEDURES?

WHEN DOES THE POSITIVE SEMIDEFINITENESS CONSTRAINT HELP IN LIFTING PROCEDURES? MATHEMATICS OF OPERATIONS RESEARCH Vol. 6, No. 4, November 00, pp. 796 85 Printed in U.S.A. WHEN DOES THE POSITIVE SEMIDEFINITENESS CONSTRAINT HELP IN LIFTING PROCEDURES? MICHEL X. GOEMANS and LEVENT TUNÇEL

More information