The Simplest Semidefinite Programs are Trivial

Robert J. Vanderbei and Bing Yang
Program in Statistics & Operations Research
Princeton University
Princeton, NJ 08544

January 10, 1994

Technical Report SOR-93-12

Abstract

We consider optimization problems of the following type:

    min { tr(CX) : A(X) = B, X positive semidefinite }.

Here, tr(·) denotes the trace operator, C and X are symmetric n × n matrices, B is a symmetric m × m matrix, and A(·) denotes a linear operator. Such problems are called semidefinite programs and have recently become the object of considerable interest due to important connections with max-min eigenvalue problems and with new bounds for integer programming. In the context of symmetric matrices, the simplest linear operators have the following form:

    A(X) = M X M^T,

where M is an arbitrary m × n matrix. In this paper, we show that for such linear operators the optimization problem is trivial in the sense that an explicit solution can be given.

Key words: semidefinite programming.
AMS 1991 Subject Classification: Primary 65K10, Secondary 90C25.

Research partially supported by AFOSR through grant AFOSR-91-0359.

1 Introduction.

For each integer n, let S^n denote the space of symmetric n × n matrices and let ⪯ denote the Löwner partial order on S^n induced by the cone of positive semidefinite matrices. The trace operator provides an inner product on S^n by the following formula:

    ⟨C, D⟩ = tr(CD).

With this inner product, S^n is a Hilbert space. The semidefinite programming problem is defined as

    minimize    tr(CX)
    subject to  A(X) = B                                        (1.1)
                X ⪰ 0.

Here, X and C belong to S^n, B is an element of S^m (m ≤ n), and A is a linear operator from S^n to S^m. This class of optimization problems is a subclass of the general problems considered in [4]. They have important applications in min-max eigenvalue problems (see, e.g., [5, 6, 7]) and also in obtaining excellent bounds for branch-and-bound algorithms [2, 3, 1].

The dual of (1.1) is

    maximize    tr(BY)
    subject to  A^T(Y) + Z = C                                  (1.2)
                Z ⪰ 0.

Here, A^T denotes the adjoint (or transpose) of A determined by the inner products on S^n and S^m.
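As a quick sanity check of this setup, the following sketch (numpy; the dimension and random data are illustrative, not from the paper) confirms that the trace inner product is symmetric and positive, and that membership in the positive semidefinite cone can be tested through eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
sym = lambda A: (A + A.T) / 2

C = sym(rng.standard_normal((n, n)))
D = sym(rng.standard_normal((n, n)))

# The trace inner product <C, D> = tr(CD) is symmetric on S^n ...
assert np.isclose(np.trace(C @ D), np.trace(D @ C))
# ... and positive: <C, C> = tr(C^2) > 0 whenever C != 0.
assert np.trace(C @ C) > 0

# X is positive semidefinite (X >= 0 in the Löwner order) exactly when
# all of its eigenvalues are nonnegative.
A = rng.standard_normal((n, n))
X = A @ A.T                    # positive semidefinite by construction
assert np.linalg.eigvalsh(X).min() >= -1e-12
```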

2 Linear Operators from S^n to S^m.

It is interesting to consider what constitutes a linear operator from S^n to S^m. The simplest such operators have the following form:

    A(X) = M X M^T,                                             (2.1)

where M is an arbitrary m × n matrix. In this case, it is easy to see that

    A^T(Y) = M^T Y M.

Indeed, this fact follows from the following trace identity:

    tr(Y M X M^T) = tr(M^T Y M X).

The second simplest such operators are of the following type:

    A(X) = M X N^T + N X M^T,                                   (2.2)

where M and N are arbitrary m × n matrices. In this case,

    A^T(Y) = M^T Y N + N^T Y M.

These operators play an important role in certain Lyapunov inequalities [8], which arise, for example, in the stability analysis of dynamical systems.

Theorem 1 Every linear operator from S^n to S^m can be decomposed as a sum of operators of type (2.2).

Proof. First consider linear operators from the space R^{n²} of n × n matrices to the space R^{m²}, and for each i, j, k, l let L_{ijkl} be such an operator defined by

    L_{ijkl} X = E_{ij} X E_{kl}^T,

where E_{ij} is the m × n matrix with all zeros except for a one in the (i, j)th position. It is easy to see that the L_{ijkl} are linear operators and that they are independent; since there are n² m² of them, they form a basis for the space of all such linear transformations. Hence, any such linear transformation A can be decomposed as

    A(X) = Σ_{ijkl} a_{ijkl} E_{ij} X E_{kl}^T.

Now, to get the desired result for linear operators from S^n to S^m, simply lift the operator to act in the larger space, employ the above decomposition, and then project back down to the smaller space.
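Both adjoint formulas are easy to confirm numerically. A minimal sketch (numpy; the dimensions and random data are illustrative assumptions) checking the defining identity ⟨Y, A(X)⟩ = ⟨A^T(Y), X⟩ for operators of types (2.1) and (2.2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
sym = lambda A: (A + A.T) / 2

M = rng.standard_normal((m, n))
N = rng.standard_normal((m, n))
X = sym(rng.standard_normal((n, n)))   # X in S^n
Y = sym(rng.standard_normal((m, m)))   # Y in S^m

# Type (2.1): A(X) = M X M^T with adjoint A^T(Y) = M^T Y M.
assert np.isclose(np.trace(Y @ (M @ X @ M.T)), np.trace((M.T @ Y @ M) @ X))

# Type (2.2): A(X) = M X N^T + N X M^T with adjoint A^T(Y) = M^T Y N + N^T Y M.
AX = M @ X @ N.T + N @ X @ M.T
AtY = M.T @ Y @ N + N.T @ Y @ M
assert np.isclose(np.trace(Y @ AX), np.trace(AtY @ X))
```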

3 Duality

The following weak duality theorem connects the primal (1.1) to the dual (1.2):

Theorem 2 Given X, Y, and Z, suppose that:

1. X is feasible for the primal;
2. the pair (Y, Z) is feasible for the dual;
3. the pair (X, Z) is complementary in the sense that XZ = 0.

Then, X is optimal for the primal and the pair (Y, Z) is optimal for the dual.

Proof. The usual proof works. Indeed, first we use primal and dual feasibility to compute the duality gap:

    tr(CX) - tr(ZX) = tr((C - Z)X) = tr(A^T(Y) X) = tr(Y A(X)) = tr(Y B) = tr(BY).

Hence,

    tr(CX) - tr(BY) = tr(ZX) ≥ 0.

Then, we remark that tr(ZX) = 0 if and only if ZX = 0. (Actually, we only need the trivial direction, but for a proof of both directions, see [1].)
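The chain of equalities in the proof can be exercised on a random instance. In the sketch below (numpy; the instance is manufactured so that both problems are feasible by construction, purely for illustration), the duality gap tr(CX) - tr(BY) equals tr(ZX) and is nonnegative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
sym = lambda A: (A + A.T) / 2

def psd(k):
    """Random positive semidefinite k x k matrix."""
    A = rng.standard_normal((k, k))
    return A @ A.T

M = rng.standard_normal((m, n))
X = psd(n)                 # primal feasible by construction:
B = M @ X @ M.T            #   A(X) = M X M^T = B and X is PSD
Y = sym(rng.standard_normal((m, m)))
Z = psd(n)                 # dual feasible by construction:
C = M.T @ Y @ M + Z        #   A^T(Y) + Z = C and Z is PSD

gap = np.trace(C @ X) - np.trace(B @ Y)
assert np.isclose(gap, np.trace(Z @ X))   # the gap identity from the proof
assert gap >= -1e-9                       # weak duality: the gap is nonnegative
```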

4 The Semidefinite Program for Simple Operators.

In this section, we study the semidefinite programming problem in the case where the linear operator A is simple. Indeed, throughout this section, we make the following assumptions:

Assumptions.

1. The linear operator A has the following simple form: A(X) = M X M^T.
2. The primal problem is feasible.
3. The dual problem is strictly feasible.
4. The matrix M has full rank.

Hence, the semidefinite programming problem we wish to study in this section is

    minimize    tr(CX)
    subject to  M X M^T = B                                     (4.1)
                X ⪰ 0,

and its dual is

    maximize    tr(BY)
    subject to  M^T Y M + Z = C                                 (4.2)
                Z ⪰ 0.

Note that the feasibility assumptions imply that B is positive semidefinite. We shall show that the primal-dual pair (4.1), (4.2) can be solved explicitly. To this end, we first find an explicit solution to the following system:

    M X M^T = B                                                 (4.3)
    M^T Y M + Z = C                                             (4.4)
    X Z = 0.                                                    (4.5)

Then, by magic, we will see that X and Z are positive semidefinite. Therefore, Theorem 2 of the previous section will imply that the explicit solution indeed solves the semidefinite programming problem.

Let Q and R denote the matrices obtained from a QR-factorization of the n × n matrix [M^T 0]:

    Q R = [M^T 0].

Then, Q and R are both n × n matrices, Q Q^T = I, and

    R = [ P^T  0 ]
        [ 0    0 ],

where P^T is an m × m upper triangular matrix. Let

    J = [ I  0 ].                                               (4.6)

Then M can be written as

    M = P J Q^T.
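This factorization is easy to reproduce; a minimal sketch (using numpy's np.linalg.qr; the dimensions and the random M are illustrative) extracts P and J and confirms M = P J Q^T:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
M = rng.standard_normal((m, n))              # full rank with probability one

MT0 = np.hstack([M.T, np.zeros((n, n - m))])       # the n x n matrix [M^T 0]
Q, R = np.linalg.qr(MT0)                           # Q R = [M^T 0], R upper triangular
P = R[:m, :m].T                                    # R = [[P^T, 0], [0, 0]]
J = np.hstack([np.eye(m), np.zeros((m, n - m))])   # J = [I 0]

assert np.allclose(R[m:, :], 0) and np.allclose(R[:m, m:], 0)
assert np.allclose(M, P @ J @ Q.T)
```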

Substituting this expression for M into (4.3) and (4.4), we get

    P J Q^T X Q J^T P^T = B                                     (4.7)
    Q J^T P^T Y P J Q^T + Z = C.                                (4.8)

Now, we multiply (4.7) on the left and right by P^{-1} and P^{-T}, respectively, and we multiply (4.8) on the left and right by Q^T and Q, respectively. The result is

    J R J^T = G                                                 (4.9)
    J^T S J + T = H,                                            (4.10)

where

    R = Q^T X Q,                                                (4.11)
    S = P^T Y P,                                                (4.12)
    T = Q^T Z Q,                                                (4.13)
    G = P^{-1} B P^{-T},                                        (4.14)
    H = Q^T C Q.                                                (4.15)

Now, partition R, T, and H to match the partition of J given in (4.6) and rewrite (4.9) and (4.10) accordingly to get

    [ I  0 ] [ R_{11}    R_{12} ] [ I ]
             [ R_{12}^T  R_{22} ] [ 0 ]  =  G                   (4.16)

and

    [ I ]              [ T_{11}    T_{12} ]      [ H_{11}    H_{12} ]
    [ 0 ] S [ I  0 ] + [ T_{12}^T  T_{22} ]  =   [ H_{12}^T  H_{22} ].    (4.17)

From (4.16), we see that

    R_{11} = G,                                                 (4.18)

and from (4.17), we find that

    S + T_{11} = H_{11},                                        (4.19)
    T_{12} = H_{12},                                            (4.20)
    T_{22} = H_{22}.                                            (4.21)

Soon, we will need to know that H_{22} is invertible. To see this, first note that it depends only on the data for the problem: H_{22} = (Q^T C Q)_{22}. But also, (4.21) together with (4.13) implies that for any dual feasible pair (Y, Z), we have H_{22} = T_{22} = (Q^T Z Q)_{22}. Now our assumption of strict dual feasibility allows us to pick a positive definite Z, from which we see that H_{22} is positive definite.

If we view (4.19) as defining S, then the arrays R_{12}, R_{22}, and T_{11} still need to be determined. We also have not yet used the requirement that XZ = 0. This will give us more conditions, which we will then use to determine these three remaining arrays. From (4.11) and (4.13), we see that

    R T = Q^T X Q Q^T Z Q = Q^T X Z Q = 0.                      (4.22)

Writing this equation in terms of partitioned matrices, we have

    [ R_{11}    R_{12} ] [ T_{11}    T_{12} ]     [ 0  0 ]
    [ R_{12}^T  R_{22} ] [ T_{12}^T  T_{22} ]  =  [ 0  0 ].

From this we get four equations:

    R_{11} T_{11} + R_{12} T_{12}^T = 0                         (4.23)
    R_{11} T_{12} + R_{12} T_{22} = 0                           (4.24)
    R_{12}^T T_{11} + R_{22} T_{12}^T = 0                       (4.25)
    R_{12}^T T_{12} + R_{22} T_{22} = 0.                        (4.26)

Now, we use (4.24) to solve for R_{12}, then use (4.26) and (4.27) to solve for R_{22},

    R_{12} = -R_{11} T_{12} T_{22}^{-1} = -G H_{12} H_{22}^{-1},                          (4.27)
    R_{22} = -R_{12}^T T_{12} T_{22}^{-1} = H_{22}^{-T} H_{12}^T G^T H_{12} H_{22}^{-1},  (4.28)

and lastly use (4.27) and (4.23) to solve for T_{11},

    T_{11} = -R_{11}^{-1} R_{12} T_{12}^T = G^{-1} G H_{12} H_{22}^{-1} H_{12}^T = H_{12} H_{22}^{-1} H_{12}^T.   (4.29)

With the formulas given by (4.27), (4.29), (4.28), and (4.20) for R_{12}, T_{11}, R_{22}, and T_{12}, respectively, it is easy to check that (4.25) holds automatically.

At this point, we reassemble the matrices R and T from their respective submatrices to get

    R = [ G                         -G H_{12} H_{22}^{-1}               ]
        [ -H_{22}^{-1} H_{12}^T G    H_{22}^{-1} H_{12}^T G H_{12} H_{22}^{-1} ]

      = [ I                     ]
        [ -H_{22}^{-1} H_{12}^T ]  G  [ I   -H_{12} H_{22}^{-1} ]                         (4.30)

and

    T = [ H_{12} H_{22}^{-1} H_{12}^T   H_{12} ]
        [ H_{12}^T                      H_{22} ]

      = [ H_{12} H_{22}^{-1/2} ]
        [ H_{22}^{1/2}         ]  [ H_{22}^{-1/2} H_{12}^T   H_{22}^{1/2} ]               (4.31)

(where we have used the symmetry of G and H to obtain the given expression for R). Our feasibility assumption that B is positive semidefinite together with (4.14) implies that G is positive semidefinite. Hence, we see from (4.30) and (4.31) that R and T are positive semidefinite. Therefore, (4.11) and (4.13) show that X and Z are positive semidefinite too.
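The whole construction can be checked end to end. The sketch below (numpy; the dimensions, the random data, and the way a feasible instance is manufactured are illustrative assumptions, not taken from the paper) forms G and H, applies formulas (4.18)-(4.21) and (4.27)-(4.29), and verifies feasibility, complementarity, and positive semidefiniteness of the resulting X, Y, and Z:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 3
sym = lambda A: (A + A.T) / 2

def psd(k):
    """Random positive semidefinite k x k matrix."""
    A = rng.standard_normal((k, k))
    return A @ A.T

# Manufacture data satisfying the assumptions: M full rank,
# primal feasible (B = M X0 M^T), dual strictly feasible (Z0 > 0).
M = rng.standard_normal((m, n))
B = M @ psd(n) @ M.T
C = M.T @ sym(rng.standard_normal((m, m))) @ M + psd(n) + np.eye(n)

# QR-factorization of [M^T 0]; extract P so that M = P J Q^T.
Q, Rf = np.linalg.qr(np.hstack([M.T, np.zeros((n, n - m))]))
P = Rf[:m, :m].T
Pinv = np.linalg.inv(P)

# Transformed data (4.14)-(4.15), partitioned as in (4.16)-(4.17).
G = Pinv @ B @ Pinv.T
H = Q.T @ C @ Q
H11, H12, H22 = H[:m, :m], H[:m, m:], H[m:, m:]
H22inv = np.linalg.inv(H22)

# The explicit formulas (4.18)-(4.21) and (4.27)-(4.29).
T11, T12, T22 = H12 @ H22inv @ H12.T, H12, H22
R11, R12 = G, -G @ H12 @ H22inv
R22 = H22inv @ H12.T @ G @ H12 @ H22inv
S = H11 - T11

Rm = np.block([[R11, R12], [R12.T, R22]])
Tm = np.block([[T11, T12], [T12.T, T22]])

# Undo the change of variables (Theorem 3).
X = Q @ Rm @ Q.T
Y = Pinv.T @ S @ Pinv
Z = Q @ Tm @ Q.T

assert np.allclose(M @ X @ M.T, B)            # primal feasibility (4.3)
assert np.allclose(M.T @ Y @ M + Z, C)        # dual feasibility (4.4)
assert np.allclose(X @ Z, 0)                  # complementarity (4.5)
assert np.linalg.eigvalsh(X).min() > -1e-8    # X is positive semidefinite
assert np.linalg.eigvalsh(Z).min() > -1e-8    # Z is positive semidefinite
```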

We have proved the following theorem:

Theorem 3 The matrices

    X = Q R Q^T,    Y = P^{-T} S P^{-1},    Z = Q T Q^T,

with R given by (4.30), S given by (4.19), and T given by (4.31), solve problems (4.1) and (4.2).

Perhaps a particularly interesting application of this theorem arises when M is simply a row vector a^T. In this case, the dual is especially attractive. Indeed, it is the problem of finding the largest scalar multiple of a symmetric rank-one matrix that is not larger (in the Löwner sense) than a given matrix C:

    maximize    y
    subject to  y a a^T ⪯ C.

As far as we know, the explicit solution described in Theorem 3 is the first such solution given for this problem.
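For this rank-one case, when C is positive definite a standard Schur-complement argument (an addition here for illustration, not a claim from the paper) yields the closed form y* = 1 / (a^T C^{-1} a). A quick numerical check under that assumption brackets the hypothesized optimum: C - y a a^T remains positive semidefinite at y* and fails just beyond it.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
a = rng.standard_normal(n)
A = rng.standard_normal((n, n))
C = A @ A.T + np.eye(n)                      # C positive definite

y_star = 1.0 / (a @ np.linalg.solve(C, a))   # hypothesized optimum 1 / (a^T C^-1 a)

# Smallest eigenvalue of C - y a a^T as a function of y.
eigmin = lambda y: np.linalg.eigvalsh(C - y * np.outer(a, a)).min()
assert eigmin(y_star) > -1e-9                # still PSD at y = y_star
assert eigmin(1.01 * y_star) < 0             # PSD is lost just beyond y_star
```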

Acknowledgement: The authors would like to thank Henry Wolkowicz for introducing us to this problem area and for carefully critiquing early drafts of this paper.

References

[1] F. Alizadeh. Combinatorial optimization with semidefinite matrices. In Proceedings of the Second Annual Integer Programming and Combinatorial Optimization Conference, pages 373–395, Carnegie-Mellon University, 1992.

[2] N.K. Karmarkar and S.A. Thakur. An interior point approach to a tensor optimization problem with application to upper bounds in integer quadratic optimization problems. Technical report, AT&T Bell Laboratories, Murray Hill, NJ, 1992.

[3] L. Lovász and A. Schrijver. Cones of matrices and set-functions and 0-1 optimization. SIAM J. Optimization, 1(2):166–190, 1991.

[4] Y.E. Nesterov and A.S. Nemirovsky. Interior Point Polynomial Methods in Convex Programming: Theory and Algorithms. SIAM Publications, SIAM, Philadelphia, USA, 1993.

[5] M.L. Overton. On minimizing the maximum eigenvalue of a symmetric matrix. SIAM J. Matrix Analysis and Applications, 9:256–268, 1988.

[6] M.L. Overton. Large-scale optimization of eigenvalues. SIAM J. Optimization, 2:88–120, 1992.

[7] F. Rendl, R.J. Vanderbei, and H. Wolkowicz. Interior-point methods for max-min eigenvalue problems. Technical Report SOR-92-15, Princeton University, 1992.

[8] L. Vandenberghe and S. Boyd. Primal-dual potential reduction method for problems involving matrix inequalities. Technical report, Electrical Engineering Department, Stanford University, Stanford, CA 94305, 1993.