MS&E 314/CME 336 Assignment 2 — Conic Linear Programming
January 3, 2015
Prof. Yinyu Ye
6 Pages

ASSIGNMENT 2 SOLUTIONS

Problem 1 (Exercise 2.2, Monograph)

We prove part ii) of Theorem 2.1 (Farkas' lemma for CLP).

(a) It is easy to show that the set C = \{S - A^T y : S \in K^*, y \in \mathbb{R}^m\} is convex. To see why, take two arbitrary points S' - A^T y', S'' - A^T y'' \in C and a constant \alpha \in (0, 1); then

    \alpha (S' - A^T y') + (1 - \alpha)(S'' - A^T y'') = \underbrace{[\alpha S' + (1 - \alpha) S'']}_{\in K^*} - A^T (\alpha y' + (1 - \alpha) y'') \in C.

Now we prove that C is closed. Take a convergent sequence z_k := S_k - A^T y_k \to \bar z and show that \bar z \in C. Note that the convergence implies \{z_k\} is bounded. So for the given \hat X \in \mathrm{int}(K) (which satisfies A \hat X = 0), there is a constant c > 0 such that

    c \ge z_k \bullet \hat X = (S_k - A^T y_k) \bullet \hat X = S_k \bullet \hat X - (A^T y_k) \bullet \hat X = S_k \bullet \hat X - (y_k)^T \underbrace{(A \hat X)}_{= 0} = S_k \bullet \hat X.

Since S_k \in K^* and \hat X \in \mathrm{int}(K), by Proposition 1.3 \{S_k\} is a bounded sequence; then \{A^T y_k\} = \{S_k - z_k\} is also bounded, and thus \{y_k\} is bounded. By the Bolzano–Weierstrass theorem^1, there is a convergent subsequence S_{k_n} \to \bar S \in K^* (K^* is closed), from which we can take a further convergent subsequence y_{k_{n_m}} \to \bar y (note: \{k_{n_m}\} \subseteq \{k_n\} \subseteq \mathbb{N}). Hence

    z_{k_{n_m}} = S_{k_{n_m}} - A^T y_{k_{n_m}} \to \bar S - A^T \bar y.

But z_k \to \bar z gives

    \bar z = \bar S - A^T \bar y \in C,

which proves the closedness.

(b) (\Rightarrow) Suppose F_d is feasible, say we have a vector \bar y that satisfies C - A^T \bar y \in K^*.

^1 http://en.wikipedia.org/wiki/bolzano-weierstrass_theorem
Suppose further that for some X \in K we have AX = 0. Then by the definition of the dual cone K^*,

    0 \le \underbrace{(C - A^T \bar y)}_{\in K^*} \bullet \underbrace{X}_{\in K} = C \bullet X - \bar y^T \underbrace{(AX)}_{= 0} = C \bullet X,

thus \{X : AX = 0,\ X \in K,\ C \bullet X < 0\} = \emptyset.

(\Leftarrow) Suppose F_d = \{y : C - A^T y \in K^*\} = \emptyset. That is, C \notin \{S - A^T y : S \in K^*\}, which was proven in part (a) to be a closed convex set. So by the separating hyperplane theorem, there exists X such that

    C \bullet X < \inf_{y \in \mathbb{R}^m,\, S \in K^*} (S - A^T y) \bullet X = \inf_{y \in \mathbb{R}^m,\, S \in K^*} \left[ S \bullet X - (A^T y) \bullet X \right] = \inf_{y \in \mathbb{R}^m,\, S \in K^*} \left[ S \bullet X - y^T (AX) \right].    (1)

We claim that X \in K. Suppose not; then, since K = (K^*)^*, there exists S \in K^* such that S \bullet X < 0, and this gives (\alpha S) \bullet X \to -\infty as \alpha \to \infty. However, this contradicts the boundedness from below shown in (1). So we have X \in K. Similarly, we can show that AX = 0 from (1): if AX \ne 0, taking y = \alpha AX gives S \bullet X - \alpha \|AX\|^2 \to -\infty as \alpha \to \infty, again a contradiction. Also note that if we set y = 0 and S = 0, we obtain C \bullet X < 0, and this completes the proof.

Problem 2 (Exercise 2.6, Monograph)

If we define

    \bar Q := \begin{bmatrix} Q & b \\ b^T & 0 \end{bmatrix}, \quad I_{1:n} := \begin{bmatrix} I & 0 \\ 0^T & 0 \end{bmatrix}, \quad I_{n+1} := \begin{bmatrix} 0 & 0 \\ 0^T & 1 \end{bmatrix},

then we can reformulate the given problem as

    minimize    \bar Q \bullet X
    subject to  I_{1:n} \bullet X = 1,
                I_{n+1} \bullet X = 1,
                X \succeq 0,
which has the dual

    maximize    y_1 + y_2
    subject to  y_1 I_{1:n} + y_2 I_{n+1} + S = \bar Q,
                S \succeq 0.

Then from the optimality conditions of SDP, we must have

    XS = 0, \quad AX = (1, 1)^T, \quad A^T y + S = \bar Q, \quad X, S \succeq 0,

where AX := (I_{1:n} \bullet X,\ I_{n+1} \bullet X)^T, which gives the desired condition.

Problem 3 (Exercise 2.7, Monograph)

The objective function can be rewritten as

    x^T A y = \sum_{i,j} x_i a_{ij} y_j = \sum_{i,j} (x_i y_j) a_{ij} = (x y^T) \bullet A,

and the two constraints are equivalent to

    \|x\|^2 = \sum_i x_i^2 = 1 \iff (x x^T) \bullet I = 1, \qquad \|y\|^2 = 1 \iff (y y^T) \bullet I = 1.

So if we define

    Z := \begin{bmatrix} x x^T & x y^T \\ y x^T & y y^T \end{bmatrix},

the equivalent SDP is

    minimize    \frac{1}{2} \begin{bmatrix} 0 & A \\ A^T & 0 \end{bmatrix} \bullet Z
    subject to  \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix} \bullet Z = 1,
                \begin{bmatrix} 0 & 0 \\ 0 & I_m \end{bmatrix} \bullet Z = 1,
                Z \succeq 0.

Since there are only two equality constraints, we can relax the rank constraint rank(Z) = 1, because the above SDP always has a rank-1 solution by the rank reduction Theorem 2.5.
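Both lifted formulations above rest on the same identity: for a rank-one matrix built from a feasible vector, the matrix inner product reproduces the original objective and constraint values. A quick numerical sanity check in numpy (a sketch; it assumes the Problem 2 objective is x^T Q x + 2 b^T x over \|x\| = 1, as the block structure of \bar Q suggests):

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem 2 lifting: X = [x; 1][x; 1]^T with ||x|| = 1.
n = 4
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # symmetric Q
b = rng.standard_normal(n)
x = rng.standard_normal(n); x /= np.linalg.norm(x)   # feasible: ||x|| = 1

Qbar = np.block([[Q, b[:, None]], [b[None, :], np.zeros((1, 1))]])
z = np.append(x, 1.0)
X = np.outer(z, z)                                   # rank-one lifted variable
assert np.isclose((Qbar * X).sum(), x @ Q @ x + 2 * b @ x)  # Qbar . X = objective
assert np.isclose(np.trace(X[:n, :n]), 1.0)          # I_{1:n} . X = ||x||^2 = 1
assert np.isclose(X[n, n], 1.0)                      # I_{n+1} . X = 1

# Problem 3 lifting: Z = [x; y][x; y]^T.
m = 5
A = rng.standard_normal((n, m))
y = rng.standard_normal(m); y /= np.linalg.norm(y)

C = 0.5 * np.block([[np.zeros((n, n)), A], [A.T, np.zeros((m, m))]])
w = np.concatenate([x, y])
Z = np.outer(w, w)
assert np.isclose((C * Z).sum(), x @ A @ y)          # (1/2)[0 A; A^T 0] . Z = x^T A y
assert np.isclose(np.trace(Z[:n, :n]), 1.0)          # I_n block: ||x||^2 = 1
assert np.isclose(np.trace(Z[n:, n:]), 1.0)          # I_m block: ||y||^2 = 1
```

An SDP solver returns some optimal X or Z of possibly higher rank; the rank reduction theorem guarantees a rank-one optimum exists, at which these identities recover a feasible vector for the original problem.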
Problem 4

(a) We can follow the proof of Theorem 2.5 in the Monograph. Let X = V V^T with V \in \mathbb{R}^{n \times r}. Then the projected problem is

    minimize    (V^T C V) \bullet U
    subject to  (V^T A_i V) \bullet U = b_i,  i = 1, ..., m,
                (V^T Q_j V) \bullet U = 0,    j = 1, ..., q,
                U \succeq 0.

Since Q_j is positive semidefinite, Q_j \bullet X = 0 and X = V V^T imply V^T Q_j V = 0, which means the constraint (V^T Q_j V) \bullet U = 0 is always satisfied. The remaining proof exactly follows the proof of Theorem 2.5.

(b) The equivalent SDP is

    minimize    \begin{bmatrix} Q & c \\ c^T & 0 \end{bmatrix} \bullet Z
    subject to  \begin{bmatrix} A^T A & 0 \\ 0 & 0 \end{bmatrix} \bullet Z = 0,
                \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix} \bullet Z = 1,
                \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \bullet Z = 1,
                Z \succeq 0.

Since there are only two non-homogeneous equality constraints (the other is homogeneous with a positive semidefinite coefficient matrix), we can relax the rank constraint rank(Z) = 1, because the above SDP always has a rank-1 solution by the rank reduction result we proved in part (a) (it is easy to see that there exists an optimal solution).

Problem 5 (Exercise 2.4, Monograph)

(a) Suppose we have a realization \{x_i\}_{i=1,...,n} for the given problem, and let \{x_j + a\}_{1 \le j \le n} be the translation of this realization with minimum norm, that is, the translation for which

    \sum_{j=1}^n \|x_j + a\|^2
is minimized. Then the gradient of this sum of squared two-norms with respect to a should be zero:

    2 \sum_{j=1}^n (x_j + a) = 0.

Therefore we get

    a = -\frac{1}{n} \sum_{j=1}^n x_j = -\bar x,

and the solution is given by x_j - \bar x, j = 1, ..., n. It is still subject to rotation and reflection, since rotation and reflection preserve the 2-norm, i.e., \|Qx\| = \|x\| for any orthogonal matrix Q. Note that any rotation or reflection is an orthogonal transformation.

(b) Let X = [x_1 \cdots x_n] be the d \times n matrix that needs to be determined. Then

    \|x_i - x_j\|^2 = \|X e_i - X e_j\|^2 = \|X e_{ij}\|^2 = e_{ij}^T X^T X e_{ij},

where we use the notation e_{ij} := e_i - e_j. Also, the objective function can be written as

    \sum_{j=1}^n \|x_j\|^2 = \mathrm{tr}(X^T X).

So we can write this problem as an SDP relaxation problem by plugging in Y = X^T X:

    (SDP)  minimize    \mathrm{tr}(Y)
           subject to  e_{ij}^T Y e_{ij} = d_{ij}^2,  (i, j) \in N_x,
                       Y \succeq 0,

or equivalently,

    (SDP)  minimize    I \bullet Y
           subject to  (e_{ij} e_{ij}^T) \bullet Y = d_{ij}^2,  (i, j) \in N_x,
                       Y \succeq 0.

And the dual of the SDP relaxation is given by

    (SDD)  maximize    \sum_{(i,j) \in N_x} w_{ij} d_{ij}^2
           subject to  I - \sum_{(i,j) \in N_x} w_{ij} e_{ij} e_{ij}^T \succeq 0.

Note that the dual is always feasible and has an interior, since w_{ij} = 0 for all (i, j) \in N_x is an interior feasible solution.
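The centering argument in (a) and the lifting Y = X^T X in (b) can both be verified numerically. A minimal numpy sketch (random points; the index pair (i, j) = (0, 3) is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 2, 6
X = rng.standard_normal((d, n))        # columns are the points x_1, ..., x_n
X = X - X.mean(axis=1, keepdims=True)  # translate by a = -xbar, the minimizer from (a)

# The centered realization has no larger total norm than any other translation.
total = (X ** 2).sum()
for _ in range(20):
    a = rng.standard_normal((d, 1))
    assert total <= ((X + a) ** 2).sum() + 1e-12

# Distance identity from (b): e_ij^T Y e_ij = ||x_i - x_j||^2 for Y = X^T X.
Y = X.T @ X
i, j = 0, 3
e = np.zeros(n); e[i], e[j] = 1.0, -1.0
assert np.isclose(e @ Y @ e, np.linalg.norm(X[:, i] - X[:, j]) ** 2)
assert np.isclose(np.trace(Y), total)  # objective: tr(Y) = sum_j ||x_j||^2
```

The same identities are what make the SDP relaxation well posed: the distance data constrain Y only through the differences e_ij, which is exactly why the translation (and rotation/reflection) ambiguity of part (a) survives in Y's factorization.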
Problem 6

We omit the MATLAB code.

(a) The solution ranks are: (i) 3, (ii) 3, (iii) 3.

(b) The solution ranks are: (i) 2, (ii) 1, (iii) 1.
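Whatever solver is used, the reported ranks come from counting the eigenvalues of the optimal matrix that exceed a numerical tolerance, since a solver never returns exact zeros. A small numpy helper illustrating that count (the threshold 1e-6 is an illustrative choice, not prescribed by the assignment):

```python
import numpy as np

def psd_numerical_rank(X, tol=1e-6):
    """Count eigenvalues of a symmetric PSD matrix above tol times the largest."""
    w = np.linalg.eigvalsh(X)               # ascending eigenvalues
    return int((w > tol * max(w[-1], 1.0)).sum())

# Example: a PSD matrix built from two outer products has numerical rank 2.
u, v = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
X = np.outer(u, u) + np.outer(v, v)
assert psd_numerical_rank(X) == 2
```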