U.C. Berkeley — CS294: Beyond Worst-Case Analysis                Handout 4
Luca Trevisan                                            September 5, 2017

Summary of Lecture 4

In which we introduce semidefinite programming and apply it to Max Cut.

1  Semidefinite Programming

Recall that a matrix $M \in \mathbb{R}^{n \times n}$ is positive semidefinite (abbreviated PSD and written $M \succeq 0$) if it is symmetric and all its eigenvalues are non-negative. We will use, without proof, the following facts from linear algebra:

1. If $M \in \mathbb{R}^{n \times n}$ is a symmetric matrix, then all the eigenvalues of $M$ are real and, if we call $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ the eigenvalues of $M$ with repetition, we have
$$ M = \sum_i \lambda_i v^{(i)} (v^{(i)})^T $$
where the $v^{(i)}$ are orthonormal eigenvectors of the $\lambda_i$.

2. The smallest eigenvalue of $M$ has the characterization
$$ \lambda_1 = \min_{y \neq 0} \frac{y^T M y}{\|y\|^2} $$
and the optimization problem on the right-hand side is solvable up to arbitrarily good accuracy.

From part (1) above we have that $M$ is PSD if and only if for every vector $y$ we have $y^T M y \geq 0$. We will also use the following alternative characterization of PSD matrices.

Lemma 1 A matrix $M \in \mathbb{R}^{n \times n}$ is PSD if and only if there is a collection of vectors $x^{(1)}, \ldots, x^{(n)}$ such that, for every $i,j$, we have $M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle$.

Proof: Suppose that $M$ and $x^{(1)}, \ldots, x^{(n)}$ are such that $M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle$ for all $i$ and $j$. Then $M$ is PSD because, for every vector $y$, we have
$$ y^T M y = \sum_{i,j} y_i y_j M_{i,j} = \sum_{i,j} y_i y_j \langle x^{(i)}, x^{(j)} \rangle = \left\| \sum_i y_i x^{(i)} \right\|^2 \geq 0 $$
Conversely, if $M$ is PSD and we write it as
$$ M = \sum_k \lambda_k v^{(k)} (v^{(k)})^T $$
we have
$$ M_{i,j} = \sum_k \lambda_k v^{(k)}_i v^{(k)}_j $$
and we see that we can define $n$ vectors $x^{(1)}, \ldots, x^{(n)}$ by setting
$$ x^{(i)}_k := \sqrt{\lambda_k} \cdot v^{(k)}_i $$
and we do have the property that
$$ M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle $$
□

With these characterizations in mind, we define a semidefinite program as an optimization program in which we have $n^2$ real variables $X_{i,j}$, with $1 \leq i,j \leq n$, and we want to maximize, or minimize, a linear function of the variables such that linear constraints over the variables are satisfied (so far this is the same as a linear program), subject to the additional constraint that the matrix $X$ is PSD. Thus, a typical semidefinite program (SDP) looks like
$$ \begin{array}{ll} \max & \sum_{i,j} C_{i,j} X_{i,j} \\ \text{subject to} & \sum_{i,j} A^{(1)}_{i,j} X_{i,j} \leq b_1 \\ & \vdots \\ & \sum_{i,j} A^{(m)}_{i,j} X_{i,j} \leq b_m \\ & X \succeq 0 \end{array} $$
where the matrices $C, A^{(1)}, \ldots, A^{(m)}$ and the scalars $b_1, \ldots, b_m$ are given, and the entries of $X$ are the variables over which we are optimizing.

If $A$ and $B$ are two matrices such that $A \succeq 0$ and $B \succeq 0$, and if $a \geq 0$ is a scalar, then it is easy to see that $aA \succeq 0$ and $A + B \succeq 0$, by using the characterization that $M \succeq 0$ iff $y^T M y \geq 0$ for every $y$. This means that the set of PSD matrices is a convex subset of $\mathbb{R}^{n \times n}$, and that the above optimization problem is a convex problem.
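The "only if" direction of Lemma 1 is constructive, and easy to check numerically. The following NumPy sketch (the helper name `gram_vectors` and the tolerance are my own choices, not part of the notes) builds the vectors $x^{(i)}$ from the eigendecomposition exactly as in the proof:

```python
import numpy as np

def gram_vectors(M, tol=1e-9):
    """Given a PSD matrix M, return vectors x^(1), ..., x^(n) (as rows of X)
    with M[i, j] = <x^(i), x^(j)>, via the eigendecomposition
    M = sum_k lambda_k v^(k) (v^(k))^T and x^(i)_k = sqrt(lambda_k) * v^(k)_i."""
    lam, V = np.linalg.eigh(M)        # columns of V are orthonormal eigenvectors
    if lam.min() < -tol:
        raise ValueError("M is not PSD")
    lam = np.clip(lam, 0.0, None)     # zero out tiny negative roundoff
    # Row i of X is x^(i); its k-th entry is sqrt(lambda_k) * v^(k)_i.
    X = V * np.sqrt(lam)              # scales column k of V by sqrt(lambda_k)
    return X

# Sanity check on a random PSD matrix M = B B^T.
B = np.random.default_rng(0).standard_normal((4, 4))
M = B @ B.T
X = gram_vectors(M)
assert np.allclose(X @ X.T, M)       # M[i, j] == <x^(i), x^(j)> for all i, j
```

Since `X @ X.T` recovers $M$, the rows of `X` are exactly the Gram vectors promised by the lemma.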
Using the ellipsoid algorithm, one can solve in polynomial time (up to arbitrarily good accuracy) any optimization problem in which one wants to optimize a linear function over a convex feasible region, provided that one has a separation oracle for the feasible region: that is, an algorithm that, given a point, checks whether it is feasible and, if not, constructs an inequality that is satisfied by all feasible points but not satisfied by the given point.

In order to construct a separation oracle for an SDP, it is enough to solve the following problem: given a matrix $M$, decide if it is PSD or not and, if not, construct an inequality that is satisfied by the entries of all PSD matrices but that is not satisfied by $M$. In order to do so, recall that the smallest eigenvalue of $M$ is
$$ \min_{y \neq 0} \frac{y^T M y}{\|y\|^2} $$
and that the above minimization problem is solvable in polynomial time (up to arbitrarily good accuracy). If the above optimization problem has a non-negative optimum, then $M$ is PSD. If it has a negative optimum, achieved at a vector $y$, then the matrix is not PSD, and the inequality
$$ \sum_{i,j} X_{i,j} y_i y_j \geq 0 $$
is satisfied by all PSD matrices $X$ but fails for $X := M$. Thus we have a separation oracle, and we can solve SDPs in polynomial time up to arbitrarily good accuracy.

In light of our characterization of PSD matrices, SDPs have the following equivalent formulation:
$$ \begin{array}{ll} \max & \sum_{i,j} C_{i,j} \langle x^{(i)}, x^{(j)} \rangle \\ \text{subject to} & \sum_{i,j} A^{(1)}_{i,j} \langle x^{(i)}, x^{(j)} \rangle \leq b_1 \\ & \vdots \\ & \sum_{i,j} A^{(m)}_{i,j} \langle x^{(i)}, x^{(j)} \rangle \leq b_m \end{array} $$
where our variables are vectors $x^{(1)}, \ldots, x^{(n)}$.

2  SDP Relaxation of Max Cut and Random Hyperplane Rounding

The Max Cut problem in a given graph $G = (V, E)$ has the following equivalent characterization as a quadratic optimization problem over real variables $x_1, \ldots, x_n$, where $V = \{1, \ldots, n\}$:
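The eigenvalue-based oracle described above can be sketched as follows. This is a simplified illustration that uses an exact eigendecomposition in place of the approximate polynomial-time eigenvalue computation; the function name and tolerance are my own:

```python
import numpy as np

def separation_oracle(M, tol=1e-9):
    """Separation oracle for the PSD cone.
    Returns (True, None) if M is PSD (up to tolerance); otherwise returns
    (False, y), where y witnesses that the inequality
    sum_{i,j} X[i,j] * y[i] * y[j] >= 0 (valid for all PSD X) fails at X = M."""
    lam, V = np.linalg.eigh(M)   # for symmetric M: eigenvalues in ascending order
    if lam[0] >= -tol:
        return True, None
    y = V[:, 0]                  # eigenvector of the smallest (negative) eigenvalue
    return False, y

# A symmetric matrix that is not PSD: its eigenvalues are 1 and -3.
M = np.array([[-1.0, 2.0],
              [2.0, -1.0]])
ok, y = separation_oracle(M)
assert not ok
assert y @ M @ y < 0             # the separating inequality indeed fails at X = M
```

The minimizer of the Rayleigh quotient $y^T M y / \|y\|^2$ is the bottom eigenvector, which is why returning it gives a violated inequality whenever the smallest eigenvalue is negative.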
$$ \begin{array}{ll} \max & \sum_{(i,j) \in E} \frac14 (x_i - x_j)^2 \\ \text{subject to} & x_i^2 = 1 \quad \forall i \in V \end{array} $$
Any quadratic optimization problem has a natural relaxation to an SDP, in which we relax the real variables to take vector values and we change multiplication to inner product:
$$ \begin{array}{ll} \max & \sum_{(i,j) \in E} \frac14 \| x^{(i)} - x^{(j)} \|^2 \\ \text{subject to} & \| x^{(i)} \|^2 = 1 \quad \forall i \in V \end{array} $$
Solving the above SDP, which is doable in polynomial time up to arbitrarily good accuracy, gives us a unit vector $x^{(i)}$ for each vertex $i$. A simple way to convert this collection of vectors to a cut $(S, V - S)$ is to take a random hyperplane through the origin, and then define $S$ to be the set of vertices $i$ such that $x^{(i)}$ is above the hyperplane. Equivalently, we pick a random vector $g$ according to a rotation-invariant distribution, for example a Gaussian distribution, and let $S$ be the set of vertices $i$ such that $\langle g, x^{(i)} \rangle \geq 0$.

Let $(i,j)$ be an edge: one sees that, if $\theta_{ij}$ is the angle between $x^{(i)}$ and $x^{(j)}$, then
$$ \mathbb{P}[\, (i,j) \text{ is cut} \,] = \frac{\theta_{ij}}{\pi} $$
and the contribution of $(i,j)$ to the cost function is
$$ \frac14 \| x^{(i)} - x^{(j)} \|^2 = \frac12 - \frac12 \langle x^{(i)}, x^{(j)} \rangle = \frac12 - \frac12 \cos \theta_{ij} $$
Some calculus shows that for every $0 \leq \theta \leq \pi$ we have
$$ \frac{\theta}{\pi} > .878 \cdot \left( \frac12 - \frac12 \cos \theta \right) $$
and so
$$ \mathbb{E}[\, \text{number of edges cut by } (S, V - S) \,] \geq .878 \cdot \sum_{(i,j) \in E} \frac14 \| x^{(i)} - x^{(j)} \|^2 = .878 \cdot SDPMaxCut(G) \geq .878 \cdot MaxCut(G) $$
so we have a polynomial-time approximation algorithm with worst-case approximation guarantee $.878$.

Next time, we will see how the SDP relaxation behaves on random graphs, but first let us see how it behaves on a large class of graphs.
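Both the rounding step and the constant $.878$ can be checked numerically. The sketch below (helper names are my own, not from the notes) implements random hyperplane rounding for a given collection of unit vectors, and estimates the worst-case ratio between $\theta/\pi$ and $\frac12 - \frac12 \cos\theta$ on a grid, which is where the Goemans–Williamson constant $\approx 0.87856$ comes from:

```python
import numpy as np

def random_hyperplane_round(X, rng):
    """Round unit vectors (rows of X, one per vertex) to a cut:
    pick a Gaussian vector g and put vertex i in S iff <g, x^(i)> >= 0."""
    g = rng.standard_normal(X.shape[1])
    return X @ g >= 0                    # boolean membership vector for S

# Two antipodal unit vectors land on opposite sides of the hyperplane
# (almost surely), so the edge between them is always cut.
rng = np.random.default_rng(1)
S = random_hyperplane_round(np.array([[1.0, 0.0], [-1.0, 0.0]]), rng)
assert S[0] != S[1]

# Estimate min over theta in (0, pi] of (theta/pi) / (1/2 - cos(theta)/2).
thetas = np.linspace(1e-6, np.pi, 200000)
ratios = (thetas / np.pi) / (0.5 - 0.5 * np.cos(thetas))
alpha = ratios.min()
assert 0.878 < alpha < 0.879             # the Goemans-Williamson constant
```

The ratio tends to infinity as $\theta \to 0$ and equals $1$ at $\theta = \pi$; the minimum is attained at an intermediate angle, giving the $.878$ guarantee quoted above.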
3  Max Cut in Bounded-Degree Triangle-Free Graphs

Theorem 2 If $G = (V, E)$ is a triangle-free graph in which every vertex has degree at most $d$, then
$$ MaxCut(G) \geq \left( \frac12 + \Omega\left( \frac{1}{\sqrt d} \right) \right) \cdot |E| $$

Proof: Consider the following feasible solution for the SDP: we associate to each node $i$ an $n$-dimensional vector $x^{(i)}$ such that $x^{(i)}_i = \frac{1}{\sqrt 2}$, $x^{(i)}_j = -\frac{1}{\sqrt{2 \deg(i)}}$ if $(i,j) \in E$, and $x^{(i)}_j = 0$ otherwise. We immediately see that $\| x^{(i)} \|^2 = 1$ for every $i$, and so the solution is feasible.

Let us transform this SDP solution into a cut $(S, V - S)$ using a random hyperplane. We see that, for every edge $(i,j)$, we have
$$ \langle x^{(i)}, x^{(j)} \rangle = -\frac{1}{2\sqrt{\deg(i)}} - \frac{1}{2\sqrt{\deg(j)}} \leq -\frac{1}{\sqrt d} $$
(the only nonzero terms of the inner product are the ones indexed by $i$ and by $j$: since $G$ is triangle-free, no coordinate $k$ corresponds to a common neighbor of $i$ and $j$.)

The probability that $(i,j)$ is cut by $(S, V - S)$ is at least
$$ \frac1\pi \arccos\left( -\frac{1}{\sqrt d} \right) $$
and
$$ \arccos\left( -\frac{1}{\sqrt d} \right) = \frac\pi2 + \arcsin\left( \frac{1}{\sqrt d} \right) \geq \frac\pi2 + \frac{1}{\sqrt d} $$
so that the expected number of cut edges is at least
$$ \left( \frac12 + \Omega\left( \frac{1}{\sqrt d} \right) \right) \cdot |E| $$
□
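To make the construction concrete, the sketch below (the function name is my own) builds these vectors for the 5-cycle, a triangle-free graph with $d = 2$, and verifies both feasibility and the inner-product bound from the proof:

```python
import numpy as np

def trianglefree_sdp_solution(n, edges):
    """Build the explicit SDP solution from the proof: for each vertex i,
    x^(i)_i = 1/sqrt(2), x^(i)_j = -1/sqrt(2*deg(i)) for each neighbor j,
    and all other coordinates are 0. Row i of the returned matrix is x^(i)."""
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    X = np.zeros((n, n))
    for i in range(n):
        X[i, i] = 1 / np.sqrt(2)
        for j in adj[i]:
            X[i, j] = -1 / np.sqrt(2 * len(adj[i]))
    return X

# The 5-cycle: triangle-free, every vertex has degree d = 2.
n, d = 5, 2
edges = [(i, (i + 1) % n) for i in range(n)]
X = trianglefree_sdp_solution(n, edges)

# Feasibility: every x^(i) is a unit vector.
assert np.allclose((X ** 2).sum(axis=1), 1.0)
# For every edge (i, j): <x^(i), x^(j)> <= -1/sqrt(d).
for i, j in edges:
    assert X[i] @ X[j] <= -1 / np.sqrt(d) + 1e-12
```

On the 5-cycle every inner product along an edge equals $-1/\sqrt 2$ exactly, since both degrees are $2$ and triangle-freeness kills all cross terms, matching the bound $\leq -1/\sqrt d$ with equality.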