Lecture 10. Semidefinite Programs and the Max-Cut Problem

Lecturer: Ryan O'Donnell    Scribe: Franklin Ta

In this class we will finally introduce the content from the second half of the course title: semidefinite programs. We will first motivate the discussion of SDPs with the Max-Cut problem, then formally define SDPs and introduce ways to solve them.

10.1 Max Cut

Let $G = (V, E)$ be a graph with edge weights $w_e > 0$ for all $e \in E$, normalized so that $\sum_{e \in E} w_e = 1$. Our goal is to maximize $\sum_{e \in \partial(A)} w_e$ over subsets $A \subseteq V$, or equivalently to maximize $\sum_{(u,v) \in E} w_{uv} \, 1[A(u) \neq A(v)]$ over functions $A : V \to \{0, 1\}$.

Remark 10.1. A couple of quick observations:

- $\mathrm{Opt} = 1 \iff G$ is bipartite.
- $\mathrm{Opt} \geq \frac{1}{2}$ (proof below).
- If $G$ is complete, $\mathrm{Opt} = \frac{1}{2} + \frac{1}{2(n-1)}$.
- Max-Cut is NP-hard.

Proof (of the second observation, algorithmic). Pick $A : V \to \{0, 1\}$ uniformly at random. Then

$$\mathbb{E}[\text{cut val}] = \mathbb{E}\Big[\sum_{(u,v) \in E} w_{uv}\, 1[A(u) \neq A(v)]\Big] = \sum_{(u,v) \in E} w_{uv} \Pr[A(u) \neq A(v)] = \sum_{(u,v) \in E} w_{uv} \cdot \frac{1}{2} = \frac{1}{2}.$$
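As a quick numerical sanity check of this expectation argument, here is a minimal Python sketch (the 5-cycle instance and the helper name `cut_value` are illustrative choices, not from the lecture) that averages the cut value over all $2^n$ assignments, recovering exactly $\frac{1}{2}$:

```python
import itertools

# Hypothetical example instance: the 5-cycle with uniform edge
# weights summing to 1. Any weighted graph behaves the same way.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
w = {e: 1 / len(edges) for e in edges}

def cut_value(assignment):
    # Total weight of edges whose endpoints land on different sides.
    return sum(w[(u, v)] for (u, v) in edges if assignment[u] != assignment[v])

# Averaging over all 2^n assignments gives the exact expectation
# under a uniformly random A : V -> {0, 1}.
n = 5
total = sum(cut_value(a) for a in itertools.product([0, 1], repeat=n))
print(total / 2**n)  # prints 0.5, matching E[cut val] = 1/2
```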

Integer programming formulation. Now let's look at the IP formulation of Max-Cut. For variables $\{x_v\}_{v \in V}, \{z_e\}_{e \in E} \in \{0, 1\}$:

$$\max \sum_{(u,v) \in E} w_{uv} z_{uv} \quad \text{s.t.} \quad z_{uv} \leq x_u + x_v, \qquad z_{uv} \leq 2 - (x_u + x_v),$$

where $x_v$ encodes which side of the partition vertex $v$ is on, and $z_{uv}$ encodes whether edge $(u,v)$ is cut. To see why this works, suppose $x_u \neq x_v$; then both constraints read $z_{uv} \leq 1$, so the maximum sets $z_{uv} = 1$. Otherwise $x_u = x_v$, and the two constraints read $z_{uv} \leq 2$ and $z_{uv} \leq 0$ (in one order or the other), so $z_{uv} = 0$.

Linear programming relaxation. To get the LP relaxation, just let $x_v, z_e \in [0, 1]$. Unfortunately, this relaxation isn't very good: set $x_v = \frac{1}{2}$ for all $v$; then $z_e = 1$ is feasible for all $e$, which makes $\mathrm{LPOpt} = 1$ for every graph $G$.
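To see this degeneracy concretely, here is a small sketch (assuming `scipy` is available; the instance $K_3$ with weights $\frac{1}{3}$ is just an example) that solves the LP relaxation and finds value 1, even though $\mathrm{Opt}(K_3) = \frac{2}{3}$:

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation on the triangle K_3 with edge weights 1/3.
# Variable order: x_0, x_1, x_2, z_01, z_02, z_12, all in [0, 1].
edges = [(0, 1), (0, 2), (1, 2)]
n, m = 3, len(edges)
c = np.concatenate([np.zeros(n), -np.ones(m) / m])  # minimize -sum_e w_e z_e

A_ub, b_ub = [], []
for k, (u, v) in enumerate(edges):
    row = np.zeros(n + m); row[n + k] = 1; row[u] = -1; row[v] = -1
    A_ub.append(row); b_ub.append(0)        # z_uv <= x_u + x_v
    row = np.zeros(n + m); row[n + k] = 1; row[u] = 1; row[v] = 1
    A_ub.append(row); b_ub.append(2)        # z_uv <= 2 - (x_u + x_v)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n + m))
print(-res.fun)  # 1.0: the LP is fooled by x_v = 1/2, z_e = 1
```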

Another formulation. Seeing that didn't work, let's try another formulation. For $y_v \in \mathbb{R}$:

$$\max \sum_{(u,v) \in E} w_{uv} \Big(\frac{1}{2} - \frac{1}{2} y_u y_v\Big) \quad \text{s.t.} \quad y_v \cdot y_v = 1 \;\; \forall v \in V.$$

Here we changed our indicator $y_v$ to use $-1$ or $1$ to encode which side of the partition we are on. Note that if $y_u = y_v$ then $\frac{1}{2} - \frac{1}{2} y_u y_v = 0$, and if $y_u \neq y_v$ then $\frac{1}{2} - \frac{1}{2} y_u y_v = 1$.

SDP relaxation [Delorme-Poljak '90]. The previous formulation is still exactly Max-Cut (which is NP-hard), so we won't be able to solve it. To relax it, we allow $y_v \in \mathbb{R}^d$ (this is "the SDP"):

$$\max \sum_{(u,v) \in E} w_{uv} \Big(\frac{1}{2} - \frac{1}{2}\, y_u \cdot y_v\Big) \quad \text{s.t.} \quad y_v \cdot y_v = 1 \;\; \forall v \in V.$$

To visualize what this SDP is doing, note that since $\|y_v\| = 1$, we can embed $\{y_v\}_{v \in V}$ on the unit sphere.

[Figure 10.1: Vectors $y_v$ embedded on the unit sphere in $\mathbb{R}^d$.]

Denote $\sigma_{uv} = y_u \cdot y_v = \cos(\angle(y_u, y_v))$. For $(u,v) \in E$ we want $\sigma_{uv}$ as close to $-1$ as possible, since this maximizes the sum; translating to the figure above, we want the vectors pointing away from each other as much as possible. Note that if $d = 1$, the formulation is exactly Max-Cut, so $\mathrm{SDPOpt} \geq \mathrm{Opt}$. Also note that $d \leq n$ without loss of generality.

Example 10.2. Let's see some examples of how this SDP compares with Opt. Let $G$ be $K_3$ (with edge weights $\frac{1}{3}$). We can embed $y_1, y_2, y_3$ at 120 degrees apart on the unit circle in $\mathbb{R}^2$, so we get

$$\mathrm{SDPOpt}(G) \geq \sum_{i=1}^{3} \frac{1}{3} \Big(\frac{1}{2} - \frac{1}{2} \cos\frac{2\pi}{3}\Big) = \frac{3}{4}.$$

This bound can be shown to be tight, so $\mathrm{SDPOpt}(G) = \frac{3}{4}$. It can also be shown that $\mathrm{Opt}(G) = \frac{2}{3}$ and $\mathrm{LPOpt}(G) = 1$. Thus $\mathrm{Opt}(G) = \frac{8}{9}\, \mathrm{SDPOpt}(G)$.

Another example is $G = C_5$, where $\mathrm{Opt}(G) \approx 0.88 \cdot \mathrm{SDPOpt}(G)$.

Remark 10.3. The ellipsoid algorithm can (weakly) solve SDPs in polynomial time. So assume we can find optimal $\{y_v\}$.
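A few lines of numpy (a sketch; the explicit 120-degree embedding is the one from Example 10.2) confirm the value $\frac{3}{4}$ for $K_3$:

```python
import numpy as np

# Three unit vectors at 120 degrees achieve SDP value 3/4 on K_3
# (edge weights 1/3 each), as in Example 10.2.
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
y = [np.array([np.cos(t), np.sin(t)]) for t in angles]
edges = [(0, 1), (0, 2), (1, 2)]
val = sum((1 / 3) * (1 / 2 - y[u] @ y[v] / 2) for u, v in edges)
print(val)  # 0.75 = 3/4, versus Opt(K_3) = 2/3
```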

Randomized rounding [Goemans-Williamson '94]. At this point we know we can relax Max-Cut to an SDP, and we know we can solve SDPs, but we still need to somehow convert the SDP solution back into a solution for Max-Cut. We can do this using Goemans-Williamson randomized rounding: we cut the vectors $\{y_v\}_{v \in V}$ with a random hyperplane through the origin, where everything on one side of the hyperplane goes in one part and everything on the other side goes in the other part. We do this by choosing $g \in \mathbb{R}^d \setminus \{0\}$ (where $g$ is the normal to the hyperplane) from any rotationally symmetric distribution, then setting $\hat{y}_u = \mathrm{sgn}(g \cdot y_u) \in \{-1, 1\}$.

[Figure 10.2: Vectors being separated by a hyperplane with normal $g$.]

Analysis of the rounding. Fix $(u,v) \in E$. The probability that edge $(u,v)$ gets cut is the probability that the hyperplane separates $y_u$ and $y_v$. So consider just the 2-D plane containing $y_u$ and $y_v$. Since the hyperplane was chosen from a rotationally symmetric distribution, the probability that it cuts these two vectors is the probability that a random diameter of the circle lies within the angle $\theta$ between them.

[Figure 10.3: The plane of two vectors being cut by the hyperplane.]

Thus:

$$\Pr[(u,v) \text{ gets cut}] = \frac{\theta}{\pi} = \frac{\cos^{-1}(y_u \cdot y_v)}{\pi} = \frac{\cos^{-1}(\sigma_{uv})}{\pi}, \qquad \mathbb{E}[\text{cut val}] = \sum_{(u,v) \in E} w_{uv}\, \frac{\cos^{-1}(\sigma_{uv})}{\pi}.$$
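Here is a sketch of the rounding step itself, assuming numpy and using the $K_3$ embedding from Example 10.2 as a stand-in for an actual SDP solution (the function name `gw_round` is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def gw_round(Y, edges, w):
    """One round of Goemans-Williamson hyperplane rounding.

    Y: d x n matrix whose columns are the unit vectors y_v from the SDP.
    Returns the weight of the cut induced by a random hyperplane.
    """
    d = Y.shape[0]
    g = rng.standard_normal(d)          # rotationally symmetric normal
    signs = np.sign(g @ Y)              # side of the hyperplane per vertex
    return sum(w[e] for e in edges if signs[e[0]] != signs[e[1]])

# Hypothetical usage on the K_3 embedding from Example 10.2:
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
Y = np.array([[np.cos(t) for t in angles], [np.sin(t) for t in angles]])
edges = [(0, 1), (0, 2), (1, 2)]
w = {e: 1 / 3 for e in edges}
cuts = [gw_round(Y, edges, w) for _ in range(100_000)]
print(np.mean(cuts))  # about 2/3 here; in expectation always
                      # >= 0.87856 * SDPOpt by Theorem 10.4 below
```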

Now recall that $\mathrm{SDPOpt} = \sum_{(u,v) \in E} w_{uv} \big(\frac{1}{2} - \frac{1}{2} \sigma_{uv}\big) \geq \mathrm{Opt}$. So if we find $\alpha$ such that

$$\frac{\cos^{-1}(\sigma)}{\pi} \geq \alpha \Big(\frac{1}{2} - \frac{1}{2} \sigma\Big) \quad \text{for all } \sigma \in [-1, 1],$$

then we can conclude $\mathbb{E}[\text{cut val}] \geq \alpha \cdot \mathrm{SDPOpt} \geq \alpha \cdot \mathrm{Opt}$. Plotting $\alpha \big(\frac{1}{2} - \frac{1}{2}\sigma\big)$ against $\frac{\cos^{-1}(\sigma)}{\pi}$ over $\sigma \in [-1, 1]$, we see that $\alpha = 0.87856$ works.

[Figure 10.4: $\alpha \big(\frac{1}{2} - \frac{1}{2}\sigma\big)$ vs. $\frac{\cos^{-1}(\sigma)}{\pi}$ over $\sigma \in [-1, 1]$.]

Theorem 10.4. $\mathbb{E}[\text{Goemans-Williamson cut}] \geq 0.87856 \cdot \mathrm{SDPOpt} \geq 0.87856 \cdot \mathrm{Opt}$.

Note that this gives a polynomial-time 0.878-factor approximation algorithm for Max-Cut.
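The constant can be reproduced numerically; this short sketch (assuming numpy) minimizes the ratio $\frac{\cos^{-1}(\sigma)/\pi}{(1 - \sigma)/2}$ over a fine grid:

```python
import numpy as np

# The best alpha is the minimum over sigma in [-1, 1) of
# (arccos(sigma)/pi) / ((1 - sigma)/2); the endpoint sigma = 1
# is excluded to avoid 0/0 (the ratio blows up there anyway).
sigma = np.linspace(-1, 1, 200_001, endpoint=False)
ratio = (np.arccos(sigma) / np.pi) / ((1 - sigma) / 2)
print(ratio.min())  # ~0.878567, attained near sigma ~ -0.689
```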

10.2 Semidefinite Programming

10.2.1 Positive Semidefinite Matrices

Theorem 10.5. For a symmetric matrix $S \in \mathbb{R}^{n \times n}$, the following are equivalent, and each characterizes $S$ being positive semidefinite (PSD):

1. $S = Y^\top Y$ for some $Y \in \mathbb{R}^{d \times n}$, $d \leq n$.
2. All eigenvalues of $S$ are $\geq 0$.
3. $x^\top S x = \sum_{i,j} x_i x_j s_{ij} \geq 0$ for all $x \in \mathbb{R}^n$.
4. $S = L D L^\top$ where $D$ is diagonal with $D \geq 0$ and $L$ is unit lower-triangular (i.e., lower-triangular with 1's on the diagonal).
5. There exist jointly distributed real random variables $Z_1, \ldots, Z_n$ such that $\mathbb{E}[Z_i Z_j] = s_{ij}$.
6. $S$ lies in the convex hull of $\{x x^\top : x \in \mathbb{R}^n\}$.

Remark 10.6. Items 3 and 6 of Theorem 10.5 imply that $\{S \in \mathbb{R}^{n \times n} : S \text{ PSD}\}$ is convex.

Remark 10.7. Recall the SDP for Max-Cut:

$$\max \sum_{(u,v) \in E} w_{uv} \Big(\frac{1}{2} - \frac{1}{2} \sigma_{uv}\Big) \quad \text{s.t.} \quad \sigma_{uv} = y_u \cdot y_v, \quad \sigma_{vv} = 1 \;\; \forall v \in V.$$

Thus if $Y$ is the matrix whose columns are $y_{v_1}, y_{v_2}, \ldots, y_{v_n}$, then $(\sigma_{uv})_{u,v \in V} = Y^\top Y$, so the matrix $S = (\sigma_{uv})$ is PSD by Theorem 10.5(1).

Definition 10.8. A semidefinite program is an LP over $n^2$ variables $\sigma_{ij}$ with the extra constraint that $S := (\sigma_{ij})$ is PSD. (This is really $\frac{n(n+1)}{2}$ variables, since $S$ is symmetric.)

Theorem 10.9. (Omitting some technical conditions.) SDPs can be solved in polynomial time.
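The equivalences in Theorem 10.5 are easy to probe numerically; a small sketch (assuming numpy) builds a Gram matrix as in item 1 and checks items 2 and 3 up to floating-point roundoff:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((2, 4))
S = Y.T @ Y                                   # PSD by construction (item 1)
print(np.linalg.eigvalsh(S).min() >= -1e-9)   # item 2: True up to roundoff

x = rng.standard_normal(4)
print(x @ S @ x >= 0)                         # item 3: quadratic form, True
```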

10.2.2 Strong Separation Oracle for PSDness

Given a symmetric matrix $S \in \mathbb{Q}^{n \times n}$, we want to either assert that $S$ is PSD or find $x \in \mathbb{Q}^n$ such that $x^\top S x < 0$.

Idea: compute the smallest eigenvalue of $S$. If it is $\geq 0$ then $S$ is PSD; otherwise the corresponding eigenvector $z$ witnesses $z^\top S z < 0$. (Exact eigenvalue computation is problematic over $\mathbb{Q}$, so instead we use symmetric Gaussian elimination.)

Theorem 10.10. There is a strongly polynomial-time algorithm that, if $S$ is PSD, returns $S = L D L^\top$, and if $S$ is not PSD, returns $x$ such that $x^\top S x < 0$ (with $L, D, x$ over $\mathbb{Q}$).

Bonus: since we can compute $\sqrt{D}$ to any accuracy (square-root each diagonal entry of $D$), we can write $S = Y^\top Y$ with $Y = \sqrt{D}\, L^\top$, whose columns are the vectors $y_v$.

Proof. Do symmetric Gaussian elimination on $S$. Clear out the first column with $L_1$ (which adds multiples of the first row to the other rows); since $S$ is symmetric, right-multiplying by $L_1^\top$ likewise clears out the first row:

$$L_1 S L_1^\top = \begin{pmatrix} s_{11} & 0 \\ 0 & \ast \end{pmatrix}.$$

Then clear out the second row and column with $L_2$ and $L_2^\top$, and so on:

$$\underbrace{L_n \cdots L_2 L_1}_{=: L}\, S\, \underbrace{L_1^\top L_2^\top \cdots L_n^\top}_{= L^\top} = D, \qquad \text{i.e.,} \quad L S L^\top = D, \quad S = L^{-1} D L^{-\top},$$

where $L^{-1}$ is unit lower-triangular, as in Theorem 10.5(4). This will run to completion unless we hit one of the following cases:

If we just produced a negative pivot $a < 0$ on the diagonal, stop and output "not PSD": letting $\tilde{L}$ denote the product of elimination matrices applied so far, position $(i,i)$ of $\tilde{L} S \tilde{L}^\top$ is $a$, so

$$e_i^\top \tilde{L} S \tilde{L}^\top e_i = a < 0,$$

and we output $x = \tilde{L}^\top e_i$.

If a pivot is zero but its row/column is not zero, stop and output "not PSD": the current matrix contains a $2 \times 2$ principal submatrix of the form $\begin{pmatrix} 0 & b \\ b & c \end{pmatrix}$ with $b \neq 0$. Testing the vector $(c, -b)$ on this block gives

$$\begin{pmatrix} c & -b \end{pmatrix} \begin{pmatrix} 0 & b \\ b & c \end{pmatrix} \begin{pmatrix} c \\ -b \end{pmatrix} = -c b^2,$$

which is negative when $c > 0$; and testing $(1, -b)$ gives $(c - 2) b^2$, which is negative when $c \leq 0$. So if $c > 0$, output $x = \tilde{L}^\top (c, -b)$ (padded with zeros in the other coordinates), else output $x = \tilde{L}^\top (1, -b)$.

In all other cases the elimination runs to completion, and we output $S = L^{-1} D L^{-\top}$.
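The proof translates directly into code. Below is a floating-point sketch of the oracle (the theorem itself works over $\mathbb{Q}$ exactly; the name `psd_oracle` and the tolerance `tol` are illustrative choices, not part of the lecture):

```python
import numpy as np

def psd_oracle(S, tol=1e-12):
    """Separation-oracle sketch following the proof of Theorem 10.10.

    Symmetric Gaussian elimination: returns ('psd', L, D) with
    S = L @ D @ L.T, or ('not psd', x) with x @ S @ x < 0.
    Floating point stands in for the exact rational arithmetic of
    the theorem, so tol is a hypothetical tolerance.
    """
    S = np.array(S, dtype=float)
    n = S.shape[0]
    M = S.copy()                      # current partially eliminated matrix
    E = np.eye(n)                     # product L_k ... L_1, so M = E S E^T
    for i in range(n):
        if M[i, i] < -tol:
            # Negative pivot: e_i^T M e_i < 0, so x = E^T e_i works.
            return 'not psd', E.T @ np.eye(n)[i]
        off = M[i, i + 1:]
        if abs(M[i, i]) <= tol:
            if np.all(np.abs(off) <= tol):
                continue              # zero pivot, zero row/col: skip it
            j = i + 1 + int(np.argmax(np.abs(off)))
            b, c = M[i, j], M[j, j]   # 2x2 block [[0, b], [b, c]]
            v = np.zeros(n)
            if c > tol:
                v[i], v[j] = c, -b    # quadratic form ~ -c b^2 < 0
            else:
                v[i], v[j] = 1, -b    # quadratic form ~ (c - 2) b^2 < 0
            return 'not psd', E.T @ v
        # Positive pivot: clear out row/column i symmetrically.
        Lk = np.eye(n)
        Lk[i + 1:, i] = -M[i + 1:, i] / M[i, i]
        M = Lk @ M @ Lk.T
        E = Lk @ E
    D = np.diag(np.diag(M))
    L = np.linalg.inv(E)              # unit lower-triangular
    return 'psd', L, D

# Hypothetical usage:
print(psd_oracle(np.array([[2., 1.], [1., 2.]]))[0])  # 'psd'
print(psd_oracle(np.array([[0., 1.], [1., 0.]]))[0])  # 'not psd'
```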