Lecture 22: Hyperplane Rounding for Max-Cut SDP


CSCI-B609: A Theorist's Toolkit, Fall 2016 — Nov 17
Lecture 22: Hyperplane Rounding for Max-Cut SDP
Lecturer: Yuan Zhou    Scribe: Adithya Vadapalli

1 Introduction

In the previous lectures we saw the max-cut problem. In this lecture we will present a 0.878-approximation algorithm. In fact, we will also see that we do not have much hope of doing much better.

2 Recap

Let us have a quick recap of semidefinite programming for the max-cut problem. Figure 1 is the quadratic integer program (QIP) for max-cut; this integer program can be relaxed to the SDP in Figure 2.

Fact 1. Since the SDP is a relaxation of the QIP, SDP ≥ QIP.

We can think of the SDP as a relaxation of the integer program because, if we add the extra constraint rank(Y) = 1, it becomes the quadratic integer program again. We know from the previous lecture that:

Theorem 1. The SDP is solvable in polynomial time using the ellipsoid method and a separation oracle.

3 Vector View

Recall the Cholesky decomposition: given Y ⪰ 0, we can write Y = LᵀDL, where L is lower triangular and D is diagonal with nonnegative entries.

Corollary 2. We can also write Y = L'ᵀL', where L' = √D·L.

In other words, Y contains the pairwise inner products of a set of vectors: writing the columns of L' as [w_1, w_2, …, w_n], we have y_uv = ⟨w_u, w_v⟩ = w_uᵀw_v. In this vector view, the SDP can be written as in Figure 3.
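The decomposition above is easy to carry out directly. Below is a minimal pure-Python sketch (the matrix Y is a hypothetical feasible point of the max-cut SDP on a triangle, my own example, not from the lecture) that factors Y = L'ᵀL' and checks that the resulting columns reproduce the entries y_uv = ⟨w_u, w_v⟩:

```python
import math

def cholesky_vectors(Y):
    """Factor a PSD matrix Y as R^T R (R upper triangular) and return the
    columns w_1, ..., w_n of R, so that Y[u][v] = <w_u, w_v>."""
    n = len(Y)
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = Y[j][j] - sum(R[k][j] ** 2 for k in range(j))
        R[j][j] = math.sqrt(max(s, 0.0))          # clamp tiny negatives from roundoff
        for i in range(j + 1, n):
            if R[j][j] > 1e-12:
                R[j][i] = (Y[j][i] - sum(R[k][j] * R[k][i] for k in range(j))) / R[j][j]
    # column u of R is the vector w_u
    return [[R[k][u] for k in range(n)] for u in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical feasible point of the max-cut SDP on a triangle:
# unit diagonal, <w_u, w_v> = -1/2 for each edge (this matrix is PSD).
Y = [[1.0, -0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]
W = cholesky_vectors(Y)
for u in range(3):
    for v in range(3):
        assert abs(dot(W[u], W[v]) - Y[u][v]) < 1e-9
```

This is exactly the step an SDP-based algorithm performs after the solver returns Y: recover unit vectors w_u whose inner products are the y_uv, then round them.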

maximize:   Σ_{(u,v)∈E} (1 − x_u x_v)/2
subject to: x_u² = 1    ∀ u ∈ V

Figure 1: Quadratic Integer Program for Max-Cut

maximize:   Σ_{(u,v)∈E} (1 − y_uv)/2
subject to: y_uv = y_vu    ∀ u, v ∈ V
            y_uu = 1       ∀ u ∈ V
            Y ⪰ 0

Figure 2: SDP for Max-Cut

maximize:   Σ_{(u,v)∈E} (1 − ⟨w_u, w_v⟩)/2
subject to: ‖w_u‖² = 1    ∀ u ∈ V

Figure 3: SDP for Max-Cut (vector form)
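To make Figure 1 concrete, here is a tiny brute-force check on a toy graph (a 5-cycle, my own example): enumerating all ±1 assignments of the QIP objective recovers the maximum cut.

```python
from itertools import product

# Brute-force solution of the quadratic integer program in Figure 1.
# Each x_u is +1 or -1, and edge (u, v) contributes (1 - x_u x_v)/2,
# i.e. 1 exactly when the edge is cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle C_5
best = max(
    sum((1 - x[u] * x[v]) // 2 for u, v in edges)
    for x in product([-1, 1], repeat=5)
)
assert best == 4  # the maximum cut of the odd cycle C_5
```

Of course, this enumeration takes 2ⁿ time; the whole point of the SDP relaxation is to avoid it.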

3.1 Hyperplane Rounding

The main idea is to use a random hyperplane through the origin to divide the vectors into two sets, corresponding to a cut. It can be summarized in the following steps.

1. Choose a uniformly random hyperplane through the origin that divides the sphere. In other words, choose a normal vector with a uniformly random direction: sample g = (g_1, …, g_n) ∼ N(0, 1)ⁿ.
2. Set x_u = sign(⟨g, w_u⟩) for all u ∈ V.

Theorem 3 ([GW95]). E[Σ_{(u,v)∈E} (1 − x_u x_v)/2] ≥ 0.878 · SDP.

Proof. By linearity of expectation,

LHS = Σ_{(u,v)∈E} E[(1 − x_u x_v)/2] = Σ_{(u,v)∈E} Pr[w_u, w_v separated by the random hyperplane].

Now, for a fixed pair u and v, let us focus on the two-dimensional plane containing w_u and w_v. By symmetry, the intersection of the random hyperplane with this plane is a uniformly random line through the origin. Thus we have

Pr[w_u and w_v are separated] = ∠(w_u, w_v)/π = arccos(⟨w_u, w_v⟩)/π.

Now let

α_GW = min_{ρ ∈ [−1,1]} (arccos(ρ)/π) / ((1 − ρ)/2).

Using this, we have

Σ_{(u,v)∈E} arccos(⟨w_u, w_v⟩)/π ≥ α_GW · Σ_{(u,v)∈E} (1 − ⟨w_u, w_v⟩)/2 = α_GW · SDP.

Numerical results show that α_GW ≥ 0.87856 > 0.878. ∎
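Both halves of this argument are easy to check numerically; here is a small sketch under assumed toy data. The first part estimates α_GW by minimizing the ratio over a fine grid of ρ; the second runs hyperplane rounding on a hypothetical SDP embedding of the 5-cycle (my own example: unit vectors spaced so that adjacent vertices are nearly antipodal).

```python
import math, random

# Numerically estimate alpha_GW = min over rho of (arccos(rho)/pi) / ((1 - rho)/2),
# scanning a fine grid over (-1, 1).
alpha_gw = min(
    (math.acos(r) / math.pi) / ((1 - r) / 2)
    for i in range(1, 200000)
    for r in [-1 + 2 * i / 200001]
)
# alpha_gw comes out around 0.8786, matching the 0.87856 figure above.

# Hyperplane rounding on a toy instance: the 5-cycle, with a hypothetical
# SDP embedding that places w_u at angle 4*pi*u/5 on the unit circle, so
# adjacent vertices are nearly antipodal.
n = 5
W = [(math.cos(4 * math.pi * u / n), math.sin(4 * math.pi * u / n)) for u in range(n)]
edges = [(u, (u + 1) % n) for u in range(n)]
sdp = sum((1 - (W[u][0] * W[v][0] + W[u][1] * W[v][1])) / 2 for u, v in edges)

random.seed(0)
trials, total = 20000, 0.0
for _ in range(trials):
    g = (random.gauss(0, 1), random.gauss(0, 1))                  # random normal vector
    x = [1 if g[0] * w[0] + g[1] * w[1] >= 0 else -1 for w in W]  # x_u = sign(<g, w_u>)
    total += sum((1 - x[u] * x[v]) / 2 for u, v in edges)         # edges cut this trial
avg_cut = total / trials
```

Each edge here spans an angle of 4π/5, so the expected cut is 5 · (4π/5)/π = 4 edges, comfortably above 0.878 · SDP ≈ 3.97.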

Figure 4: Hyperplane rounding (a random hyperplane separating w_1 and w_2)

4 Recognizing Almost Bipartite Graphs

Suppose OPT ≥ (1 − ε)m, where m = |E| is the number of edges; that is, the graph becomes bipartite after removing εm edges. Recall that E[rounding] ≥ 0.878(1 − ε)m. The question we ask here is: can we do better than this?

Intuition: when ε = 0, there is a poly-time algorithm that returns a cut with m edges (bipartite graph recognition). Let alg(c) denote the best cut (as a fraction of m) found by a poly-time algorithm when OPT = c·m. The bounds above suggest that the curve of alg(c) is not continuous at c = 1. Is this really the case? It is not: GW finds a cut of size (1 − O(√ε))m given OPT = (1 − ε)m. We can state this as the following theorem.

Theorem 4. alg(1 − ε) ≥ 1 − O(√ε).

Observation: cos x = 1 − x²/2 + O(x⁴), and hence arccos(1 − y) = √(2y) + O(y^{3/2}).

Claim 1. arccos(1 − y) ≤ 4√y for all y ∈ [0, 2].

Proof.
- At y = 0: LHS = RHS = 0.
- At y ∈ (0, 1): d(LHS)/dy = 1/√(y(2 − y)) ≤ 1/√y ≤ 2/√y = d(RHS)/dy.
- At y ∈ [1, 2]: LHS ≤ arccos(−1) = π ≤ 4 ≤ 4√y = RHS. ∎
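Claim 1 and the observation are easy to sanity-check numerically; a quick sketch:

```python
import math

# Spot-check Claim 1: arccos(1 - y) <= 4*sqrt(y), on a grid over (0, 2].
for i in range(1, 2001):
    y = 2 * i / 2000
    assert math.acos(1 - y) <= 4 * math.sqrt(y)

# Spot-check the observation arccos(1 - y) = sqrt(2y) + O(y^{3/2}):
# the ratio arccos(1 - y)/sqrt(2y) should approach 1 as y -> 0.
for y in (1e-8, 1e-6, 1e-4):
    assert abs(math.acos(1 - y) / math.sqrt(2 * y) - 1) < 1e-2
```

The constant 4 in Claim 1 is loose (near y = 0 the true ratio is √2), but any fixed constant suffices for the O(√ε) bound below.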

Corollary 5. arccos(y − 1)/π = 1 − arccos(1 − y)/π ≥ 1 − (4/π)√y.

Proof of Theorem 4. For each edge (u, v) ∈ E, write ⟨w_u, w_v⟩ = ε_uv − 1 with ε_uv ∈ [0, 2]. Then

E[rounding] = Σ_{(u,v)∈E} arccos(⟨w_u, w_v⟩)/π
            = Σ_{(u,v)∈E} arccos(ε_uv − 1)/π
            ≥ Σ_{(u,v)∈E} (1 − (4/π)√ε_uv)        (by Corollary 5)
            = m − (4/π) Σ_{(u,v)∈E} √ε_uv.        (∗)

Using Jensen's inequality (√· is concave), we have

Σ_{(u,v)∈E} √ε_uv ≤ m · √((1/m) Σ_{(u,v)∈E} ε_uv).

Moreover,

Σ_{(u,v)∈E} ε_uv/2 = m − Σ_{(u,v)∈E} (1 − ⟨w_u, w_v⟩)/2 = m − OPT_SDP ≤ m − OPT_QIP ≤ εm,

so Σ ε_uv ≤ 2εm. Therefore

(∗) ≥ m − (4/π) · m · √(2ε) = (1 − O(√ε))m. ∎

On the other hand, assuming the Unique Games Conjecture, this is essentially the best possible.

Theorem 6. Assuming the Unique Games Conjecture, for all ε, δ > 0 there is no poly-time algorithm that (α_GW + δ)-approximates max-cut, or that (1 − ε, 1 − o(√ε))-approximates max-cut (i.e., finds a cut of size (1 − o(√ε))m whenever OPT ≥ (1 − ε)m).

5 Constraint Satisfaction Problems

In this section we talk about constraint satisfaction problems (CSPs).

Domain: Ω = {1, 2, …, q}
Predicates: π : Ωᵏ → {0, 1}
Input: n variables x_1, …, x_n, and m constraints of the form ((x_{i_1}, …, x_{i_k}), π)
Goal: find an assignment σ : {x_1, …, x_n} → Ω that maximizes the number of satisfied constraints. A constraint is satisfied if and only if π(σ(x_{i_1}), …, σ(x_{i_k})) = 1.
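To make the definitions concrete, here is a tiny brute-force Max-CSP solver (my own illustration, exponential in n): max-cut is the special case q = k = 2 with the "not equal" predicate.

```python
from itertools import product

# Toy brute-force Max-CSP solver, just to make the definitions concrete.
# A constraint is ((i_1, ..., i_k), predicate).
def max_csp(n, domain, constraints):
    best = 0
    for sigma in product(domain, repeat=n):       # all q^n assignments
        sat = sum(1 for idx, pred in constraints
                  if pred(*(sigma[i] for i in idx)) == 1)
        best = max(best, sat)
    return best

# Max-Cut as a binary CSP: q = 2, k = 2, predicate "not equal".
ne = lambda a, b: 1 if a != b else 0
five_cycle = [((u, (u + 1) % 5), ne) for u in range(5)]
assert max_csp(5, (1, 2), five_cycle) == 4   # max cut of C_5
```

Other familiar problems fit the same template: Max-2-SAT uses the predicate "at least one literal true", and Max-3-Lin uses "a + b + c = r (mod q)".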

maximize:   Σ_{((x_i, x_j), π)} Σ_{a,b ∈ Ω : π(a,b)=1} ⟨w_{i,a}, w_{j,b}⟩
subject to: ⟨w_{i,a}, w_{i,b}⟩ = 0    ∀ i ∈ [n], a, b ∈ Ω, a ≠ b
            ⟨w_{i,a}, w_{j,b}⟩ ≥ 0    ∀ i, j ∈ [n], a, b ∈ Ω
            Σ_{a∈Ω} w_{i,a} = I       ∀ i ∈ [n]
            ‖I‖² = 1

Figure 5: Basic SDP

5.1 Basic SDP Relaxation for CSPs

Here we deal with k = 2 (binary CSPs). For each variable x_i and each a ∈ Ω, introduce a vector w_{i,a} corresponding to the event σ(x_i) = a. Note that in an integral solution, w_{i,a} = I if σ(x_i) = a, and w_{i,a} = 0 otherwise. Thus we get the Basic SDP in Figure 5.

Theorem 7 ([Rag08]). Assuming the Unique Games Conjecture, for every CSP there is a polynomial-time rounding scheme for the Basic SDP that achieves the optimal approximation guarantee among all poly-time algorithms.

References

[GW95] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115–1145, November 1995.

[Rag08] Prasad Raghavendra. Optimal algorithms and inapproximability results for every CSP? In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 245–254, New York, NY, USA, 2008. ACM.