CS 810: Introduction to Complexity Theory                              10/03/03

Lecture 20: Goemans-Williamson MAXCUT Approximation Algorithm

Instructor: Jin-Yi Cai                  Scribe: Christopher Hudzik, Sarah Knoop

1 Overview

First, we outline the Goemans-Williamson approximation algorithm for MAXCUT and show that, in expectation, it produces a cut whose size is approximately 88% of the size of the maximum cut. The algorithm depends on nonlinear programming relaxation and semidefinite programming to achieve this result. These techniques have proven quite promising for designing approximation algorithms. The full paper, published by Goemans and Williamson in JACM (1995), can be found at http://www-math.mit.edu/~goemans and was used to aid in this write-up. The second portion is dedicated to defining the hierarchy of languages that can be decided, with certain probability, by polynomial-time Turing machines with randomized one-sided or two-sided error.

2 Goemans-Williamson Approximation Algorithm for MAXCUT

2.1 Preliminaries: Definitions & Propositions

Before the algorithm is presented, we'll provide some needed definitions and propositions.

Definition 1 (positive semidefinite). An n x n matrix A is positive semidefinite if for all x ∈ R^n, x^T A x ≥ 0.

Definition 2 (semidefinite program). A semidefinite program is the problem of optimizing a linear function of a symmetric matrix subject to linear equality constraints and the constraint that the matrix be positive semidefinite.

It pays to note that a semidefinite program can be viewed as a linear program over the cone of positive semidefinite matrices, i.e. a linear program with one linear constraint x^T A x ≥ 0 for each x ∈ R^n. It can be solved to within any desired accuracy ε in polynomial time, e.g. by the ellipsoid method or interior-point methods.

Proposition 1. For a symmetric matrix A, the following are equivalent:

1. A is positive semidefinite.
2. All eigenvalues of A are non-negative.
3. There exists an m x n (m ≤ n) matrix B such that A = B^T B.

We also note that finding such a B can be done in polynomial time using Cholesky decomposition.
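As a quick sanity check of Proposition 1, here is a minimal numpy sketch; the 3 x 3 matrix is an arbitrary illustrative choice, not from the lecture.

# Minimal sketch of Proposition 1: a symmetric matrix with non-negative
# eigenvalues factors as A = B^T B. The matrix A is an arbitrary example.
import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])

w, V = np.linalg.eigh(A)                 # eigenpairs of a symmetric matrix
assert (w >= -1e-9).all()                # condition 2: eigenvalues >= 0

# Condition 3: from A = V diag(w) V^T, take B = diag(sqrt(w)) V^T.
B = np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
assert np.allclose(B.T @ B, A)

L = np.linalg.cholesky(A)                # Cholesky also gives A = L L^T here
assert np.allclose(L @ L.T, A)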

2.2 The Algorithm

We are given a graph G = (V, E) on n vertices. For simplicity, we write V = {1, 2, ..., n} and E = {(i, j) : there is an edge between i and j in the graph, i < j (no double counting)}. We can view a maximum cut M for G as the solution to the following integer quadratic program:

    (QP)  Maximize   (1/4) Σ_{(i,j)∈E} (x_i − x_j)²
          subject to x_i ∈ {−1, 1}  for all i ∈ V

Equivalently, and useful later:

          Maximize   (1/4) Σ_{(i,j)∈E} (x_i² + x_j² − 2 x_i x_j)
          subject to x_i ∈ {−1, 1}  for all i ∈ V

Here the optimal values of the x_i correspond to vertex i being on the one side of the cut (x_i = 1) or the other (x_i = −1); an edge (i, j) contributes 1 to the objective exactly when x_i ≠ x_j. Solving QP exactly is NP-hard (the decision version of MAXCUT is NP-complete). We'll relax the constraints of QP to formulate a semidefinite program that will find an approximate maximum of QP. This solution can be decomposed and partitioned to reveal an approximate maximal cut. So, hold on to your horses!

We'll relax QP as follows:

1. We can view the x_i as 1-dimensional (rank 1) unit vectors. Denote these as y_i. Now consider span{y_1, y_2, ..., y_n}. These vectors span a space no larger in dimension than n, so they can be seen as elements of R^n. We can then restate the quadratic function in QP, using the dot product, as follows:

       (1/4) Σ_{(i,j)∈E} (y_i·y_i + y_j·y_j − 2 y_i·y_j)

2. Form the matrix A = (a_ij) where a_ij = y_i·y_j (the Gram matrix of the y_i). Now we obtain the following semidefinite program that approximately solves QP:

    (SD)  Maximize   (1/4) Σ_{(i,j)∈E} (a_ii + a_jj − 2 a_ij)
          subject to a_ii = 1  for all i ∈ V
                     A is symmetric positive semidefinite
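To make the relaxation concrete, here is a minimal sketch of (SD) in Python; the use of the cvxpy library as the SDP solver and the 5-cycle input graph are assumptions of this sketch, not part of the lecture.

# Minimal sketch of the relaxation (SD) on a small example graph.
import cvxpy as cp

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # C_5: maximum cut is 4

A = cp.Variable((n, n), symmetric=True)
objective = cp.Maximize(sum(0.25 * (A[i, i] + A[j, j] - 2 * A[i, j])
                            for (i, j) in edges))
constraints = [A >> 0] + [A[i, i] == 1 for i in range(n)]   # PSD, a_ii = 1
z = cp.Problem(objective, constraints).solve()              # optimal value z*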

Now we can proceed to find an approximate maximum cut as follows:

1. Solve SD to obtain an optimal value z* for SD. Note that the optimal solution for SD could be irrational. In such a case we can obtain, for any ε > 0, an approximation z with z* ≥ z > z* − ε in polynomial time.

2. Say the maximum value z* of SD is produced when the matrix A is used. By the proposition above and using Cholesky decomposition, in polynomial time we can decompose A = U^T U. Consider the column vectors u_1, ..., u_n of the matrix U. Since A is positive semidefinite with a_ii = 1, these vectors are unit vectors: 1 = a_ii = u_i^T u_i = ‖u_i‖².

3. We can embed these vectors in R^n (since m ≤ n). Now we randomly pick a hyperplane through the origin. We can, in polynomial time, determine which of the vectors u_i have endpoints above or below the plane. The u_i above the plane will highlight vertices i on one side of the cut, and the u_j below the plane will give the vertices on the other side of the cut. Different hyperplanes separating the u_i will produce different cuts; however, any cut made by this procedure is guaranteed to have an expected size significantly larger than 1/2 the size of the maximum cut.

2.3 Analysis

Here we attend to the details of why this randomized algorithm produces a cut of expected size approximately 88% of the size of the maximum cut. Let C denote the cut produced by this procedure. We easily establish

    E[|C|] = Σ_{(i,j)∈E} Pr[(i, j) is in the cut],

and Pr[(i, j) is in the cut] is the same as the probability that the randomly selected hyperplane separates the vectors u_i and u_j.

Lemma 1. The probability of a random hyperplane separating two vectors is proportional to the angle θ_ij between the two vectors.

Proof. Let N denote a normal of unit length to the hyperplane selected. Now, saying that u_i and u_j are on opposite sides of this plane is equivalent to sign(u_i·N) ≠ sign(u_j·N). Thus

    Pr[hyperplane separates u_i and u_j] = Pr[sign(u_i·N) ≠ sign(u_j·N)].

Using an equivalent statement and by symmetry we have

    Pr[sign(u_i·N) ≠ sign(u_j·N)] = 2 Pr[u_i·N ≥ 0 and u_j·N < 0].

Now the set of all unit-length normals N for which u_i·N ≥ 0 and u_j·N < 0 is, geometrically, the intersection of two half-spaces and the unit sphere, in which the dihedral angle between the half-spaces is precisely θ_ij. This resultant solid (a spherical lune) has volume (θ_ij/2π) · (volume of the full sphere). (See http://mathforum.org/dr.math/faq/formulas/faq.sphere.html#lune for a 3D picture. Here you can think of θ = θ_ij and the volume as depicted in light blue.) In other words, the chance that our plane falls between u_i and u_j is equivalently the chance that its normal lands in this solid. Thus Pr[u_i·N ≥ 0 and u_j·N < 0] = θ_ij/2π, and therefore

    Pr[plane is between u_i and u_j] = Pr[sign(u_i·N) ≠ sign(u_j·N)] = 2 Pr[u_i·N ≥ 0 and u_j·N < 0] = θ_ij/π.

So we have

    E[|C|] = Σ_{(i,j)∈E} Pr[plane is between u_i and u_j]
           = Σ_{(i,j)∈E} θ_ij/π
           = Σ_{(i,j)∈E} [ (θ_ij/π) / ((1/4)(2 sin(θ_ij/2))²) ] · (1/4)‖u_i − u_j‖².

The last equality holds because 2 sin(θ_ij/2) = ‖u_i − u_j‖ by simple geometry (u_i and u_j are unit vectors at angle θ_ij). Using calculus, we can show that the minimum of (θ/π) / ((1/4)(2 sin(θ/2))²) over 0 < θ ≤ π is greater than 0.87856. Call the minimum α. Thus we get

    E[|C|] ≥ α · (1/4) Σ_{(i,j)∈E} ‖u_i − u_j‖².

But (1/4) Σ_{(i,j)∈E} ‖u_i − u_j‖² is exactly the optimal value z* of SD, which is at least the size of the maximum cut M, since SD is a relaxation of QP. Therefore, we have an α-approximation for MAXCUT.
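Continuing the cvxpy sketch above, the decomposition and rounding steps, together with a numerical check of the constant α, might look as follows. This is again an illustrative sketch: an eigendecomposition stands in for Cholesky because the optimal A is typically singular, and n, edges, and A carry over from the previous snippet.

# Steps 2-3 of the rounding procedure, plus a grid check of alpha.
import numpy as np

# Step 2: factor A = U^T U via eigendecomposition; column i of U is u_i.
w, V = np.linalg.eigh(A.value)
U = np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

# Step 3: a uniformly random hyperplane through the origin, given by a
# Gaussian normal vector N; vertex i goes to the side given by sign(u_i . N).
rng = np.random.default_rng()
N = rng.standard_normal(n)
side = np.sign(U.T @ N)                          # +1 / -1 label per vertex
cut = sum(1 for (i, j) in edges if side[i] != side[j])

# alpha = min over 0 < theta <= pi of (theta/pi) / ((1 - cos theta)/2);
# note (1 - cos theta)/2 = (1/4)(2 sin(theta/2))^2. A dense grid
# reproduces the 0.87856... bound from the analysis.
theta = np.linspace(1e-6, np.pi, 1_000_000)
alpha = np.min((theta / np.pi) / ((1 - np.cos(theta)) / 2))
print(cut, alpha)                                # alpha ~ 0.87856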

2.4 Closing Notes

It is known that there exists a constant c < 1 such that the existence of a c-approximation algorithm for MAXCUT (and for certain other NP-hard optimization problems) would imply that P = NP (Arora, Lund, Motwani, Sudan, Szegedy). It has been shown (Bellare, Goldreich, Sudan, 1995) that c is as small as 83/84 for MAXCUT. Thus, it is not likely that vast improvements can be made on this approximation scheme.

3 Probabilistic Complexity Classes

We define BPP, RP, co-RP, ZPP and their hierarchy.

Definition 3 (Probabilistic Turing Machine, PTM). A PTM M is a DTM that takes two inputs: x, the regular input, and a random bit string γ ∈ {0,1}^{p(|x|)} that indicates the probabilistic moves of M, where p is some polynomial. The probability that M accepts x is defined as follows:

    P_M(x) = Pr_{γ ∈ {0,1}^{p(|x|)}} [M(x, γ) = 1]

Definition 4 (BPP). The class BPP consists of all languages L in which membership in L can be checked with two-sided error by a PTM. Formally, L ∈ BPP iff there exists a p-time PTM M such that

    x ∈ L  ⟹  P_M(x) ≥ 3/4
    x ∉ L  ⟹  P_M(x) ≤ 1/4

Definition 5 (RP). The class RP consists of all languages L in which membership in L can be checked with one-sided error by a PTM. Formally, L ∈ RP iff there exists a p-time PTM M such that

    x ∈ L  ⟹  P_M(x) ≥ 1/2
    x ∉ L  ⟹  P_M(x) = 0

Definition 6 (co-RP). The class co-RP consists of all languages L in which membership in L can be checked with one-sided error by a PTM. Formally, L ∈ co-RP iff there exists a p-time PTM M such that

    x ∈ L  ⟹  P_M(x) = 1
    x ∉ L  ⟹  P_M(x) ≤ 1/2

Definition 7 (ZPP). ZPP = RP ∩ co-RP.

Intuitively, a language L ∈ ZPP if there exists a PTM that with high probability will correctly determine membership in L and with small probability will say "I don't know." This small probability can be made exponentially small (a sketch of the standard argument follows Figure 1).

With these definitions, we have established the relationships among these classes as depicted in Figure 1.

    [Figure 1: Probabilistic Complexity Class Hierarchy. ZPP = RP ∩ co-RP sits inside both RP and co-RP, each of which sits inside BPP.]
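To see why the failure probability can be made exponentially small, here is the standard repetition argument for one-sided error, sketched under the thresholds of Definition 5 (this step is not spelled out in the notes). Run M on x with k independent random strings γ_1, ..., γ_k and accept iff at least one run accepts. Then

    x ∈ L  ⟹  Pr[all k runs reject] ≤ (1 − 1/2)^k = 2^{−k}
    x ∉ L  ⟹  Pr[some run accepts] = 0

so the error of the amplified machine is at most 2^{−k}, while the running time grows only by the factor k. An analogous argument applies to co-RP, and hence to the "I don't know" probability of a ZPP machine.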