Approximation algorithm for Max Cut with unit weights


Definition

Max Cut: Given an undirected graph G = (V, E), find a partition of V into two subsets A, B so as to maximize the number of edges having one endpoint in A and the other in B.

Weighted Max Cut: Given an undirected graph G = (V, E) and a positive weight w_e for each edge e, find a partition of V into two subsets A, B so as to maximize the total weight of the edges having one endpoint in A and the other in B.

NP-Hardness

Max Cut is NP-hard (by reduction from 3-NAESAT). It is equivalent to finding a maximum bipartite subgraph of G, and can be viewed as a variant of the 2-coloring problem in which we try to maximize the number of edges whose endpoints receive two different colors. Max Cut is APX-hard [Papadimitriou, Yannakakis, 1991]: there is no PTAS for Max Cut unless P = NP.

Max Cut with unit weights

The algorithm iteratively improves the cut:
1. Initialize the cut arbitrarily.
2. If some vertex v has fewer than 1/2 of its edges crossing the cut, move v to the other side of the cut.
3. If no such vertex exists, stop; otherwise go to step 2.

Each iteration examines at most |V| vertices before moving one, hence takes O(|V|^2) time. Each move increases the cut value by at least 1, and the maximum possible cut value is |E|, so there are at most |E| iterations. Overall time complexity: O(|V|^2 |E|).
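The local-search procedure above can be sketched as follows; this is a minimal illustration, assuming vertices labeled 0..n-1 and an edge list (function and variable names are ours, not from the notes):

```python
import random

def local_search_max_cut(n, edges):
    """Local search for unweighted Max Cut: flip any vertex with
    fewer than half of its edges crossing the cut."""
    # Adjacency lists for checking each vertex's crossing edges.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Step 1: initialize the cut arbitrarily.
    side = [random.choice([0, 1]) for _ in range(n)]

    # Steps 2-3: each flip increases the cut by at least 1,
    # so the loop runs at most |E| times.
    improved = True
    while improved:
        improved = False
        for v in range(n):
            crossing = sum(1 for u in adj[v] if side[u] != side[v])
            if 2 * crossing < len(adj[v]):  # fewer than half cross
                side[v] = 1 - side[v]
                improved = True

    return {v for v in range(n) if side[v] == 0}
```

At termination no vertex can improve the cut by moving, which is exactly the condition used in the approximation analysis below.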

What is the approximation ratio of this algorithm? On termination, at least half of the edge weight incident to each vertex contributes to the solution:

for every v ∈ A: ∑_{u∈B} w_{(v,u)} ≥ (1/2) ∑_{(v,u)∈E} w_{(v,u)}
for every u ∈ B: ∑_{v∈A} w_{(u,v)} ≥ (1/2) ∑_{(u,v)∈E} w_{(u,v)}

Summing over all vertices:

2 ∑_{v∈A, u∈B} w_{(u,v)} ≥ ∑_{e∈E} w_e ≥ OPT.

What is a tight example for this algorithm? A better ratio cannot be achieved using ∑_{e∈E} w_e as an upper bound for the max cut: in complete graphs the max cut is (asymptotically) half the size of this upper bound.
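The complete-graph observation can be checked directly: in K_n a balanced bipartition is optimal and cuts ⌊n/2⌋·⌈n/2⌉ of the n(n-1)/2 edges. A small sketch (the helper name is ours):

```python
def complete_graph_cut_ratio(n):
    """Ratio between the max cut of K_n and the upper bound
    sum_e w_e = |E| (unit weights)."""
    max_cut = (n // 2) * ((n + 1) // 2)  # balanced bipartition is optimal
    total_edges = n * (n - 1) // 2
    return max_cut / total_edges

# As n grows the ratio tends to 1/2, so no analysis that only uses
# OPT <= sum_e w_e can prove a factor better than 1/2.
```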

The previous algorithm for weighted graphs

For the general case of weighted graphs, the time complexity becomes O(|V|^2 ∑_{e∈E} w_e), which is not always polynomial in the size of the input. Note: there exist modifications of the algorithm that achieve a strongly polynomial running time, at the cost of an additional ɛ term in the approximation ratio.

Randomized Approximation Algorithm for Max Cut

There is a simple randomized algorithm for Max Cut: assign each vertex at random to A or to B with equal probability, with the decisions for the different vertices mutually independent. The expected weight of the solution is:

E( ∑_{e∈E(A,B)} w_e ) = ∑_{e∈E} w_e · Pr(e ∈ E(A,B)) = (1/2) ∑_{e∈E} w_e ≥ (1/2) OPT.

In the case of Max Cut with unit weights, the expected number of cut edges is |E|/2.
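A minimal sketch of the random assignment, with an empirical check of the expectation (names are illustrative, not from the notes):

```python
import random

def random_cut(n, weighted_edges):
    """Assign each vertex to A or B independently with probability 1/2
    and return (A, cut weight)."""
    A = {v for v in range(n) if random.random() < 0.5}
    weight = sum(w for u, v, w in weighted_edges
                 if (u in A) != (v in A))
    return A, weight

# Each edge crosses the cut with probability 1/2, so by linearity of
# expectation the average cut weight is half the total edge weight.
```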

Las Vegas algorithm for Max Cut with unit weights

In the case of unit weights (w_e = 1 for all e ∈ E), we can obtain a Las Vegas algorithm by repeating the previous procedure until the cut is large enough. Let p = Pr( ∑_{e∈E(A,B)} w_e ≥ |E|/2 ). Then

|E|/2 = E( ∑_{e∈E(A,B)} w_e ) = ∑_{i ≤ |E|/2 − 1} i · Pr( ∑_{e∈E(A,B)} w_e = i ) + ∑_{i ≥ |E|/2} i · Pr( ∑_{e∈E(A,B)} w_e = i ) ≤ (1 − p)(|E|/2 − 1) + p|E|,

which implies that p ≥ 1/(|E|/2 + 1), and so the expected number of repetitions before finding a cut of value at least |E|/2 is at most |E|/2 + 1.
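The repetition step above can be sketched as follows (a minimal illustration; the function name is ours). The loop terminates with probability 1, after at most |E|/2 + 1 iterations in expectation:

```python
import random

def las_vegas_max_cut(n, edges):
    """Repeat random cuts until the cut value is at least |E|/2
    (unit weights)."""
    while True:
        A = {v for v in range(n) if random.random() < 0.5}
        cut = sum(1 for u, v in edges if (u in A) != (v in A))
        if 2 * cut >= len(edges):
            return A, cut
```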

Derandomization using conditional expectations

Instead of making random choices, we evaluate both alternatives according to the conditional expectation of the objective function given the decisions made so far. We maintain three sets A, B, C; initially C = V and A = B = ∅. At an intermediate state, the expected weight w(A, B, C) of the random cut produced by assigning the vertices of C uniformly at random is:

w(A, B, C) = ∑_{e∈E(A,B)} w_e + (1/2) ∑_{e∈E(A,C)} w_e + (1/2) ∑_{e∈E(B,C)} w_e + (1/2) ∑_{e∈E(C,C)} w_e.

Derandomization using conditional expectations

Initialize A = B = ∅, C = V.
for all v ∈ V do
  Compute w(A + v, B, C − v) and w(A, B + v, C − v).
  if w(A + v, B, C − v) > w(A, B + v, C − v) then
    A = A + v
  else
    B = B + v
  end if
  C = C − v
end for
return A, B
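The procedure above can be sketched directly from the formula for w(A, B, C); this is an illustrative implementation (names are ours), recomputing the conditional expectation from scratch at each step for clarity:

```python
def cond_exp(A, B, C, edges):
    """w(A,B,C): expected cut weight when the vertices in C are
    assigned to the two sides uniformly at random."""
    total = 0.0
    for u, v, w in edges:
        if (u in A and v in B) or (u in B and v in A):
            total += w        # already crossing the cut
        elif u in C or v in C:
            total += w / 2    # crosses with probability 1/2
        # edges inside A or inside B contribute 0
    return total

def derandomized_max_cut(vertices, edges):
    """Place each vertex on the side with the larger conditional
    expectation; the expectation never decreases."""
    A, B, C = set(), set(), set(vertices)
    for v in vertices:
        C.discard(v)
        if cond_exp(A | {v}, B, C, edges) >= cond_exp(A, B | {v}, C, edges):
            A.add(v)
        else:
            B.add(v)
    return A, B
```

Since the initial expectation is half the total weight and each step keeps it from decreasing, the final cut weighs at least (1/2) ∑_e w_e.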

Derandomization using conditional expectations

The analysis of the algorithm is based on two observations:
Initially, w(A, B, C) = (1/2) ∑_{e∈E} w_e.
For every A, B, C and v ∈ V: max{ w(A + v, B, C − v), w(A, B + v, C − v) } ≥ w(A, B, C).
Hence the algorithm computes a partition (A, B) such that the weight of the cut is at least (1/2) ∑_{e∈E} w_e.

Obtaining a greedy algorithm

The algorithm can be simplified! Observe that:

w(A + v, B, C − v) − w(A, B + v, C − v) = ∑_{e∈E(v,B)} w_e − ∑_{e∈E(v,A)} w_e.

The resulting algorithm is a 2-approximation algorithm running in linear time!

Greedy Algorithm

Initialize A = B = ∅.
for all v ∈ V do
  if ∑_{e∈E(v,B)} w_e − ∑_{e∈E(v,A)} w_e > 0 then
    A = A + v
  else
    B = B + v
  end if
end for
return A, B

This algorithm first appeared in Erdős's 1967 paper "On bipartite subgraphs of graphs".
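The greedy algorithm above can be sketched as follows (an illustrative implementation, names ours). Each vertex inspects only its already-placed neighbors, so the total work is proportional to the sum of degrees, i.e. O(|V| + |E|):

```python
def greedy_max_cut(vertices, weighted_edges):
    """Place each vertex on the side with the smaller weight of
    edges to already-placed neighbors."""
    adj = {v: [] for v in vertices}
    for u, v, w in weighted_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    A, B = set(), set()
    for v in vertices:
        w_to_A = sum(w for u, w in adj[v] if u in A)
        w_to_B = sum(w for u, w in adj[v] if u in B)
        if w_to_B - w_to_A > 0:
            A.add(v)
        else:
            B.add(v)
    return A, B
```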

Greedy Algorithm

The property

∑_{e∈E(A,B)} w_e ≥ ∑_{e∈E(A,A)} w_e + ∑_{e∈E(B,B)} w_e

is a loop invariant of the algorithm: in each iteration, the weight of the edges added to the cut is at least the weight of the edges added inside A or inside B. But

∑_{e∈E(A,B)} w_e + ( ∑_{e∈E(A,A)} w_e + ∑_{e∈E(B,B)} w_e ) = ∑_{e∈E} w_e,

hence ∑_{e∈E(A,B)} w_e ≥ (1/2) ∑_{e∈E} w_e.

PCP Theorem and Inapproximability

A decision problem L belongs to PCP_{c(n),s(n)}[r(n), q(n)] if there is a randomized oracle Turing machine V (the verifier) that, on input x and with oracle access to a string w (the proof or witness), satisfies the following properties:
Completeness: if x ∈ L, then for some w, V^w(x) accepts with probability at least c(n).
Soundness: if x ∉ L, then for every w, V^w(x) accepts with probability at most s(n).
The randomness complexity r(n) of the verifier is the maximum number of random bits that V uses over all x of length n. The query complexity q(n) of the verifier is the maximum number of queries that V makes to w over all x of length n.

PCP Theorem and Inapproximability

In "Some Optimal Inapproximability Results", Håstad [2001] proved inapproximability results based on the following theorem.

PCP Theorem. For every ɛ > 0, NP = PCP_{1−ɛ, 1/2+ɛ}[O(log n), 3]. Furthermore, the verifier behaves as follows: it uses its randomness to pick three entries i, j, k of the witness w and a bit b, and it accepts if and only if w_i ⊕ w_j ⊕ w_k = b.

Gap-introducing reduction from SAT to MaxE3Lin2

For every problem in NP, for example SAT, and for every ɛ > 0, there is a reduction that, given a 3CNF formula φ, constructs a system of linear equations with three variables per equation such that:
If φ is satisfiable, there is an assignment to the variables that satisfies a 1 − ɛ fraction of the equations.
If φ is not satisfiable, no assignment satisfies more than a 1/2 + ɛ fraction of the equations.

Obtaining inapproximability results

Let f be a reduction from a language L_1 to a maximization problem L_2 satisfying the following:
If x ∈ L_1, then OPT(f(x)) > k_1.
If x ∉ L_1, then OPT(f(x)) ≤ k_2.
Then a (k_1/k_2)-approximation algorithm for L_2 can be used to decide L_1. How? We have OPT ≤ (k_1/k_2) · SOL, i.e. SOL ≥ (k_2/k_1) · OPT. In the case x ∈ L_1, this gives SOL ≥ (k_2/k_1) · OPT > (k_2/k_1) · k_1 = k_2, whereas for x ∉ L_1 we have SOL ≤ OPT ≤ k_2.

Gap-preserving reduction from MaxE3Lin2 to Max Cut

In "Gadgets, Approximation, and Linear Programming" [Trevisan et al., 2000], a gap-preserving reduction from MaxE3Lin2 to Max Cut was given. Max Cut can be viewed as a boolean constraint satisfaction problem, and from every equation of the MaxE3Lin2 instance a set of cut constraints is constructed. When the construction is complete, the gap between the yes and the no instances of SAT has been carried over to Max Cut.

Theorem. If there is an r-approximation algorithm for Max Cut, where r < 17/16, then P = NP.

Other results

In 1995, Michel Goemans and David Williamson discovered an algorithm with approximation factor ≈ 1.14, based on semidefinite programming. If the Unique Games Conjecture is true, this is the best possible approximation ratio for Max Cut [Khot et al., 2004].