U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 2 Luca Trevisan August 29, 2017


Scribe: Mahshid Montazer

Lecture 2

In this lecture, we study the Max Cut problem in random graphs. We compute the probable value of its optimal solution, we give a greedy algorithm which is nearly optimal on random graphs, and we compute a polynomial-time upper bound certificate for it using linear algebra methods. We also study the Maximum Independent Set problem in random graphs, and we compute an upper bound on the probable value of its optimal solution.

1 Max Cut

Definition 1 (Max Cut) In an unweighted graph $G = (V, E)$, a cut is a partition of its vertices into two sets $V_1$ and $V_2$. Let $E(V_1, V_2)$ be the size of the cut $(V_1, V_2)$, that is, the number of edges with one endpoint in $V_1$ and one endpoint in $V_2$. Max Cut is the problem of finding a cut of largest size.

As a clear example, in every bipartite graph a bipartition is a maximum cut. It is easy to show that the size of the maximum cut is at least half the number of edges of the graph. One question that arises is how much more than half of the edges we can cut. The answer: not that much, in random graphs. We show this claim in the following section.

2 Probable Value of the Max Cut Optimal Solution

In this section, we compute the probable value of the Max Cut optimal solution in random graphs. Our result is for samples of $G_{n,1/2}$, but the analysis generalizes to $G_{n,p}$.

Lemma 2 For every fixed cut $(S, V \setminus S)$, $\mathbb{E}[E(S, V \setminus S)] \leq \frac{n^2}{8}$.

Proof: $\mathbb{E}[E(S, V \setminus S)] = \frac{1}{2} \cdot |S| \cdot |V \setminus S| \leq \frac{1}{2} \cdot \frac{n^2}{4} = \frac{n^2}{8}$.

Lemma 3 $\mathbb{P}\left[ E(S, V \setminus S) \geq \frac{n^2}{8} + \epsilon \frac{n^2}{4} \right] \leq e^{-\Omega(\epsilon^2 n^2)}$, where $0 \leq \epsilon \leq 1$.

Proof: Apply Chernoff bounds to the result of Lemma 2.
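As a quick numerical sanity check of Lemmas 2 and 3 (a sketch, not part of the original notes; the parameters and the fixed choice of $S$ are ours): for a fixed cut, the cut size concentrates tightly around $\frac{1}{2}|S| \cdot |V \setminus S|$.

```python
import random

def fixed_cut_sizes(n, trials, seed=0):
    """Sample G(n, 1/2) 'trials' times and record E(S, V \\ S) for the
    fixed cut S = {0, ..., n//2 - 1}: each of the |S| * |V \\ S|
    potential crossing edges is present independently w.p. 1/2."""
    rng = random.Random(seed)
    half = n // 2
    sizes = []
    for _ in range(trials):
        crossing = sum(1 for i in range(half)
                         for j in range(half, n) if rng.random() < 0.5)
        sizes.append(crossing)
    return sizes

n = 40
sizes = fixed_cut_sizes(n, trials=200)
avg = sum(sizes) / len(sizes)
# Lemma 2: the mean is (1/2) * |S| * |V \ S| = n^2 / 8 = 200 here, and
# Lemma 3 says deviations of order n^2 are exponentially unlikely.
print(avg)
```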

Lemma 4 There is a constant $c > 0$ such that, for $\epsilon = c \sqrt{\frac{1}{n}}$,

$$\mathbb{P}\left[ \exists (S, V \setminus S) :\ E(S, V \setminus S) \geq \frac{n^2}{8} + \epsilon \frac{n^2}{4} \right] \leq 2^{-n}$$

where the probability is taken over the choice of $G = (V, E)$ from the distribution $G_{n,1/2}$.

Proof: Taking a union bound over all $2^n$ cuts,

$$\mathbb{P}\left[ \exists (S, V \setminus S) :\ E(S, V \setminus S) \geq \frac{n^2}{8} + \epsilon \frac{n^2}{4} \right] \leq 2^n \cdot e^{-\Omega(\epsilon^2 n^2)} = 2^n \cdot e^{-\Omega(c^2 n)} \leq 2^{-n}$$

for an appropriate choice of $c$.

The above lemma clearly leads us to the following theorem.

Theorem 5 There is a constant $c$ such that, w.h.p., the Max Cut of $G_{n,1/2}$ has size at most $\frac{n^2}{8} + c \cdot n^{1.5}$.

Thus, we showed that in $G_{n,1/2}$ the probable value of Max Cut is at most $\frac{n^2}{8} + c \cdot n^{1.5}$.

3 Greedy Algorithm for Max Cut

Consider the following greedy algorithm for Max Cut:

    $A, B \leftarrow \emptyset$
    for $v \in V$:
        if $v$ has more neighbors in $A$ than in $B$, then $B \leftarrow B \cup \{v\}$
        else $A \leftarrow A \cup \{v\}$
    return $A$ and $B$

The above algorithm can be applied to any graph, but we will analyze it on random graphs. A naive analysis guarantees that the greedy algorithm cuts at least half of the edges, giving an approximation ratio of 2: at each step, we add at least half of the processed vertex's edges to already-placed vertices to the cut. However, a more careful analysis shows that the algorithm is near-optimal on random graphs. Below, we prove our claim for $G_{n,1/2}$.

Lemma 6 With high probability over the choice of $G$ from $G_{n,1/2}$, the greedy algorithm finds a cut of size $\frac{n^2}{8} + \Omega(n^{1.5})$.
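To illustrate the greedy algorithm concretely before analyzing it, here is a runnable sketch (the sampling routine, vertex order, and parameters are our illustrative choices, not part of the notes):

```python
import random

def greedy_max_cut(adj):
    """The greedy Max Cut algorithm: place each vertex on the side
    holding fewer of its already-placed neighbors."""
    A, B = set(), set()
    for v in range(len(adj)):
        in_A = sum(1 for u in A if adj[v][u])
        in_B = sum(1 for u in B if adj[v][u])
        if in_A > in_B:
            B.add(v)
        else:
            A.add(v)
    return A, B

def sample_gnp(n, p=0.5, seed=1):
    """Sample an adjacency matrix from G(n, p)."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = int(rng.random() < p)
    return adj

n = 200
adj = sample_gnp(n)
A, B = greedy_max_cut(adj)
cut = sum(1 for a in A for b in B if adj[a][b])
m = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
# Naive analysis: cut >= m/2 always, since each vertex contributes at
# least half its processed edges; on G(n, 1/2) the surplus over m/2 is
# typically of order n^1.5.
print(cut, m / 2)
```

Each vertex contributes $\max(\text{in}_A, \text{in}_B) \geq \frac{1}{2}(\text{in}_A + \text{in}_B)$ of its already-revealed edges to the cut, so the $m/2$ guarantee is deterministic.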

Proof: Let $G = (V, E) \sim G_{n,1/2}$ be the given graph and let $v_1, v_2, \ldots, v_n$ be the order in which we process the vertices. Note that at the time of processing $v_i$ ($1 \leq i < n$), we do not need to know the edges that connect $v_i$ to any vertex $v_j$ with $j > i$. Let $a_i = |A|$ and $b_i = |B|$ be the sizes of the sets $A$ and $B$ before processing $v_i$, respectively. Although $G$ is given before we run the algorithm, for the sake of the analysis we can assume that we are building it on the go, while processing each of the vertices. Remember that each edge of the graph exists independently with probability $\frac{1}{2}$. To decide where to put $v_i$, we generate $a_i$ random bits and call their sum $X_i$, and we generate $b_i$ random bits and call their sum $Y_i$. We put $v_i$ in set $A$ if $X_i \leq Y_i$, and in set $B$ if $Y_i < X_i$. Note that the more balanced $A$ and $B$ get, the worse it gets for the analysis. Also note that the number of edges the algorithm cuts beyond half of the edges is

$$\frac{1}{2} \sum_{1 \leq i \leq n} |X_i - Y_i| = E(A, B) - \frac{|E|}{2}.$$

We know that $X_i + Y_i \leq a_i + b_i = i - 1$. Note that

$$\mathbb{E}[\,|X_i - Y_i|\,] = \Omega(\sqrt{i}) \quad \text{and} \quad \mathrm{Var}(\,|X_i - Y_i|\,) = O(i).$$

Thus, $\sum_{1 \leq i \leq n} |X_i - Y_i|$ has mean $\Omega(n^{1.5})$ and standard deviation $O(n)$, so with $1 - o(1)$ probability we have

$$\sum_{1 \leq i \leq n} |X_i - Y_i| = \sum_{1 \leq i \leq n} \Omega(\sqrt{i}) = \Omega(n^{1.5}),$$

and hence $E(A, B) \geq \frac{n^2}{8} + \Omega(n^{1.5})$.

4 Polynomial Time Upper Bound for Max Cut

In this section, we find polynomial-time upper bound certificates for Max Cut in random graphs using linear algebra techniques.

Lemma 7 Let $G = (V, E)$ be a graph, $A$ be its adjacency matrix, $J$ be the matrix all of whose entries are $1$, and $(S, V \setminus S)$ be the Max Cut of $G$. Then

$$E(S, V \setminus S) \leq \frac{n^2}{8} + \frac{n}{2} \cdot \| A - J/2 \|.$$
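The bound of Lemma 7 is directly computable from the adjacency matrix; a numpy sketch evaluating the certificate on a sample of $G_{n,1/2}$ (the sampling code and parameters are our illustrative choices):

```python
import numpy as np

def max_cut_certificate(A):
    """Evaluate the Lemma 7 bound n^2/8 + (n/2) * ||A - J/2||, where
    ||.|| is the spectral norm; computable in polynomial time."""
    n = A.shape[0]
    J = np.ones((n, n))
    norm = np.linalg.norm(A - J / 2, ord=2)  # largest singular value
    return n * n / 8 + (n / 2) * norm

rng = np.random.default_rng(0)
n = 300
upper = np.triu(rng.random((n, n)) < 0.5, k=1)
A = (upper | upper.T).astype(float)  # adjacency matrix of G(n, 1/2)
bound = max_cut_certificate(A)
# With high probability ||A - J/2|| = O(sqrt(n)), so the certificate
# is n^2/8 + O(n^1.5).
print(bound, n * n / 8)
```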

Proof: We have

$$E(S, V \setminus S) = \mathbf{1}_S^T A \mathbf{1}_{V \setminus S} = \mathbf{1}_S^T (J/2) \mathbf{1}_{V \setminus S} + \mathbf{1}_S^T (A - J/2) \mathbf{1}_{V \setminus S} \leq \frac{|S| \cdot |V \setminus S|}{2} + \| A - J/2 \| \cdot \| \mathbf{1}_S \| \cdot \| \mathbf{1}_{V \setminus S} \| = \frac{|S| \cdot |V \setminus S|}{2} + \| A - J/2 \| \cdot \sqrt{|S| \cdot |V \setminus S|} \leq \frac{n^2}{8} + \| A - J/2 \| \cdot \frac{n}{2}. \quad (1)$$

Recall that, with high probability over the choice of a graph $G$ from $G_{n,1/2}$, if $A$ is the adjacency matrix of $G$ then $\| A - J/2 \| \leq O(\sqrt{n})$. We conclude that, with high probability over the choice of $G$ from $G_{n,1/2}$, we can find in polynomial time a certificate that the Max Cut optimum of $G$ is at most $\frac{n^2}{8} + O(n^{1.5})$.

5 Maximum Independent Set

In this section, we discuss the Maximum Independent Set problem for $G_{n,p}$ (especially $G_{n,1/2}$), show its close connection with the Max Clique problem, and compute an upper bound on the probable value of its optimal solution.

Definition 8 (Maximum Independent Set) In a graph $G = (V, E)$, an independent set is a set of vertices that are mutually non-adjacent. A Maximum Independent Set in $G$ is an independent set of largest possible size, and the Maximum Independent Set problem is the problem of finding such a set.

Note that a Maximum Independent Set in $G_{n,p}$ corresponds to a Maximum Clique in $G_{n,1-p}$. Thus, for $p = \frac{1}{2}$, everything that we argued for Max Clique applies to Maximum Independent Set as well. We now compute an upper bound on the probable value of the Maximum Independent Set optimal solution in $G_{n,p}$. Fix a set $S \subseteq V$ of size $k$. We have

$$\mathbb{P}[S \text{ is an independent set in } G] = (1-p)^{\binom{k}{2}}$$

where the probability is over the choice of $G \sim G_{n,p}$. The following lemma holds.

Lemma 9 $\mathbb{P}[\exists \text{ an independent set of size } k] \leq e^{-\frac{k}{2} \left( (k-1) \ln \frac{1}{1-p} - 2 \ln \frac{en}{k} \right)}$.

Proof: Using $\binom{n}{k} \leq \left( \frac{en}{k} \right)^k$,

$$\mathbb{P}[\exists \text{ an independent set of size } k] \leq \mathbb{E}[\# \text{independent sets of size } k] = \binom{n}{k} (1-p)^{\binom{k}{2}} \leq \left( \frac{en}{k} \right)^k (1-p)^{\frac{k(k-1)}{2}} = e^{k \ln \frac{en}{k} - \frac{k(k-1)}{2} \ln \frac{1}{1-p}} = e^{-\frac{k}{2} \left( (k-1) \ln \frac{1}{1-p} - 2 \ln \frac{en}{k} \right)}. \quad (2)$$

Now, what is the largest value of $k$ for which, with high probability, there still exists an independent set of size $k$? Note that the value of (2) goes to $0$ when $k \geq 2 \log_{\frac{1}{1-p}} \frac{en}{k} + 1$. A sufficient condition for this is to have $k = 2 \log_{\frac{1}{1-p}} (en) + 1$, showing that with high probability the Maximum Independent Set in $G_{n,p}$ has size at most $O\left( \log_{\frac{1}{1-p}} n \right) = O\left( \frac{1}{p} \log n \right)$. A more careful bound is that $k \geq 2 \log_{\frac{1}{1-p}} \frac{en}{k} + 1$ holds provided, say, $k \geq 3 \log_{\frac{1}{1-p}} (pn) + 100$, and so with high probability the Maximum Independent Set in $G_{n,p}$ has size at most $O\left( \frac{1}{p} \log pn \right)$. If we call $d = pn$, then the bound is $O\left( \frac{n}{d} \log d \right)$.
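The first-moment quantity in Lemma 9 is easy to evaluate exactly; a sketch (function names and parameters are ours) locating the threshold size for $G_{n,1/2}$:

```python
import math

def expected_num_independent_sets(n, k, p):
    """First moment: E[# independent sets of size k] = C(n,k)*(1-p)^C(k,2)."""
    return math.comb(n, k) * (1 - p) ** math.comb(k, 2)

def first_moment_threshold(n, p):
    """Smallest k with E[# independent sets of size k] < 1; by Markov's
    inequality, independent sets of this size are then unlikely to exist."""
    k = 1
    while expected_num_independent_sets(n, k, p) >= 1:
        k += 1
    return k

n, p = 1000, 0.5
k_star = first_moment_threshold(n, p)
# For p = 1/2 the threshold grows like 2 * log2(n), matching the
# O((1/p) log n) bound derived above.
print(k_star, 2 * math.log2(n))
```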