PCP Course; Lectures 1, 2
Mario Szegedy
September 10, 2011

Easy and Hard Problems

Easy: polynomial time: 5n; n^2; 4n^3 + n.
Hard: super-polynomial time: n^(log n); 2^n; n!.

Proving that a problem is easy: provide an algorithm. Proving that a problem is hard: ??? Proving hardness is the biggest open problem in complexity theory.

Conditional Hardness

We want to show that if problem A is hard then problem B is also hard. So we prove the contrapositive: if B is easy then A is also easy. (And proving easiness is easy! We just have to find an algorithm.)

Polynomial Time Reduction: a polynomial time map φ : INPUTS(A) → INPUTS(B) such that A(x) = B(y) for every x ∈ INPUTS(A) and y = φ(x) ∈ INPUTS(B).

Terminology: B is hard conditioned on A.

Combinatorial Search Problems

Property P depends on some input x together with a witness y; P can be computed in polynomial time. Is there a y ∈ {0,1}^m such that property P holds for witness y and input x? (In mathematical notation: ∃y P^y(x)?) x is a positive input (or 1-input) if and only if ∃y P^y(x). Note: we put the witness into the upper index to differentiate it from x, but some simply write P(x, y).

Example: Exact-CUT-Language

Find a subset S ⊆ V(G) of a graph G such that the cut (S, V(G) \ S) has size exactly k.

INPUT = (G, k); OUTPUT ∈ { yes, no }. (Figure: a graph on nodes 1..5 with a candidate cut S.)

Can you identify our concepts in case of the Exact-CUT-Language?

Input x is the (graph, number) pair (G, k). Witness (or advice) is the cut S. Property P is the property that S cuts exactly k edges of G:

P^S(G, k) = 1 (or yes) if the cut S cuts exactly k edges of G, and 0 (or no) otherwise.

(G, k) is a positive input instance if there exists an S such that P^S(G, k) = 1, i.e. ∃S P^S(G, k).
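The property P just identified is a polynomial time check; a minimal Python sketch (the edge-list encoding and the function name are mine, not from the lecture):

```python
def exact_cut_verifier(edges, k, S):
    """P^S(G, k): does the witness S cut exactly k edges of G?
    Runs in time linear in |E(G)|, so polynomial in the input size."""
    S = set(S)
    cut_size = sum(1 for u, v in edges if (u in S) != (v in S))
    return cut_size == k

# Toy instance: a 4-cycle on nodes 1..4; S = {1, 3} cuts all 4 edges.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(exact_cut_verifier(edges, 4, {1, 3}))  # True
print(exact_cut_verifier(edges, 4, {1, 2}))  # False: {1, 2} cuts only 2 edges
```

Checking a given witness is easy; the hard part, as the next slide discusses, is finding one.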

How do we think about it?

We think of property P as fixed. Most of the time we think of input x as fixed, given to us. We think of the witness y as something that we have to find, or, if it is given to us as advice, that we need to verify (check). So, somewhat unfairly to x, y is really the main character of the story here.

Hardness and Combinatorial Search Problems

To cycle through all y ∈ {0,1}^m would take 2^m · poly(n) time. Is there anything quicker?

ETH (Exponential Time Hypothesis): 3SAT with n variables requires (1 + ε)^n time to solve, for some fixed ε > 0. This is a stronger assumption than P ≠ NP.
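The exhaustive cycling through all witnesses can be sketched as follows (all names are mine; a toy subset-sum property stands in for P):

```python
from itertools import product

def brute_force_search(P, x, m):
    """Decide 'is there a y in {0,1}^m with P^y(x)?' by cycling through
    all 2^m witnesses; each check costs poly(n), for 2^m * poly(n) total."""
    for y in product([0, 1], repeat=m):
        if P(x, y):
            return y   # a witness: x is a positive input
    return None        # no witness exists: x is a negative input

# Toy property: does the subset of x selected by y sum to exactly 5?
P = lambda x, y: sum(xi for xi, yi in zip(x, y) if yi) == 5
print(brute_force_search(P, [2, 3, 7], 3))  # (1, 1, 0), since 2 + 3 = 5
```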

Why did I mention ETH? Because sometimes we shall use assumptions like

NP ⊄ DTIME(n^(log n)), NP ⊄ DTIME(n^(log log n)), ...

These all follow from ETH (in fact, from much weaker assumptions than ETH, though still stronger than P ≠ NP).

Another remark: Terminology

1. Combinatorial Search Problems
2. Non-deterministic (Poly Time) Problems
3. NP Problems

MEAN ALL THE SAME. y is called the element of the search domain, the advice, or the non-determinism, in short.

Cook-Karp-Levin reduction between NP problems

Reduction from P_1 to P_2:

Instance transformation φ : { inputs for P_1 } → { inputs for P_2 }
Witness transformation ψ_x : { witnesses for φ(x) } → { witnesses for x }

Here x denotes a (generic) input of P_1. Note: in the literature the instance transformation part is called a Karp reduction. I learned the witness transformation from Leonid Levin, so sometimes I will call the whole system a Levin reduction.

Focus on the witness transformation

The input transformation is dull :) (the input transformation ensures that positive P_1-instances go to positive P_2-instances). What requires smartness is the witness transformation. The witness transformation ensures that positive P_2-instances have positive pre-images; therefore, by contraposition, negative P_1-instances go to negative P_2-instances. Well, in reality you design the two transformations together.

An example: the transformation 3SAT → Max Clique

Take a 3SAT instance such as (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x4) ∧ ... The clique instance has altogether 7m nodes, where m is the number of clauses. Column i contains the 7 assignments that satisfy the i-th clause; we call these local satisfying assignments. For the instance above, column 1 contains (x1 = 0; x2 = 0; x3 = 1), ..., (x1 = 1; x2 = 1; x3 = 1), and column 2 contains (x1 = 0; x2 = 0; x4 = 0), ..., (x1 = 1; x2 = 1; x4 = 1). Two local assignments (in different columns) are connected by an edge if they are non-contradictory. Example: (x1 = 0; x2 = 0; x3 = 1) and (x1 = 1; x2 = 1; x4 = 1) are contradictory, because x1 is evaluated to 0 and 1, respectively.

A note you should read

MAX-CLIQUE is not a language, because it returns a number: the size of the maximum clique in the input graph (rather than yes or no). In order to make it a language, we apply the same trick as in the case of the Exact-CUT-Language: we add a target number to the graph, the size of the clique we are searching for. In our reduction we fix this target number: we set it to m. (Recall: m was the number of clauses of the 3SAT instance, i.e. the number of columns in our clique instance.) We show that even this problem is as hard as 3SAT.

So what is the witness transformation here?

1. A witness of the clique instance: any clique of size m (a clique that picks a node from every column).
2. We have to transform such a witness into a satisfying assignment of the 3SAT instance.
3. Witness transformation: create an assignment from the clique by combining the local assignments at the clique's nodes. E.g. (x1 = 0; x2 = 0; x3 = 1) and (x1 = 0; x2 = 0; x4 = 0) become x1 = 0; x2 = 0; x3 = 1; x4 = 0. We will never find a contradiction: since we started from a clique, every two local assignments are joined by an edge, meaning they are non-contradictory. Note: the resulting assignment satisfies all clauses, since the clique had size m.
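Both halves of the Levin reduction can be sketched in Python. The encoding is mine, not from the lecture: a clause is a list of (variable, sign) literals, with sign 1 for a positive and 0 for a negated literal.

```python
from itertools import combinations, product

def sat_to_clique(clauses):
    """Instance transformation: one node per local satisfying assignment
    (7 per clause on three distinct variables), with edges between
    non-contradictory nodes in different columns."""
    nodes = []
    for i, clause in enumerate(clauses):
        for bits in product([0, 1], repeat=3):
            local = {var: b for (var, _), b in zip(clause, bits)}
            if any(local[var] == sign for var, sign in clause):
                nodes.append((i, tuple(sorted(local.items()))))
    edges = {(u, v) for u, v in combinations(nodes, 2)
             if u[0] != v[0] and                  # different columns
             all(dict(u[1]).get(var, b) == b      # no variable set both ways
                 for var, b in v[1])}
    return nodes, edges

def witness_transformation(clique_nodes):
    """Witness transformation: merge the local assignments at the clique's
    nodes into one global assignment (they never clash, being pairwise
    connected, i.e. non-contradictory)."""
    assignment = {}
    for _, local in clique_nodes:
        assignment.update(dict(local))
    return assignment

# Two toy clauses: (x1 v x2 v x3) and (not-x1 v not-x2 v x4).
clauses = [[("x1", 1), ("x2", 1), ("x3", 1)],
           [("x1", 0), ("x2", 0), ("x4", 1)]]
nodes, edges = sat_to_clique(clauses)
print(len(nodes))  # 14 = 7 local satisfying assignments per clause
```

Any edge here is a clique of size m = 2, and feeding its endpoints to `witness_transformation` yields an assignment satisfying both clauses.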

Turning Max Clique into a 3-SAT assignment

Try to combine the local assignments in the clique of size three below


The topics of this course

1. Proving hardness of approximating combinatorial optimization problems.
2. Probabilistic verification (PCPs).

What is the relation between the two? You will learn. But first you have to understand each separately.

NP Optimization Problems (NPO)

(The same as combinatorial optimization problems.) Comes in two flavors: maximization problems and minimization problems. Let P now be, instead of a property, some integer-valued function, computable in polynomial time. Find y such that P^y(x) is maximal (for fixed input x). Call this maximal value OPT(x), and call y an optimal witness.

Example: MAX-CUT

Find a subset S ⊆ V(G) of a graph G such that the size of the cut (S, V(G) \ S) is maximized.

INPUT = G; OUTPUT = largest cut. (Figure: a graph on nodes 1..5 with a candidate cut S. Is S maximal?)

No, you can actually cut 5 edges. (Figure: the same graph with a cut of size 5.)

A 1/2-approximation algorithm for MAX-CUT

Select every node into S independently with probability 1/2. In expectation we get a cut of size |E(G)|/2.

PROOF: Fix an edge (u, v). Each endpoint lands in S independently with probability 1/2, so Prob(u ∈ S, v ∉ S) = Prob(u ∉ S, v ∈ S) = 1/4, and the probability that a fixed edge is cut is 1/4 + 1/4 = 1/2. Summing over the edges (linearity of expectation) gives |E(G)|/2.
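The expectation argument can be checked empirically with a short simulation (the graph encoding is mine; a toy 5-cycle stands in for G):

```python
import random

def random_cut(edges, vertices, seed=None):
    """Put each vertex into S independently with probability 1/2 and
    return the size of the resulting cut; its expectation is |E|/2."""
    rng = random.Random(seed)
    in_S = {v: rng.random() < 0.5 for v in vertices}
    return sum(1 for u, v in edges if in_S[u] != in_S[v])

# A toy 5-cycle: |E| = 5, so the expected cut size is 5/2.
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
trials = 10000
avg = sum(random_cut(edges, range(1, 6), seed=t) for t in range(trials)) / trials
print(round(avg, 1))  # close to 2.5
```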

Approximation Ratio

Let P be a maximization problem with optimal value OPT(x) for input x. Definition: if for every input x an algorithm finds a witness y such that P^y(x) ≥ α · OPT(x), then α is called the approximation ratio of the algorithm.

Best Approximation Ratio

Assume we cannot solve a problem in polynomial time. Then WE LOOK FOR THE POLYNOMIAL TIME ALGORITHM THAT ACHIEVES THE BEST APPROXIMATION RATIO POSSIBLE.

Puzzle: assume that for a problem, for every ε > 0, there is an algorithm that achieves approximation ratio 1 - ε. Does it follow that there exists a single algorithm with approximation ratio 1? Note: such a sequence of algorithms is called a Polynomial Time Approximation Scheme (PTAS).

Best Approximation Ratio for MAX-CUT

Is 1/2 the best approximation ratio for MAX-CUT? No. Goemans-Williamson: a semidefinite programming based algorithm achieves an approximation ratio of 0.87856... Subhash Khot, Guy Kindler, Elchanan Mossel, Ryan O'Donnell: the Unique Games Conjecture implies that no better approximation ratio than 0.87856... can be achieved.

Apropos, what is the Unique Games Conjecture?

It is a fantastic new hypothesis you will learn about. Here is the history of approximation in a nutshell: until PCP theory, people had tried to prove inapproximability via approximation preserving reductions starting from some basic inapproximability assumptions, with little success. In PCP theory we use P ≠ NP as the basic hardness assumption, and Levin reduction (in a certain novel way). This was a breakthrough. In post-PCP theory we use the Unique Games Conjecture as the basic hardness assumption, and Levin reduction. We get a large number of exact bounds, such as that the GW algorithm achieves the best approximation ratio for MAX-CUT.