On the efficient approximability of constraint satisfaction problems July 13, 2007

My world: Max-CSP. Efficient computation:
P: polynomial time.
BPP: probabilistic polynomial time (still efficient).
NP: non-deterministic polynomial time.

Randomness in computation. In practice randomness is for free and usually comes with very low probability of error. Whether it is needed at all is a very basic theoretical question. Primality is in deterministic polynomial time: great progress in theory, of no importance in practice.

World view. We assume P ≠ NP and even NP ⊄ BPP. Stronger assumptions: NP (or some particular problem in NP) cannot be solved in time
1. 2^O((log n)^k),
2. 2^O(n^δ) for some δ > 0,
3. O(2^cn) for some c > 0.
Not needed in this talk, but at least the first is not controversial and all are used.

Hard problems.
NP-complete: the hardest problems in NP, assumed hard.
NP-hard: even harder problems; if any of them is in P then NP = P. Often non-decision problems closely related to NP.

Interesting family of problems. Constraints on constant-size subsets of Boolean variables: Constraint Satisfaction Problems, CSPs. Model problems for now: 3-Sat and 3-Lin.

3-Sat. Satisfiability of 3-CNF formulas, i.e. formulas of the form
ϕ = (x1 ∨ x7 ∨ x12) ∧ (x2 ∨ x3 ∨ x8) ∧ ... ∧ (x5 ∨ x23 ∨ x99),
where each clause is a disjunction of three literals (variables or their negations); n variables, m clauses: ϕ = ∧_{i=1}^m C_i.

Basics for 3-Sat. Probably the most classical NP-complete problem, from Cook's original paper in 1971. No algorithm is known that runs faster than 2^cn, and much work has been done trying to improve the value of c.
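To make the 2^cn baseline concrete, here is a minimal brute-force sketch of the naive 2^n enumeration; the clause encoding (a literal as a pair (variable, negated)) is my own illustration, not part of the talk.

```python
from itertools import product

def brute_force_3sat(n, clauses):
    """Try all 2^n assignments; a literal is a pair (variable, negated)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[v] != neg for v, neg in clause) for clause in clauses):
            return bits                     # a satisfying assignment
    return None                             # unsatisfiable

# (x0 or x1) and (not x0 or x1): satisfied by setting x1 = True
print(brute_force_3sat(2, [[(0, False), (1, False)], [(0, True), (1, False)]]))
```

The point of the slide is exactly that nothing fundamentally faster than this exponential search is known.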

3-Lin. A system of linear equations modulo 2 with at most three variables in each equation:
x1 + x2 + x3 = 1
x1 + x2 = 1
x1 + x2 + x4 = 1
x2 + x4 = 0
x1 + x3 + x4 = 0
x2 + x3 + x4 = 1
x1 + x3 = 0   (all mod 2)
m equations, n variables. The decision problem is easy to solve by Gaussian elimination.
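Gaussian elimination mod 2 can be sketched as an XOR-basis elimination; the encoding below (an equation as a set of variables plus a right-hand bit) is my own choice for illustration.

```python
def solve_gf2(equations, n):
    """Gaussian elimination over GF(2).

    Each equation is (vars, b): the variables in {0, ..., n-1} whose sum
    should equal b modulo 2.  Returns one solution, or None if inconsistent.
    """
    pivot = {}                                # leading variable -> (mask, rhs)
    for vars_, b in equations:
        mask = sum(1 << v for v in vars_)
        while mask:
            v = mask.bit_length() - 1         # current leading variable
            if v not in pivot:
                pivot[v] = (mask, b)          # new pivot row
                break
            pm, pb = pivot[v]
            mask ^= pm                        # eliminate v from this equation
            b ^= pb
        else:
            if b:
                return None                   # reduced to 0 = 1: inconsistent
    x = [0] * n
    for v in sorted(pivot):                   # lower variables are settled first
        mask, b = pivot[v]
        below = sum(x[u] for u in range(v) if mask >> u & 1) % 2
        x[v] = b ^ below
    return x

print(solve_gf2([({0, 1}, 1), ({1, 2}, 1), ({0, 2}, 0)], 3))  # one solution
```

The procedure runs in polynomial time, which is why the decision version of 3-Lin is easy.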

Max-CSP: the new view. Given the set of constraints, maybe not all simultaneously satisfiable, try to satisfy as many as possible. Optimization as opposed to decision.

Hopes and fears. Hope for Max-3-Sat: we know we cannot find the best solution, but maybe we can find something reasonably good. Fear for Max-3-Lin: if we cannot satisfy all equations, Gaussian elimination does not seem to do anything interesting.

Max-3-Lin vs Max-3-Sat. Observation: (x1 ∨ x2 ∨ x3) is true iff exactly 4 of the equations
x1 + x2 + x3 = 1
x1 + x2 = 1
x1 + x3 = 1
x2 + x3 = 1
x1 = 1
x2 = 1
x3 = 1   (all mod 2)
are satisfied (and otherwise none).
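The observation is easy to verify by enumerating all eight assignments; a small sketch with 0/1-valued variables, x[0] playing the role of x1:

```python
from itertools import product

# The seven mod-2 equations derived from the clause (x1 or x2 or x3);
# each is (coefficient triple, right-hand side).
EQS = [((1, 1, 1), 1), ((1, 1, 0), 1), ((1, 0, 1), 1), ((0, 1, 1), 1),
       ((1, 0, 0), 1), ((0, 1, 0), 1), ((0, 0, 1), 1)]

def satisfied(x):
    """Number of the seven equations satisfied by the 0/1 triple x."""
    return sum((a[0] * x[0] + a[1] * x[1] + a[2] * x[2]) % 2 == b for a, b in EQS)

for x in product([0, 1], repeat=3):
    print(x, satisfied(x))   # 4 whenever x1 or x2 or x3 holds, otherwise 0
```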

A reduction. Take a 3-CNF formula ϕ = ∧_{i=1}^m C_i and create 7m equations as on the previous slide, giving a system L. Easy fact: ϕ is satisfiable iff we can simultaneously satisfy 4m equations of L. Max-3-Lin is NP-hard! The fear was justified.

Performance measure for the hope. Approximation ratio:
α = Value(found solution) / Value(best solution),
worst case over all instances. α = 1 is the same as finding the optimum; otherwise α < 1. For a randomized algorithm we allow expectation over the internal randomness, worst case over inputs.

The mindless algorithm. Give each variable a random value without looking at the constraints. For Max-3-Sat with each clause of length 3, we satisfy each clause with probability 7/8. Approximation ratio at least 7/8 (achievable even deterministically).
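A quick sanity check of the 7/8 figure for one fixed clause (the particular clause below is an arbitrary choice of mine):

```python
from itertools import product
from fractions import Fraction

def clause(x, y, z):
    """One fixed clause of length 3, e.g. x or (not y) or z."""
    return x or (not y) or z

sat = sum(clause(*bits) for bits in product([False, True], repeat=3))
print(Fraction(sat, 8))   # 7/8: exactly one of the 8 assignments falsifies it
```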

The hope for Max-3-Sat If a formula with m clauses is satisfiable then we can find an assignment that satisfies αm clauses where α > 7/8. It turns out that this hope cannot be fulfilled, but first a detour.

The basic question. For which types of constraints can we beat the random mindless algorithm, and on what instances?
As soon as the optimal value is significantly better than random, i.e. (1 + ε) times the random fraction.
When the optimal value is (very) large, i.e. (1 − ε)m.
When we can satisfy all constraints: satisfiable instances.

Two branches Positive results. Efficient algorithms with provable performance ratios. Negative results. Proving that certain tasks are NP-hard, or possibly hard given some other complexity assumption.

The favorite techniques Algorithms: Semi-definite programming. Introduced in this context by Goemans and Williamson. Lower bounds: The PCP-theorem and its consequences. Arora, Lund, Motwani, Sudan and Szegedy.

Max-Cut. Given a graph, partition the vertices into two parts, cutting as many edges as possible. A famous NP-complete problem. Constraints: x_i ≠ x_j for each edge (i, j).

Max-Cut in formulas. The task is to maximize, with x_i ∈ {−1, 1} and edge set E,
Σ_{(i,j)∈E} (1 − x_i x_j)/2.
Relax by setting y_ij = x_i x_j and requiring that Y is a symmetric positive semidefinite matrix with y_ii = 1.
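Before the relaxation, the Boolean objective can be evaluated by brute force on small graphs; a hypothetical sketch:

```python
from itertools import product

def max_cut(n, edges):
    """Brute force: maximize the sum over edges of (1 - x_i x_j) / 2, x_i in {-1, +1}."""
    best = 0
    for signs in product([-1, 1], repeat=n):
        best = max(best, sum((1 - signs[i] * signs[j]) // 2 for i, j in edges))
    return best

# A triangle: any partition cuts at most 2 of the 3 edges.
print(max_cut(3, [(0, 1), (1, 2), (0, 2)]))
```

Each edge term is 1 when the endpoints land on different sides and 0 otherwise, so the sum counts cut edges exactly.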

Positive semidefinite matrices? A symmetric matrix Y is positive semidefinite iff any one of the following equivalent conditions holds:
all eigenvalues λ_i ≥ 0;
z^T Y z ≥ 0 for every vector z ∈ R^n;
Y = V^T V for some matrix V.
In matrix language, y_ij = x_i x_j says Y = x x^T.

By a result of Alizadeh we can, to any desired accuracy, solve
max Σ_ij c_ij y_ij subject to Σ_ij a^k_ij y_ij ≤ b_k and Y positive semidefinite.
Intuitive reason: the set of PSD matrices is convex, and we should be able to find the optimum of a linear function over a convex set (as is true for LP).

View using Y = V^T V. We want to solve
max_{x ∈ {−1,1}^n} Σ_{(i,j)∈E} (1 − x_i x_j)/2,
but as Y = V^T V we instead maximize
max_{||v_i||=1, i=1,...,n} Σ_{(i,j)∈E} (1 − (v_i, v_j))/2,
i.e. we optimize over vectors instead of real numbers.

Going vector to Boolean The vector problem accepts a more general set of solutions. Gives higher objective value. Key question: How to use the vector solution to get back a Boolean solution that does almost as well.

Rounding vectors to Boolean values. Great suggestion by GW: given the vector solution v_i, pick a random vector r and set x_i = Sign((v_i, r)), where (v_i, r) is the inner product.
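A Monte Carlo sketch of this random-hyperplane rounding for a single pair of unit vectors; two dimensions suffice for one pair, and the sampling setup is my own illustration.

```python
import math, random

def round_pair(v1, v2, trials=100_000, seed=0):
    """Estimate Pr[Sign((v1, r)) != Sign((v2, r))] over random directions r."""
    rng = random.Random(seed)
    differ = 0
    for _ in range(trials):
        r = (rng.gauss(0, 1), rng.gauss(0, 1))   # rotation-invariant direction
        s1 = v1[0] * r[0] + v1[1] * r[1] > 0
        s2 = v2[0] * r[0] + v2[1] * r[1] > 0
        differ += s1 != s2
    return differ / trials

theta = 2 * math.pi / 3                          # a 120-degree angle
v1, v2 = (1.0, 0.0), (math.cos(theta), math.sin(theta))
print(round_pair(v1, v2), theta / math.pi)       # both close to 2/3
```

The estimate matches θ/π, the separation probability used in the analysis.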

Intuition of the rounding. A large contribution (1 − (v_i, v_j))/2 to the objective function means the angle between v_i and v_j is large, and then Sign((v_i, r)) ≠ Sign((v_j, r)) is likely.

Analyzing GW. Go term by term; let θ be the angle between the two vectors. Contribution to the semidefinite objective function:
(1 − (v_i, v_j))/2 = (1 − cos θ)/2.
Probability of being cut:
Pr[Sign((v_i, r)) ≠ Sign((v_j, r))] = θ/π.
The minimal quotient gives the approximation ratio
α_GW = min_θ 2θ/(π(1 − cos θ)) ≈ .8785.
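The constant can be checked numerically; a simple grid search over θ ∈ (0, π]:

```python
import math

def quotient(theta):
    """(probability of cutting the edge) / (SDP contribution) for angle theta."""
    return (theta / math.pi) / ((1 - math.cos(theta)) / 2)

alpha = min(quotient(k * math.pi / 100_000) for k in range(1, 100_001))
print(round(alpha, 4))   # about 0.8786
```

Near θ = 0 the quotient blows up, so the minimum is attained at an interior angle (roughly 2.33 radians).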

Immediate other application. The original GW paper derived the same bound for approximating Max-2-Sat. Improved by [LLZ] to .9401 (not analytically proved): the obvious semidefinite program with a more complicated rounding. Many other applications, some using many additional ideas.

Switching sides Let us turn to hardness results.

Proving NP-hardness results for approximation problems. We want to study problem X. Given a Sat formula ϕ, produce an instance I of X such that:
ϕ satisfiable ⇒ Value(I) ≥ c;
ϕ not satisfiable ⇒ Value(I) ≤ s.
Then it is NP-hard to approximate X within s/c + ε: running the approximation algorithm on I would tell us whether ϕ is satisfiable.

Inapproximability for Max-3-Sat. Given a Sat formula ϕ, produce a different Sat formula ψ with m clauses such that:
ϕ satisfiable ⇒ ψ satisfiable;
ϕ not satisfiable ⇒ we can simultaneously satisfy only (1 − ε)m of the clauses of ψ.
This gives inapproximability ratio (1 − ε).

Probabilistically Checkable Proofs (PCPs). A proof that a 3-Sat formula ϕ is satisfiable is traditionally an assignment to the variables, checked by reading all the variables and verifying every clause. We want to read much less of the proof: only a constant number of bits.

Sought reduction gives a PCP! Proof: an assignment to the variables of ψ. Checking: pick a random clause, read the variables that appear in it, and check whether it is satisfied. Preserving satisfiability: ϕ satisfiable implies ψ satisfiable, and we always accept. Amplifying non-satisfiability: ϕ not satisfiable implies ψ is only (1 − ε)-satisfiable, and we reject with probability at least ε. Repeat a constant number of times to decrease the fooling probability.
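The repetition count is simple arithmetic: after t independent checks the fooling probability is (1 − ε)^t. A small sketch (the helper name is my own):

```python
import math

def repetitions(eps, target=0.5):
    """Smallest t with 1 - (1 - eps)**t >= target, when each round rejects w.p. eps."""
    return math.ceil(math.log(1 - target) / math.log(1 - eps))

print(repetitions(0.1))   # 7 rounds push the rejection probability past 1/2
```

Since ε is a constant, a constant number of rounds suffices, so only a constant number of bits is read in total.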

Thinking more carefully Our type of reduction is equivalent to a good PCP.

The PCP theorem [ALMSS]: there is a proof system for satisfiability that reads a constant number of bits such that
the verifier always accepts a correct proof of a correct statement;
the verifier rejects any proof of an incorrect statement with probability 1/2.
This translates to any NP statement by a reduction.

Proof of the PCP theorem. Original proof: algebraic techniques, properties of polynomials, proof composition, aggregation of queries, etc. Many details. An interesting new proof by Dinur (2005) is essentially combinatorial; it relies on recursion and expander graphs. These basic proofs give BAD inapproximability constants.

Improving the constants. A long story; one final point. Theorem [H]: for any ε > 0, it is NP-hard to approximate Max-3-Lin within 1/2 + ε. This matches the mindless algorithm up to ε: no nontrivial approximation in the non-satisfiable case. The fear was realized in the worst possible way.

Ingredients in the proof/construction: two-prover games; parallel repetition for two-prover games [R]; coding strings by the long code [BGS]; using discrete Fourier transforms in the analysis [H].

Classifying CSPs.
1. Hard to approximate better than the random mindless algorithm on satisfiable instances: approximation resistant on satisfiable instances (Max-3-Sat).
2. Hard to do better than the random mindless algorithm on (almost) satisfiable instances: approximation resistant (Max-3-Lin).
3. Can beat the random mindless algorithm as soon as the optimum is significantly better than random: fully approximable (Max-Cut).
4. Has an approximation constant better than that achieved by the random mindless algorithm, but is not in the previous class: somewhat approximation resistant.

Constraints on 2 variables. All such predicates are fully approximable, even over larger domains. Semidefinite programming is all-powerful.

Constraints on 3 variables. Approximation resistant iff the predicate accepts all strings of even parity or all strings of odd parity. Fully approximable (class 3) iff it is uncorrelated with the parity of all three variables; this reduces to a (real-valued) sum of predicates on two variables. The other (nontrivial) cases belong to class 4. The approximation resistance applies to satisfiable instances if the predicate accepts at least 6 inputs.

Unknown for width 3. What happens with the "not two ones" predicate on satisfiable instances: could we do better than random? This is not the case for merely (1 − ε)-satisfiable instances! Parity behaves differently on satisfiable and almost-satisfiable instances!

Width 4. Partial classification by Hast: 400 essentially different predicates; 79 approximation resistant, 275 not approximation resistant, 46 not classified.
# Acc    1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Non-res  1  4  6 19 27 50 50 52 27 26  9  3  1  0  0
Res      0  0  0  0  0  0  0 16  6 22 11 15  4  4  1
Unkn     0  0  0  0  0  0  6  6 23  2  7  1  1  0  0
Satisfiability ignored.

Large width. General facts, assuming width k: a predicate that accepts very few inputs is non-trivially approximable; there exist rather sparse approximation-resistant predicates; the really dense predicates are approximation resistant.

General result on sparse predicates. Any k-ary Boolean predicate can be approximated within ck/2^k [CMM]. Hence no predicate with fewer than ck accepting configurations is approximation resistant. Uses semidefinite programming.

Sparse resistant predicates. For any l_1 and l_2 there are predicates on k = l_1 + l_2 + l_1·l_2 Boolean variables that accept 2^(l_1+l_2) vectors and are approximation resistant. That is only 2^O(√k) accepted inputs.

Very dense predicates. If k ≥ l_1 + l_2 + l_1·l_2, any predicate on k Boolean variables that rejects fewer than 2^(l_1+l_2) inputs is approximation resistant [Hast]. This is 2^o(k), but still a reasonable number. For small k the constants can be improved.

General fact? It seems that the more inputs a predicate accepts, the more likely it is to be approximation resistant. However, approximation resistance is not a monotone property: there is an example of predicates P and Q with P(x) ⇒ Q(x) where P is approximation resistant and Q is not.

Asymptotic question. For large k, is a random predicate of k Boolean variables approximation resistant? Probably...

Unique games conjecture. Made by Khot. A CSP of width 2 over large domains: constraints C_i(x_a, x_b) such that for each value of x_a there exists a unique value of x_b satisfying the constraint, and vice versa. Conjecture: it is hard to distinguish (1 − ε)-satisfiable instances from ε-satisfiable instances. True? A new complexity class?

Consequences of UGC. Many; some central ones: the constant α_GW is sharp for Max-Cut [KKMO]; Vertex Cover is hard to approximate within 2 − ε [KR]; the constant .9401 is optimal for Max-2-Sat [A]; random predicates are approximation resistant [H].

Summing up. We have a huge classification problem ahead of us. NP-completeness of the decision problem is almost universally true and has been understood since the 1970s [S]. Approximation resistance on satisfiable instances means that efficient computation cannot do anything at all: a much stronger notion of hardness. Open: settle the unique games conjecture! Most basic fact: Max-3-Sat is approximation resistant on satisfiable instances.