Approximating Maximum Constraint Satisfaction Problems

Approximating Maximum Constraint Satisfaction Problems. Johan Håstad (some slides by Per Austrin). Shafi and Silvio celebration, December 10, 2013.

The Swedish Angle Today we are celebrating a Turing Award. December 10th is the day of the Nobel prizes. Maybe the Israeli dress code is a bit too relaxed for a merger.

The personal angle You do not appreciate your parents until your own kids have grown up. You do not appreciate your advisor until you have graduated a number of your own students

What was the best? Freedom to do what I wanted. Comments on talks and papers.

Enough. Back to business.

Apology This is a survey talk but I focus on the story. Will fail to acknowledge some critical contributions. Essentially no technical details.

MAX k-SAT. The MAX k-SAT problem: Given: a k-CNF formula. Goal: satisfy as many clauses as possible. (Note: exactly k literals in every clause.) E.g. MAX 3-SAT: (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x5) ∧ (x2 ∨ x3 ∨ x5) ∧ (x2 ∨ x4 ∨ x5) ∧ (x3 ∨ x4 ∨ x5) ∧ (x3 ∨ x4 ∨ x5) ∧ (x3 ∨ x4 ∨ x5), with an example assignment x1 = FALSE, x2 = FALSE, x3 = TRUE, x4 = FALSE, x5 = FALSE.

Basics for MAX k-SAT. NP-hard. Approximation ratio: α = Value(found solution) / Value(best solution), worst case over all instances. α = 1 is the same as finding an optimal solution; otherwise α < 1.

Approximating MAX k-SAT. Trivial algorithm: pick a random assignment. Approximation ratio 3/4 for MAX 2-SAT; in fact it satisfies a 3/4 fraction of the clauses in expectation. Surely something smarter can be done? Yes: Goemans-Williamson [GW95] used semi-definite programming to obtain a 0.878 approximation for MAX 2-SAT.

Approximating MAX k-SAT. Trivial algorithm: pick a random assignment. Approximation ratio 7/8 for MAX 3-SAT. Surely something smarter can be done? No: it is NP-hard to achieve ratio 7/8 + ε [H01]. The random assignment gives the optimal approximation ratio!
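To make the trivial algorithm concrete, here is a minimal sketch in Python (the clause encoding is my own, not from the talk: a clause is a tuple of signed variable indices, i for xi and -i for its negation). A uniformly random assignment satisfies each k-clause with probability 1 − 2^{-k}, so the expected fraction of satisfied clauses is 3/4 for k = 2 and 7/8 for k = 3.

```python
import random

def random_assignment_max_ksat(clauses, n, trials=100):
    """Trivial MAX k-SAT algorithm: try random assignments, keep the best.

    clauses: list of clauses, each a tuple of signed variable indices
             (i means x_i, -i means NOT x_i), variables numbered 1..n.
    Returns (best fraction of satisfied clauses, best assignment).
    """
    def satisfied(clause, assignment):
        # A clause is satisfied if at least one of its literals is True.
        return any(assignment[abs(l)] == (l > 0) for l in clause)

    best_frac, best_assignment = -1.0, None
    for _ in range(trials):
        assignment = {i: random.random() < 0.5 for i in range(1, n + 1)}
        frac = sum(satisfied(c, assignment) for c in clauses) / len(clauses)
        if frac > best_frac:
            best_frac, best_assignment = frac, assignment
    return best_frac, best_assignment

# Example: a few 3-clauses over x1..x5; expect a fraction around 7/8 or better.
clauses = [(1, 2, -3), (-1, 2, 4), (1, -3, -4), (2, 3, 4), (-2, -3, 5)]
print(random_assignment_max_ksat(clauses, n=5))
```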

CSPs defined by a predicate P. For a predicate P : {0,1}^k → {0,1}, the MAX CSP(P) problem: Given: a set of constraints, each of the form P(l1, l2, ..., lk) = 1 for literals l1, ..., lk. Goal: satisfy as many constraints as possible. E.g. MAX CSP(x1 ∨ x2 ∨ x3) = MAX 3-SAT.

Max-P, k = 3. P(x1, x2, x6) P(x1, x2, x3) P(x1, x2, x4) P(x1, x4, x7) P(x1, x3, x4) P(x2, x3, x4) P(x1, x3, x6) P(x2, x3, x4) P(x2, x3, x4) P(x4, x6, x7) P(x2, x3, x5) P(x2, x3, x5) P(x3, x6, x7) P(x2, x4, x5) P(x3, x4, x5) P(x4, x5, x7) P(x3, x4, x5) P(x3, x4, x5). Satisfy as many as possible.

Approximation Resistance. Trivial algorithm: pick a random assignment. Approximation ratio |P^{-1}(1)|/2^k. P is approximation resistant if it is hard to do better.
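As a small illustration of the trivial ratio |P^{-1}(1)|/2^k and of evaluating a MAX CSP(P) instance, here is a sketch (the encoding of P as a set of accepting k-bit tuples, and of constraints as tuples of signed variable indices, is my own convention):

```python
from itertools import product

def trivial_ratio(accepting, k):
    """Expected fraction of constraints a uniformly random assignment satisfies:
    |P^{-1}(1)| / 2^k, where `accepting` is the set of accepting k-bit tuples."""
    return len(accepting) / 2 ** k

def value(accepting, constraints, assignment):
    """Fraction of constraints P(l_1, ..., l_k) = 1 satisfied by `assignment`.

    constraints: list of k-tuples of signed variable indices (literals);
    assignment: dict mapping variable index to 0/1.
    """
    def literal(l):
        v = assignment[abs(l)]
        return v if l > 0 else 1 - v
    sat = sum(tuple(literal(l) for l in c) in accepting for c in constraints)
    return sat / len(constraints)

# MAX 3-SAT as MAX CSP(P) with P = OR on 3 bits: accepts everything except 000.
P_or3 = {t for t in product((0, 1), repeat=3) if any(t)}
print(trivial_ratio(P_or3, 3))                                    # 7/8
print(value(P_or3, [(1, 2, 3), (-1, 2, -3)], {1: 0, 2: 0, 3: 1}))  # 1.0
```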

Alternative formulation. P is approximation resistant iff, for any ε > 0, it is hard to distinguish (1 − ε)-satisfiable instances from (|P^{-1}(1)|/2^k + ε)-satisfiable instances.

Understanding Approximation Resistance. Overall goal: understand the structure of resistant predicates; when is non-trivial approximation possible? To make life simple: Boolean variables, and the same predicate in each constraint.

Alternative view, non-resistant. Given a set of k-tuples of literals. Promise: there is an assignment to the variables such that a (1 − ε) fraction of the resulting k-bit strings satisfy P. Achieved: an assignment to the variables such that more than the expected ratio of the k-bit strings satisfy P.

Requiring less. Given a set of k-tuples of literals. Promise: there is an assignment to the variables such that a (1 − ε) fraction of the resulting k-bit strings satisfy P. Achieved: an assignment to the variables such that the distribution of k-bit strings is noticeably far from uniform.

Instance. P(x1, x2, x6) P(x1, x2, x3) P(x1, x2, x4) P(x1, x4, x7) P(x1, x3, x4) P(x2, x3, x4) P(x1, x3, x6) P(x2, x3, x4) P(x2, x3, x4) P(x4, x6, x7) P(x2, x3, x5) P(x2, x3, x5) P(x3, x6, x7) P(x2, x4, x5) P(x3, x4, x5) P(x4, x5, x7) P(x3, x4, x5) P(x3, x4, x5)

The k-bit strings. (x1, x2, x6), (x1, x2, x3), (x1, x2, x4), (x1, x4, x7), (x1, x3, x4), (x2, x3, x4), (x1, x3, x6), (x2, x3, x4), (x2, x3, x4), (x4, x6, x7), (x2, x3, x5), (x2, x3, x5), (x3, x6, x7), (x2, x4, x5), (x3, x4, x5), (x4, x5, x7), (x3, x4, x5), (x3, x4, x5). Within ε of the uniform distribution on k-bit strings?

Useless. P is useless if, given a set of k-tuples of literals, it is hard to distinguish: Yes: there is an assignment to the variables such that a (1 − ε) fraction of the resulting k-bit strings satisfy P. No: for all assignments to the variables, the distribution of k-bit strings is at most ε far from uniform.

Two easy facts. If P is useless then it is approximation resistant. If P is useless and P implies Q, then Q is useless.

A good fact. Most early approximation resistance proofs in fact proved uselessness. In particular 3-Lin [H01], i.e. P(x) = x1 ⊕ x2 ⊕ x3, is useless.

Resistance classification First we take a look at small arities of P and then we turn to asymptotic questions.

k = 2. [Slide listing the predicates on two variables, e.g. x1 ∨ x2, x1 ∧ x2, x1 ⊕ x2, x1 = x2, and variants with negated literals.] No predicate on two variables is resistant [GW95].

k = 3. 3-XOR is resistant [H01]; in fact 3-XOR is useless [H01]. Everything else has a non-trivial approximation [Zwick99].

k = 4.

weight  #approx  #resist  #unknown
  15       0        1        0
  14       0        4        0
  13       1        4        1
  12       3       15        1
  11       9       11        7
  10      26       22        2
   9      27        6       23
   8      52       16        6
   7      50        0        6
   6      50        0        0
   5      27        0        0
   4      19        0        0
   3       6        0        0
   2       4        0        0
   1       1        0        0

Most predicates on four variables are classified [Hast05]. After eliminating symmetries, a total of 400 predicates: 79 resistant, 275 approximable, 46 unknown. Little apparent structure (4-Lin being useless gives some but not all of the hardness).

Systematic Results? We have two useful tools: approximability via a nice low-degree expansion, using semidefinite programming; and proving that very sparse predicates are useless.

Fourier representation. Every P : {−1,1}^k → {0,1} has a unique Fourier representation

P(x) = Σ_{S ⊆ [k]} P̂(S) ∏_{i ∈ S} x_i.

Let P^{=d}(x) be the part that is of degree d ≤ k:

P^{=1}(x) = Σ_{i=1}^{k} P̂({i}) x_i,
P^{=2}(x) = Σ_{i<j} P̂({i,j}) x_i x_j, ...
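For small k the Fourier coefficients can simply be computed by brute force. A sketch (helper names are mine; the convention here is that P takes ±1 inputs and returns 0/1, as above):

```python
from itertools import combinations, product
from math import prod

def fourier_coefficients(P, k):
    """Brute-force Fourier expansion of P : {-1,1}^k -> {0,1}.
    Returns {S: P_hat(S)} with P_hat(S) = E_x[ P(x) * prod_{i in S} x_i ]."""
    points = list(product((-1, 1), repeat=k))
    return {
        S: sum(P(x) * prod(x[i] for i in S) for x in points) / 2 ** k
        for d in range(k + 1)
        for S in combinations(range(k), d)
    }

# Example: the 3-SAT predicate OR(x1, x2, x3), with -1 playing the role of TRUE.
OR3 = lambda x: 0 if x == (1, 1, 1) else 1
coeffs = fourier_coefficients(OR3, 3)
print(coeffs[()])        # constant part: 7/8, the fraction of accepting inputs
print(coeffs[(0,)])      # a degree-1 coefficient: -1/8
print(coeffs[(0, 1)])    # a degree-2 coefficient: -1/8
```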

Approximability by nice low-degree expansion. Theorem ([Hast05]): Suppose there is a C ∈ R such that C · P^{=1}(x) + P^{=2}(x) > 0 for every x ∈ P^{-1}(1). Then P is approximable. The theorem is somewhat more general, allowing C · P^{=1}(x) + P^{=2}(x) to equal 0 on up to two accepting inputs.
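The hypothesis of Hast's theorem is easy to test by brute force for small k: for each accepting x the condition C · P^{=1}(x) + P^{=2}(x) > 0 is one linear constraint on the single unknown C, so it suffices to intersect intervals. A sketch of such a check (it only tests the hypothesis; it is not Hast's approximation algorithm):

```python
from itertools import combinations, product
from math import prod

def hast_condition_holds(P, k):
    """Brute-force check for small k: does some C in R satisfy
    C*P1(x) + P2(x) > 0 for every accepting x, where Pd is the
    degree-d part of the Fourier expansion of P?"""
    points = list(product((-1, 1), repeat=k))

    def coeff(S):  # Fourier coefficient P_hat(S) = E_y[P(y) * prod_{i in S} y_i]
        return sum(P(y) * prod(y[i] for i in S) for y in points) / 2 ** k

    deg1 = {(i,): coeff((i,)) for i in range(k)}
    deg2 = {S: coeff(S) for S in combinations(range(k), 2)}

    lower, upper = float("-inf"), float("inf")
    for x in points:
        if not P(x):
            continue
        a = sum(c * x[S[0]] for S, c in deg1.items())            # P1(x)
        b = sum(c * x[S[0]] * x[S[1]] for S, c in deg2.items())  # P2(x)
        if a > 0:
            lower = max(lower, -b / a)   # need C > -b/a
        elif a < 0:
            upper = min(upper, -b / a)   # need C < -b/a
        elif b <= 0:
            return False                 # a = 0 forces b > 0
    return lower < upper                 # some C in the open interval works

# 3-SAT's OR predicate (with -1 playing the role of TRUE): prints False,
# consistent with MAX 3-SAT being approximation resistant.
OR3 = lambda x: 0 if x == (1, 1, 1) else 1
print(hast_condition_holds(OR3, 3))
```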

Implications. Hast's Theorem implies the following: Theorem ([Hast05]): Every predicate with fewer than 2⌊(k + 1)/2⌋ accepting assignments is approximable. Theorem ([Austrin-H09]): Let s ≤ c k²/log k. A uniformly random predicate with s satisfying assignments is approximable with probability 1 − o_k(1).

A very sparse predicate. The Hadamard predicate of arity k = 2^t − 1 has its coordinates indexed by the non-zero linear functions L : {0,1}^t → {0,1}. Had(x) is true iff there exists x0 ∈ {0,1}^t such that x_L = L(x0) for every L. This is the sparsest linear subspace without constant or repeated coordinates.
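A brute-force construction of Had for small t, under the reading above (coordinates indexed by the non-zero linear functions L_a(x0) = ⟨a, x0⟩ over GF(2); the helper is my own sketch):

```python
from itertools import product

def hadamard_predicate(t):
    """Accepting set of the Hadamard predicate of arity k = 2^t - 1.

    Coordinates are indexed by the non-zero vectors a in {0,1}^t, i.e. the
    non-zero linear functions L_a(x0) = <a, x0> mod 2.  An input is accepted
    iff it equals (L_a(x0))_a for some x0 in {0,1}^t.
    """
    nonzero_a = [a for a in product((0, 1), repeat=t) if any(a)]  # k = 2^t - 1 of them
    accepting = set()
    for x0 in product((0, 1), repeat=t):
        accepting.add(tuple(sum(ai * xi for ai, xi in zip(a, x0)) % 2
                            for a in nonzero_a))
    return accepting

had = hadamard_predicate(3)                  # arity k = 7
print(len(had), len(had) == 2 ** 3)          # 8 accepting assignments, i.e. k + 1
# For t = 2 (arity 3) the accepted strings satisfy a single parity check,
# i.e. the predicate is 3-Lin.
```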

Results for Had. Had was proved useless assuming the Unique Games Conjecture (UGC) by Samorodnitsky and Trevisan [ST06]. Theorem [Chan12]: Had is useless for any t (arity k = 2^t − 1). Note that t = 2 gives 3-Lin.

Random predicates. Had is very sparse, and for random predicates we get: Theorem ([H07]): Let s ≥ c · 2^k/k^{1/2}. A uniformly random predicate P : {0,1}^k → {0,1} with s satisfying assignments is useless with probability 1 − o_k(1).

In Pictures. [Figure: the line of possible numbers of satisfying assignments, split into a "not resistant" and a "resistant" region, with the thresholds marked: 2⌊(k + 1)/2⌋ (every predicate below is approximable), Ω(k²/log k) (random predicates below are approximable), k + 1 = 2^t (the Hadamard predicate, resistant; vs. 2⌊(k + 1)/2⌋), and O(2^k/k^{1/2}) (random predicates above are useless).]

Getting more hardness results. Not many more NP-hardness results are known, so we turn to the Unique Games Conjecture (UGC).

Unique games conjecture? The UGC, made by Khot in 2002. A constraint satisfaction problem with variables x_j ∈ [L] and constraints P_i(x_j, x_k) of the form π_i(x_j) = x_k, 1 ≤ i ≤ m: distinguish whether the optimal value is (1 − ε)m or εm. Conjecture: this is NP-hard, or at least not solvable in polynomial time.

Further restriction, UGC. Can have constraints of the form x_j − x_k ≡ c (mod L).

Consequences of UGC. Vertex Cover is hard to approximate within 2 − ε [Khot-Regev 03]. The optimal constant for Max-Cut [KKMO04]. Very useful for approximation resistance.

Pairwise independence. A distribution µ over {0,1}^k is balanced and pairwise independent if the marginal distributions on every pair of coordinates are uniform. E.g. µ3: pick a random x ∈ {0,1}^3 s.t. x1 ⊕ x2 ⊕ x3 = 1; Supp(µ3) = {001, 010, 100, 111}. We say that P : {0,1}^k → {0,1} contains a balanced pairwise independent distribution µ over {0,1}^k if Supp(µ) ⊆ P^{-1}(1).
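A quick check of this definition on µ3 (a sketch; the representation of a distribution as a dict from support points to probabilities is my own choice):

```python
from itertools import combinations, product

def is_balanced_pairwise_independent(dist, k):
    """dist: dict mapping k-bit tuples to probabilities.
    Checks that every single coordinate and every pair of coordinates has
    the uniform marginal (1/2 per bit, 1/4 per pair of bits)."""
    def marginal(coords, values):
        return sum(p for x, p in dist.items()
                   if all(x[c] == v for c, v in zip(coords, values)))

    balanced = all(abs(marginal((i,), (b,)) - 0.5) < 1e-9
                   for i in range(k) for b in (0, 1))
    pairwise = all(abs(marginal((i, j), (a, b)) - 0.25) < 1e-9
                   for i, j in combinations(range(k), 2)
                   for a, b in product((0, 1), repeat=2))
    return balanced and pairwise

mu3 = {(0, 0, 1): 0.25, (0, 1, 0): 0.25, (1, 0, 0): 0.25, (1, 1, 1): 0.25}
print(is_balanced_pairwise_independent(mu3, 3))   # True
```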

Pairwise independence. Theorem ([Austrin-Mossel09]): Let P : {0,1}^k → {0,1} contain a balanced pairwise independent distribution. Then, assuming the UGC, P is useless. In fact, containing such a distribution is necessary for uselessness [AH12].

Implications of AM. Theorem ([Austrin-Mossel09]): Assuming the UGC and the Hadamard Conjecture, there exist hereditarily resistant predicates with 4⌈(k + 1)/4⌉ accepting assignments for any k. Theorem ([Austrin-H09]): Let s ≥ c k². Assuming the UGC, a uniformly random predicate P : {0,1}^k → {0,1} with s satisfying assignments is resistant with probability 1 − o_k(1). Theorem ([Austrin-H09]): Assuming the UGC, every predicate with more than (32/33) · 2^k accepting assignments is resistant.

In Pictures with UGC. [Figure: the same line of numbers of satisfying assignments, now with the UGC-based thresholds marked: 4⌈(k + 1)/4⌉ resistant (vs. 2⌊(k + 1)/2⌋ approximable), O(k²) for random resistant predicates (vs. Ω(k²/log k) approximable), and the dense thresholds (32/33) · 2^k and (13/16) · 2^k.]

Resistant but not useless. Approximation resistant but not useless: consider the predicate GLST : {−1,1}^4 → {0,1},

GLST(x1, x2, x3, x4) = x2 ⊕ x3 if x1 = 1, and x2 ⊕ x4 if x1 = −1.

GLST is resistant [GLST98] but does not contain pairwise independence and hence is not useless.

GLST continued.

GLST(x1, x2, x3, x4) = x2 ⊕ x3 if x1 = 1, and x2 ⊕ x4 if x1 = −1
                     = 1/2 − x2·x3/4 − x2·x4/4 − x1·x2·x3/4 + x1·x2·x4/4.

Let µ be the uniform distribution over {x : x1·x2·x3 = −1 and x4 = −x3}. It is balanced and pairwise independent except that x3 and x4 are correlated. But x3 and x4 never appear together in the expansion of GLST.
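A small sanity check of the claims above, under the sign conventions used here: µ is supported on accepting inputs of GLST, and every pair of coordinates except (x3, x4) has uniform pairwise marginals (which in particular makes µ balanced).

```python
from itertools import combinations, product

def glst(x1, x2, x3, x4):
    """GLST on {-1,1}^4: looks at (x2, x3) when x1 = 1 and (x2, x4) when x1 = -1,
    accepting iff the two inspected coordinates differ (the 'XOR' branch)."""
    return int(x2 != x3) if x1 == 1 else int(x2 != x4)

# The distribution mu: uniform over {x : x1*x2*x3 = -1 and x4 = -x3}.
support = [x for x in product((-1, 1), repeat=4)
           if x[0] * x[1] * x[2] == -1 and x[3] == -x[2]]

print(all(glst(*x) == 1 for x in support))   # True: supported on accepting inputs
for i, j in combinations(range(4), 2):
    counts = {}
    for x in support:
        counts[(x[i], x[j])] = counts.get((x[i], x[j]), 0) + 1
    uniform = all(counts.get(v, 0) == len(support) // 4
                  for v in product((-1, 1), repeat=2))
    print((i + 1, j + 1), "uniform pair marginal:", uniform)   # all True except (3, 4)
```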

Other Types of Algorithms. Raghavendra [R08][RS09] shows that if Max-P is not approximation resistant, then a semi-definite programming algorithm gives a non-trivial approximation ratio. We know what kind of algorithm to look for. This has not been used to classify any explicit predicate. Possibly undecidable.

More hardness results. Khot, Tulsiani and Worah [KTW14] show that the existence of a certain measure on possible pairwise correlations is equivalent to UG-hardness. We know what property to focus on. It has not been used to get any new explicit hardness result. Possibly undecidable.

Predicates We Can't Characterize. Fact: P does not contain pairwise independence iff it implies a predicate P' of the form P'(x) = (1 + sign(Q(x)))/2, where Q is a quadratic polynomial without constant term. Maybe signs of quadratic forms are always approximable? No: there exists a quadratic form Q on k = 12 variables which turns out to be resistant using a generalization of [AM09]. Maybe signs of linear forms are always approximable? Special case 1: Monarchy, where x1 decides the outcome unless all other variables unite against it; it can't be handled using Hast's Theorem but turns out to be approximable [Austrin-Benabbas-Magen09]. Special case 2: Republic, where x1 decides the outcome unless 3/4 of the other variables unite against it. Open Problem: Is Republic approximable?
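For concreteness, here is one natural reading of Monarchy and Republic as functions on {−1,1}^k (a sketch; the exact threshold and tie-breaking conventions for Republic are my own choice, the talk does not pin them down):

```python
def monarchy(x):
    """Monarchy on {-1,1}^k: the outcome is x1 unless all the other variables
    unite against it (equivalently sign((k-2)*x1 + x2 + ... + xk))."""
    x1, rest = x[0], x[1:]
    return -x1 if all(xi == -x1 for xi in rest) else x1

def republic(x, fraction=0.75):
    """Republic on {-1,1}^k: the outcome is x1 unless at least a 3/4 fraction
    of the other variables unite against it (one reading of the talk's
    description; the tie-breaking at exactly 3/4 is my choice)."""
    x1, rest = x[0], x[1:]
    against = sum(xi == -x1 for xi in rest)
    return -x1 if against >= fraction * len(rest) else x1

# As predicates P : {-1,1}^k -> {0,1}, accept the inputs where the outcome is 1.
P_monarchy = lambda x: int(monarchy(x) == 1)
P_republic = lambda x: int(republic(x) == 1)
print(P_monarchy((1, -1, -1, -1, 1)), P_republic((1, -1, -1, -1, 1)))   # 1 0
```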

Final Comments. The classification is doing fairly well. Conditional on the UGC, we know SDPs are universal. Simple unknown predicates remain, such as Republic.

Open questions Is there a nice complete characterization? Can we get NP-hardness? Can we get more results for satisfiable instances? Should we hope/fear for a new complexity class (UGC)?

Thank you!