SDP Relaxations for MAXCUT


SDP Relaxations for MAXCUT: from Random Hyperplanes to Sum-of-Squares Certificates
Ahmed Abdelkader, CATS @ UMD, March 3, 2017

Overview
1 MAXCUT, Hardness and UGC
2 LP Shortcomings and SDP
3 Goemans and Williamson
4 SOS Recap
5 SOS Certificates (Some time later?)

MAXCUT, Hardness and UGC

The MAXCUT Problem

Problem (MAXCUT)
For an n-vertex graph G, find a bipartition (S, S̄) that maximizes |{ {u, v} ∈ E(G) : u ∈ S, v ∈ S̄ }|.

NP-complete by a straightforward reduction from MAX-2-SAT: an edge {u, v} ∈ E(G) with u ∈ S, v ∈ S̄ corresponds to the pair of clauses (x_u ∨ x_v) ∧ (¬x_u ∨ ¬x_v).

Theorem (Håstad 2001, Trevisan et al. 2000)
It is NP-hard to approximate MAXCUT better than 16/17 ≈ 0.941.

Unique Games and Label Covers

Problem ((c, s)-gap Label Cover with Unique Constraints)
The promise problem (L_yes, L_no):
- L_yes = {G : some assignment satisfies at least a c-fraction of the constraints}.
- L_no = {G : every assignment satisfies at most an s-fraction of the constraints}.

Conjecture (Unique Games Conjecture (UGC))
For every sufficiently small pair of constants ε, δ > 0, there exists k such that the (1 − δ, ε)-gap label-cover problem with k colors is NP-hard.

LP Shortcomings and SDP

LP Rounding for MAXCUT?

A standard approach to designing an approximation algorithm is to formulate an Integer Linear Program (ILP), solve the relaxed LP to obtain a (super)optimal solution in real variables, and then round that solution back to integers while trying to preserve optimality.

(LP1)  max Σ_{{u,v} ∈ E(G)} z_uv
       s.t. z_uv ≤ x_u + x_v
            z_uv ≤ (1 − x_u) + (1 − x_v)
            z_uv, x_u ∈ {0, 1}

Claim
Allowing x_u ∈ [0, 1], we can set x_u = 1/2 for all u and get z_uv = 1 for all {u, v}. Hence, the optimal value of the relaxed LP1 is |E|.

Bad: for K_n, MAXCUT ≈ |E|/2, so the relaxation is off by a factor approaching 2.

0.5-approximation by Random Guessing

Each vertex u ∈ V(G) is added independently to S with probability 1/2.

Pr[{u, v} is cut] = Pr[(u ∈ S) ∧ (v ∉ S)] + Pr[(u ∉ S) ∧ (v ∈ S)]
                  = Pr[u ∈ S] · Pr[v ∉ S] + Pr[u ∉ S] · Pr[v ∈ S]
                  = 1/2 · 1/2 + 1/2 · 1/2 = 1/2.

By linearity of expectation, the expected cut size is |E|/2 ≥ MAXCUT/2.

Remark: Edge-by-edge analysis.
Remark(2): There are sub-exponential lower bounds on the size of LP relaxations beating 1/2 [P. Kothari et al., CATS@UMD, February 17, 2017].
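The random-guessing procedure above can be sketched in a few lines; the 4-cycle and the seeds are hypothetical choices for illustration:

```python
import random

def random_cut(n, edges, seed=0):
    """Place each vertex in S independently with probability 1/2,
    then count the edges crossing the cut."""
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

# 4-cycle: MAXCUT = 4, and a random cut has expected value |E|/2 = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
avg = sum(random_cut(4, edges, seed=s) for s in range(2000)) / 2000
print(round(avg, 1))
```

Averaged over many seeds, the cut value concentrates near |E|/2, matching the edge-by-edge analysis.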

High Level Goals for SDP
- Generalize LP.
- Simulate nonlinear programming, in some linear way.
- Provide a systematic way to expand the search space by introducing more and more variables to obtain tighter approximations to the problem at hand.

Features and Analogies to LP?
- Variables: real-valued ⇒ vector-valued.
- Linear constraints: on variables ⇒ on inner products.
- Recover the vectors defining a solution by decomposing the resulting matrix.
- Convex, well-behaved, solvable in polynomial time.
- Analysis involves statements about high-dimensional geometric graphs (e.g., size of independent sets and expansion properties).

Positive Semidefinite Matrices

Definition
A symmetric matrix X ∈ R^{n×n} is positive semidefinite (PSD), written X ⪰ 0, if any of the following equivalent conditions holds:
1 a^T X a ≥ 0 for all a ∈ R^n.
2 X admits a Cholesky decomposition X = LL^T.
3 All eigenvalues of X are non-negative.

Remark: Condition (1) encodes an (uncountably) infinite set of linear constraints!
Remark(2): Condition (2) is equivalent to X_ij = ⟨v_i, v_j⟩ for vectors {v_1, ..., v_n} given by the rows of L.
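The three conditions can be checked by hand on a small example; the 2×2 matrix below is a hypothetical instance, with the eigenvalues and Cholesky factor computed in closed form:

```python
import math

# X = [[a, b], [b, d]] = [[2, 1], [1, 2]], a symmetric PSD example.
a, b, d = 2.0, 1.0, 2.0

# Condition (3): eigenvalues of a symmetric 2x2 matrix in closed form.
mean, diff = (a + d) / 2, (a - d) / 2
r = math.hypot(diff, b)
eigs = (mean - r, mean + r)          # (1.0, 3.0), both non-negative

# Condition (2): Cholesky factor L with X = L L^T (valid since a > 0).
l11 = math.sqrt(a)
l21 = b / l11
l22 = math.sqrt(d - l21 ** 2)

# Condition (1): spot-check the quadratic form a^T X a on one direction.
quad = lambda x, y: a * x * x + 2 * b * x * y + d * y * y
print(eigs, quad(1, -1))             # eigenvalues (1.0, 3.0); quad = 2.0
```

The rows of L also serve as the Gram vectors of Remark(2): their pairwise inner products reproduce the entries of X.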

Standard Form (Primal)

(P)  min ⟨C, X⟩ := Σ_ij C_ij X_ij
     s.t. ⟨A_i, X⟩ = b_i, i ∈ {1, ..., m},
          X ⪰ 0

Remark: LP is the special case of SDP in which X is a diagonal matrix.

Dual Form

(D)  max b^T y
     s.t. Σ_i y_i A_i + S = C,
          S ⪰ 0

Remark: Weak duality holds.

Powerful Technical Lemma

Lemma ("Condition (4)" for PSD matrices)
For a symmetric matrix X ∈ R^{n×n}, X ⪰ 0 iff ⟨A, X⟩ ≥ 0 for all A ⪰ 0.

Fact(1): ⟨A, X⟩ = Trace(A^T X).
Fact(2): Trace(AB) = Trace(BA) (cyclic permutations).

Proof (Do on board).
(⇐) Suppose X is not PSD and obtain a witness a ∈ R^n s.t. a^T X a < 0; then A = aa^T ⪰ 0 gives ⟨A, X⟩ < 0.
(⇒) Suppose A ⪰ 0 and obtain the Cholesky decomposition A = LL^T.

Weak Duality

Lemma
If X, y are feasible for (P) and (D), respectively, then b^T y ≤ ⟨C, X⟩.

Recall: in (P), X ⪰ 0, and in (D), Σ_i y_i A_i + S = C with S ⪰ 0.

Proof.
⟨C, X⟩ = ⟨Σ_i y_i A_i + S, X⟩
       = Σ_i y_i ⟨A_i, X⟩ + ⟨S, X⟩
       = Σ_i y_i b_i + ⟨S, X⟩    (X is feasible)
       ≥ b^T y.                  (⟨S, X⟩ ≥ 0 by the technical lemma)

Strong Duality?
- When the value of (P) coincides with that of (D).
- Usually holds under some regularity condition.

Definition (Slater's Condition)
The feasible region has an interior point; in other words, there exists a feasible X ≻ 0.

Goemans and Williamson

SDP Formulation of MAXCUT

max Σ_{{u,v} ∈ E(G)} (1/2 − 1/2 X_uv)
s.t. X_uu = 1, u ∈ V(G)
     X ⪰ 0

Intuition: an integral cut corresponds to X = aa^T, where a ∈ R^n is defined as a_u = 1 if u ∈ S and a_u = −1 otherwise.
Cut edges contribute 1, uncut edges contribute 0.
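The intuition can be verified directly; the triangle K_3 and the cut S = {0} below are hypothetical choices for illustration:

```python
# The integral solution X = a a^T for K_3 with S = {0}: the SDP objective
# equals the number of cut edges.
edges = [(0, 1), (0, 2), (1, 2)]
a = [1, -1, -1]                      # a_u = +1 if u in S, else -1
X = [[a[u] * a[v] for v in range(3)] for u in range(3)]

# Objective: sum over edges of (1/2 - X_uv/2); cut edges give 1, uncut give 0.
obj = sum(0.5 - 0.5 * X[u][v] for u, v in edges)
cut = sum(1 for u, v in edges if a[u] != a[v])
print(obj, cut)  # 2.0 2
```

Note that X_uu = a_u² = 1 and X = aa^T ⪰ 0, so every integral cut is feasible for the SDP; the relaxation only enlarges the search space.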

Rounding by a Random Hyperplane

Theorem (Goemans and Williamson)
There exists an α_GW-approximation algorithm for MAXCUT, where
α_GW = min_{0 ≤ θ ≤ π} (2/π) · θ / (1 − cos θ) ≈ 0.87856.

- Solve the SDP to get the solution X.
- Recover vector variables y_u for which X_uv = ⟨y_u, y_v⟩.
- Pick a uniformly random vector a ∈ R^n.
- Set x_u = sgn(⟨a, y_u⟩).
- Edge-by-edge analysis: an edge {u, v} is cut with probability θ_uv/π, where θ_uv is the angle between y_u and y_v, while its SDP contribution is (1 − cos θ_uv)/2; the ratio is at least α_GW.
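A minimal numerical sketch: the first part evaluates the α_GW expression on a grid, and the second performs hyperplane rounding for K_3, using the known optimal SDP vectors at 120° in the plane (both the graph and the grid resolution are illustrative choices):

```python
import math, random

# Part 1: alpha_GW = min over theta in (0, pi] of (2/pi) * theta / (1 - cos theta),
# evaluated on a fine grid.
alpha = min((2 / math.pi) * t / (1 - math.cos(t))
            for t in (i * math.pi / 10**5 for i in range(1, 10**5 + 1)))

# Part 2: hyperplane rounding for K_3, whose optimal SDP vectors sit at
# 120-degree angles in the plane.
y = [(math.cos(2 * math.pi * u / 3), math.sin(2 * math.pi * u / 3)) for u in range(3)]
edges = [(0, 1), (0, 2), (1, 2)]

def round_once(rng):
    g = (rng.gauss(0, 1), rng.gauss(0, 1))           # random hyperplane normal
    side = [g[0] * yu[0] + g[1] * yu[1] >= 0 for yu in y]
    return sum(1 for u, v in edges if side[u] != side[v])

rng = random.Random(0)
avg = sum(round_once(rng) for _ in range(2000)) / 2000
print(round(alpha, 5), avg)
```

For these three vectors no hyperplane through the origin can place all of them on one side, so each rounding cuts exactly 2 of the 3 edges, matching the per-edge probability θ/π = (2π/3)/π = 2/3 summed over the edges.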

Second Generation of SDP Rounding Algorithms
- Beyond random hyperplanes.
- Initiated by Arora, Rao and Vazirani for c-balanced cuts.
- More global analysis.
- Different formulation + triangle inequality.
- Breadth-first search on a weighted geometric graph.

SOS Recap

Certifying non-negativity over the hypercube

Problem (Non-negativity)
Given a low-degree polynomial f : {0, 1}^n → R, decide if f ≥ 0 over the hypercube or if there exists a point x ∈ {0, 1}^n such that f(x) < 0.

Example: For an n-vertex graph G, we encode a bipartition of the vertex set by a vector x ∈ {0, 1}^n and let f_G(x) be the number of edges cut by the bipartition x. We get the degree-2 polynomial
f_G(x) = Σ_{{i,j} ∈ E(G)} (x_i − x_j)².

Deciding whether the polynomial c − f_G(x) is non-negative over the hypercube is the same as deciding whether MAXCUT(G) ≤ c.
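This correspondence is easy to check exhaustively on a small graph; the 4-cycle here is a hypothetical example:

```python
from itertools import product

# f_G(x) = sum over edges of (x_i - x_j)^2 counts the edges cut by x;
# here G is the 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
f = lambda x: sum((x[i] - x[j]) ** 2 for i, j in edges)

maxcut = max(f(x) for x in product((0, 1), repeat=4))

# c - f_G is non-negative over the hypercube exactly when MAXCUT(G) <= c.
nonneg3 = all(3 - f(x) >= 0 for x in product((0, 1), repeat=4))
nonneg4 = all(4 - f(x) >= 0 for x in product((0, 1), repeat=4))
print(maxcut, nonneg3, nonneg4)  # 4 False True
```

The brute-force loop over 2^n points is only for illustration; the point of the sos method is to certify non-negativity without enumerating the hypercube.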

Sum-of-Squares Certificates

Definition
A degree-d sos certificate (of non-negativity) for f : {0, 1}^n → R consists of polynomials g_1, ..., g_r : {0, 1}^n → R of degree at most d/2, for some r ∈ N, such that
f(x) = g_1²(x) + ... + g_r²(x) for every x ∈ {0, 1}^n.

- A.k.a. a degree-d sos proof of the inequality f ≥ 0.
- We can assume r = n^{O(d)}; thus verifying a certificate takes n^{O(d)} time.
- Just check that f − (g_1² + ... + g_r²) vanishes for every x ∈ {0, 1}^n.

Finding SOS Certificates

Theorem
There exists an algorithm that, given a polynomial f : {0, 1}^n → R and a number k ∈ N, outputs a degree-k sos certificate for f + 2^{−n} in time n^{O(k)}, provided f has a degree-k sos certificate.

Intuition
The polynomials having a degree-d sos certificate form a convex cone, which admits a small semidefinite programming formulation.

PSD Matrices and SOS Certificates

Theorem
A polynomial f has a degree-d sos certificate iff there exists a PSD matrix A such that, for all x ∈ {0, 1}^n,
f(x) = ⟨(1, x)^{⊗d/2}, A (1, x)^{⊗d/2}⟩.

Proof (Do on board).
(⇐) For a PSD A, extract a degree-d certificate {g_1, ..., g_r}.
(⇒) For a degree-d sos certificate f = Σ_{i=1}^r g_i², form a PSD matrix A.
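For degree d = 2 the correspondence is concrete: the quadratic form below certifies f(x) = (x₁ − x₂)² via the rank-one PSD matrix A = vvᵀ (a hypothetical minimal example):

```python
from itertools import product

# A = v v^T is PSD by construction; m(x) = (1, x1, x2) is the degree-1
# monomial vector, and <m, A m> reproduces f(x) = (x1 - x2)^2.
v = (0, 1, -1)
A = [[vi * vj for vj in v] for vi in v]

def quad(x):
    m = (1, x[0], x[1])
    return sum(A[i][j] * m[i] * m[j] for i in range(3) for j in range(3))

f = lambda x: (x[0] - x[1]) ** 2
g1 = lambda x: x[0] - x[1]           # the single sos polynomial, g1 = <v, m(x)>

ok = all(quad(x) == f(x) == g1(x) ** 2 for x in product((0, 1), repeat=2))
print(ok)  # True
```

The rank-one factorization A = vvᵀ is exactly the extraction step of the proof: each factor of A yields one sos polynomial g_i.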

SOS Certificates (Some time later?)

The End

"...often the sign of scientific success is when we eliminate the need for creativity and make boring what was once exciting... is it just a matter of time until algorithm design becomes as boring as solving a single polynomial equation?"

Questions? akader@cs.umd.edu