Approximation Algorithms for Unique Games via Orthogonal Separators


Lecture notes by Konstantin Makarychev. The lecture notes are based on the papers [CMM06a, CMM06b, LM14].

1 Unique Games

In these lecture notes, we define the Unique Games problem and describe an approximation algorithm for it.

Definition 1.1 (Unique Games). Given a constraint graph G = (V, E) and a set of permutations π_uv on [k] = {1, ..., k} (for all edges (u, v)), assign a value (label) x_u from the set of labels [k] = {1, ..., k} to each vertex u so as to satisfy the maximum number of constraints of the form π_uv(x_u) = x_v.

Observe that if the instance is completely satisfiable, then it is very easy to find a solution that satisfies all constraints: we simply need to guess the label for one vertex, and then propagate the values to all other vertices in the graph. However, even if a very small fraction of all constraints is violated, then it is very hard to find a good solution. Khot [Khot02] conjectured that if the optimal solution satisfies 99% of all constraints, then it is NP-hard to find a solution satisfying even 1% of all constraints. The conjecture is known as Khot's Unique Games Conjecture. We state it formally below.

Definition 1.2 (Unique Games Conjecture [Khot02]). For every positive ε and δ, there exists a k such that given an instance of Unique Games with k labels, it is NP-hard to distinguish between the following two cases:

- There exists a solution satisfying a (1 − ε) fraction of all constraints.
- Every assignment satisfies at most a δ fraction of all constraints.

It is unknown whether the conjecture is true or false. The best approximation algorithms for Unique Games find solutions satisfying a 1 − O(√(ε log k)) and a 1 − O(ε √(log n log k)) fraction of all constraints, respectively, if the optimal solution satisfies a (1 − ε) fraction of all constraints.

Theorem 1.3 (Charikar, Makarychev, Makarychev [CMM06a]). There exists an approximation algorithm that, given a Unique Games instance satisfying a (1 − ε) fraction of all constraints, finds a solution satisfying a 1 − O(√(ε log k)) fraction of all constraints.

Theorem 1.4 (Chlamtac, Makarychev, Makarychev [CMM06b]).
There exists an approximation algorithm that, given a Unique Games instance satisfying a (1 − ε) fraction of all constraints, finds a solution satisfying a 1 − O(ε √(log n log k)) fraction of all constraints.

Note that the approximation of Theorem 1.3 cannot be improved assuming the Unique Games Conjecture is true [KKMO07].

We prove Theorem 1.3 using the technique of orthogonal separators from [CMM06b]. Below, we denote the number of violated constraints by OPT. We assume that OPT ≤ ε|E|, and show how to find a solution violating at most O(√(ε log k)) · |E| constraints. In the next section, we present a standard SDP relaxation for Unique Games (without ℓ₂² triangle inequalities). Then, in Section 3, we introduce the technique of orthogonal separators. However, we postpone the proof of existence of orthogonal separators to Section 5. In Section 4, we present the approximation algorithm and prove Theorem 1.3. Finally, in Section 6, we give some useful bounds on the Gaussian distribution.
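The propagation argument for completely satisfiable instances can be sketched in code. This is a minimal illustration, not part of the notes: the helper names (`propagate`, `num_satisfied`, `solve_satisfiable`) and the encoding of π_uv as a list p with p[i] = π_uv(i) are assumptions, and the sketch assumes the constraint graph is connected, so that one guessed label for vertex 0 determines all others.

```python
from collections import deque

def propagate(n, k, constraints, start, guess):
    """Fix the label of `start` to `guess` and propagate along constraints.

    `constraints` maps an edge (u, v) to pi_uv, given as a list p of
    length k with p[i] = pi_uv(i).
    """
    adj = {u: [] for u in range(n)}
    for (u, v), p in constraints.items():
        inv = [0] * k
        for i, j in enumerate(p):
            inv[j] = i
        adj[u].append((v, p))    # pi_uv maps u's label to v's label
        adj[v].append((u, inv))  # the reverse edge uses the inverse permutation
    labels = {start: guess}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v, p in adj[u]:
            if v not in labels:
                labels[v] = p[labels[u]]
                queue.append(v)
    return labels

def num_satisfied(constraints, labels):
    """Count constraints with pi_uv(x_u) = x_v."""
    return sum(1 for (u, v), p in constraints.items() if p[labels[u]] == labels[v])

def solve_satisfiable(n, k, constraints):
    """Try each of the k guesses for vertex 0 and keep the best labeling."""
    return max((propagate(n, k, constraints, 0, g) for g in range(k)),
               key=lambda lab: num_satisfied(constraints, lab))
```

On a completely satisfiable (connected) instance, one of the k guesses for the root is correct, so the best propagated labeling satisfies every constraint.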

2 SDP Relaxation

In the SDP relaxation, we have a vector ū_i for every vertex u ∈ V and label i ∈ [k]. In the intended integral solution, the vector ū_i is the indicator of the event "the vertex u has the label i". That is, if x_u is the optimal labeling, then the corresponding integral solution is as follows:

    ū_i = 1, if x_u = i;  ū_i = 0, otherwise.

Observe that if a constraint (u, v) is satisfied, then ū_i = v̄_{π_uv(i)} for all i. If the constraint is violated, then ū_i = v̄_{π_uv(i)} for all but exactly two i's:

    ū_{x_u} = 1, but v̄_{π_uv(x_u)} = 0; and
    v̄_{x_v} = 1, but ū_{π_uv⁻¹(x_v)} = 0.

Thus,

    (1/2) Σ_i ‖ū_i − v̄_{π_uv(i)}‖² = 0, if the assignment x_u satisfies the constraint (u, v);
    (1/2) Σ_i ‖ū_i − v̄_{π_uv(i)}‖² = 1, if the assignment x_u violates the constraint (u, v).

Therefore, the number of violated constraints equals

    Σ_{(u,v)∈E} (1/2) Σ_i ‖ū_i − v̄_{π_uv(i)}‖².    (1)

Our goal is to minimize this expression. Note that for a fixed vertex u, one and only one ū_i equals 1. Hence, ⟨ū_i, ū_j⟩ = 0 if i ≠ j; and Σ_i ‖ū_i‖² = 1. We now write the SDP relaxation.

    min Σ_{(u,v)∈E} (1/2) Σ_{i=1}^k ‖ū_i − v̄_{π_uv(i)}‖²
    subject to:
        ⟨ū_i, ū_j⟩ = 0        for all u ∈ V and i ≠ j
        Σ_{i=1}^k ‖ū_i‖² = 1  for all u ∈ V

This is a relaxation, since for the integral solution ū_i defined above, the SDP value equals the number of violated constraints (see (1)); and that solution is feasible for the SDP. Usually, such relaxations contain an extra constraint, the ℓ₂² triangle inequality, but we will not use it here.

3 Orthogonal Separators Overview

Let X be a set of vectors in ℓ₂ of length at most 1. We say that a distribution over subsets of X is an m-orthogonal separator of X with ℓ₂² distortion D, probability scale α > 0 and separation threshold β < 1, if the following conditions hold for S ⊂ X chosen according to this distribution:

1. For all ū ∈ X, Pr(ū ∈ S) = α‖ū‖².

2. For all ū, v̄ ∈ X with ⟨ū, v̄⟩ ≤ β max(‖ū‖², ‖v̄‖²),

    Pr(ū ∈ S and v̄ ∈ S) ≤ (α/m) · min(‖ū‖², ‖v̄‖²).

3. For all ū, v̄ ∈ X,

    Pr(I_S(ū) ≠ I_S(v̄)) ≤ αD · ‖ū − v̄‖ · min(‖ū‖, ‖v̄‖) + α · |‖ū‖² − ‖v̄‖²|,

where I_S is the indicator of the set S, i.e. I_S(ū) = 1 if ū ∈ S; I_S(ū) = 0 if ū ∉ S.

In most cases, it is convenient to use a slightly weaker (but simpler) bound on Pr(I_S(ū) ≠ I_S(v̄)).

3′. For all ū, v̄ ∈ X,

    Pr(I_S(ū) ≠ I_S(v̄)) ≤ 2αD · ‖ū − v̄‖ · max(‖ū‖, ‖v̄‖).

The property (3′) follows from (3) (for D ≥ 2), since

    |‖ū‖² − ‖v̄‖²| = |‖ū‖ − ‖v̄‖| · (‖ū‖ + ‖v̄‖) ≤ 2‖ū − v̄‖ · max(‖ū‖, ‖v̄‖).

The last inequality follows from the (regular) triangle inequality for the vectors ū, v̄ and (ū − v̄). Our algorithm for Unique Games relies on the following theorem.

Theorem 3.1 (Chlamtac, Makarychev, Makarychev [CMM06b]). There exists a polynomial-time randomized algorithm that, given a set of vectors X in the unit ball and a parameter m, generates an m-orthogonal separator with ℓ₂² distortion D = O(√(log m)), probability scale α ≥ 1/poly(m) and separation threshold β = 0.

4 Approximation Algorithm

We are now ready to present the approximation algorithm.

Input: an instance of Unique Games.
Output: an assignment of labels to vertices.

1. Solve the SDP. Let X = {ū_i : u ∈ V, i ∈ [k]}.
2. Mark all vertices as unprocessed.
3. while (there are unprocessed vertices)
   (a) Produce an m-orthogonal separator S ⊂ X with distortion D and probability scale α as in Theorem 3.1, where m = 4k and D = O(√(log m)).
   (b) For all unprocessed vertices u: let S_u = {i : ū_i ∈ S}. If S_u contains exactly one element i, then assign the label i to u, and mark the vertex u as processed.
4. If the algorithm performs more than n/α iterations, assign arbitrary values to any remaining vertices (note that α ≥ 1/poly(k)).
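The main loop above can be sketched in code. This is a hedged illustration, not the authors' implementation: it uses the separator generator described later in Section 5 (a random Gaussian projection with a threshold t chosen so that the normal tail Φ(t) = 1/m′, plus a random radius r), and the names `sample_separator` and `round_sdp`, the dict encoding of the SDP vectors, and the iteration cap are all assumptions.

```python
import math
import random

def sample_separator(X, m):
    """One sample from the Section 5 generator:
    S = {u : <u, g> >= t*|u| and |u|^2 >= r}, with Phi(t) = 1/(2m).

    X maps keys to vectors (tuples); Phi is the standard normal tail,
    computed here via erfc, and t is found by bisection.
    """
    dim = len(next(iter(X.values())))
    target = 1.0 / (2 * m)
    lo, hi = 0.0, 40.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > target:  # Phi(mid) too large
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    g = [random.gauss(0.0, 1.0) for _ in range(dim)]  # random Gaussian direction
    r = random.random()                               # random radius in [0, 1]
    S = set()
    for key, u in X.items():
        norm = math.sqrt(sum(x * x for x in u))
        if norm * norm >= r and sum(x * y for x, y in zip(u, g)) >= t * norm:
            S.add(key)
    return S

def round_sdp(vectors, vertices, k, max_iters=10000):
    """Rounding loop of Section 4: vectors[(u, i)] is the SDP vector for
    vertex u and label i.  Repeatedly sample a separator and fix the label
    of every vertex whose set S_u is a singleton."""
    labels = {}
    for _ in range(max_iters):
        if len(labels) == len(vertices):
            break
        S = sample_separator(vectors, m=4 * k)
        for u in vertices:
            if u in labels:
                continue
            S_u = [i for i in range(k) if (u, i) in S]
            if len(S_u) == 1:
                labels[u] = S_u[0]
    for u in vertices:
        labels.setdefault(u, 0)  # arbitrary labels for stragglers (step 4)
    return labels
```

Feeding in an integral SDP solution (indicator vectors), vertices whose vectors coincide always land in the separator together, so they receive the same label.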

Lemma 4.1. The algorithm satisfies the constraint between vertices u and v with probability 1 − O(D √ε_uv), where ε_uv is the SDP contribution of the term corresponding to the edge (u, v):

    ε_uv = (1/2) Σ_{i=1}^k ‖ū_i − v̄_{π_uv(i)}‖².

Proof. If D √ε_uv ≥ 1/8, then the statement holds trivially, so we assume that D √ε_uv < 1/8. For the sake of analysis we also assume that π_uv is the identity permutation (we can just rename the labels of the vertex v; this clearly does not affect the execution of the algorithm). At the end of an iteration in which one of the vertices u or v is assigned a value, we mark the constraint as satisfied or not: the constraint is satisfied if the same label i is assigned to the vertices u and v; otherwise, the constraint is not satisfied (here we conservatively count the number of satisfied constraints: a constraint marked as not satisfied in the analysis may potentially be satisfied in the future).

Consider one iteration of the algorithm. There are three possible cases:

1. The sets S_u and S_v are equal and contain only one element; then the constraint is satisfied.
2. The sets S_u and S_v are equal, but contain more than one element or no elements; then no values are assigned at this iteration to u and v.
3. The sets S_u and S_v are not equal; then the constraint is not satisfied (a conservative assumption).

Let us estimate the probabilities of each of these events. Using the fact that for all i ≠ j the vectors ū_i and ū_j are orthogonal, and the first and second properties of orthogonal separators, we get (below α is the probability scale): for a fixed i,

    Pr(|S_u| = 1 and i ∈ S_u) = Pr(i ∈ S_u) − Pr(i ∈ S_u and j ∈ S_u for some j ≠ i)
        ≥ Pr(i ∈ S_u) − Σ_{j≠i} Pr(i, j ∈ S_u)
        ≥ α‖ū_i‖² − Σ_{j≠i} (α/(4k)) · min(‖ū_i‖², ‖ū_j‖²)
        ≥ α‖ū_i‖² − α/(4k).

Here we used that Σ_{j∈[k]} ‖ū_j‖² = 1. Then,

    Pr(|S_u| = 1) = Σ_{i∈[k]} Pr(|S_u| = 1 and i ∈ S_u) ≥ Σ_{i∈[k]} (α‖ū_i‖² − α/(4k)) ≥ 3α/4.

The probability that the constraint is not satisfied is at most

    Pr(S_u ≠ S_v) ≤ Σ_i Pr(I_S(ū_i) ≠ I_S(v̄_i)).

By the third property of orthogonal separators (see property (3′)):

    Pr(S_u ≠ S_v) ≤ 2αD Σ_i ‖ū_i − v̄_i‖ · max(‖ū_i‖, ‖v̄_i‖).

By Cauchy-Schwarz,

    Pr(S_u ≠ S_v) ≤ 2αD · √(Σ_i ‖ū_i − v̄_i‖²) · √(Σ_i max(‖ū_i‖, ‖v̄_i‖)²)
        ≤ 2αD · √(2ε_uv) · √(Σ_i (‖ū_i‖² + ‖v̄_i‖²))
        = 4αD √ε_uv,

since Σ_i ‖ū_i − v̄_i‖² = 2ε_uv and Σ_i (‖ū_i‖² + ‖v̄_i‖²) = 2. Finally, the probability of satisfying the constraint at a given iteration is at least

    Pr(|S_u| = 1 and S_u = S_v) ≥ 3α/4 − 4αD √ε_uv ≥ α/4.

Since the algorithm performs n/α iterations, the probability that it does not assign any value to u or v before step 4 is exponentially small. At each iteration the probability of failure is at most O(D √ε_uv) times the probability of success; thus the probability that the constraint is not satisfied is O(D √ε_uv).

We now show that the approximation algorithm satisfies a 1 − O(√(ε log k)) fraction of all constraints.

Proof of Theorem 1.3. By Lemma 4.1, the expected number of unsatisfied constraints is at most

    Σ_{(u,v)∈E} O(√(ε_uv log k)).

By Jensen's inequality for the (concave) function √·,

    Σ_{(u,v)∈E} √(ε_uv log k) ≤ |E| · √( (Σ_{(u,v)∈E} ε_uv / |E|) · log k ) = √(SDP · |E| · log k).

Here, SDP = Σ_{(u,v)∈E} ε_uv denotes the SDP value. If OPT ≤ ε|E|, then SDP ≤ OPT ≤ ε|E|. Hence, the expected cost of the solution is upper bounded by O(√(ε log k)) · |E|.

5 Orthogonal Separators Proofs

Proof of Theorem 3.1. In the proof, we denote the probability that a Gaussian N(0,1) random variable X is greater than a threshold t by Φ(t). We use the following algorithm for generating m-orthogonal separators with ℓ₂² distortion:

    Assume w.l.o.g. that all vectors ū lie in ℝⁿ. Fix m′ = 2m and t = Φ⁻¹(1/m′) (i.e., fix t such that Φ(t) = 1/m′). Sample independently a random Gaussian n-dimensional vector g ∼ N(0, I) in ℝⁿ and a random number r in [0, 1]. Return the set

        S = {ū : ⟨ū, g⟩ ≥ t‖ū‖ and ‖ū‖² ≥ r}.

We claim that S is an m-orthogonal separator with ℓ₂² distortion O(√(log m)), probability scale α = 1/m′ and β = 0. Let us verify that S satisfies the required conditions.

1. For every nonzero vector ū ∈ X, we have

    Pr(ū ∈ S) = Pr(⟨ū, g⟩ ≥ t‖ū‖ and r ≤ ‖ū‖²) = Pr(⟨ū/‖ū‖, g⟩ ≥ t) · Pr(r ≤ ‖ū‖²) = ‖ū‖²/m′ = α‖ū‖².

Here we used that ⟨ū/‖ū‖, g⟩ is distributed as N(0,1), since ū/‖ū‖ is a unit vector. If ū = 0, then Pr(r ≤ ‖ū‖²) = 0, hence Pr(ū ∈ S) = 0.

2. For every ū, v̄ ∈ X with ⟨ū, v̄⟩ = 0, we have

    Pr(ū, v̄ ∈ S) = Pr(⟨ū, g⟩ ≥ t‖ū‖; ⟨v̄, g⟩ ≥ t‖v̄‖; r ≤ ‖ū‖² and r ≤ ‖v̄‖²)
        = Pr(⟨ū, g⟩ ≥ t‖ū‖ and ⟨v̄, g⟩ ≥ t‖v̄‖) · Pr(r ≤ min(‖ū‖², ‖v̄‖²)).

The random variables ⟨ū, g⟩ and ⟨v̄, g⟩ are independent, since ū and v̄ are orthogonal vectors. Hence,

    Pr(ū, v̄ ∈ S) = Pr(⟨ū, g⟩ ≥ t‖ū‖) · Pr(⟨v̄, g⟩ ≥ t‖v̄‖) · Pr(r ≤ min(‖ū‖², ‖v̄‖²)).

Note that ū/‖ū‖ is a unit vector, and ⟨ū/‖ū‖, g⟩ ∼ N(0,1). Thus, Pr(⟨ū, g⟩ ≥ t‖ū‖) = Pr(⟨ū/‖ū‖, g⟩ ≥ t) = 1/m′. Similarly, Pr(⟨v̄, g⟩ ≥ t‖v̄‖) = 1/m′. Also, Pr(r ≤ min(‖ū‖², ‖v̄‖²)) = min(‖ū‖², ‖v̄‖²), since r is uniformly distributed in [0, 1]. We get

    Pr(ū, v̄ ∈ S) = min(‖ū‖², ‖v̄‖²)/(m′)² = α · min(‖ū‖², ‖v̄‖²)/m′ ≤ (α/m) · min(‖ū‖², ‖v̄‖²).

3. If I_S(ū) ≠ I_S(v̄), then either ū ∈ S and v̄ ∉ S, or ū ∉ S and v̄ ∈ S. Thus,

    Pr(I_S(ū) ≠ I_S(v̄)) = Pr(ū ∈ S; v̄ ∉ S) + Pr(ū ∉ S; v̄ ∈ S).

We upper bound both terms on the right hand side using the following lemma (switching ū and v̄ for the second term) and obtain the desired inequality.

Lemma 5.1. If ‖ū‖ ≥ ‖v̄‖, then

    Pr(ū ∈ S; v̄ ∉ S) ≤ αD · ‖ū − v̄‖ · min(‖ū‖, ‖v̄‖) + α · (‖ū‖² − ‖v̄‖²);

otherwise,

    Pr(ū ∈ S; v̄ ∉ S) ≤ αD · ‖ū − v̄‖ · min(‖ū‖, ‖v̄‖).

Proof of Lemma 5.1. We have

    Pr(ū ∈ S; v̄ ∉ S) = Pr(⟨ū, g⟩ ≥ t‖ū‖; r ≤ ‖ū‖²; v̄ ∉ S).

The event {v̄ ∉ S} is the union of the two events {⟨v̄, g⟩ < t‖v̄‖ and r ≤ ‖v̄‖²} and {r > ‖v̄‖²}. Hence,

    Pr(ū ∈ S; v̄ ∉ S) ≤ Pr(⟨ū, g⟩ ≥ t‖ū‖; ⟨v̄, g⟩ < t‖v̄‖; r ≤ min(‖ū‖², ‖v̄‖²))
        + Pr(⟨ū, g⟩ ≥ t‖ū‖; ‖v̄‖² < r ≤ ‖ū‖²).    (2)

Let g_u = ⟨ū/‖ū‖, g⟩ and g_v = ⟨v̄/‖v̄‖, g⟩. Both g_u and g_v are standard N(0,1) Gaussian random variables. Thus, Pr(g_u ≥ t) = Pr(g_v ≥ t) = 1/m′ = α. We rewrite (2) as follows:

    Pr(ū ∈ S; v̄ ∉ S) ≤ Pr(g_u ≥ t; g_v < t) · Pr(r ≤ min(‖ū‖², ‖v̄‖²)) + Pr(g_u ≥ t) · Pr(‖v̄‖² < r ≤ ‖ū‖²)
        = Pr(g_u ≥ t; g_v < t) · min(‖ū‖², ‖v̄‖²) + α · Pr(‖v̄‖² < r ≤ ‖ū‖²).    (3)

To finish the proof we need to estimate Pr(g_u ≥ t; g_v < t) and Pr(‖v̄‖² < r ≤ ‖ū‖²). Since r is uniformly distributed in [0, 1], Pr(‖v̄‖² < r ≤ ‖ū‖²) = ‖ū‖² − ‖v̄‖² if ‖ū‖² − ‖v̄‖² > 0, and Pr(‖v̄‖² < r ≤ ‖ū‖²) = 0 otherwise. We use Lemma 6.2 to upper bound Pr(g_u ≥ t; g_v < t):

    Pr(g_u ≥ t; g_v < t) ≤ O( √((1 − cov(g_u, g_v))/2) · √(log m′)/m′ ).    (4)

The covariance of g_u and g_v equals cov(g_u, g_v) = ⟨ū/‖ū‖, v̄/‖v̄‖⟩, and ‖ū − v̄‖² = ‖ū‖² + ‖v̄‖² − 2⟨ū, v̄⟩. Hence,

    1 − cov(g_u, g_v) = (2‖ū‖‖v̄‖ − ‖ū‖² − ‖v̄‖² + ‖ū − v̄‖²) / (2‖ū‖‖v̄‖)
        = (‖ū − v̄‖² − (‖ū‖ − ‖v̄‖)²) / (2‖ū‖‖v̄‖)
        ≤ ‖ū − v̄‖² / (2‖ū‖‖v̄‖).

We plug this bound into (4) and get

    Pr(g_u ≥ t; g_v < t) ≤ α · (‖ū − v̄‖ / (2√(‖ū‖‖v̄‖))) · O(√(log m′)).

Since min(‖ū‖², ‖v̄‖²) ≤ √(‖ū‖‖v̄‖) · min(‖ū‖, ‖v̄‖), Lemma 5.1 now follows from (3). This concludes the proof of Lemma 5.1 and Theorem 3.1.

6 Gaussian Distribution

In this section, we prove several useful estimates on the Gaussian distribution. Let X ∼ N(0,1) be a one-dimensional Gaussian random variable. Denote the probability that X ≥ t by Φ(t): Φ(t) = Pr(X ≥ t). The first lemma gives a very accurate estimate on Φ(t) for large t.

Lemma 6.1. For every t > 0,

    (1/√(2π)) · (t/(t² + 1)) · e^(−t²/2) < Φ(t) < (1/√(2π)) · (1/t) · e^(−t²/2).

Proof. Write

    Φ(t) = (1/√(2π)) ∫_t^∞ e^(−x²/2) dx = (1/√(2π)) ∫_t^∞ (1/x) · x e^(−x²/2) dx
         = (1/√(2π)) · e^(−t²/2)/t − (1/√(2π)) ∫_t^∞ (e^(−x²/2)/x²) dx.

Thus,

    Φ(t) < (1/√(2π)) · e^(−t²/2)/t.

On the other hand,

    (1/√(2π)) ∫_t^∞ (e^(−x²/2)/x²) dx < (1/(√(2π) t²)) ∫_t^∞ e^(−x²/2) dx = Φ(t)/t².

Hence,

    Φ(t) > (1/√(2π)) · e^(−t²/2)/t − Φ(t)/t²,

and, consequently,

    Φ(t) > (t/(√(2π)(t² + 1))) · e^(−t²/2).

Lemma 6.2. Let X and Y be Gaussian N(0,1) random variables with covariance cov(X, Y) = 1 − 2ε². Pick the threshold t > 1 such that Φ(t) = 1/m for m > 3. Then

    Pr(X ≥ t and Y < t) = O(ε √(log m)/m).

Proof. If ε ≥ 1/2 or ε ≥ 1/t, then we are done, since then ε √(log m) = Ω(1) (note that t = O(√(log m))) and

    Pr(X ≥ t and Y < t) ≤ Pr(X ≥ t) = 1/m.

So we assume that ε < 1/2 and ε < 1/t. Let

    ξ = (X + Y)/(2√(1 − ε²));   η = (X − Y)/(2ε).

Note that ξ and η are N(0,1) Gaussian random variables with covariance 0. Hence, ξ and η are independent. We have

    Pr(X ≥ t and Y < t) = Pr(√(1 − ε²) ξ + εη ≥ t and √(1 − ε²) ξ − εη < t).

Denote by E the following event:

    E = {√(1 − ε²) ξ + εη ≥ t and √(1 − ε²) ξ − εη < t}.

Then,

    Pr(X ≥ t and Y < t) = Pr(E and εη < 1) + Pr(E and εη ≥ 1).

Observe that the second probability on the right hand side is very small. It is upper bounded by Pr(εη ≥ 1), which, in turn, is bounded as follows:

    Pr(εη ≥ 1) = Φ(1/ε) ≤ O(ε e^(−1/(2ε²))) ≤ O(ε e^(−t²/2)) = O(ε t/m) = O(ε √(log m)/m),

using Lemma 6.1, the assumption 1/ε > t, and e^(−t²/2) = O(t Φ(t)) = O(t/m). We now estimate the first probability:

    Pr(E and εη < 1) = E_η[ Pr(E | η) · 1{0 < η < 1/ε} ]
        = (1/√(2π)) ∫_0^(1/ε) Pr(E | η = x) e^(−x²/2) dx
        = (1/√(2π)) ∫_0^(1/ε) Pr(√(1 − ε²) ξ ∈ [t − εx, t + εx)) e^(−x²/2) dx.

(The event E is empty unless η > 0.) The density of the random variable √(1 − ε²) ξ in the interval (t − εx, t + εx) for x ∈ [0, 1/ε] is at most

    (1/√(2π(1 − ε²))) · e^(−(t−εx)²/(2(1−ε²))) ≤ O(1) · e^(−(t−εx)²/2) ≤ O(1) · e^(−t²/2) e^(tεx) ≤ O(1) · e^(−t²/2) e^x,

where we used that εt ≤ 1 and ε ≤ 1/2 (so that 1 − ε² ≥ 3/4). Hence,

    Pr(t − εx ≤ √(1 − ε²) ξ ≤ t + εx) ≤ O(εx) · e^(−t²/2) e^x.

Therefore,

    Pr(E and εη < 1) ≤ O(ε) e^(−t²/2) · (1/√(2π)) ∫_0^(1/ε) x e^x e^(−x²/2) dx
        ≤ O(ε) e^(−t²/2) · (1/√(2π)) ∫_0^∞ x e^x e^(−x²/2) dx.

The integral on the right hand side does not depend on any parameters, so it can be upper bounded by some constant. We get

    Pr(E and εη < 1) ≤ O(ε e^(−t²/2)) = O(ε t Φ(t)) = O(ε √(log m)/m).

This finishes the proof.

References

[CMM06a] Moses Charikar, Konstantin Makarychev, and Yury Makarychev. Near-optimal algorithms for unique games. In ACM Symposium on Theory of Computing, STOC, 2006.

[CMM06b] Eden Chlamtac, Konstantin Makarychev, and Yury Makarychev. How to play unique games using embeddings. In IEEE Symposium on Foundations of Computer Science, FOCS, 2006.

[Khot02] Subhash Khot. On the power of unique 2-prover 1-round games. In ACM Symposium on Theory of Computing, STOC, 2002.

[KKMO07] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM J. Comput., 37(1):319–357, 2007.

[LM14] Anand Louis and Konstantin Makarychev. Approximation algorithm for sparsest k-partitioning. In ACM-SIAM Symposium on Discrete Algorithms, SODA, 2014.