NP-Hardness of Approximately Solving Linear Equations Over Reals


Electronic Colloquium on Computational Complexity, Report No. 112 (2010)

NP-Hardness of Approximately Solving Linear Equations Over Reals

Subhash Khot    Dana Moshkovitz

July 15, 2010

Abstract

In this paper, we consider the problem of approximately solving a system of homogeneous linear equations over reals, where each equation contains at most three variables. Since the all-zero assignment always satisfies all the equations exactly, we restrict the assignments to be non-trivial. Here is an informal statement of our result: it is NP-hard to distinguish whether there is a non-trivial assignment that satisfies a 1 − δ fraction of the equations, or every non-trivial assignment fails to satisfy a constant fraction of the equations with a margin of Ω(√δ). We develop linearity and dictatorship testing procedures for functions f : R^n → R over a Gaussian space, which could be of independent interest.

Our research is motivated by a possible approach to proving the Unique Games Conjecture.

1 Introduction

In this paper, we study the following natural question: given a homogeneous system of linear equations over reals, each equation containing at most three variables (call it 3Lin(R)), we seek a non-trivial approximate solution to the system. In the authors' opinion, the question is poorly understood, whereas the corresponding question over a finite field, say GF(2), is fairly well understood [Hås01, HK04]. Over a finite field, an equation is either satisfied or not satisfied, whereas over reals, an equation may be approximately satisfied up to a certain margin, and we may be interested in the margin. The main motivation for this research is a possible approach to proving the Unique Games Conjecture. More details appear in Section 1.5. We first describe our result and techniques and compare them with known results.

1.1 Our Result

Fix a parameter b_0 ≥ 1. Call a 3Lin(R) system b_0-regular if every variable appears in the same number of equations, and the absolute values of the coefficients in all the equations are in the range [1/b_0, b_0].

(Footnotes: This is a new and improved version of our paper [KM10] that established the same result, but under the Unique Games Conjecture. Subhash Khot: khot@cims.nyu.edu, Computer Science Department, Courant Institute of Mathematical Sciences, New York University; research supported by NSF CAREER grant CCF-0833228, an NSF Expeditions grant, and a BSF grant. Dana Moshkovitz: dana.moshkovitz@gmail.com, School of Mathematics, The Institute for Advanced Study; research supported by an NSF grant.)

ISSN 1433-8092

Let X denote the set of variables, so that an assignment is a map A : X → R. For an equation eq : r_1 x_1 + r_2 x_2 + r_3 x_3 = 0 and an assignment A, the margin of the equation (w.r.t. A) is

    Margin(A, eq) := |r_1 A(x_1) + r_2 A(x_2) + r_3 A(x_3)|.

The all-zeroes assignment, A(x) = 0 for all x ∈ X, satisfies all the equations exactly, i.e. with a zero margin. Therefore, we will be interested only in the non-trivial assignments. For now, think of a non-trivial assignment as one where the distribution of its values {A(x) | x ∈ X} is well-spread. Specifically, we may consider the Gaussian distributed assignments, for which the set of values {A(x) | x ∈ X} is distributed (essentially) according to a standard Gaussian. Here is an informal statement of our result:

Theorem 1. (Informal) There exist universal constants b_0, c (b_0 = 2 works) such that for every δ > 0, given a b_0-regular 3Lin(R) system, it is NP-hard to distinguish between:

(YES Case): There is a Gaussian distributed assignment that satisfies a 1 − δ fraction of the equations.

(NO Case): For every Gaussian distributed assignment, for at least a fraction c of the equations, the margin is at least c√δ.

A few remarks are in order. Since the 3Lin(R) instance is finite, we cannot expect the set of values {A(x) | x ∈ X} to be exactly Gaussian distributed. The proof of our result proceeds by constructing a probabilistically checkable proof (PCP) over a continuous high-dimensional Gaussian space, and then this idealized instance is discretized to obtain a finite instance. Theorem 1 holds in the idealized setting. The discretization step introduces, in the YES Case, a margin of at most γ in each equation, but γ can be made arbitrarily small relative to δ, and hence this issue may be safely ignored. The distribution of values is still close to a standard Gaussian. We also set all variables with values larger than O(log(1/δ)) to zero. This applies to only a poly(δ) fraction of the variables and hence does not have any significant effect on the result. Thus our assignment, in the YES Case, satisfies in particular:

    ∀x ∈ X, |A(x)| ≤ b = O(log(1/δ)),    E_{x∈X}[A(x)²] = 1.    (1)

In the NO Case, our analysis extends to every assignment that satisfies (1), and the conclusion is appropriately modified (which is necessary, since an assignment that satisfies (1) could still have a very skewed distribution of its values). A formal statement of the result appears as Theorem 6 in Section 2.

1.2 Optimality of Our Result, Squared-ℓ_2 versus ℓ_1 Error, and Homogeneity

Optimality: The result of Theorem 1 is qualitatively almost optimal, as can be seen from a natural semi-definite programming relaxation and a rounding algorithm. Suppose there are N variables X = {x_1, ..., x_N} and m equations, and the j-th equation in the system is r_{j1} x_{j1} + r_{j2} x_{j2} + r_{j3} x_{j3} = 0. Consider the following SDP relaxation, where for every variable x_i we have a vector v_i, and b = O(log(1/δ)):

    Minimize  E_{j∈[m]} ‖r_{j1} v_{j1} + r_{j2} v_{j2} + r_{j3} v_{j3}‖²

    Such that  ∀x_i ∈ X, ‖v_i‖ ≤ b,    E_{x_i∈X}[‖v_i‖²] = 1.

Suppose that in the YES Case, there is an assignment A that satisfies (1) and satisfies a 1 − δ fraction of the equations exactly. Then letting v_i = A(x_i)·v_0 for some fixed unit vector v_0 gives a feasible solution to the SDP with objective O(δ log²(1/δ)). Hence the SDP finds a feasible vector solution with the same upper bound on the objective. Suppose the SDP vectors lie in d-dimensional Euclidean space. Consider a rounding that picks a standard d-dimensional Gaussian vector r and defines an assignment A(x_i) = ⟨v_i, r⟩. It is easily seen that after a suitable scaling, with constant probability over the rounding scheme, we have:

    E_{x_i∈X}[A(x_i)²] = 1,    E_{j∈[m]} |r_{j1}A(x_{j1}) + r_{j2}A(x_{j2}) + r_{j3}A(x_{j3})|² ≤ O(δ log²(1/δ)).

Thus the margin |r_{j1}A(x_{j1}) + r_{j2}A(x_{j2}) + r_{j3}A(x_{j3})| is at most O(√δ · log(1/δ)) for almost all, say 99%, of the equations. Moreover, since ‖v_i‖ ≤ b for all x_i ∈ X, after rounding, all but a poly(δ) fraction of the variables get values bounded by O(log²(1/δ)), and these variables can be set to zero without affecting the solution significantly.
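To make the rounding step concrete, here is a minimal numpy sketch of it, under the assumption that the SDP vectors are already given; the data layout (`vs`, `eqs`) and the toy instance are hypothetical, and the rescaling line stands in for the "suitable scaling" mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_round(vs, eqs):
    """One round of the Gaussian rounding: A(x_i) = <v_i, r>."""
    r = rng.standard_normal(vs.shape[1])    # random direction r ~ N^d
    A = vs @ r                              # inner products with all vectors
    A /= np.sqrt(np.mean(A ** 2))           # rescale so that E[A(x_i)^2] = 1
    margins = np.array([abs(coef @ A[list(idx)]) for idx, coef in eqs])
    return A, margins

# Hypothetical toy data: N = 4 variables embedded in d = 3 dimensions.
vs = np.array([[1.0, 0.0, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
eqs = [((0, 1, 2), np.array([1.0, -2.0, 1.0])),
       ((1, 2, 3), np.array([2.0, -1.0, -1.0]))]
A, margins = gaussian_round(vs, eqs)
print(A, margins)
```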

Optimality of Semidefinite Programming Based Algorithms: As shown by Raghavendra [Rag08], the Unique Games Conjecture, if true, implies that for every constraint satisfaction problem (where variables range over a constant sized alphabet, and each constraint depends on a constant number of variables), a certain semi-definite programming based algorithm gives the best efficient approximation for the problem (as long as P ≠ NP). Similar results hold for many other types of problems. In light of this, a natural question is whether one can point to even a single problem for which an SDP-based algorithm is provably optimal assuming P ≠ NP (and not relying on the Unique Games Conjecture). One could argue that such a result was known for 3Sat: Håstad showed that (7/8 + ɛ)-approximation on satisfiable 3SAT formulas is NP-hard for any ɛ > 0 [Hås01], while Karloff and Zwick gave a matching SDP-based algorithm [KZ97]. In our opinion, this example does not truly demonstrate the phenomenon that SDP-based algorithms are optimal. The reason is that the hardness result produces formulas where each clause contains exactly three literals. For formulas of this kind, a random assignment satisfies, in expectation, a 7/8 fraction of the clauses. So there is a simple, non SDP-based, algorithm achieving 7/8 approximation. To the best of our knowledge, 3Lin(R) is the first problem where a non-trivial SDP algorithm is shown to be optimal assuming P ≠ NP.

The Squared-ℓ_2 versus ℓ_1 Error: The SDP algorithm described above finds an assignment that minimizes the expected squared margin, i.e. E_{j∈[m]}[Margin(A, j)²]. Thus the problem of minimizing the squared-ℓ_2 error is a computationally easy problem. However, Theorem 1 implies that minimizing the ℓ_1 error (i.e. E_{j∈[m]}[Margin(A, j)]), even approximately, is computationally hard (assuming P ≠ NP). In the YES Case therein, all but a δ fraction of the equations are exactly satisfied, and the variables are bounded by O(log(1/δ)). Hence the ℓ_1 error is O(δ log(1/δ)). (A closer examination of the proof of Theorem 1 shows that the upper bound is actually O(δ); for the equations that are not satisfied, the margin itself is distributed according to a standard Gaussian.) In the NO Case, for any Gaussian distributed assignment, for at least a constant fraction of the equations, the margin is at least Ω(√δ), and hence the ℓ_1 error is Ω(√δ). Thus approximating the ℓ_1 error within a quadratic factor is computationally hard; this is optimal, since the squared-ℓ_2 minimization implies an ℓ_1 approximation within a quadratic factor.

Homogeneity: Theorem 1 holds for a system of linear equations that is homogeneous, and it is necessary therein (in the NO Case) to restrict the distribution of values of an assignment. When the system of equations is non-homogeneous, one might hope to drop the restriction on the distribution of values. However, then a simple LP can directly minimize the ℓ_1 error, and hence one cannot hope for a theorem analogous to Theorem 1.

1.3 Techniques

Dictatorship Test Over Reals: Similar to most hardness results, our result proceeds by developing an appropriate dictatorship test. However, unlike most previous applications that use a dictatorship test over an n-dimensional boolean hypercube (or k-ary hypercube in some cases), we develop a dictatorship test over R^n with the standard Gaussian measure. The test is quite natural, but its analysis turns out to be rather delicate. We think that the test itself is of independent interest and provide a high level overview of it here.

Let N^n denote the n-dimensional Gaussian distribution with n independent mean-0 and variance-1 coordinates. Let L_2(R^n, N^n) be the space of all measurable real functions f : R^n → R with ‖f‖² = E_{x∼N^n}[f(x)²] < ∞. This is an inner product space with the inner product ⟨f, g⟩ := E_{x∼N^n}[f(x)g(x)]. A dictatorship is a function f(x) = x_{i_0} for some fixed coordinate i_0 ∈ [n]. Given oracle access to a function f ∈ L_2(R^n, N^n), we desire a probabilistic homogeneous linear test that accesses at most three values of f. The tests, over all choices of randomness, can be written down as a system of homogeneous linear equations over the values of f. We assume that the function f is non-trivial, i.e. ‖f‖ = 1, and anti-symmetric, i.e. f(−x) = −f(x) for all x ∈ R^n. In particular, E[f] = 0. We desire a test such that a dictatorship function is a good solution to the system of linear equations, whereas a function that is far from a dictatorship is a bad solution to the system.

The test we propose is a combination of a linearity test and a coordinate-wise perturbation test. A dictatorship function satisfies all the equations of the linearity test and a 1 − δ fraction of the equations of the coordinate-wise perturbation test. A function that is far from a dictatorship either fails miserably on the linearity test, or a constant fraction of the equations have a margin Ω(√δ) on the coordinate-wise perturbation test.

One starts out by observing that a dictatorship function is linear. Thus, for any λ, µ ∈ R such that λ² + µ² = 1, say λ = µ = 1/√2, one can test whether f(λx + µy) = λf(x) + µf(y), where x, y ∼ N^n are picked independently. Clearly, a dictatorship function satisfies each such equation exactly. The condition λ² + µ² = 1 ensures that the query point λx + µy is also distributed according to N^n. Note that we assume ‖f‖ = 1 and E[f] = 0.

Functions in L_2(R^n, N^n) have a Hermite representation; in particular, f can be decomposed into its linear and non-linear parts:

    f = f^{=1} + e,    f^{=1} = Σ_{i=1}^n a_i x_i,    ⟨f^{=1}, e⟩ = 0.

Note that 1 = ‖f‖² = ‖f^{=1}‖² + ‖e‖². A simple Fourier analytic argument shows that unless ‖e‖² ≤ 0.01, the linearity test fails with large average squared margin (and the analysis of the test is over). Therefore we may assume that ‖e‖² ≤ 0.01. Assume for now that e ≡ 0, and hence the function is linear: f = f^{=1} = Σ_{i=1}^n a_i x_i and Σ_{i=1}^n a_i² = 1.

We introduce the coordinate-wise perturbation test to ensure that the coefficients {a_i}_{i=1}^n are concentrated on a bounded set. This makes sense because for a dictatorship function, there is exactly one non-zero coefficient. The test picks a random point x ∼ N^n, and for a randomly chosen δ fraction of the coordinates, each chosen coordinate is re-sampled independently from a standard Gaussian. If x̃ is the new point, then one tests whether f(x̃) − f(x) = 0. Note that for a dictatorship function, the above equation is satisfied with probability 1 − δ, whereas with probability δ, the margin is distributed as a mean-0 variance-2 Gaussian. On the other hand, if f = Σ_{i=1}^n a_i x_i is far from a dictatorship, then the coefficients {a_i}_{i=1}^n are spread out, and with a constant probability, the margin is Ω(√δ).

This is intuitively the idea behind the test; however, the presence of the non-linear part e complicates matters considerably. Even though ‖e‖² ≤ 0.01, we are dealing with margins of the order of √δ, and the non-linear part e could potentially interfere with the above simplistic argument. We therefore need a more refined argument. We observe that since f = f^{=1} + e,

    f(x̃) − f(x) = (f^{=1}(x̃) − f^{=1}(x)) + (e(x̃) − e(x)).

When f^{=1} = Σ_{i=1}^n a_i x_i is spread out, the first term in the above equation, namely f^{=1}(x̃) − f^{=1}(x), is Ω(√δ) with a constant probability, as we observed above. The same can be concluded about the left hand side of the equation, namely f(x̃) − f(x), unless the second term e(x̃) − e(x) interferes in a very correlated manner. If this happens, then the function e must be sensitive to noise along a random set of δn coordinates. We add a test ensuring that e is insensitive to noise of comparable magnitude in a random direction. We then show that the two behaviors are contradictory, using a Fourier analytic argument that relies, in addition, on the cut-decomposition of line/ℓ_1 metrics.
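The following Monte-Carlo sketch illustrates the behavior of the coordinate-wise perturbation test on exactly linear functions, contrasting a dictator with a maximally spread linear function; the parameters are arbitrary and chosen only for illustration.

```python
import numpy as np

# Coordinate-wise perturbation test, assuming f is exactly linear:
# f(x) = sum_i a_i x_i. A dictator (a = e_1) is rarely perturbed at all,
# while a spread-out linear function almost always moves, with a margin
# on the order of sqrt(2*delta).
rng = np.random.default_rng(1)
n, delta, trials = 1000, 0.01, 20000

def margins(a):
    x = rng.standard_normal((trials, n))
    y = rng.standard_normal((trials, n))
    mask = rng.random((trials, n)) < delta   # coordinates to re-sample
    xt = np.where(mask, y, x)
    return (xt - x) @ a                      # f(x~) - f(x)

dictator = np.zeros(n); dictator[0] = 1.0
spread = np.ones(n) / np.sqrt(n)

m_dict, m_spread = margins(dictator), margins(spread)
print("dictator: fraction of non-zero margins ~", np.mean(m_dict != 0))  # ~ delta
print("spread:   median |margin| =", np.median(np.abs(m_spread)),
      " (compare sqrt(2*delta) =", np.sqrt(2 * delta), ")")
```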

1.4 The Reduction

The NP-hardness proof proceeds by using the dictatorship test discussed in the previous section as a gadget in a reduction. One might expect the reduction to go along the lines of Håstad's reduction for the Boolean 3Lin; however, the real case confronts us with serious challenges. A key component in Håstad's reduction addresses the following problem (in the Boolean case):

The Restriction Problem. Given oracle access to a function f : F^n → F that is approximately a dictatorship function (for the sake of exposition, assume that for some i_0 ∈ [n], on most points y ∈ F^n we have f(y) = y_{i_0}), and to a function g : F^m → F, with m·l = n, test whether g is the following restriction of f:

    g(x_1, ..., x_m) = f(x_1, ..., x_1, ......, x_m, ..., x_m),

where each x_i repeats l times. The test should check a linear equation on three values of f and g (altogether).

The restriction problem can be solved in the Boolean case F = {0, 1}, and for any finite field F, via self-correction. The tester is as follows:

1. Pick x ∈ F^m, y ∈ F^n uniformly at random.
2. Set z = (x_1, ..., x_1, ......, x_m, ..., x_m) ∈ F^n.
3. Accept if and only if g(x_1, ..., x_m) = f(y) + f(z − y).

Note that when f is a dictatorship function and g is the appropriate restriction of it, the test always accepts (in fact, linearity of f suffices). Also note that the test is linear in three values of f and g. The test works also when f is close to a dictatorship function f*, because the points y and z − y are uniformly distributed in F^n, and with high probability, f evaluates to the correct dictatorship function f* at both points. Note that z itself is not uniformly distributed in F^n, but still f(y) + f(z − y) yields, with high probability, the correct value f*(z).

Now consider the analogous problem for functions in Gaussian space. In this case, we can at most guarantee that with high probability over y ∼ N^n it holds that f(y) ≈ y_{i_0}. The tester we showed for the finite field case no longer works: even when x ∈ R^m and y ∈ R^n are Gaussian distributed, the point z − y may not be distributed as a Gaussian in R^n. We instead proceed as follows. Define a subspace S of R^n as:

    S := {(x_1, ..., x_1, ......, x_m, ..., x_m) | x_1, ..., x_m ∈ R},

where each x_i repeats l times. Let π : S → R^m denote the projection that, for 1 ≤ i ≤ m, picks the common coordinate from the i-th block. The tester is as follows:

1. Pick y ∼ N^n.
2. Write y = y′ + y″, where y′ ∈ S and y″ ⊥ S. Set ỹ = y′ − y″, and let y* ∈ R^m be the vector y* := √l · π(y′). It is easily seen that y* is distributed as N^m.
3. Check that g(y*) = (√l/2)·(f(y) + f(ỹ)).

It can be easily checked that if f is a dictatorship and g its appropriate restriction, then the test equation holds. Note that y, ỹ are both Gaussian distributed, and thus if f is close to a dictatorship f*, then with high probability f(y) ≈ f*(y) and f(ỹ) ≈ f*(ỹ), and hence (√l/2)·(f(y) + f(ỹ)) ≈ (√l/2)·(f*(y) + f*(ỹ)) = g(y*) if g is the appropriate restriction of f*. One caveat, however, is that the error involved in approximating f by f* gets multiplied by √l in this calculation, and if l is too large, the equation becomes rather meaningless.
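A small numeric sketch of the projection-based tester, under the assumption that f is an exact dictator of coordinate 0 (which lies in block 0) and g is its restriction; in that case the test equation holds up to floating-point error.

```python
import numpy as np

# Block structure: n = m*l, block i = coordinates [i*l, (i+1)*l).
rng = np.random.default_rng(2)
m, l = 5, 4
n = m * l

f = lambda v: v[0]    # dictator of coordinate 0
g = lambda u: u[0]    # its restriction (block 0 -> variable x_1)

y = rng.standard_normal(n)
blocks = y.reshape(m, l)
y_par = np.repeat(blocks.mean(axis=1), l)    # y' = projection of y onto S
y_perp = y - y_par                           # y'' orthogonal to S
y_tilde = y_par - y_perp                     # also distributed as N^n
y_star = np.sqrt(l) * blocks.mean(axis=1)    # y* = sqrt(l) * pi(y'), ~ N^m

lhs = g(y_star)
rhs = (np.sqrt(l) / 2) * (f(y) + f(y_tilde))
print(lhs, rhs)   # equal up to float error for a dictator and its restriction
```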

How large is l in hardness applications? This parameter corresponds to the Outer PCP (aka Label Cover) being l-to-1. In standard hardness results, such as Håstad's, one uses the Parallel Repetition Theorem [Raz98], and l = (1/ε)^{O(1)}, where ε is the soundness error of the Outer PCP. Moreover, the soundness error ε usually needs to be tiny, which in turn requires l to be large, and this is prohibitive in our application. To avoid having large l, we do not use parallel repetition, and work instead with the basic PCP Theorem [AS98, ALM+98]. This PCP has high soundness error (say 0.99), but is adequate for the purpose of proving Theorem 1. The reason is that Theorem 1 is also a high error hardness result: we only guarantee in the NO case that a constant fraction of the equations (say 1%) fail with a good margin.

Still, working with a high error PCP seems impossible at first sight. The dictatorship test gives rise to a list decoding of possible dictatorship functions, rather than identifying a single dictatorship function, and this seems to call for an Outer PCP with low error. Indeed, virtually all existing hardness results rely on a PCP with low error for the same reason (where one of the dictatorship coordinates in the decoded list is picked at random as a candidate label/answer for the Outer PCP). To circumvent the need for a low error PCP, we build a new Outer PCP. Suppose that the basic PCP corresponds to a set of variables Z, a set of tests/constraints C, and each test depends on d variables. The new Outer PCP is as follows:

1. The verifier picks independently at random k possible tests c_1, ..., c_k ∈ C, an index i ∈ [k], and a variable z in the test c_i.
2. The verifier sends the tuple u = (c_1, ..., c_k) to the first prover, and the tuple (c_1, ..., c_{i−1}, z, c_{i+1}, ..., c_k) to the second prover.
3. Both provers are supposed to answer with the values of all the variables in the tuple they were given.
4. The verifier checks that the provers' answers are consistent and satisfy the tests.

Note that this outer PCP is as sound as the basic PCP. Moreover, it is l-to-1 with l = d, where each constraint depends on d variables for a fixed constant d. The crux of the analysis is that a short list of each prover's answers in this PCP translates (with high probability) into just one answer for a random coordinate i ∈ [k] on which the basic PCP test is actually performed. Thus, via this Outer PCP, we convert the list decoding setting into a unique decoding setting, and allow the reduction to go through. We make the argument formal by using the technique of correlated sampling [KT02, Hol09] to choose a consistent element from two lists, one for each prover.

Due to the specific Outer PCP construction, our reduction maps instances of Sat of size N to instances of 3Lin(R) of size N^{O(k)}, k = (1/δ)^{O(1)}, where δ is the parameter of Theorem 1. Hence, the reduction incurs a blow-up of N^{(1/δ)^{O(1)}} in the size. This blow-up matches the blow-up predicted by the recent work of Arora, Barak and Steurer [ABS10] for unique games.

We remark that the actual analysis is much more complex than hinted here. The reason is that the 3Lin(R) instance constructed by the reduction consists of several functions f : R^n → R that could have widely varying norms, whereas list decoding via dictatorship testing can be extracted only from functions with non-negligible norms, and the eventual prover strategies have to be weighted delicately according to these norms.
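The sampling step of this Outer PCP verifier can be sketched as follows; representing the basic PCP as a list of variable tuples is a simplifying assumption (the predicates to be checked, and the provers themselves, are omitted; only the query structure is shown).

```python
import random

def outer_pcp_queries(tests, k, rng=random.Random(3)):
    """Sample the two provers' query tuples for one round of the Outer PCP."""
    cs = [rng.choice(tests) for _ in range(k)]      # k independent tests
    i = rng.randrange(k)                            # position to weaken
    z = rng.choice(cs[i])                           # one variable of test c_i
    first_prover = tuple(cs)                        # gets (c_1, ..., c_k)
    second_prover = tuple(cs[:i] + [z] + cs[i+1:])  # c_i replaced by z
    return first_prover, second_prover, i

# Hypothetical basic PCP with d = 3 variables per test.
tests = [("x1", "x2", "x3"), ("x2", "x4", "x5"), ("x1", "x5", "x6")]
print(outer_pcp_queries(tests, k=4))
```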

1.5 Comparison with Known Results and Motivation for Studying 3Lin(R)

MinUncut: Given a graph G(V = [N], E), the MinUncut problem seeks a cut in the graph that minimizes the number of edges not cut. It can be thought of as an instance of 2Lin(R) where one has variables {x_1, ..., x_N} and, for every edge (i, j), a homogeneous equation:

    x_i + x_j = 0,

and the goal is to find a boolean, i.e. {−1, 1}-valued, assignment that minimizes the number of unsatisfied equations. Khot et al. [KKMO07] show that, assuming the UGC, for sufficiently small δ > 0, given an instance that has an assignment that satisfies all but a δ fraction of the equations, it is NP-hard to find an assignment that satisfies all but a (2/π)√δ fraction of the equations. This result is qualitatively similar to Theorem 1, but note that the variables are restricted to be boolean.

Balanced Partitioning: Given a graph G(V = [N], E), the Balanced Partitioning problem seeks a roughly balanced cut (i.e. each side has Ω(N) vertices) in the graph that minimizes the number of edges cut. It can again be thought of as an instance of 2Lin(R) where one has variables {x_1, ..., x_N} and, for every edge (i, j), a homogeneous equation:

    x_i − x_j = 0,    (2)

and the goal is to find a {−1, 1}-valued and roughly balanced assignment that minimizes the number of unsatisfied equations. Arora et al. [AKK+08] show that, assuming a certain variant of the UGC, given an instance of Balanced Partitioning that has a balanced assignment that satisfies all but a δ fraction of the equations, it is NP-hard to find a roughly balanced assignment that satisfies all but a δ^c fraction of the equations. Here 1/2 < c < 1 is an arbitrary constant, and for every such c, the result holds for all sufficiently small δ > 0. The result is again qualitatively similar to Theorem 1. In fact, the result holds even when the variables are allowed to be real valued, say in the range [−1, 1], as long as the set of values is well-separated. Imagine picking a random λ ∈ [−1, 1] and partitioning the variables (i.e. vertices of the graph) into two sets depending on whether their value is less or greater than λ. The cut is roughly balanced if the set of values is well-separated, and the probability that an edge (i, j) is cut is |x_i − x_j|/2. Thus solving the 2Lin(R) instance w.r.t. the ℓ_1 error is equivalent to solving the Balanced Partitioning problem.

Motivation for Studying 3Lin(R): The hardness results for the MinUncut and the Balanced Partitioning problem cited above are known only assuming the UGC. It would be huge progress to prove these results without relying on the UGC, and this could possibly lead to a proof of the UGC itself. Due to the close connection of both problems to the 2Lin(R) problem, it is natural to seek a hardness result for the 2Lin(R) problem w.r.t. the ℓ_1 error. This is the main motivation behind the work in this paper. We propose that understanding the complexity of the 3Lin(R) problem might help us make progress on the UGC: the plan would be to (1) prove Theorem 1 (which we do) and then (2) give a gap-preserving reduction from 3Lin(R) to 2Lin(R). Regarding the second step, the authors currently have a candidate reduction from 3Lin(R) to 2Lin(R), along with counterexamples showing that the reduction, as is, does not work. The authors believe that there might be a way to fix the reduction.
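A quick Monte-Carlo check of the threshold-cut calculation above: for fixed values x_i, x_j ∈ [−1, 1] and a uniformly random threshold λ ∈ [−1, 1], the edge (i, j) is cut with probability |x_i − x_j|/2. The specific values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
xi, xj, trials = 0.3, -0.5, 200_000
lam = rng.uniform(-1, 1, trials)
# The edge is cut exactly when the threshold falls between the two values.
cut = ((xi < lam) & (lam <= xj)) | ((xj < lam) & (lam <= xi))
print(np.mean(cut), abs(xi - xj) / 2)   # both ~ 0.4
```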

Guruswami and Raghavendra's Result: Our result is incomparable to that in [GR09]. Their result shows that given a system of non-homogeneous linear equations over integers (as well as over reals), with three variables in each equation, it is NP-hard to distinguish (1 − δ)-satisfiable instances from δ-satisfiable instances. The instance produced by their reduction is non-homogeneous, a good solution in the YES Case consists of large (unbounded) integer values, and the result is very much about exactly satisfying equations; in particular, it does not give, if any, a strong gap in terms of margins, especially relative to the magnitude of the integers in a good solution.

Comparison with Results over GF(2): We argue that, in order to make progress on MinUncut, Balanced Partitioning and the UGC, studying equations over reals may be the right thing to do, as opposed to equations over GF(2). As we discussed before, the Balanced Partitioning problem can be thought of as an instance of 2Lin(R) (as in Equation (2)) where one seeks to minimize the ℓ_1 error and the set of values is a well-separated set in [−1, 1]. Assuming a UGC variant, we know that the (δ, δ^c)-gap is NP-hard for c > 1/2, whereas Theorem 1 yields a similar gap for 3Lin(R), with a stronger conclusion that a constant fraction of equations have a margin at least Ω(√δ). We pointed out that such a gap is also the best one may hope for. Thus the 3-variable case seems qualitatively similar to the 2-variable case in terms of the hardness gap that may be expected.

For equations over GF(2), the two cases are qualitatively very different. Suppose one thinks of the Balanced Partitioning problem as an instance of 2Lin(GF(2)), where a cut is a GF(2)-valued balanced assignment, and one introduces an equation x_i + x_j = 0 for each edge (i, j). Its generalization to homogeneous equations with three variables, namely 3Lin(GF(2)), turns out to be qualitatively very different. Holmerin and Khot [HK04] show a hardness gap (in terms of the fraction of equations left unsatisfied by a balanced assignment) of (δ, 1/2), which is qualitatively very different from the (δ, δ^c) gap that may be expected for 2Lin(GF(2)).

1.6 Overview of the Paper

In Section 2, we formally state our main result (Theorem 6) and provide preliminaries on the Hermite representation of functions in L_2(R^n, N^n). In Section 3, we propose and analyze the linearity test that is used as a sub-routine in the dictatorship test proposed and analyzed in Section 4. The reduction, proving our main result, is presented in Section 5. The soundness analysis is first presented in a simplified setting and then in the general setting. The entire reduction is presented in a continuous setting and then discretized in Section 5.7.

2 Problem Definition, Our Result, and Preliminaries

We consider the problem of approximately solving a system of homogeneous linear equations over the reals. Each equation depends on (at most) three variables. The system of equations is given by a distribution E over equations, meaning different equations receive different weights.

Definition 2 (Robust-3Lin(R) instance). Let b_0 ≥ 1 be a parameter. A Robust-3Lin(R) instance is given by a set of real variables X and a distribution E over equations on the variables. Each equation is of the form:

    r_1 x_1 + r_2 x_2 + r_3 x_3 = 0,

where the coefficients satisfy |r_1|, |r_2|, |r_3| ∈ [1/b_0, b_0] and x_1, x_2, x_3 ∈ X.

Definition 3 (Assignment to a Robust-3Lin(R) instance). An assignment to the variables of a Robust-3Lin(R) instance (X, E) is a function A : X → R. An equation r_1 x_1 + r_2 x_2 + r_3 x_3 = 0 is exactly satisfied by A if r_1 A(x_1) + r_2 A(x_2) + r_3 A(x_3) = 0. The equation is β-approximately satisfied, for an approximation parameter β, if |r_1 A(x_1) + r_2 A(x_2) + r_3 A(x_3)| ≤ β.

Notation. The set of variables appearing in an equation eq : r_1 x_1 + r_2 x_2 + r_3 x_3 = 0 is denoted X_eq = {x_1, x_2, x_3}. The assignment A will usually be clear from the context. We use the shorthand |eq| to denote the margin |r_1 A(x_1) + r_2 A(x_2) + r_3 A(x_3)|.

An assignment that assigns 0 to all variables trivially exactly satisfies all equations. Hence, we use a measure for how different the assignment is from the all-zero assignment, locally (per equation) and globally (on average over all equations):

Definition 4 (Assignment norm). Let (X, E) be a Robust-3Lin(R) instance. Let A : X → R be an assignment. Define the squared norm of A at equation eq to be:

    ‖A‖²_eq = E_{x∈X_eq}[A(x)²].

Define the squared norm of A to be:

    ‖A‖² = E_{eq∼E}[‖A‖²_eq].

Remark 2.1. We will sometimes refer to a distribution on the set of variables X induced by first picking an equation from the distribution E and then picking a variable at random from that equation. If D denotes this distribution on variables, then clearly ‖A‖² = E_{x∼D}[A(x)²].

Legitimate assignments A are required to be normalized, ‖A‖ = 1, and bounded, A : X → [−b, b] for some parameter b. We seek to maximize:

    val_β^{(X,E)}(A) := E_{eq∼E}[χ_{|eq|≤β} · ‖A‖²_eq],    (3)

where χ_{|eq|≤β} is the indicator function of the event that |eq| ≤ β. In words, we seek to maximize the total squared norm of the equations that are satisfied with a margin of at most β. (We recommend that the reader take a pause and convince himself/herself that this is a reasonable measure of how good an assignment is. Since an assignment may be very skewed, assigning large values to a tiny subset of the variables and zero to the rest of the variables, simply maximizing the fraction of equations satisfied does not make much sense.)

Definition 5 (Robust-3Lin(R) problem). Let b_0 ≥ 1, b > 0 and 0 < β < 1 be parameters. Given a Robust-3Lin(R) instance where the coefficients are in [1/b_0, b_0] in magnitude, the problem is to find an assignment A : X → [−b, b] of norm ‖A‖ = 1 that maximizes val_β^{(X,E)}(A).
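To make Definitions 3-5 concrete, here is a toy evaluation of val_β in Python; the instance, the uniform weighting over equations, and the assignment are hypothetical (and the assignment is not normalized).

```python
import numpy as np

A = np.array([1.2, -0.8, 0.4, 1.0])            # hypothetical assignment
eqs = [((0, 1, 2), (1.0, 1.0, -1.0)),          # x_0 + x_1 - x_2 = 0
       ((1, 2, 3), (2.0, 1.0, -1.0)),          # 2x_1 + x_2 - x_3 = 0
       ((0, 2, 3), (1.0, -2.0, 0.5))]          # x_0 - 2x_2 + 0.5x_3 = 0

def val(A, eqs, beta):
    """E_eq[ indicator(|eq| <= beta) * ||A||^2_eq ], with uniform eq weights."""
    total = 0.0
    for idx, coef in eqs:
        margin = abs(sum(c * A[i] for i, c in zip(idx, coef)))
        sq_norm_eq = np.mean([A[i] ** 2 for i in idx])   # ||A||^2_eq
        if margin <= beta:
            total += sq_norm_eq
    return total / len(eqs)

norm2 = np.mean([np.mean([A[i] ** 2 for i in idx]) for idx, _ in eqs])
print("||A||^2 =", norm2, "  val_0.5 =", val(A, eqs, 0.5))
```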

We are now ready to formally state our result:

Theorem 6 (Hardness of Robust-3Lin(R)). There exist universal constants b_0 = 2 and c, s > 0, such that for any γ, δ > 0, there is b = O(log(1/δ)), such that given an instance (X, E) of Robust-3Lin(R) with the magnitude of the coefficients in [1/b_0, b_0], it is NP-hard to distinguish between the following two cases:

Completeness: There is an assignment A : X → [−b, b] with ‖A‖ = 1, such that val_γ^{(X,E)}(A) ≥ 1 − δ.

Soundness: For any assignment A : X → [−b, b] with ‖A‖ = 1, it holds that val_{c√δ}^{(X,E)}(A) ≤ 1 − s.

We note three points: (1) The parameter γ is to be thought of as negligible compared to δ and essentially equal to 0. Our reduction is best thought of as a continuous construction on a Gaussian space, and the parameter γ arises as a negligible error involved in the discretization of the construction. (2) In the YES Case, we can say more about what the good assignment looks like. Consider the distribution D induced on the variables by first picking an equation eq and then picking one of the variables in the equation. The values taken by the good assignment, w.r.t. D, are distributed (essentially) as a standard Gaussian, and can be truncated to b = O(log(1/δ)) in magnitude without affecting the result. (3) In the NO Case, if an assignment has either values bounded in [−1, 1] or values distributed, w.r.t. D, (essentially) as a standard Gaussian, it is indeed the case that a constant fraction of the equations fail with a margin of at least c√δ, proving the informal Theorem 1.

2.1 Fourier Analysis Over Gaussian Space

Gaussian Space. Let N^n denote the n-dimensional Gaussian distribution with n independent mean-0 and variance-1 coordinates. L_2(R^n, N^n) is the space of all real functions f : R^n → R with E_{x∼N^n}[f(x)²] < ∞. This is an inner product space with inner product

    ⟨f, g⟩ := E_{x∼N^n}[f(x)g(x)].

Hermite Polynomials. For a natural number j, the j-th Hermite polynomial H_j : R → R is

    H_j(x) = (1/√(j!)) · (−1)^j · e^{x²/2} · (d^j/dx^j) e^{−x²/2}.

The first few Hermite polynomials are H_0 ≡ 1, H_1(x) = x, H_2(x) = (1/√2)(x² − 1), H_3(x) = (1/√6)(x³ − 3x), H_4(x) = (1/√24)(x⁴ − 6x² + 3). The Hermite polynomials satisfy:

Claim 2.1 (Orthonormality). For every j, ⟨H_j, H_j⟩ = 1. For every i ≠ j, ⟨H_i, H_j⟩ = 0. In particular, for every j ≥ 1, E_{x∼N}[H_j(x)] = 0.

Claim 2.2 (Addition formula).

    H_j((x + y)/√2) = 2^{−j/2} · Σ_{k=0}^{j} √(C(j, k)) · H_k(x) H_{j−k}(y),

where C(j, k) denotes the binomial coefficient.
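The normalization and the addition formula are easy to check numerically; the following sketch verifies Claims 2.1 and 2.2 using numpy's probabilists' Hermite basis He_j, with H_j = He_j/√(j!).

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import hermite_e as He

def H(j, x):
    c = np.zeros(j + 1); c[j] = 1.0
    return He.hermeval(x, c) / np.sqrt(factorial(j))

# Orthonormality via Gauss-HermiteE quadrature (weight e^{-x^2/2}).
x, w = He.hermegauss(40)
w = w / np.sqrt(2 * np.pi)          # normalize to the Gaussian measure
for i in range(4):
    for j in range(4):
        ip = np.sum(w * H(i, x) * H(j, x))
        assert abs(ip - (1.0 if i == j else 0.0)) < 1e-10

# Addition formula, checked as a polynomial identity at random points.
rng = np.random.default_rng(5)
xs, ys = rng.standard_normal(5), rng.standard_normal(5)
for j in range(5):
    lhs = H(j, (xs + ys) / np.sqrt(2))
    rhs = 2 ** (-j / 2) * sum(np.sqrt(comb(j, k)) * H(k, xs) * H(j - k, ys)
                              for k in range(j + 1))
    assert np.allclose(lhs, rhs)
print("orthonormality and addition formula verified")
```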

Fourier Analysis. The multi-dimensional Hermite polynomials, defined as:

    H_{j_1,...,j_n}(x_1, ..., x_n) = Π_{i=1}^n H_{j_i}(x_i),

form an orthonormal basis for the space L_2(R^n, N^n). Every function f ∈ L_2(R^n, N^n) can be written as

    f(x) = Σ_{S∈N^n} f̂(S) · H_S(x),

where S is a multi-index, i.e. an n-tuple of natural numbers, and the f̂(S) ∈ R are the Fourier coefficients of f. The size of a multi-index S = (S_1, ..., S_n) is defined as |S| = Σ_{i=1}^n S_i. The Fourier expansion of degree d is

    f^{≤d} = Σ_{|S|≤d} f̂(S) H_S(x),

and it holds that lim_{d→∞} ‖f − f^{≤d}‖ = 0. The linear part of f is f^{=1} = f^{≤1} − f^{≤0}. When f is anti-symmetric, i.e. f(−x) = −f(x) for all x ∈ R^n, we have f̂(0̄) = E[f] = 0 and f^{≤0} ≡ 0.

Influence. We denote the restriction of a Gaussian variable x ∼ N^n to a set of coordinates D ⊆ [n] by x_D. The influence of a set of coordinates D ⊆ [n] on a function f ∈ L_2(R^n, N^n) is

    I_D(f) := E_{x_{[n]∖D}}[Var_{x_D}[f(x)]].

The influence can also be expressed in terms of the Fourier spectrum of f:

Proposition 2.3. I_D(f) = Σ_{S∩D≠∅} f̂(S)², where S ∩ D ≠ ∅ denotes that there exists i ∈ D such that S_i ≠ 0. Note that S ∈ N^n is a multi-index and D ⊆ [n] is a subset of coordinates.

Perturbation Operator. The perturbation operator (more commonly known as the Ornstein-Uhlenbeck operator) T_ρ takes a function f ∈ L_2(R^n, N^n) and produces a function T_ρ f ∈ L_2(R^n, N^n) that averages the value of f over local neighborhoods:

    T_ρ f(x) = E_{y∼N^n}[f(ρx + √(1−ρ²)·y)].

The Fourier spectrum of T_ρ f can be obtained from the Fourier spectrum of f as follows:

Proposition 2.4. T_ρ f = Σ_S ρ^{|S|} f̂(S) H_S.
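Proposition 2.4 says that H_S is an eigenfunction of T_ρ with eigenvalue ρ^{|S|}. A one-dimensional Monte-Carlo check of this, directly from the integral definition of T_ρ (parameters arbitrary):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(6)
rho, j, x0 = 0.8, 3, 1.3

def H(j, x):
    c = np.zeros(j + 1); c[j] = 1.0
    return He.hermeval(x, c) / np.sqrt(factorial(j))

# T_rho H_j (x0) estimated by averaging over y ~ N(0, 1).
y = rng.standard_normal(2_000_000)
T_rho_f = np.mean(H(j, rho * x0 + np.sqrt(1 - rho ** 2) * y))
print(T_rho_f, rho ** j * H(j, x0))   # agree up to sampling error
```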

2.2 Distributions: Entropy and Distance

The entropy of a probability distribution D over a discrete probability space Ω is

    H(D) := Σ_{a∈Ω} D(a) · log(1/D(a)).

Entropy satisfies the following properties:

Proposition 2.5. [CT91] For distributions D, D_1, ..., D_k over Ω:

Range: 0 ≤ H(D) ≤ log|Ω|; the lower bound is attained by constant distributions; the upper bound is attained by the uniform distribution.

Concavity: H((1/k)·Σ_{i=1}^k D_i) ≥ (1/k)·Σ_{i=1}^k H(D_i).

Sub-additivity: H(D_1 ⋯ D_k) ≤ Σ_{i=1}^k H(D_i), where D_1 ⋯ D_k denotes a joint distribution with marginals D_1, ..., D_k.

The statistical distance between distributions D_1 and D_2 over a discrete probability space Ω is

    Δ(D_1, D_2) := (1/2) · Σ_{a∈Ω} |D_1(a) − D_2(a)|.

A distribution with nearly maximal entropy is close to uniform:

Proposition 2.6. [CT91] log|Ω| − H(D) ≥ (1/(2 ln 2)) · ‖D − Uniform‖₁².

The squared Hellinger distance between D_1 and D_2 is

    H²(D_1, D_2) := (1/2) · Σ_{a∈Ω} (√(D_1(a)) − √(D_2(a)))² = 1 − Σ_{a∈Ω} √(D_1(a) · D_2(a)).

We have the following connection between the Hellinger distance and the statistical distance:

Proposition 2.7. [Pol02]

    H²(D_1, D_2) ≤ Δ(D_1, D_2) ≤ √2 · H(D_1, D_2).
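Both propositions are straightforward to test numerically; the following sketch checks them on random distributions over a domain of size 8 (assuming the statement of Proposition 2.6 with the 1/(2 ln 2) constant as reconstructed above).

```python
import numpy as np

rng = np.random.default_rng(7)

def entropy(p):           # base-2 entropy
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def stat_dist(p, q):      # statistical (total variation) distance
    return 0.5 * np.sum(np.abs(p - q))

def hellinger2(p, q):     # squared Hellinger distance
    return 1.0 - np.sum(np.sqrt(p * q))

for _ in range(1000):
    p = rng.random(8); p /= p.sum()
    q = rng.random(8); q /= q.sum()
    u = np.full(8, 1 / 8)
    # Proposition 2.6: log|Omega| - H(p) >= ||p - u||_1^2 / (2 ln 2).
    assert 3 - entropy(p) >= (2 * stat_dist(p, u)) ** 2 / (2 * np.log(2)) - 1e-12
    # Proposition 2.7: Hellinger^2 <= stat. distance <= sqrt(2) * Hellinger.
    h2 = hellinger2(p, q)
    assert h2 - 1e-12 <= stat_dist(p, q) <= np.sqrt(2 * h2) + 1e-12
print("Propositions 2.6 and 2.7 verified on random distributions")
```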

3 Linearity Testing

We show how to perform linearity testing for functions in L_2(R^n, N^n) using linear equations on three variables each. Linear functions always exactly satisfy the linear equations. Functions with a large non-linear part give rise to heavy margins in the equations. The linearity test we show resembles linearity testing in finite fields (see, e.g., [BLR93, BCH+96]). We change it slightly so as to guarantee that all the queries to the function are distributed according to the Gaussian distribution.

Linearity Test: Given oracle access to a function f ∈ L_2(R^n, N^n), f anti-symmetric, i.e., f(−x) = −f(x) for every x ∈ R^n. Pick x, y ∼ N^n and test:

    f(x) + f(y) − √2 · f((x + y)/√2) = 0.

Note that a linear function always exactly satisfies the test's equation. The following lemma shows that if the test's equations are approximately satisfied, then the weight of f's non-linear part is small:

Lemma 3.1 (Linearity testing). Let f ∈ L_2(R^n, N^n), f anti-symmetric, i.e., f(−x) = −f(x) for every x ∈ R^n. Then

    ‖f − f^{=1}‖² ≤ E_{x,y∼N^n} |f(x) + f(y) − √2·f((x+y)/√2)|².

Proof. Since x and y are independent, the variables x, y and (x+y)/√2 are all distributed according to N^n. Also, f is anti-symmetric, so E[f(x)] = 0 and the cross term E[f(x)f(y)] = E[f(x)]·E[f(y)] vanishes. Hence,

    E_{x,y∼N^n} |f(x) + f(y) − √2·f((x+y)/√2)|² = 4‖f‖² − 4√2 · E_{x,y}[f(x)·f((x+y)/√2)].    (4)

Writing in terms of the Fourier representation:

    E_{x,y}[f(x)·f((x+y)/√2)] = Σ_{S,T∈N^n} f̂(S)f̂(T) · E_{x,y}[H_S(x) H_T((x+y)/√2)]
                             = Σ_{S,T} f̂(S)f̂(T) · Π_{i=1}^n E_{x,y}[H_{S_i}(x_i) H_{T_i}((x_i+y_i)/√2)].

By Claim 2.2,

    H_{T_i}((x_i+y_i)/√2) = 2^{−T_i/2} · Σ_{l=0}^{T_i} √(C(T_i, l)) · H_l(x_i) H_{T_i−l}(y_i).

Hence,

    E_{x,y}[f(x)·f((x+y)/√2)] = Σ_{S,T} f̂(S)f̂(T) · Π_{i=1}^n 2^{−T_i/2} Σ_{l=0}^{T_i} √(C(T_i, l)) · E_x[H_{S_i}(x_i)H_l(x_i)] · E_y[H_{T_i−l}(y_i)].

By Claim 2.1, E_y[H_{T_i−l}(y_i)] = 0 unless l = T_i, and E_x[H_{S_i}(x_i)H_l(x_i)] = 0 unless l = S_i. Thus,

    E_{x,y}[f(x)·f((x+y)/√2)] = Σ_S f̂(S)² · (1/√2)^{|S|} ≤ (1/√2)·‖f^{=1}‖² + (1/2)·‖f − f^{=1}‖²,    (5)

where we used f̂(0̄) = 0, which follows from anti-symmetry. By combining equality (4) and inequality (5),

    E_{x,y∼N^n} |f(x) + f(y) − √2·f((x+y)/√2)|² ≥ 4‖f‖² − 4‖f^{=1}‖² − 2√2·‖f − f^{=1}‖² = (4 − 2√2)·‖f − f^{=1}‖² ≥ ‖f − f^{=1}‖².
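A Monte-Carlo sketch of Lemma 3.1 for a function with an explicit non-linear part: for f = a·x_1 + b·H_2(x_2) we have ‖f − f^{=1}‖² = b², and the computation in the proof predicts an average squared margin of exactly (4 − 2√2)·b². The parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
a, b = np.sqrt(0.9), np.sqrt(0.1)         # ||f||^2 = a^2 + b^2 = 1
H2 = lambda t: (t ** 2 - 1) / np.sqrt(2)  # normalized degree-2 Hermite
f = lambda x: a * x[:, 0] + b * H2(x[:, 1])

trials = 1_000_000
x = rng.standard_normal((trials, 2))
y = rng.standard_normal((trials, 2))
m = f(x) + f(y) - np.sqrt(2) * f((x + y) / np.sqrt(2))
print(np.mean(m ** 2), (4 - 2 * np.sqrt(2)) * b ** 2)  # agree up to MC error
```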

15 where we used ˆf( 0) = 0 that follows from anti-symmetry. inequality (5), x,y N n f(x) + f(y) + f By combining equality (4) and ( x + y ) 4 f 4 f =1 4 f f =1 = (4 ) f f =1 f f =1. 4 Dictator Testing In this section we devise a dictator test, i.e., a test that checks whether an anti-symmetric real function in L (R n, N n ) is a dictator (that is, of the form f(x) = x i for some i n) or far from a dictator. We consider a function to be close to a dictator if it satisfies the following definition: Definition 7 ((J, s, Γ)-Approximate linear junta). An anti-symmetric function f L (R n, N n ) with linear part f =1 = n i=1 a ix i, is called a (J, s, Γ)-approximate-linear-junta, if: f =1 = n i=1 a i (1 s) f. i:a i 1 J f a i Γ f. An approximate linear junta has almost all the Fourier mass on its linear part, and this linear part is concentrated on at most J coordinates: Let I = { i a i 1 J f }. Then I J, and f i I a ix i (s + Γ) f. Our test will produce equations that dictators almost always satisfy exactly. On the other hand, functions that are not even approximate linear juntas fail with large margin. Theorem 8 (Dictator testing). For every constant 0 < Γ 0.01, there are constants s, c > 0 such that the following holds. For every sufficiently small δ > 0, there is a dictator test given by a distribution over equations, where each equation depends on the value of f on at most three points in R n. The test satisfies the following properties: 1. Uniformity: The distribution over R n obtained from picking at random an equation and x such that f(x) is queried by the equation, is Gaussian N n.. Bound on coefficients: All the coefficients in the equations are in 1 b 0, b 0 in magnitude where b 0 is a universal constant (b 0 = works). 3. Completeness: If f(x) = x i for some i n, then χ eq >0 f eq δ. eq 4. Soundness: For any anti-symmetric function f L (R n, N n ), f ( 10, s, Γ)-approximate linear junta, then Γδ χ eq >c δ f eq s 100. eq = 1, if f is not a 15

Remark 4.1. Note that it follows from the soundness guarantee that for an anti-symmetric function f ∈ L_2(R^n, N^n) with arbitrary non-zero norm, if f is not a (10/(Γδ²), s, Γ)-approximate linear junta, then E_{eq∼E}[χ_{|eq|>c√δ·‖f‖} · ‖f‖²_eq] ≥ (s/100)·‖f‖². This is obtained by applying the theorem to the normalized version of f, i.e., f/‖f‖.

The test will consist of three steps: (i) a linearity test, which rules out functions that are not well-approximated by their linear parts; (ii) a coordinate-wise perturbation test, which checks that the function does not change by re-sampling a small fraction of the coordinates; (iii) a random perturbation test, which guarantees that the function does not change much if the input is perturbed slightly in a random direction. We achieve the effect of this last test by instead doing two correlated linearity tests, in order to keep the coefficients in the range [1/2, 2] in magnitude.

Dictator Test: Given oracle access to a function f ∈ L_2(R^n, N^n), f anti-symmetric. With equal probability, perform one of these three tests:

1. Linearity test on f, as in Section 3.

2. Coordinate-wise perturbation test:
(a) Pick x, y ∼ N^n. Pick x̃ ∼ N^n as follows: for i = 1, 2, ..., n, independently, with probability 1 − δ set x̃_i = x_i, and with probability δ set x̃_i = y_i.
(b) Test:

    f(x) − f(x̃) = 0.

3. Random perturbation test (in disguise):
(a) Pick y, z ∼ N^n. Let x = (y+z)/√2, w = (y−z)/√2, and

    x̃ = (1−δ)·x + √(2δ−δ²)·w = ((1−δ)/√2 + √(2δ−δ²)/√2)·y + ((1−δ)/√2 − √(2δ−δ²)/√2)·z = λ_1 y + λ_2 z (say).

(b) Note that λ_1, λ_2 are very close to 1/√2. Test, with equal probability:

    f(x) − (1/√2)·f(y) − (1/√2)·f(z) = 0,
    f(x̃) − λ_1·f(y) − λ_2·f(z) = 0.

Note that in the random perturbation test, x̃ = (1−δ)·x + √(2δ−δ²)·w, and x is independent of w. Thus x̃ can indeed be thought of as a perturbation of x in a random direction. The uniformity property, as well as the bound on the coefficients, hold by the definition of the tests. Denote the distribution on all equations by E, and the three sub-distributions by E_l (linearity tests), E_c (coordinate-wise perturbation tests), and E_r (random perturbation tests).
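The completeness behavior is easy to see in simulation: for a dictator, the linearity and random-perturbation equations hold identically, and only the coordinate-wise test fails, with probability about δ. A sketch (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, delta, T = 50, 0.05, 100_000
f = lambda x: x[..., 0]                       # dictator of coordinate 0

x, y, z = (rng.standard_normal((T, n)) for _ in range(3))

# 1. Linearity test.
m1 = f(x) + f(y) - np.sqrt(2) * f((x + y) / np.sqrt(2))

# 2. Coordinate-wise perturbation test.
mask = rng.random((T, n)) < delta
xt = np.where(mask, y, x)
m2 = f(x) - f(xt)

# 3. Random perturbation test (in disguise).
xr = (y + z) / np.sqrt(2)
w = (y - z) / np.sqrt(2)
xrt = (1 - delta) * xr + np.sqrt(2 * delta - delta ** 2) * w
lam1 = ((1 - delta) + np.sqrt(2 * delta - delta ** 2)) / np.sqrt(2)
lam2 = ((1 - delta) - np.sqrt(2 * delta - delta ** 2)) / np.sqrt(2)
m3a = f(xr) - f(y) / np.sqrt(2) - f(z) / np.sqrt(2)
m3b = f(xrt) - lam1 * f(y) - lam2 * f(z)

print(np.max(np.abs(m1)), np.max(np.abs(m3a)), np.max(np.abs(m3b)))  # all ~ 0
print(np.mean(m2 != 0), "~", delta)           # fails with probability ~ delta
```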

Completeness: A dictator function f, being a linear function, always exactly satisfies the linearity test and the random perturbation test. As for the coordinate-wise perturbation test,

    E_{eq∼E_c}[χ_{|eq|>0} · ‖f‖²_eq] ≤ δ·‖f‖² = δ.

Soundness: In the following, the O(·) and Ω(·) notations hide universal constants. Let Γ be the given constant in Theorem 8. We will pick s and c to be constants eventually, but throughout the proof we retain the dependence on the parameters. Assume for now that c² ≤ s ≤ 0.01 and ∛s ≤ Γ. The parameter δ is thought of as tending to zero.

Let f ∈ L_2(R^n, N^n) be an anti-symmetric function with ‖f‖ = 1, such that f is not a (J = 10/(Γδ²), s, Γ)-approximate linear junta. Assume, for the sake of a contradiction, that

    E_{eq∼E}[χ_{|eq|≤c√δ} · ‖f‖²_eq] ≥ 1 − s/100.

Denote the non-linear part of f by e = f − f^{=1} (since f is anti-symmetric, f^{≤0} ≡ 0). We handle the cases ‖e‖² > s and ‖e‖² ≤ s separately.

Case ‖e‖² > s: By Lemma 3.1, E_{eq∼E_l}[|eq|²] ≥ ‖e‖² > s. By the Cauchy-Schwarz inequality, for every equation we have |eq|² ≤ 12·‖f‖²_eq (the linearity testing equation is of the form f(x) + f(y) − √2·f(z) = 0; here |eq| = |f(x) + f(y) − √2·f(z)| and ‖f‖²_eq = (f(x)² + f(y)² + f(z)²)/3), so

    s < E_{eq∼E_l}[|eq|²] ≤ E_{eq∼E_l}[χ_{|eq|>c√δ} · 12‖f‖²_eq] + c²δ ≤ 12·E_{eq∼E_l}[χ_{|eq|>c√δ} · ‖f‖²_eq] + s/3.

Since the distribution E is the average of the distributions E_l, E_c, and E_r, we get

    E_{eq∼E}[χ_{|eq|>c√δ} · ‖f‖²_eq] ≥ (1/3)·E_{eq∼E_l}[χ_{|eq|>c√δ} · ‖f‖²_eq] > s/100.

This contradicts our assumption that E_{eq∼E}[χ_{|eq|≤c√δ} · ‖f‖²_eq] ≥ 1 − s/100.

Case ‖e‖² ≤ s: We first show that in this case, almost every equation is satisfied with margin at most c√δ.

Lemma 4.1. The probability that a dictator test equation chosen at random is c√δ-approximately satisfied is at least 1 − 7∛s.

Proof. We begin by showing that for x ∼ N^n, |f(x)| ≥ ∛s/4 except with probability at most 6∛s. When x ∼ N^n, except with probability at most 4∛s, we have |e(x)| ≤ (3/4)∛s (by Markov's inequality, using ‖e‖² ≤ s). Write f^{=1}(x) = Σ_{i=1}^n a_i x_i. When x ∼ N^n, we have that f^{=1}(x) is normal with mean 0 and variance Σ_{i=1}^n a_i² = 1 − ‖e‖² ≥ 3/4. Thus, except with probability at most 2∛s, we have |f^{=1}(x)| ≥ ∛s. Overall, except with probability at most 6∛s, we have

    |f(x)| ≥ |f^{=1}(x)| − |e(x)| ≥ ∛s − (3/4)∛s = ∛s/4.

Assume, for the sake of a contradiction, that with probability at least 7∛s, a dictator test equation has margin at least c√δ. An equation has at most three variables, and each of these is distributed as N^n. With probability at least 7∛s − 6∛s = ∛s, it also holds that the first variable queried by the equation, say f(x), has magnitude |f(x)| ≥ ∛s/4. For such an equation, ‖f‖²_eq ≥ (1/3)·f(x)² ≥ s^{2/3}/48. Hence,

    E_{eq∼E}[χ_{|eq|>c√δ} · ‖f‖²_eq] ≥ ∛s · (s^{2/3}/48) > s/100.

This contradicts our assumption, and the claim follows.

In the sequel, we inspect the change in e as we perturb the input. We show that our assumptions on f (made towards a contradiction) imply that e may change somewhat as a result of a perturbation in a random direction, yet changes noticeably more as a result of a coordinate-wise perturbation. We will later show that these two behaviors are contradictory.

Lemma 4.2 (e is noise-stable under random perturbation). (Under the assumptions we made towards a contradiction.) Let x, x̃ be picked as in the random perturbation test. Then, with probability at least 1 − O(∛s),

    |e(x) − e(x̃)| ≤ O(∛s)·√δ.

Proof. Since the random perturbation test is performed with probability 1/3, from Lemma 4.1, with probability at least 1 − O(∛s), we have

    |f(x) − (1/√2)·f(y) − (1/√2)·f(z)| ≤ c√δ,    |f(x̃) − λ_1·f(y) − λ_2·f(z)| ≤ c√δ.

Since f = f^{=1} + e, and f^{=1} is linear, the above inequalities are really inequalities for e:

    |e(x) − (1/√2)·e(y) − (1/√2)·e(z)| ≤ c√δ,    |e(x̃) − λ_1·e(y) − λ_2·e(z)| ≤ c√δ.

Combining the two inequalities and substituting for λ_1 and λ_2, we get:

    |e(x) − e(x̃)| ≤ 2c√δ + O(√δ)·(|e(y)| + |e(z)|).

By Markov's inequality, except with probability at most ∛s, it holds that e(y)² ≤ ‖e‖²/∛s ≤ s^{2/3}, i.e., |e(y)| ≤ ∛s. The same applies to e(z). Therefore, with probability at least 1 − O(∛s),

    |e(x) − e(x̃)| ≤ 2c√δ + O(∛s·√δ) = O(∛s)·√δ.

Lemma 4.3 (e is noise-sensitive coordinate-wise). (Under the assumptions we made towards a contradiction.) Let x, x̃ ∼ N^n be picked as in the coordinate-wise perturbation test. Then, with probability at least Ω(1), we have

    |e(x) − e(x̃)| ≥ Ω(√(Γδ)).

Proof. Write f^{=1} = Σ_{i=1}^n a_i x_i. Since f = f^{=1} + e, we have

    |e(x) − e(x̃)| ≥ |f^{=1}(x) − f^{=1}(x̃)| − |f(x) − f(x̃)| = |Σ_{i=1}^n a_i(x_i − x̃_i)| − |f(x) − f(x̃)|.

From Lemma 4.1, we know that except with probability O(∛s), the second term |f(x) − f(x̃)| is at most c√δ. Thus it suffices to show that with probability Ω(1), the first term is at least Ω(√(Γδ)) (and to choose c and s sufficiently small).

Recall that the test picks the pair (x, x̃) as follows: first pick a set D ⊆ [n] by including in it every i ∈ [n] independently with probability δ. Pick x, y ∼ N^n independently. For every i ∉ D, set x̃_i = x_i, and for every i ∈ D, set x̃_i = y_i. Thus, for a fixed D,

    Σ_{i=1}^n a_i(x_i − x̃_i) = Σ_{i∈D} a_i(x_i − y_i),

which is a normal variable with mean 0 and variance 2·Σ_{i∈D} a_i². We will show that the variance is at least Γδ with probability 0.9 over the choice of D. Whenever this happens, the random variable exceeds Ω(√(Γδ)) in magnitude with probability Ω(1), and we are done.

Let I = {i ∈ [n] | a_i² ≤ 1/J} be the set of non-influential coordinates. Since f is not a (J, s, Γ)-approximate linear junta, and ‖e‖² ≤ s, we must have Σ_{i∈I} a_i² ≥ Γ. A standard Hoeffding bound now shows that, for a random choice of the set D, the sum Σ_{i∈I∩D} a_i² is at least half its expected value with probability at least 0.9, where the expected value is δ·Σ_{i∈I} a_i², which is at least Γδ:

    Pr_D[Σ_{i∈I∩D} a_i² ≤ (δ/2)·Σ_{i∈I} a_i²] ≤ exp(−Ω((δ·Σ_{i∈I} a_i²)² / Σ_{i∈I} a_i⁴)) ≤ exp(−Ω(J·Γ·δ²)) ≤ 0.1,

where we noted that Σ_{i∈I} a_i⁴ ≤ (1/J)·Σ_{i∈I} a_i² and J = 10/(Γδ²).

The rest of the proof is devoted to showing that Lemma 4.2 and Lemma 4.3 cannot both hold, i.e., a function cannot be noise-stable under random perturbation, yet noise-sensitive under coordinate-wise perturbation. Towards this end, we will construct from e a new function e′ (that happens to be {0, 1}-valued) for which the expected squared change as a result of a coordinate-wise perturbation is much larger than the expected squared change as a result of a random perturbation:

Lemma 4.4. (Under the assumptions we made towards a contradiction, and in particular, assuming Lemma 4.2 and Lemma 4.3.) There is a function e′ such that:

1. E_{(x,x̃)∼R} |e′(x) − e′(x̃)|² ≤ O(∛s/Γ),

2. E_{(x,x̃)∼C} |e′(x) − e′(x̃)|² ≥ Ω(1),

where R is the distribution over pairs in the random perturbation test, and C is the distribution over pairs in the coordinate-wise perturbation test.

The proof of Lemma 4.4 appears in Section 4.1. For sufficiently small s, Lemma 4.4 leads to a contradiction, by the following claim:

Claim 4.5. For any function h ∈ L_2(R^n, N^n),

    E_{(x,x̃)∼R} |h(x) − h(x̃)|² ≥ E_{(x,x̃)∼C} |h(x) − h(x̃)|²,

where R is the distribution over pairs in the random perturbation test, and C is the distribution over pairs in the coordinate-wise perturbation test.

Proof. The expectation E_{(x,x̃)∼C} |h(x) − h(x̃)|² is given by the following expression:

    E_D E_{x_{[n]∖D}} E_{x_D, x̃_D} |h(x) − h(x̃)|²,

where the set of coordinates D ⊆ [n] is chosen by including each i ∈ [n] in D independently with probability δ. Using Var_x[F(x)] = (1/2)·E_{x,x′}[(F(x) − F(x′))²] and the notion of influence as discussed in the preliminaries, the above expression can be re-written as:

    2·E_D E_{x_{[n]∖D}} Var_{x_D}[h(x)] = 2·E_D[I_D(h)] = 2·E_D[Σ_{S∩D≠∅} ĥ(S)²] = 2·Σ_S ĥ(S)² · Pr_D[S ∩ D ≠ ∅].

For every multi-index S ∈ N^n, we have:

    Pr_D[S ∩ D ≠ ∅] = 1 − (1−δ)^{#S} ≤ 1 − (1−δ)^{|S|}.

Here |S| = Σ_{i=1}^n S_i and #S denotes the number of S_i that are non-zero, and hence #S ≤ |S|. Therefore, the expectation is at most 2·Σ_S ĥ(S)² · (1 − (1−δ)^{|S|}).

On the other hand, the expectation E_{(x,x̃)∼R} |h(x) − h(x̃)|² is given by the following expression, for ρ = 1 − δ:

    2·E_x[h(x)²] − 2·E_{x,w}[h(x)·h(ρx + √(1−ρ²)·w)].

We have E_{x,w}[h(x)·h(ρx + √(1−ρ²)·w)] = ⟨h, T_ρ h⟩ = Σ_S ĥ(S)²·ρ^{|S|} and E_x[h(x)²] = Σ_S ĥ(S)², so the expectation is

    2·Σ_S ĥ(S)² · (1 − (1−δ)^{|S|}).

This concludes the proof of Theorem 8, assuming Lemma 4.4.
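The two expectations in the proof are easy to compare numerically for a concrete h; the sketch below takes h = H_2(x_1)·x_2 (a single multi-index with |S| = 3 and #S = 2) and checks both Monte-Carlo estimates against the Fourier formulas above.

```python
import numpy as np

rng = np.random.default_rng(10)
n, delta, T = 2, 0.1, 500_000
H2 = lambda t: (t ** 2 - 1) / np.sqrt(2)
h = lambda x: H2(x[:, 0]) * x[:, 1]          # ||h|| = 1, |S| = 3, #S = 2

x, y = rng.standard_normal((T, n)), rng.standard_normal((T, n))

# Coordinate-wise pair (x, x~): each coordinate re-sampled with prob. delta.
mask = rng.random((T, n)) < delta
xt = np.where(mask, y, x)
E_C = np.mean((h(x) - h(xt)) ** 2)

# Random-perturbation pair: marginally x~ = (1-delta)x + sqrt(2d-d^2)w.
w = rng.standard_normal((T, n))
xr = (1 - delta) * x + np.sqrt(2 * delta - delta ** 2) * w
E_R = np.mean((h(x) - h(xr)) ** 2)

# Fourier predictions: E_C = 2(1-(1-delta)^2), E_R = 2(1-(1-delta)^3) >= E_C.
print(E_C, 2 * (1 - (1 - delta) ** 2))
print(E_R, 2 * (1 - (1 - delta) ** 3))
```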

4.1 Proof of Lemma 4.4

In this section we prove Lemma 4.4. Assume that a function e ∈ L_2(R^n, N^n) with ‖e‖² ≤ s satisfies:

With probability at least 1 − O(∛s) over (x, x̃) ∼ R, it holds that |e(x) − e(x̃)| ≤ d_R = O(∛s)·√δ.    (6)

With probability at least Ω(1) over (x, x̃) ∼ C, it holds that |e(x) − e(x̃)| ≥ d_C = Ω(√(Γδ)).    (7)

We show how to obtain a function e′ ∈ L_2(R^n, N^n) (in fact, {0, 1}-valued) that satisfies:

    E_{(x,x̃)∼R} |e′(x) − e′(x̃)|² ≤ O(∛s/Γ),
    E_{(x,x̃)∼C} |e′(x) − e′(x̃)|² ≥ Ω(1).

To this end, we construct two graphs on R^n, G_R = (R^n, E_R) and G_C = (R^n, E_C), representing the function e under random perturbation and under coordinate-wise perturbation, respectively. The graphs are infinite, and we will be abusing notation in the following, but all the arguments can be made precise by replacing sums by integrals wherever appropriate.

Perturbation Graphs. The graphs G_R and G_C have labels on their vertices and weights on their edges. The label of a vertex x ∈ R^n is e(x). The graph G_R has edges between pairs (x, x̃) such that: (i) the labels on the endpoints are bounded, |e(x)|, |e(x̃)| ≤ 1; (ii) |e(x) − e(x̃)| ≤ d_R. The weight w_R(x, x̃) of the edge is the probability that (x, x̃) is chosen in the random perturbation test. The total edge weight is w_R(E_R) ≥ 1 − O(∛s), from Hypothesis (6) and the observation that ‖e‖² ≤ s, and thus for x ∼ N^n, |e(x)| ≤ 1 except with probability at most s.

The graph G_C has edges between pairs (x, x̃) such that: (i) the labels on the endpoints are bounded, |e(x)|, |e(x̃)| ≤ 1; (ii) |e(x) − e(x̃)| ≥ d_C. The weight w_C(x, x̃) of the edge is the probability that (x, x̃) is chosen in the coordinate-wise perturbation test. The total edge weight is w_C(E_C) ≥ Ω(1), from Hypothesis (7) and since ‖e‖² ≤ s.

Cuts in Perturbation Graphs. We will construct a cut C : R^n → {0, 1}, and this will be our function e′ := C. Denote by w_R(C) and w_C(C) the weight of the edges in the graphs G_R and G_C, respectively, that are cut by C. The cut C will satisfy:

1. (Small R weight is cut:) w_R(C) ≤ O(∛s/Γ).
2. (Large C weight is cut:) w_C(C) ≥ Ω(1).

Let us first check that this proves Lemma 4.4: when choosing (x, x̃) as in the random perturbation test, the probability that the pair (x, x̃) is separated is at most w_R(C) + (1 − w_R(E_R)) ≤ O(∛s/Γ). When choosing (x, x̃) as in the coordinate-wise perturbation test, the probability that the pair (x, x̃) is separated is at least w_C(C) ≥ Ω(1).


APPROXIMATION RESISTANCE AND LINEAR THRESHOLD FUNCTIONS APPROXIMATION RESISTANCE AND LINEAR THRESHOLD FUNCTIONS RIDWAN SYED Abstract. In the boolean Max k CSP (f) problem we are given a predicate f : { 1, 1} k {0, 1}, a set of variables, and local constraints

More information

PCP Theorem and Hardness of Approximation

PCP Theorem and Hardness of Approximation PCP Theorem and Hardness of Approximation An Introduction Lee Carraher and Ryan McGovern Department of Computer Science University of Cincinnati October 27, 2003 Introduction Assuming NP P, there are many

More information

Notes for Lecture 2. Statement of the PCP Theorem and Constraint Satisfaction

Notes for Lecture 2. Statement of the PCP Theorem and Constraint Satisfaction U.C. Berkeley Handout N2 CS294: PCP and Hardness of Approximation January 23, 2006 Professor Luca Trevisan Scribe: Luca Trevisan Notes for Lecture 2 These notes are based on my survey paper [5]. L.T. Statement

More information

Lecture 16: Constraint Satisfaction Problems

Lecture 16: Constraint Satisfaction Problems A Theorist s Toolkit (CMU 18-859T, Fall 2013) Lecture 16: Constraint Satisfaction Problems 10/30/2013 Lecturer: Ryan O Donnell Scribe: Neal Barcelo 1 Max-Cut SDP Approximation Recall the Max-Cut problem

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

Lower bounds on the size of semidefinite relaxations. David Steurer Cornell

Lower bounds on the size of semidefinite relaxations. David Steurer Cornell Lower bounds on the size of semidefinite relaxations David Steurer Cornell James R. Lee Washington Prasad Raghavendra Berkeley Institute for Advanced Study, November 2015 overview of results unconditional

More information

Lecture 5: Probabilistic tools and Applications II

Lecture 5: Probabilistic tools and Applications II T-79.7003: Graphs and Networks Fall 2013 Lecture 5: Probabilistic tools and Applications II Lecturer: Charalampos E. Tsourakakis Oct. 11, 2013 5.1 Overview In the first part of today s lecture we will

More information

Lecture 10: Hardness of approximating clique, FGLSS graph

Lecture 10: Hardness of approximating clique, FGLSS graph CSE 533: The PCP Theorem and Hardness of Approximation (Autumn 2005) Lecture 10: Hardness of approximating clique, FGLSS graph Nov. 2, 2005 Lecturer: Venkat Guruswami and Ryan O Donnell Scribe: Ioannis

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Matt Weinberg Scribe: Sanjeev Arora One of the running themes in this course is

More information

Unique Games Over Integers

Unique Games Over Integers Unique Games Over Integers Ryan O Donnell Yi Wu Yuan Zhou Computer Science Department Carnegie Mellon University {odonnell,yiwu,yuanzhou}@cs.cmu.edu Abstract Consider systems of two-variable linear equations

More information

On the Usefulness of Predicates

On the Usefulness of Predicates On the Usefulness of Predicates Per Austrin University of Toronto austrin@cs.toronto.edu Johan Håstad KTH Royal Institute of Technology ohanh@kth.se Abstract Motivated by the pervasiveness of strong inapproximability

More information

A On the Usefulness of Predicates

A On the Usefulness of Predicates A On the Usefulness of Predicates Per Austrin, Aalto University and KTH Royal Institute of Technology Johan Håstad, KTH Royal Institute of Technology Motivated by the pervasiveness of strong inapproximability

More information

approximation algorithms I

approximation algorithms I SUM-OF-SQUARES method and approximation algorithms I David Steurer Cornell Cargese Workshop, 201 meta-task encoded as low-degree polynomial in R x example: f(x) = i,j n w ij x i x j 2 given: functions

More information

Hardness of Directed Routing with Congestion

Hardness of Directed Routing with Congestion Hardness of Directed Routing with Congestion Julia Chuzhoy Sanjeev Khanna August 28, 2006 Abstract Given a graph G and a collection of source-sink pairs in G, what is the least integer c such that each

More information

Unique Games Conjecture & Polynomial Optimization. David Steurer

Unique Games Conjecture & Polynomial Optimization. David Steurer Unique Games Conjecture & Polynomial Optimization David Steurer Newton Institute, Cambridge, July 2013 overview Proof Complexity Approximability Polynomial Optimization Quantum Information computational

More information

Lecture 5: Derandomization (Part II)

Lecture 5: Derandomization (Part II) CS369E: Expanders May 1, 005 Lecture 5: Derandomization (Part II) Lecturer: Prahladh Harsha Scribe: Adam Barth Today we will use expanders to derandomize the algorithm for linearity test. Before presenting

More information

1 Probabilistically checkable proofs

1 Probabilistically checkable proofs CSC 5170: Theory of Computational Complexity Lecture 12 The Chinese University of Hong Kong 12 April 2010 Interactive proofs were introduced as a generalization of the classical notion of a proof in the

More information

BCCS: Computational methods for complex systems Wednesday, 16 June Lecture 3

BCCS: Computational methods for complex systems Wednesday, 16 June Lecture 3 BCCS: Computational methods for complex systems Wednesday, 16 June 2010 Lecturer: Ashley Montanaro 1 Lecture 3 Hardness of approximation 1 Overview In my lectures so far, we ve mostly restricted ourselves

More information

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous: MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is

More information

Provable Approximation via Linear Programming

Provable Approximation via Linear Programming Chapter 7 Provable Approximation via Linear Programming One of the running themes in this course is the notion of approximate solutions. Of course, this notion is tossed around a lot in applied work: whenever

More information

Approximating MAX-E3LIN is NP-Hard

Approximating MAX-E3LIN is NP-Hard Approximating MAX-E3LIN is NP-Hard Evan Chen May 4, 2016 This lecture focuses on the MAX-E3LIN problem. We prove that approximating it is NP-hard by a reduction from LABEL-COVER. 1 Introducing MAX-E3LIN

More information

Proclaiming Dictators and Juntas or Testing Boolean Formulae

Proclaiming Dictators and Juntas or Testing Boolean Formulae Proclaiming Dictators and Juntas or Testing Boolean Formulae Michal Parnas The Academic College of Tel-Aviv-Yaffo Tel-Aviv, ISRAEL michalp@mta.ac.il Dana Ron Department of EE Systems Tel-Aviv University

More information

Lecture 3: Semidefinite Programming

Lecture 3: Semidefinite Programming Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality

More information

Overview. 1 Introduction. 2 Preliminary Background. 3 Unique Game. 4 Unique Games Conjecture. 5 Inapproximability Results. 6 Unique Game Algorithms

Overview. 1 Introduction. 2 Preliminary Background. 3 Unique Game. 4 Unique Games Conjecture. 5 Inapproximability Results. 6 Unique Game Algorithms Overview 1 Introduction 2 Preliminary Background 3 Unique Game 4 Unique Games Conjecture 5 Inapproximability Results 6 Unique Game Algorithms 7 Conclusion Antonios Angelakis (NTUA) Theory of Computation

More information

arxiv:cs/ v1 [cs.cc] 7 Sep 2006

arxiv:cs/ v1 [cs.cc] 7 Sep 2006 Approximation Algorithms for the Bipartite Multi-cut problem arxiv:cs/0609031v1 [cs.cc] 7 Sep 2006 Sreyash Kenkre Sundar Vishwanathan Department Of Computer Science & Engineering, IIT Bombay, Powai-00076,

More information

CSE 291: Fourier analysis Chapter 2: Social choice theory

CSE 291: Fourier analysis Chapter 2: Social choice theory CSE 91: Fourier analysis Chapter : Social choice theory 1 Basic definitions We can view a boolean function f : { 1, 1} n { 1, 1} as a means to aggregate votes in a -outcome election. Common examples are:

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in

More information

Low Degree Test with Polynomially Small Error

Low Degree Test with Polynomially Small Error Low Degree Test with Polynomially Small Error Dana Moshkovitz January 31, 2016 Abstract A long line of work in Theoretical Computer Science shows that a function is close to a low degree polynomial iff

More information

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions

Economics 204 Fall 2011 Problem Set 2 Suggested Solutions Economics 24 Fall 211 Problem Set 2 Suggested Solutions 1. Determine whether the following sets are open, closed, both or neither under the topology induced by the usual metric. (Hint: think about limit

More information

18.5 Crossings and incidences

18.5 Crossings and incidences 18.5 Crossings and incidences 257 The celebrated theorem due to P. Turán (1941) states: if a graph G has n vertices and has no k-clique then it has at most (1 1/(k 1)) n 2 /2 edges (see Theorem 4.8). Its

More information

Average Case Complexity: Levin s Theory

Average Case Complexity: Levin s Theory Chapter 15 Average Case Complexity: Levin s Theory 1 needs more work Our study of complexity - NP-completeness, #P-completeness etc. thus far only concerned worst-case complexity. However, algorithms designers

More information

Limits to Approximability: When Algorithms Won't Help You. Note: Contents of today s lecture won t be on the exam

Limits to Approximability: When Algorithms Won't Help You. Note: Contents of today s lecture won t be on the exam Limits to Approximability: When Algorithms Won't Help You Note: Contents of today s lecture won t be on the exam Outline Limits to Approximability: basic results Detour: Provers, verifiers, and NP Graph

More information

Hardness of Approximation

Hardness of Approximation Hardness of Approximation We have seen several methods to find approximation algorithms for NP-hard problems We have also seen a couple of examples where we could show lower bounds on the achievable approxmation

More information

Monotone Submodular Maximization over a Matroid

Monotone Submodular Maximization over a Matroid Monotone Submodular Maximization over a Matroid Yuval Filmus January 31, 2013 Abstract In this talk, we survey some recent results on monotone submodular maximization over a matroid. The survey does not

More information

Improved NP-inapproximability for 2-variable linear equations

Improved NP-inapproximability for 2-variable linear equations Improved NP-inapproximability for 2-variable linear equations Johan Håstad Sangxia Huang Rajsekar Manokaran Ryan O Donnell John Wright December 23, 2015 Abstract An instance of the 2-Lin(2) problem is

More information

PCP Theorem And Hardness Of Approximation For MAX-SATISFY Over Finite Fields

PCP Theorem And Hardness Of Approximation For MAX-SATISFY Over Finite Fields MM Research Preprints KLMM, AMSS, Academia Sinica Vol. 7, 1 15, July, 008 1 PCP Theorem And Hardness Of Approximation For MAX-SATISFY Over Finite Fields Shangwei Zhao Key Laboratory of Mathematics Mechanization

More information

1 Distributional problems

1 Distributional problems CSCI 5170: Computational Complexity Lecture 6 The Chinese University of Hong Kong, Spring 2016 23 February 2016 The theory of NP-completeness has been applied to explain why brute-force search is essentially

More information

Chapter 11 - Sequences and Series

Chapter 11 - Sequences and Series Calculus and Analytic Geometry II Chapter - Sequences and Series. Sequences Definition. A sequence is a list of numbers written in a definite order, We call a n the general term of the sequence. {a, a

More information

Integer Linear Programs

Integer Linear Programs Lecture 2: Review, Linear Programming Relaxations Today we will talk about expressing combinatorial problems as mathematical programs, specifically Integer Linear Programs (ILPs). We then see what happens

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Lecture 2: November 9

Lecture 2: November 9 Semidefinite programming and computational aspects of entanglement IHP Fall 017 Lecturer: Aram Harrow Lecture : November 9 Scribe: Anand (Notes available at http://webmitedu/aram/www/teaching/sdphtml)

More information

6.842 Randomness and Computation Lecture 5

6.842 Randomness and Computation Lecture 5 6.842 Randomness and Computation 2012-02-22 Lecture 5 Lecturer: Ronitt Rubinfeld Scribe: Michael Forbes 1 Overview Today we will define the notion of a pairwise independent hash function, and discuss its

More information

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning David Steurer Cornell Approximation Algorithms and Hardness, Banff, August 2014 for many problems (e.g., all UG-hard ones): better guarantees

More information

Approximability of Constraint Satisfaction Problems

Approximability of Constraint Satisfaction Problems Approximability of Constraint Satisfaction Problems Venkatesan Guruswami Carnegie Mellon University October 2009 Venkatesan Guruswami (CMU) Approximability of CSPs Oct 2009 1 / 1 Constraint Satisfaction

More information

Lecture 29: Computational Learning Theory

Lecture 29: Computational Learning Theory CS 710: Complexity Theory 5/4/2010 Lecture 29: Computational Learning Theory Instructor: Dieter van Melkebeek Scribe: Dmitri Svetlov and Jake Rosin Today we will provide a brief introduction to computational

More information

Parallel Repetition of Fortified Games

Parallel Repetition of Fortified Games Electronic Colloquium on Computational Complexity, Report No. 54 (2014) Parallel Repetition of Fortified Games Dana Moshkovitz April 16, 2014 Abstract The Parallel Repetition Theorem upper-bounds the value

More information

Lec. 11: Håstad s 3-bit PCP

Lec. 11: Håstad s 3-bit PCP Limits of Approximation Algorithms 15 April, 2010 (TIFR) Lec. 11: Håstad s 3-bit PCP Lecturer: Prahladh Harsha cribe: Bodhayan Roy & Prahladh Harsha In last lecture, we proved the hardness of label cover

More information

Label Cover Algorithms via the Log-Density Threshold

Label Cover Algorithms via the Log-Density Threshold Label Cover Algorithms via the Log-Density Threshold Jimmy Wu jimmyjwu@stanford.edu June 13, 2017 1 Introduction Since the discovery of the PCP Theorem and its implications for approximation algorithms,

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

SDP gaps for 2-to-1 and other Label-Cover variants

SDP gaps for 2-to-1 and other Label-Cover variants SDP gaps for -to-1 and other Label-Cover variants Venkatesan Guruswami 1, Subhash Khot, Ryan O Donnell 1, Preyas Popat, Madhur Tulsiani 3, and Yi Wu 1 1 Computer Science Department Carnegie Mellon University

More information

An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint

An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint Thomas Hofmeister Informatik 2, Universität Dortmund, 44221 Dortmund, Germany th01@ls2.cs.uni-dortmund.de Abstract. We present a randomized

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Lecture 3: Lower Bounds for Bandit Algorithms

Lecture 3: Lower Bounds for Bandit Algorithms CMSC 858G: Bandits, Experts and Games 09/19/16 Lecture 3: Lower Bounds for Bandit Algorithms Instructor: Alex Slivkins Scribed by: Soham De & Karthik A Sankararaman 1 Lower Bounds In this lecture (and

More information

Roth s Theorem on 3-term Arithmetic Progressions

Roth s Theorem on 3-term Arithmetic Progressions Roth s Theorem on 3-term Arithmetic Progressions Mustazee Rahman 1 Introduction This article is a discussion about the proof of a classical theorem of Roth s regarding the existence of three term arithmetic

More information

Improved NP-inapproximability for 2-variable linear equations

Improved NP-inapproximability for 2-variable linear equations Improved NP-inapproximability for 2-variable linear equations Johan Håstad johanh@csc.kth.se Ryan O Donnell odonnell@cs.cmu.edu (full version) Sangxia Huang sangxia@csc.kth.se July 3, 2014 Rajsekar Manokaran

More information

CS286.2 Lecture 8: A variant of QPCP for multiplayer entangled games

CS286.2 Lecture 8: A variant of QPCP for multiplayer entangled games CS286.2 Lecture 8: A variant of QPCP for multiplayer entangled games Scribe: Zeyu Guo In the first lecture, we saw three equivalent variants of the classical PCP theorems in terms of CSP, proof checking,

More information

DD2446 Complexity Theory: Problem Set 4

DD2446 Complexity Theory: Problem Set 4 DD2446 Complexity Theory: Problem Set 4 Due: Friday November 8, 2013, at 23:59. Submit your solutions as a PDF le by e-mail to jakobn at kth dot se with the subject line Problem set 4: your full name.

More information

Reed-Muller testing and approximating small set expansion & hypergraph coloring

Reed-Muller testing and approximating small set expansion & hypergraph coloring Reed-Muller testing and approximating small set expansion & hypergraph coloring Venkatesan Guruswami Carnegie Mellon University (Spring 14 @ Microsoft Research New England) 2014 Bertinoro Workshop on Sublinear

More information

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get

If g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get 18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in

More information

Lecture 21 (Oct. 24): Max Cut SDP Gap and Max 2-SAT

Lecture 21 (Oct. 24): Max Cut SDP Gap and Max 2-SAT CMPUT 67: Approximation Algorithms Fall 014 Lecture 1 Oct. 4): Max Cut SDP Gap and Max -SAT Lecturer: Zachary Friggstad Scribe: Chris Martin 1.1 Near-Tight Analysis of the Max Cut SDP Recall the Max Cut

More information

1 Agenda. 2 History. 3 Probabilistically Checkable Proofs (PCPs). Lecture Notes Definitions. PCPs. Approximation Algorithms.

1 Agenda. 2 History. 3 Probabilistically Checkable Proofs (PCPs). Lecture Notes Definitions. PCPs. Approximation Algorithms. CS 221: Computational Complexity Prof. Salil Vadhan Lecture Notes 20 April 12, 2010 Scribe: Jonathan Pines 1 Agenda. PCPs. Approximation Algorithms. PCPs = Inapproximability. 2 History. First, some history

More information

A Strong Parallel Repetition Theorem for Projection Games on Expanders

A Strong Parallel Repetition Theorem for Projection Games on Expanders Electronic Colloquium on Computational Complexity, Report No. 142 (2010) A Strong Parallel Repetition Theorem for Projection Games on Expanders Ran Raz Ricky Rosen Abstract The parallel repetition theorem

More information

A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover

A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover Irit Dinur Venkatesan Guruswami Subhash Khot Oded Regev April, 2004 Abstract Given a k-uniform hypergraph, the Ek-Vertex-Cover problem

More information

Fourier analysis of boolean functions in quantum computation

Fourier analysis of boolean functions in quantum computation Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge

More information

Lecture 17 (Nov 3, 2011 ): Approximation via rounding SDP: Max-Cut

Lecture 17 (Nov 3, 2011 ): Approximation via rounding SDP: Max-Cut CMPUT 675: Approximation Algorithms Fall 011 Lecture 17 (Nov 3, 011 ): Approximation via rounding SDP: Max-Cut Lecturer: Mohammad R. Salavatipour Scribe: based on older notes 17.1 Approximation Algorithm

More information

Lecture 16 November 6th, 2012 (Prasad Raghavendra)

Lecture 16 November 6th, 2012 (Prasad Raghavendra) 6.841: Advanced Complexity Theory Fall 2012 Lecture 16 November 6th, 2012 (Prasad Raghavendra) Prof. Dana Moshkovitz Scribe: Geng Huang 1 Overview In this lecture, we will begin to talk about the PCP Theorem

More information

Grothendieck s Inequality

Grothendieck s Inequality Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A

More information

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003 CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming

More information

Lecture 6: Powering Completed, Intro to Composition

Lecture 6: Powering Completed, Intro to Composition CSE 533: The PCP Theorem and Hardness of Approximation (Autumn 2005) Lecture 6: Powering Completed, Intro to Composition Oct. 17, 2005 Lecturer: Ryan O Donnell and Venkat Guruswami Scribe: Anand Ganesh

More information