A strong direct product theorem in terms of the smooth rectangle bound

Rahul Jain
Centre for Quantum Technologies and Department of Computer Science, National University of Singapore

Penghui Yao
Centre for Quantum Technologies, National University of Singapore

Abstract

A strong direct product theorem states that, in order to solve k instances of a problem, if we provide less than k times the resource required to compute one instance, then the probability of overall success is exponentially small in k. In this paper, we consider the model of two-way public-coin communication complexity and show a strong direct product theorem for all relations in terms of the smooth rectangle bound, introduced by Jain and Klauck [6] as a generic lower bound method in this model. Our result therefore implies a strong direct product theorem for all relations for which an (asymptotically) optimal lower bound can be provided using the smooth rectangle bound. In fact, we are not aware of any relation for which it is known that the smooth rectangle bound does not provide an optimal lower bound. This lower bound subsumes many of the other known lower bound methods, for example the rectangle bound (a.k.a. the corruption bound) [3], the smooth discrepancy bound (a.k.a. the γ₂ bound [28], which in turn subsumes the discrepancy bound), the subdistribution bound [7] and the conditional min-entropy bound [4]. As a consequence, our result reproves some of the known strong direct product results, for example for the Inner Product [26] function and the Set-Disjointness [25, 4] function. Recently, the smooth rectangle bound has been used to provide new tight lower bounds for several functions, for example for the Gap-Hamming Distance [8, 33] partial function, the Greater-Than function [35] and the Tribes [2] function. These results, along with our result, imply strong direct product theorems for these functions.

The smooth rectangle bound has also been used to provide near optimal lower bounds for several important functions and relations used to show exponential separations between classical and quantum communication complexity, for example by Raz [30], Gavinsky [] and Klartag and Regev [32]. These results, combined with our result, imply near optimal strong direct product results for these functions and relations. We show our result using information theoretic arguments. A key tool we use is a sampling protocol due to Braverman [5], in fact a modification of it used by Kerenidis, Laplante, Lerays, Roland and Xiao [23].

Introduction

Given a model of computation, suppose solving one instance of a given problem f with probability of success p < 1 requires c units of some resource. A natural question that may be asked is: how much resource is needed to solve f^k, that is, k instances of the same problem, simultaneously? A naive way is to run the optimal protocol for f, k times in parallel, which requires c·k units of resource; however, the probability of overall success is p^k (exponentially small in k). A strong direct product conjecture for f states that this is essentially optimal, that is, if only o(k·c) units of resource are provided to any protocol solving f^k, then the probability of overall success is at most p^{Ω(k)}. Proving or disproving strong direct product conjectures in various models of computation has been a central task in theoretical computer science, notable examples of such results being Yao's XOR lemma [38] and Raz's theorem for two-prover games [29]. Readers may refer to [25, 4, 8] for a good discussion of known results in different models of computation.

In the present work, we consider the model of two-party two-way public-coin communication complexity [37] and consider the direct product question in this model. In this model, there are two parties who wish to compute a joint function (more generally, a relation) of their inputs, by doing local computation, sharing public coins and exchanging messages. The resource counted is the number of bits communicated between them. The textbook by Kushilevitz and Nisan [26] is an excellent reference for communication complexity. Much effort has been made towards investigating direct product questions in this model and strong direct product theorems have been shown for many different functions, for example Set-Disjointness [25, 4], Inner Product [27], Pointer Chasing [8], etc. To the best of our knowledge, it is not known if the strong direct product conjecture fails to hold for any function or relation in this model.
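The p^k decay of the naive strategy is easy to see numerically. Below is a small Monte Carlo sketch (illustrative only; the "protocol" is just a biased coin flip with success probability p, standing in for an arbitrary single-instance protocol): running k independent copies in parallel succeeds on all instances with probability concentrating around p^k.

```python
import random

def run_once(p, rng):
    """One run of a protocol that succeeds with probability p."""
    return rng.random() < p

def solve_k_instances(p, k, rng):
    """Naive parallel repetition: all k independent runs must succeed."""
    return all(run_once(p, rng) for _ in range(k))

rng = random.Random(0)
p, k, trials = 0.9, 10, 200_000
est = sum(solve_k_instances(p, k, rng) for _ in range(trials)) / trials
print(f"estimated overall success: {est:.4f}, p**k = {p ** k:.4f}")
```

A strong direct product theorem asserts that this exponential decay is essentially unavoidable even for protocols given o(k·c) resources in total, not just for independent repetition.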
Therefore, whether the strong direct product conjecture holds for all relations in this model remains one of the major open problems in communication complexity. In the model of constant-round public-coin communication complexity, a strong direct product result has recently been shown to hold for all relations by Jain, Pereszlényi and Yao [8]. The work [8] built on a previous result due to Jain [4], showing a strong direct product result for all relations in the model of one-way public-coin communication complexity (where a single message is sent from Alice to Bob, who then determines the answer). The weaker direct sum conjecture, which states that solving k independent instances of a problem with constant success probability requires k times the resource needed to compute one instance with constant success probability, has also been extensively investigated in different models of communication complexity and has met with better success. Direct sum theorems have been shown to hold for all relations in the public-coin one-way model [2], the entanglement-assisted quantum one-way model [20], the public-coin simultaneous message passing model [2], the private-coin simultaneous message passing model [5], the constant-round public-coin two-way model [6] and the model of two-way distributional communication complexity under product distributions [3]. Again, please refer to [25, 4, 8] for a good discussion. Another major focus in communication complexity has been to investigate generic lower bound methods that apply to all functions (and possibly to all relations).
In the model we are concerned with, various generic lower bound methods are known, for example the partition bound [6], the information complexity [9], and the smooth rectangle bound [6], which in turn subsumes the rectangle bound (a.k.a. the corruption bound) [36,, 3, 24, 4], the smooth discrepancy bound (a.k.a. the γ₂ bound [28], which in turn subsumes the discrepancy bound), the subdistribution bound [7] and the conditional min-entropy bound [4]. Proving strong direct product results in terms of these lower bound methods is a reasonable approach to attacking the general question. Indeed, many lower bounds have been shown to satisfy strong direct product theorems, for example the discrepancy bound [27], the subdistribution bound under product distributions [7], the smooth discrepancy bound [34] and the conditional min-entropy bound [4].

Our result

In the present work, we show a strong direct product theorem in terms of the smooth rectangle bound, introduced by Jain and Klauck [6], which generalizes the rectangle bound (a.k.a. the corruption bound) [36,, 3, 24, 4]. Roughly speaking, the rectangle bound for a relation f ⊆ X × Y × Z under a distribution µ, with respect to an element z ∈ Z and error ε, tries to capture the size (under µ) of a largest rectangle for which z is a right answer for a (1 − ε) fraction of the inputs inside the rectangle. It is not hard to argue that the rectangle bound forms a lower bound on the distributional communication complexity of f under µ. The smooth rectangle bound for f further captures the maximum, over all relations g that are close to f under µ, of the rectangle bound of g under µ. The distributional error setting can eventually be related to the worst case error setting via the well known Yao's principle [36]. Jain and Klauck showed that the smooth rectangle bound is stronger than every lower bound method we mentioned above, except the partition bound and the information complexity. Jain and Klauck showed that the partition bound subsumes the smooth rectangle bound, and in a recent work Kerenidis, Laplante, Lerays, Roland and Xiao [23] showed that the information complexity subsumes the smooth rectangle bound (building on the work of Braverman and Weinstein [7], who showed that the information complexity subsumes the discrepancy bound). New lower bounds for specific functions have been discovered using the smooth rectangle bound, for example Chakrabarti and Regev's [8] optimal lower bound for the Gap-Hamming Distance partial function. Klauck [25] used the smooth rectangle bound to show a strong direct product result for the Set-Disjointness function, via exhibiting a lower bound on a related function.
On the other hand, as far as we know, no function (or relation) is known for which its smooth rectangle bound is (asymptotically) strictly smaller than its two-way public-coin communication complexity. Hence establishing whether or not the smooth rectangle bound is a tight lower bound for all functions and relations in this model is an important open question. Our result is as follows.

Theorem 1.1. Let X, Y, Z be finite sets and f ⊆ X × Y × Z be a relation. Let µ be a distribution on X × Y. Let z ∈ Z and β = Pr_{(x,y)←µ}[f(x, y) = {z}]². Let ε′, δ > 0. There exists a small enough ε > 0 such that the following holds. For all integers t ≥ 1,¹

R^{pub}_{1−(1−ε)^{ε²t/32}}(f^t) ≥ (ε²/32) · t · (ε · srec^{z,µ}_{(1+ε′)δ/β, δ}(f) − 2).

Above, R^{pub}_·(·) represents the two-way public-coin communication complexity and srec(·) represents the smooth rectangle bound (please refer to Section 2 for precise definitions). Our result implies a strong direct product theorem for all relations for which an (asymptotically) optimal lower bound can be provided using the smooth rectangle bound.

Jain and Klauck [6] provided an alternate definition (Definition 1.3) of the smooth rectangle bound for (partial) functions in terms of linear programs and showed a relationship between the two definitions. We show a tighter relationship between the two definitions in Lemma 1.4. Combining Lemma 1.4 with Theorem 1.1, we get the following result.

Theorem 1.2. Let f : X × Y → Z be a (partial) function. For every ɛ ∈ (0, 1), there exists a small enough η ∈ (0, 1/3) such that the following holds. For all integers t ≥ 1,

R^{pub}_{1−(1−η)^{η²t/32}}(f^t) ≥ (η²/32) · t · (η · log srec_ɛ(f) − 3 log(1/ɛ) − 2).

¹ For a relation f with R^{pub}_ε(f) = O(1), a strong direct product result can be shown via direct arguments [4].
² With a slight abuse of notation, we write f(x, y) = {z ∈ Z | (x, y, z) ∈ f}, and f⁻¹(z) = {(x, y) : (x, y, z) ∈ f}.

As a consequence, our results reprove some of the known strong direct product results, for example for Inner Product [26] and Set-Disjointness [25, 4]. Recently, the smooth rectangle bound has been used to provide new tight lower bounds for several functions, for example for the Gap-Hamming Distance [8, 33] partial function and the Greater-Than function [35]. These results, along with our result, imply strong direct product theorems for these functions. The smooth rectangle bound has also been used to provide near optimal lower bounds for several important functions and relations used to show exponential separations between classical and quantum communication complexity, for example by Raz [30], Gavinsky [] and Klartag and Regev [32]. These results, combined with our result, imply near optimal strong direct product results for these functions and relations.

In a recent work, Harsha and Jain [2] have shown that the smooth rectangle bound provides an optimal lower bound of Ω(n) for the Tribes function. For this function, all the other weaker lower bound methods mentioned before, like the rectangle bound, the subdistribution bound, the smooth discrepancy bound, the conditional min-entropy bound etc., fail to provide an optimal lower bound, since they are all O(√n). Earlier, Jayram, Kumar and Sivakumar [22] had shown a lower bound of Ω(n) using information complexity. The result of [2], along with Theorem 1.2, implies a strong direct product result for the Tribes function. This adds to the growing list of functions for which a strong direct product result can be shown via Theorem 1.2.

In [23], Kerenidis et al. introduced the relaxed partition bound (a weaker version of the partition bound [6]) and showed it to be stronger than the smooth rectangle bound. It is easily seen (by comparing the corresponding linear programs) that the smooth rectangle bound and the relaxed partition bound are in fact equivalent for boolean functions (and more generally when the size of the output set is a constant).
Thus our result also implies a strong direct product theorem in terms of the relaxed partition bound for boolean functions (and more generally when the size of the output set is a constant).

Our techniques

The broad argument of the proof of our result is as follows. We show our result in the distributional error setting and translate it to the worst case error setting using the well known Yao's principle [36]. Let f be a relation, µ be a distribution on X × Y, and c be the smooth rectangle bound of f under the distribution µ with output z ∈ Z. Consider a protocol Π which computes f^k with inputs drawn from distribution µ^k and communication o(c·k) bits. Let C be a subset of the coordinates {1, 2, ..., k}. If the probability that Π computes all the instances in C correctly is as small as desired, then we are done. Otherwise, we exhibit a new coordinate j ∉ C such that, conditioned on success in C, the probability of the protocol Π answering correctly in the j-th coordinate is bounded away from 1. Since µ could be a non-product distribution, we introduce a new random variable R_j such that, conditioned on it and X_j Y_j (the input in the j-th coordinate), Alice's and Bob's inputs in the other coordinates become independent. The use of such a variable to handle non-product distributions appears in many previous works, for example [2, 3, 3, 4, 8]. Let the random variables X′_j Y′_j R′_j M′ represent the input in the j-th coordinate, the new variable R_j and the message transcript of Π, conditioned on success in C. The first useful property that we observe is that the joint distribution of X′_j Y′_j R′_j M′ can be written as

Pr[X′_j Y′_j R′_j M′ = x y r_j m] = (1/q) · µ(x, y) · u_x(r_j, m) · u_y(r_j, m),

where u_x, u_y are functions and q is a positive real number. The marginal distribution of X′_j Y′_j is no longer µ though.
However (using arguments as in [4, 8]), one can show that the distribution of X′_j Y′_j is close, in ℓ₁ distance, to µ and that

I(X′_j ; R′_j M′ | Y′_j) + I(Y′_j ; R′_j M′ | X′_j) ≤ O(c),

where I(· ; ·) represents the mutual information (please refer to Section 2 for precise definitions).
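As a concrete illustration of the mutual-information quantity appearing in the bound above, the following sketch (standard definitions only; the toy joint distribution is hypothetical) computes I(X;Y) both as the relative entropy between the joint distribution and the product of its marginals, and in the averaged conditional form used in the preliminaries:

```python
import math

def relative_entropy(mu, lam):
    """S(mu || lam) = sum_x mu(x) * log2(mu(x) / lam(x))."""
    return sum(p * math.log2(p / lam[x]) for x, p in mu.items() if p > 0)

# a small correlated joint distribution on {0,1} x {0,1}
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
pX = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
pY = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}

# I(X;Y) as relative entropy to the product of the marginals
prod = {(x, y): pX[x] * pY[y] for x in (0, 1) for y in (0, 1)}
I = relative_entropy(joint, prod)

# equivalently, I(X;Y) = E_{y <- Y}[ S(X_y || X) ]
I2 = sum(pY[y] * relative_entropy(
             {x: joint[(x, y)] / pY[y] for x in (0, 1)}, pX)
         for y in (0, 1))
assert abs(I - I2) < 1e-9
print(I)
```

Both computations agree, matching the identity I(X;Y) = E_{y←Y}[S(X_y‖X)] stated in Section 2.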

Now, assume for contradiction that the success in the j-th coordinate in Π, conditioned on success in C, is large, like 0.99. Using the conditions obtained in the previous paragraph, we argue that there exists a zero-communication public-coin protocol Π′ between Alice and Bob, with inputs drawn from µ. In Π′, Alice and Bob are allowed to abort the protocol or to output an element of Z. We show that the probability of non-abort for this protocol is large, like 2^{−c}, and that, conditioned on non-abort, the probability that Alice and Bob output a correct answer for their inputs is also large, like 0.99. This allows us to exhibit (by fixing the public coins of Π′ appropriately) a large rectangle (with weight under µ like 2^{−c}) such that z is a correct answer for a large fraction (like 0.99) of the inputs inside the rectangle. This shows that the rectangle bound of f, under µ with output z, is smaller than c. With careful analysis, we are also able to show that the smooth rectangle bound of f under µ, with output z, is smaller than c, reaching a contradiction to the definition of c.

The sampling protocol that we use to obtain the public-coin zero-communication protocol is the same as that in Kerenidis et al. [23], which in turn is a modification of a protocol due to Braverman [5] (see footnote 3; a variation of it also appears in [7]). However, our analysis of the protocol's correctness deviates significantly in parts from the earlier works [23, 5, 7], due to the fact that for us the marginal distribution of X′Y′ need not be the same as that of µ; in fact, for some inputs (x, y), the probability under the two distributions can be significantly different. There is another important original contribution of our work, not present in the previous works [23, 5, 7]. We observe a crucial property of the protocol Π′ which turns out to be very important in our arguments.
The property is that for the bad inputs (x, y), for which the distribution of Π′'s sample of R′_j M′, conditioned on non-abort, deviates a lot from the desired distribution (R′_j M′ | X′_j Y′_j = xy), their probability (as compared to Pr[X′_j Y′_j = xy]) is nicely reduced in the final distribution of Π′, conditioned on non-abort. This helps us to argue that the distribution of the inputs and outputs in Π′, conditioned on non-abort, is close in ℓ₁ distance to X′_j Y′_j R′_j M′, implying good success in Π′, conditioned on non-abort.

Organization. In Section 2, we present some necessary background, definitions and preliminaries. In Section 3, we prove our main result, Theorem 1.1. We defer some proofs to the Appendix.

2 Preliminaries

Information theory

We use capital letters (e.g. X, Y, Z) or letters in bold (e.g. a, b, α, β) to represent random variables and use calligraphic letters (e.g. X, Y, Z) to represent sets. For an integer n ≥ 1, let [n] represent the set {1, 2, ..., n}. Let X, Y be finite sets and k be a natural number. Let X^k be the set X × ⋯ × X, the cross product of X, k times. Let µ be a (probability) distribution on X. Let µ(x) represent the probability of x ∈ X according to µ. For any subset S ⊆ X, define µ(S) = Σ_{x∈S} µ(x). Let X be a random variable distributed according to µ, which we denote by X ∼ µ. We use the same symbol to represent a random variable and its distribution whenever it is clear from the context. The expectation of a function f on X is denoted by E_{x←X}[f(x)] = Σ_{x∈X} Pr[X = x] · f(x). The entropy of X is defined as H(X) = −Σ_x µ(x) log µ(x) (log and ln represent the logarithms to base 2 and e, respectively). For two distributions µ, λ on X, the distribution µ ⊗ λ is defined as (µ ⊗ λ)(x₁, x₂) = µ(x₁) · λ(x₂). Let µ^k = µ ⊗ ⋯ ⊗ µ, k times. The ℓ₁ distance between µ and λ is defined to be half of the ℓ₁ norm of µ − λ; that is, ‖λ − µ‖ = (1/2) Σ_x |λ(x) − µ(x)| = max_{S⊆X} |λ(S) − µ(S)|. We say that λ is ε-close to µ if ‖λ − µ‖ ≤ ε.

³ A protocol achieving a similar task, however working only for product distributions on inputs, was first shown by Jain, Radhakrishnan and Sen [20].

The relative entropy

between distributions X and Y on X is defined as S(X‖Y) = E_{x←X}[log(Pr[X = x]/Pr[Y = x])]. The relative min-entropy between them is defined as S_∞(X‖Y) = max_{x∈X} log(Pr[X = x]/Pr[Y = x]). It is easy to see that S(X‖Y) ≤ S_∞(X‖Y). Let X, Y, Z be jointly distributed random variables. Let Y_x denote the distribution of Y conditioned on X = x. The conditional entropy of Y given X is defined as H(Y|X) = E_{x←X}[H(Y_x)] = H(XY) − H(X). The mutual information between X and Y is defined as I(X ; Y) = H(X) + H(Y) − H(XY) = E_{y←Y}[S(X_y‖X)] = E_{x←X}[S(Y_x‖Y)]. The conditional mutual information between X and Y, conditioned on Z, is defined as I(X ; Y | Z) = E_{z←Z}[I(X ; Y | Z = z)] = H(X|Z) + H(Y|Z) − H(XY|Z). The following chain rule for mutual information is easily seen: I(X ; YZ) = I(X ; Z) + I(X ; Y | Z). We will need the following basic facts. A very good text for reference on information theory is [10].

Fact 2.1. Relative entropy is jointly convex in its arguments. That is, for distributions µ, µ′, λ, λ′ on X and p ∈ [0, 1]:

S(pµ + (1−p)µ′ ‖ pλ + (1−p)λ′) ≤ p · S(µ‖λ) + (1−p) · S(µ′‖λ′).

Fact 2.2. Relative entropy satisfies the following chain rule. Let X′Y′ and XY be random variables on X × Y. It holds that

S(X′Y′ ‖ XY) = S(X′‖X) + E_{x←X′}[S(Y′_x ‖ Y_x)].

In particular, S(X′Y′ ‖ X ⊗ Y) = S(X′‖X) + E_{x←X′}[S(Y′_x ‖ Y)] ≥ S(X′‖X) + S(Y′‖Y). The last inequality follows from Fact 2.1.

Fact 2.3. Let X′Y′ and XY be random variables on X × Y. It holds that S(X′Y′ ‖ X ⊗ Y) ≥ S(X′Y′ ‖ X′ ⊗ Y′) = I(X′ ; Y′).

The following fact follows from Fact 2.2 and Fact 2.3.

Fact 2.4. Given random variables X′Y′ and XY on X × Y, it holds that E_{x←X′}[S(Y′_x ‖ Y)] ≥ E_{x←X′}[S(Y′_x ‖ Y′)] = I(X′ ; Y′).

Fact 2.5. For distributions λ and µ: 0 ≤ ‖λ − µ‖ ≤ √(S(λ‖µ)).

Fact 2.6. (Classical substate theorem [9]) Let X, X′ be two distributions on X. For any δ ∈ (0, 1), it holds that

Pr_{x←X′}[ Pr[X′ = x]/Pr[X = x] ≥ 2^{(S(X′‖X)+1)/δ} ] ≤ δ.

We will need the following lemma. Its proof is deferred to the Appendix.

Lemma 2.7.
Given random variables A, A′ and ε > 0, if ‖A′ − A‖ ≤ ε, then for any r ∈ (0, 1),

Pr_{a←A}[ Pr[A′ = a]/Pr[A = a] ≤ 1 − ε/r ] ≤ 2r;  and  Pr_{a←A}[ Pr[A′ = a]/Pr[A = a] ≥ 1 + ε/r ] ≤ 2r + ε.

Communication complexity

Let X, Y, Z be finite sets, f ⊆ X × Y × Z be a relation and ε > 0. In a two-way public-coin communication protocol, Alice is given x ∈ X and Bob is given y ∈ Y. They are supposed to output z ∈ Z such that (x, y, z) ∈ f, via exchanging messages and doing local computations. They may share public coins before the inputs are revealed to them. We assume that the last

log |Z| bits of the transcript are the output of the protocol. Let R^{pub}_ε(f) represent the two-way public-coin randomized communication complexity of f with worst case error ε, that is, the communication of the best two-way public-coin protocol for f with error at most ε for each input (x, y). Let µ be a distribution on X × Y. Let D^µ_ε(f) represent the two-way distributional communication complexity of f under distribution µ with distributional error ε, that is, the communication of the best two-way deterministic protocol for f with average error, over the inputs drawn from µ, at most ε. The following is Yao's min-max principle, which connects the worst case error and the distributional error settings; see, e.g., [26, Theorem 3.20, page 36].

Fact 2.8. [37] R^{pub}_ε(f) = max_µ D^µ_ε(f).

The following fact can be easily verified by induction on the number of message exchanges in a private-coin protocol (please refer, for example, to [5] for an explicit proof). It is also implicit in the cut-and-paste property of private-coin protocols used in Bar-Yossef, Jayram, Kumar and Sivakumar [2].

Lemma 2.9. For any private-coin two-way communication protocol with input XY ∼ µ and transcript M ∈ M, the joint distribution can be written as Pr[XYM = xym] = µ(x, y) · u_x(m) · u_y(m), where u_x : M → [0, 1] and u_y : M → [0, 1], for all (x, y) ∈ X × Y.

Smooth rectangle bound

Let f ⊆ X × Y × Z be a relation and ε, δ ≥ 0. With a slight abuse of notation, we write f(x, y) = {z ∈ Z | (x, y, z) ∈ f}, and f⁻¹(z) = {(x, y) : (x, y, z) ∈ f}.

Definition 2.10. (Smooth rectangle bound [6]) The (ε, δ)-smooth rectangle bound of f, denoted srec_{ε,δ}(f), is defined as follows:

srec_{ε,δ}(f) = max{ srec^λ_{ε,δ}(f) : λ a distribution over X × Y };
srec^λ_{ε,δ}(f) = max{ srec^{z,λ}_{ε,δ}(f) : z ∈ Z };
srec^{z,λ}_{ε,δ}(f) = max{ rec^{z,λ}_ε(g) : g ⊆ X × Y × Z, Pr_{(x,y)←λ}[f(x, y) ≠ g(x, y)] ≤ δ };
rec^{z,λ}_ε(g) = min{ S_∞(λ_R ‖ λ) : R is a rectangle in X × Y, λ(g⁻¹(z) ∩ R) ≥ (1 − ε) · λ(R) },

where λ_R denotes λ restricted to R and rescaled to a distribution. When δ = 0, the smooth rectangle bound equals the rectangle bound (a.k.a.
the corruption bound) [36,, 3, 24, 4]. Definition 2.10 is a generalization of the one in [6], where it is only defined for boolean functions. The smooth rectangle bound is a lower bound on the two-way public-coin communication complexity. The proof of the following lemma appears in the Appendix.

Lemma 2.11. Let f ⊆ X × Y × Z be a relation. Let λ be a distribution on X × Y and let z ∈ Z. Let β = Pr_{(x,y)←λ}[f(x, y) = {z}]. Let ε, ε′, δ > 0 be such that (δ + ε)/(β − 2ε) ≤ (1 + ε′) · δ/β. Then,

R^{pub}_ε(f) ≥ D^λ_ε(f) ≥ srec^{z,λ}_{(1+ε′)δ/β, δ}(f) − log(4/ε).
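To make the rectangle bound defined above concrete, here is a brute-force sketch (exponential time, toy-sized inputs only, written by us for illustration) that computes rec^{z,λ}_ε for the Equality function on 2-bit inputs under the uniform distribution. For λ restricted to a rectangle R and rescaled, S_∞(λ_R ‖ λ) collapses to log(1/λ(R)), so the bound is the min of log(1/λ(R)) over rectangles in which z is correct on a (1 − ε) fraction of the mass:

```python
from itertools import chain, combinations
import math

def subsets(s):
    """All non-empty subsets of s."""
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

def rectangle_bound(f, X, Y, z, lam, eps):
    """Brute-force rec^{z,lam}_eps(f): minimise log(1/lam(R)) over rectangles R
    in which z is a right answer for at least a (1-eps) fraction under lam."""
    best = None
    for rows in subsets(X):
        for cols in subsets(Y):
            R = [(x, y) for x in rows for y in cols]
            w = sum(lam[xy] for xy in R)                      # lam(R)
            good = sum(lam[xy] for xy in R if f(*xy) == z)    # lam(f^{-1}(z) & R)
            if w > 0 and good >= (1 - eps) * w:
                best = w if best is None else max(best, w)
    assert best is not None
    return math.log2(1 / best)

X = Y = list(range(4))                          # 2-bit inputs, written 0..3
lam = {(x, y): 1 / 16 for x in X for y in Y}    # uniform distribution
eq = lambda x, y: int(x == y)                   # the Equality function

# for z = 1 the only 0-error rectangles are single diagonal cells
print(rectangle_bound(eq, X, Y, 1, lam, eps=0))   # -> 4.0
```

For z = 0 the same code reports 2.0 (e.g. the rows {0,1} × columns {2,3} rectangle of mass 1/4): large nearly-monochromatic rectangles mean a small bound.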

3 Proof

The following lemma builds a connection between zero-communication protocols and the smooth rectangle bound.

Lemma 3.1. Let f ⊆ X × Y × Z be a relation, X′Y′ a distribution on X × Y and z ∈ Z. Let β = Pr_{(x,y)←X′Y′}[f(x, y) = {z}]. Let c ≥ 1. Let ε, ε′, δ > 0 be such that (δ + 2ε)/(β − 3ε) ≤ (1 + ε′)δ/β. Let Π′ be a zero-communication public-coin protocol with input X′Y′, public coin R, Alice's output A ∈ Z ∪ {⊥} and Bob's output B ∈ Z ∪ {⊥}. Let X″Y″A″B″R″ = (X′Y′ABR | A = B ≠ ⊥). Assume:

1. Pr[A = B ≠ ⊥] ≥ 2^{−c};
2. ‖X″Y″ − X′Y′‖ ≤ ε;
3. Pr[(X″, Y″, A″) ∉ f] ≤ ε.

Then srec^{z,X′Y′}_{(1+ε′)δ/β, δ}(f) < c/ε.

Proof. Let g ⊆ X × Y × Z satisfy Pr_{(x,y)←X′Y′}[f(x, y) ≠ g(x, y)] ≤ δ. It suffices to show that rec^{z,X′Y′}_{(1+ε′)δ/β}(g) < c/ε. Since Pr[A = B ≠ ⊥] ≥ 2^{−c},

c ≥ S_∞(X″Y″R″A″B″ ‖ X′Y′RAB) ≥ S(X″Y″R″A″B″ ‖ X′Y′RAB) ≥ E_{(r,a)←R″A″}[S((X″Y″)_{r,a} ‖ X′Y′)]   (from Fact 2.2).   (1)

Since ‖X″Y″ − X′Y′‖ ≤ ε,

Pr_{(x,y,r)←X″Y″R″}[f(x, y) = {z}] ≥ Pr_{(x,y)←X′Y′}[f(x, y) = {z}] − ε = β − ε.   (2)

Since Pr[(X″, Y″, A″) ∉ f] ≤ ε, hence Pr[A″ = B″ = z] ≥ β − 2ε. Since, by item 2 of this lemma, Pr_{(x,y)←X″Y″}[f(x, y) ≠ g(x, y)] ≤ δ + ε,

Pr_{(x,y,r,a)←X″Y″R″A″}[(x, y, a) ∈ g] ≥ Pr_{(x,y,r,a)←X″Y″R″A″}[(x, y, a) ∈ f] − δ − ε ≥ 1 − 2ε − δ.   (3)

By a standard application of Markov's inequality to equations (1), (2), (3), we get an r₀ such that

S((X″Y″)_{r₀,z} ‖ X′Y′) ≤ c/ε   and   Pr_{(x,y)←(X″Y″)_{r₀,z}}[z ∉ g(x, y)] ≤ (δ + 2ε)/(β − 3ε) ≤ (1 + ε′)δ/β.

Here, (X″Y″)_{r₀,z} = (X″Y″ | R″ = r₀, A″ = z). Note that the distribution (X″Y″)_{r₀,z} is the distribution X′Y′ restricted to some rectangle and then rescaled to a distribution. Hence S_∞((X″Y″)_{r₀,z} ‖ X′Y′) = S((X″Y″)_{r₀,z} ‖ X′Y′). Thus rec^{z,X′Y′}_{(1+ε′)δ/β}(g) < c/ε.

The following is our main lemma. A key tool that we use here is a sampling protocol that appears in [23] (protocol Π₁, as shown in Figure 1), which is a variant of a sampling protocol that appears in [7], which in turn is a variant of a sampling protocol that appears in [5]. Naturally

similar arguments and calculations, as in this lemma, are made in the previous works [5, 7, 23], however with a key difference. In their setting, Σ_m u_x(m)u_y(m) = 1 for all (x, y). However, in our setting this number could be much smaller than one, and different for different (x, y). Hence our arguments and calculations deviate significantly from the previous works at several places. Another important original contribution of our work is Claim 3.7, which is used in the proof of the main lemma. We highlight its importance later, just before its proof.

Lemma 3.2. (Main Lemma) Let c ≥ 1. Let p be a distribution over X × Y and z ∈ Z. Let β = Pr_{(x,y)←p}[f(x, y) = {z}]. Let 0 < ε < 1/3 and δ, ε′ > 0 be such that (δ + 22ε)/(β − 33ε) ≤ (1 + ε′)δ/β. Let XYM be random variables jointly distributed over the set X × Y × M such that the last log |Z| bits of M represent an element of Z. Let u_x : M → [0, 1], u_y : M → [0, 1] be functions for all (x, y) ∈ X × Y. If it holds that

1. for all (x, y, m) ∈ X × Y × M, Pr[XYM = xym] = (1/q) · p(x, y) · u_x(m) · u_y(m), where q = Σ_{x,y,m} p(x, y) u_x(m) u_y(m);
2. S(XY ‖ p) ≤ ε²/4;
3. I(X ; M | Y) + I(Y ; M | X) ≤ c;
4. err_f(XYM) ≤ ε, where err_f(XYM) = Pr_{(x,y,m)←XYM}[(x, y, m̃) ∉ f], and m̃ represents the last log |Z| bits of m;

then srec^{z,p}_{(1+ε′)δ/β, δ}(f) < 2c/ε³.

Proof. Note, by direct calculation,

Pr[XY = xy] = (1/q) · p(x, y) · α_xy, where α_xy = Σ_m u_x(m) u_y(m);   (4)
Pr[X = x] = (1/q) · p(x) · α_x, where α_x = Σ_y p(y|x) α_xy;   (5)
Pr[Y = y] = (1/q) · p(y) · α_y, where α_y = Σ_x p(x|y) α_xy;   (6)
Pr[X_y = x] = p(x|y) α_xy / α_y,  Pr[Y_x = y] = p(y|x) α_xy / α_x;   (7)
Pr[M_xy = m] = u_x(m) u_y(m) / α_xy;   (8)
Pr[M_x = m] = u_x(m) v_x(m) / α_x, where v_x(m) = Σ_y p(y|x) u_y(m);   (9)
Pr[M_y = m] = u_y(m) v_y(m) / α_y, where v_y(m) = Σ_x p(x|y) u_x(m).   (10)

Define

G₁ = {(x, y) : q/2 ≤ α_xy ≤ 3q/2 and q/2 ≤ α_x ≤ 3q/2 and q/2 ≤ α_y ≤ 3q/2};   (11)
G₂ = {(x, y) : S(M_xy ‖ M_x) + S(M_xy ‖ M_y) ≤ c/ε};   (12)
G₃ = {(x, y) : Pr_{m←M_xy}[u_y(m)/v_x(m) ≤ 2^∆ and u_x(m)/v_y(m) ≤ 2^∆] ≥ 1 − 2ε}   (13)

(∆ is as set in Figure 1). We begin by showing that G₁ ∩ G₂ is a large set and also that G₁ ∩ G₂ ⊆ G₃.

Claim 3.3.
1. Pr_{(x,y)←p}[(x, y) ∈ G₁] ≥ 1 − 6ε,
2. Pr_{(x,y)←p}[(x, y) ∈ G₂] ≥ 1 − 3ε/2,

Alice's input is x. Bob's input is y. Common inputs are c, ε, q and M.

1. Alice and Bob both set ∆ = (c/ε + 1)/ε + 2, T = (2/q) · |M| · 2^∆ · ln(1/ε) and k = log((3/ε) · ln(1/ε)).
2. For i = 1, ..., T:
   (a) Alice and Bob, using public coins, jointly sample m_i ∈ M and α_i, β_i ∈ [0, 2^∆], uniformly.
   (b) Alice accepts m_i if α_i ≤ u_x(m_i) and β_i ≤ 2^∆ · v_x(m_i).
   (c) Bob accepts m_i if α_i ≤ 2^∆ · v_y(m_i) and β_i ≤ u_y(m_i).
3. Let A = {i ∈ [T] : Alice accepts m_i} and B = {i ∈ [T] : Bob accepts m_i}.
4. Alice and Bob, using public coins, choose a uniformly random function h : M → {0, 1}^k and a uniformly random string r ∈ {0, 1}^k.
   (a) Alice outputs ⊥ if either A is empty or h(m_i) ≠ r, where i is the smallest element in (non-empty) A. Otherwise, she outputs the element of Z represented by the last log |Z| bits of m_i.
   (b) Bob finds the smallest j ∈ B such that h(m_j) = r. If no such j exists, he outputs ⊥. Otherwise, he outputs the element of Z represented by the last log |Z| bits of m_j.

Figure 1: Protocol Π₁

3. Pr_{(x,y)←p}[(x, y) ∈ G₁ ∩ G₂] ≥ 1 − 15ε/2,
4. G₁ ∩ G₂ ⊆ G₃.

Proof. Note that items 1 and 2 imply item 3. We first show item 1. Note that (using item 2 of Lemma 3.2 and Fact 2.5) ‖XY − p‖ ≤ ε/2. From Lemma 2.7 and (4), we have

Pr_{(x,y)←p}[α_xy ∉ [q/2, 3q/2]] ≤ 2ε.

By the monotonicity of the ℓ₁ norm under taking marginals, we also have ‖X − p_X‖ ≤ ε/2 and ‖Y − p_Y‖ ≤ ε/2. Similarly, from (5) and (6) we have

Pr_{x←p}[α_x ∉ [q/2, 3q/2]] ≤ 2ε  and  Pr_{y←p}[α_y ∉ [q/2, 3q/2]] ≤ 2ε.

By the union bound, item 1 follows. Next we show item 2. From item 3 of Lemma 3.2,

E_{(x,y)←XY}[S(M_xy ‖ M_x) + S(M_xy ‖ M_y)] = I(X ; M | Y) + I(Y ; M | X) ≤ c.

Markov's inequality implies Pr_{(x,y)←XY}[(x, y) ∉ G₂] ≤ ε. Then item 2 follows from the fact that XY and p are ε/2-close.
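A runnable toy rendering of the sampling step in the protocol above may help. The sketch below is a simplification, not the paper's exact parameter choices: it fixes the slack factor 2^∆ to 2, uses a product toy setting where v_x = u_y and v_y = u_x (so the toy weight tables below are hypothetical), and picks small T and k by hand. It shows the mechanism: shared coins (m_i, α_i, β_i), the asymmetric acceptance tests, and the shared hash h, r that lets Bob locate Alice's first accepted index without communication.

```python
import random

def sample_protocol(u_x, u_y, v_x, v_y, M, T, k, slack, rng):
    """One run of a toy version of protocol Pi_1 (Figure 1).
    Returns None on abort, else (Alice's sample, Bob's sample)."""
    # step 2: shared coins; each party applies its one-sided tests
    coins = [(rng.choice(M), rng.uniform(0, slack), rng.uniform(0, slack))
             for _ in range(T)]
    A = [i for i, (m, a, b) in enumerate(coins)
         if a <= u_x(m) and b <= slack * v_x(m)]      # Alice accepts
    B = [i for i, (m, a, b) in enumerate(coins)
         if a <= slack * v_y(m) and b <= u_y(m)]      # Bob accepts
    # step 4: shared random hash h : M -> [2^k] and target r
    h = {m: rng.randrange(2 ** k) for m in M}
    r = rng.randrange(2 ** k)
    if not A or h[coins[A[0]][0]] != r:
        return None                                   # Alice aborts
    for j in B:
        if h[coins[j][0]] == r:
            return coins[A[0]][0], coins[j][0]        # the two samples
    return None                                       # Bob aborts

# toy weights over a 4-element message set (hypothetical numbers)
M = ["00", "01", "10", "11"]
ux = {"00": 0.9, "01": 0.2, "10": 0.5, "11": 0.1}
uy = {"00": 0.8, "01": 0.3, "10": 0.5, "11": 0.2}
rng = random.Random(1)
outs = [sample_protocol(ux.get, uy.get, uy.get, ux.get, M,
                        T=64, k=6, slack=2.0, rng=rng)
        for _ in range(20000)]
hits = [o for o in outs if o is not None]
agree = sum(a == b for a, b in hits) / len(hits)
print(len(hits), round(agree, 3))
```

Conditioned on both parties accepting the same coin, the sample lands on m with probability proportional to min{u_x, 2v_y}(m) · min{u_y, 2v_x}(m), which is the quantity w_xy(m) analysed in Claim 3.8; with k large enough, the parties almost always agree on the same m whenever neither aborts.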

Finally, we show item 4. For any (x, y) ∈ G₁ ∩ G₂, S(M_xy ‖ M_x) ≤ c/ε, hence

Pr_{m←M_xy}[ Pr[M_xy = m]/Pr[M_x = m] ≥ 2^{(c/ε+1)/ε} ] ≤ ε   (from Fact 2.6)
⟹ Pr_{m←M_xy}[ u_y(m) α_x / (v_x(m) α_xy) ≥ 2^{(c/ε+1)/ε} ] ≤ ε   (from (8) and (9))
⟹ Pr_{m←M_xy}[ u_y(m)/v_x(m) ≥ 2^∆ ] ≤ ε.   ((x, y) ∈ G₁ and the choice of ∆)

Similarly, Pr_{m←M_xy}[u_x(m)/v_y(m) ≥ 2^∆] ≤ ε. By the union bound,

Pr_{m←M_xy}[ u_y(m)/v_x(m) ≤ 2^∆ and u_x(m)/v_y(m) ≤ 2^∆ ] ≥ 1 − 2ε,

which implies (x, y) ∈ G₃. Hence G₁ ∩ G₂ ⊆ G₃.

The following few claims establish the desired properties of protocol Π₁ (Figure 1).

Definition 3.4. Define the following events.
• E occurs if the smallest i ∈ A satisfies h(m_i) = r and i ∈ B. Note that E implies A ≠ ∅.
• B_c (a subevent of E) occurs if E occurs and there exists j ∈ B such that h(m_j) = r and m_i ≠ m_j, where i is the smallest element in A.
• H = E ∧ ¬B_c.

Below we use conditioning on (x, y) as shorthand for "Alice's input is x and Bob's input is y".

Claim 3.5. For any (x, y) ∈ G₁ ∩ G₂, we have:

1. for all i ∈ [T],
   q/(2^{∆+1} |M|) ≤ Pr_{r_Π}[Alice accepts m_i | (x, y)] ≤ 3q/(2^{∆+1} |M|)  and
   q/(2^{∆+1} |M|) ≤ Pr_{r_Π}[Bob accepts m_i | (x, y)] ≤ 3q/(2^{∆+1} |M|),
   where r_Π is the internal randomness of protocol Π₁;
2. Pr_{r_Π}[B_c | (x, y), E] ≤ ε;
3. Pr_{r_Π}[H | (x, y)] ≥ (1 − 4ε) · 2^{−k−∆−2}.

Proof. 1. We do the argument for Alice; a similar argument holds for Bob. Note that u_x(m), v_x(m) ∈ [0, 1]. Then for all (x, y) ∈ X × Y,

Pr[Alice accepts m_i | (x, y)] = Σ_m u_x(m) v_x(m) / (2^∆ |M|) = α_x / (2^∆ |M|).

Item 1 follows from the fact that (x, y) ∈ G₁.

2. Define E_i (a subevent of E) as the event that E occurs with i the smallest element of A. For all (x, y) ∈ G₁ ∩ G₂, we have:

Pr[B_c | (x, y), E_i] = Pr[∃j : j ∈ B and h(m_j) = r and m_j ≠ m_i | (x, y), E_i]
 ≤ Σ_{j∈[T], j≠i} Pr[j ∈ B | (x, y), E_i] · Pr[h(m_j) = r | (x, y), E_i, j ∈ B, m_j ≠ m_i]   (union bound)
 ≤ Σ_{j∈[T], j≠i} (3q/(2^{∆+1} |M|)) · 2^{−k}   (pairwise independence of h and item 1 of this Claim)
 ≤ T · (3q/(2^{∆+1} |M|)) · 2^{−k} ≤ ε.   (choice of parameters)

Since the above holds for every i, it implies Pr_{r_Π}[B_c | (x, y), E] ≤ ε.

3. Consider:

Pr[E | (x, y)] = Pr[A ≠ ∅ | (x, y)] · Pr[E | A ≠ ∅, (x, y)]
 ≥ (1 − (1 − q/(2^{∆+1} |M|))^T) · Pr[E | A ≠ ∅, (x, y)]   (item 1 of this Claim)
 ≥ (1 − ε) · Pr[E | A ≠ ∅, (x, y)]   (choice of parameters)
 = (1 − ε) · Pr[h(m_i) = r | A ≠ ∅, (x, y)] · Pr[i ∈ B | i ∈ A, h(m_i) = r, (x, y)]   (from here on we condition on i being the first element of A)
 = (1 − ε) · 2^{−k} · Pr[i ∈ B | i ∈ A, (x, y)]
 = (1 − ε) · 2^{−k} · Pr[i ∈ B and i ∈ A | (x, y)] / Pr[i ∈ A | (x, y)]
 ≥ (1 − ε) · 2^{−k} · (2^{∆+1} |M| / (3q)) · Pr[i ∈ B and i ∈ A | (x, y)]   (item 1 of this Claim)
 = (1 − ε) · 2^{−k} · (2^{∆+1} |M| / (3q)) · (1/(|M| · 2^{2∆})) · Σ_{m∈M} min{u_x(m), 2^∆ v_y(m)} · min{u_y(m), 2^∆ v_x(m)}   (construction of protocol Π₁)
 ≥ (1 − ε) · 2^{−k} · (2^{∆+1} |M| / (3q)) · (1/(|M| · 2^{2∆})) · Σ_{m∈G_xy} u_x(m) u_y(m)   (G_xy = {m : u_x(m) ≤ 2^∆ v_y(m) and u_y(m) ≤ 2^∆ v_x(m)})
 = (1 − ε) · 2^{−k} · (2 α_xy / (3q · 2^∆)) · Pr_{m←M_xy}[m ∈ G_xy]   (using (8))
 ≥ (1/3) · (1 − ε) · 2^{−k−∆} · Pr_{m←M_xy}[m ∈ G_xy]   (since (x, y) ∈ G₁)
 ≥ (1/3) · (1 − ε) · 2^{−k−∆} · (1 − 2ε)   (since (x, y) ∈ G₃, using item 4 of Claim 3.3)
 ≥ (1 − 3ε) · 2^{−k−∆−2}.

Finally, using item 2 of this Claim,

Pr[H | (x, y)] = Pr[E | (x, y)] · (1 − Pr[B_c | (x, y), E]) ≥ (1 − 4ε) · 2^{−k−∆−2}.

Claim 3.6. Pr_{p, r_Π}[H] ≥ (1 − 23ε/2) · 2^{−k−∆−2}.

Proof.

Pr_{p, r_Π}[H] ≥ Σ_{(x,y)∈G₁∩G₂} p(x, y) · Pr_{r_Π}[H | (x, y)] ≥ (1 − 4ε) · 2^{−k−∆−2} · Σ_{(x,y)∈G₁∩G₂} p(x, y) ≥ (1 − 23ε/2) · 2^{−k−∆−2}.

The second inequality is by Claim 3.5, item 3, and the last inequality is by Claim 3.3, item 3.

The following claim is an important original contribution of this work (not present in the previous works [23, 5, 7]). The claim helps us establish a crucial property of Π₁. The property is that for the bad inputs (x, y), for which the distribution of Π₁'s sample of M, conditioned on non-abort, deviates a lot from the desired one, their probability is nicely reduced in the final distribution of Π₁, conditioned on non-abort. This helps us to argue that the joint distribution of the inputs and the transcript in Π₁, conditioned on non-abort, is still close in ℓ₁ distance to XYM.

Claim 3.7. Let AB and A′B′ be random variables over A × B and let h : A → [0, +∞) be a function. Suppose that for every a ∈ A there exist functions f_a, g_a : B → [0, +∞) such that:

1. Σ_{a,b} h(a) f_a(b) = 1 and Pr[AB = ab] = h(a) f_a(b);
2. f_a(b) ≥ g_a(b) for all (a, b) ∈ A × B;
3. Pr[A′B′ = ab] = h(a) g_a(b)/C, where C = Σ_{a,b} h(a) g_a(b);
4. Pr_{a←A}[ Pr_{b←B_a}[f_a(b) ≠ g_a(b)] ≥ δ₁ ] ≤ δ₂, for δ₁ ∈ [0, 1), δ₂ ∈ [0, 1).

Then ‖AB − A′B′‖ ≤ δ₁ + δ₂.

Proof. Set G = {(a, b) : f_a(b) = g_a(b)}. By condition 4, Pr_{(a,b)←AB}[(a, b) ∈ G] ≥ 1 − δ₁ − δ₂. Then

C = Σ_{a,b} h(a) g_a(b) ≥ Σ_{(a,b)∈G} h(a) f_a(b) = Pr_{(a,b)←AB}[(a, b) ∈ G] ≥ 1 − δ₁ − δ₂.   (14)

We have

‖AB − A′B′‖ = (1/2) Σ_{a,b} | h(a) f_a(b) − h(a) g_a(b)/C |
 ≤ (1/2) Σ_{a,b} ( | h(a) f_a(b) − h(a) g_a(b) | + | h(a) g_a(b) − h(a) g_a(b)/C | )
 = (1/2) ( Σ_{a,b} (h(a) f_a(b) − h(a) g_a(b)) + (1/C − 1) Σ_{a,b} h(a) g_a(b) )   (using item 2 of this claim)
 = (1/2) ( (1 − C) + (1 − C) )
 = 1 − C ≤ δ₁ + δ₂.   (from (14))
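Claim 3.7 is easy to sanity-check numerically. The sketch below (randomly generated toy weights; the specific numbers are arbitrary) builds h and f_a ≥ g_a satisfying conditions 1–3, takes δ₂ = 0 and δ₁ just above the largest conditional disagreement mass so that condition 4 holds, and verifies the claimed bound ‖AB − A′B′‖ ≤ δ₁ + δ₂:

```python
import random

def l1(mu, lam):
    """Half l1 distance between two distributions given as dicts."""
    keys = set(mu) | set(lam)
    return 0.5 * sum(abs(mu.get(x, 0.0) - lam.get(x, 0.0)) for x in keys)

rng = random.Random(7)
A, B = range(3), range(3)
h = {a: rng.uniform(0.5, 1.0) for a in A}
f = {(a, b): rng.uniform(0.1, 1.0) for a in A for b in B}
g = dict(f)
g[(0, 0)] = f[(0, 0)] / 3            # g <= f, differing only at one "bad" pair

Z = sum(h[a] * f[(a, b)] for (a, b) in f)
f = {ab: v / Z for ab, v in f.items()}           # normalise: condition 1 holds
g = {ab: v / Z for ab, v in g.items()}
AB = {(a, b): h[a] * f[(a, b)] for (a, b) in f}
C = sum(h[a] * g[(a, b)] for (a, b) in g)        # condition 3's normaliser
ApBp = {(a, b): h[a] * g[(a, b)] / C for (a, b) in g}

# condition 4 with delta2 = 0: every a's disagreement mass is below delta1
p_bad = {a: sum(AB[(a, b)] for b in B if f[(a, b)] != g[(a, b)])
            / sum(AB[(a, b)] for b in B) for a in A}
delta1 = max(p_bad.values()) + 1e-9
assert l1(AB, ApBp) <= delta1        # the claim's bound, with delta2 = 0
print(l1(AB, ApBp), delta1)
```

The observed ℓ₁ distance stays below δ₁, as the claim guarantees; making the bad pair's g-value smaller increases both sides together.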

Claim 3.8. Let the input of protocol Π′ be drawn according to p. Let X′Y′M′ represent the input and the transcript (the part of the public coins drawn from M) conditioned on H. Then we have ‖XY M − X′Y′M′‖ ≤ 10ε. Note that this implies that ‖X′Y′A′B′ − XY M_1 M_1‖ ≤ 10ε, where M_1 represents the last log |Z| bits of M and A′, B′ represent the outputs of Alice and Bob respectively, conditioned on H.

Proof. For any (x,y), define w_xy(m) = min{u_x(m), 2 v_y(m)} · min{u_y(m), 2 v_x(m)}. From steps 2(a), (b), (c) of protocol Π′,

Pr[X′Y′M′ = xym] = (1/C) p(x,y) w_xy(m), where C = Σ_{x,y,m} p(x,y) w_xy(m).

Now,

Pr_{(x,y)←XY}[Pr_{m←M_xy}[w_xy(m) = u_x(m) u_y(m)] ≥ 1 − 2ε] ≥ Pr_{(x,y)←XY}[(x,y) ∈ G] ≥ 1 − 8ε.

The last inequality above follows using items 3. and 4. of Claim 3.3 and the fact that XY and p are ε/2-close. Finally, using Claim 3.7 (by substituting δ_1 ← 2ε, δ_2 ← 8ε, A ← XY, B ← M, A′ ← X′Y′, B′ ← M′, h ← p·q, f_{(x,y)}(m) ← u_x(m) u_y(m) and g_{(x,y)}(m) ← w_xy(m)), we get that ‖X′Y′M′ − XY M‖ ≤ 10ε.

We are now ready to finish the proof of Lemma 3.2. Consider the protocol Π′. We claim that it satisfies Lemma 3.1, taking the correspondence between the quantities in Lemma 3.1 and Lemma 3.2 as follows: c ← c(1/ε^2 + 3/ε), ε ← 11ε, β ← β, δ ← δ, z ← z, XY ← p. Item 1. of Lemma 3.1 is implied by Claim 3.6, since (1 − 23ε/2) · 2^{-k}/2 ≥ 2^{-c(1/ε^2 + 3/ε)} from the choice of parameters. Item 2. of Lemma 3.1 is implied since ‖X′Y′ − p‖ ≤ ‖X′Y′ − XY‖ + ‖XY − p‖ ≤ 21ε/2, using item 2. of Lemma 3.2, Fact 2.5 and Claim 3.8. Item 3. of Lemma 3.1 is implied since err_f(X′Y′M′) ≤ err_f(XY M) + ‖X′Y′M′ − XY M‖ ≤ 11ε, using item 4. of Lemma 3.2 and Claim 3.8. This implies

srec^{z,p}_{(1+ε′)δ/β,δ}(f) < c(1/ε^2 + 3/ε) ≤ 2c/ε^2,

since ε ≤ 1/3.

We can now prove our main result.

Theorem 3.9. Let X, Y, Z be finite sets, f ⊆ X × Y × Z a relation, and μ a distribution on X × Y. Let z ∈ Z and β = Pr_{(x,y)←μ}[f(x,y) = {z}]. Let 0 < ε < 1/3 and ε′, δ > 0 be such that (δ + 22ε)/(β − 33ε) < (1 + ε′)δ/β. For all integers t ≥ 1,

R^{pub}_{1 − (1−ε)^{ε^2 t/32}}(f^t) ≥ (ε^2/32) · t · (ε · srec^{z,μ}_{(1+ε′)δ/β,δ}(f) − 2).

Proof. Set δ_1 = ε^2/32. Define c = ε · srec^{z,μ}_{(1+ε′)δ/β,δ}(f) − 2 and let XY ∼ μ^t. By Fact 2.8, it suffices to show D^{μ^t}_{1 − (1−ε)^{ε^2 t/32}}(f^t) ≥ δ_1 t c.
Let Π be a deterministic two-way communication protocol that computes f^t with total communication δ_1 c t bits. The following claim implies that the success of Π is at most (1 − ε)^{δ_1 t}, which shows the desired bound.
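The way Claim 3.10 (below) yields this bound is elementary: by the chain rule, Pr[T^{(t′)} = 1] is the product of the conditional success probabilities along the chosen coordinates, so if alternative 1. never fires, every factor is at most 1 − ε and the product is at most (1 − ε)^{t′}. A minimal numeric illustration (the conditional success probabilities below are invented):

```python
eps, t = 0.25, 6

# Invented conditional success probabilities: no matter how many of the previous
# instances were answered correctly, the next one succeeds with prob <= 1 - eps.
def cond_success(num_prior_wins):
    return (1 - eps) * 0.9 ** num_prior_wins

# Chain rule: Pr[all t instances correct] is the product of the conditionals.
p_all = 1.0
for i in range(t):
    p_all *= cond_success(i)

assert p_all <= (1 - eps) ** t   # every factor <= 1 - eps gives exponential decay
```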

Claim 3.10. For each i ∈ [t], define a binary random variable T_i ∈ {0,1} which represents the success of Π on the i-th instance: T_i = 1 if the protocol computes the i-th instance of f correctly, and T_i = 0 otherwise. Let t′ = δ_1 t. There exist t′ coordinates {i_1, …, i_{t′}} such that for each 1 ≤ r < t′,

1. either Pr[T^{(r)} = 1] ≤ (1 − ε)^{t′}, or
2. Pr[T_{i_{r+1}} = 1 | T^{(r)} = 1] ≤ 1 − ε, where T^{(r)} = Π_{j=1}^{r} T_{i_j}.

Proof. Suppose we have already identified r coordinates i_1, …, i_r satisfying Pr[T_{i_1} = 1] ≤ 1 − ε and Pr[T_{i_{j+1}} = 1 | T^{(j)} = 1] ≤ 1 − ε for 1 ≤ j < r. If Pr[T^{(r)} = 1] ≤ (1 − ε)^{t′}, then we are done. So from now on we assume Pr[T^{(r)} = 1] > (1 − ε)^{t′} ≥ 2^{-δ_1 t}. Here we assume r ≥ 1. Similar arguments also work when r = 0, that is, for identifying the first coordinate, which we skip for the sake of avoiding repetition.

Let D be a random variable uniformly distributed in {0,1}^t and independent of XY. Let U_i = X_i if D_i = 0, and U_i = Y_i if D_i = 1. For any random variable L, define L′ = (L | T^{(r)} = 1). If L = L_1 … L_t, define L_{-i} = L_1 … L_{i-1} L_{i+1} … L_t. Let C = {i_1, …, i_r}. Define R_i = D_{-i} U_{-i} X_{C∪[i-1]} Y_{C∪[i-1]}. Now let us apply Lemma 3.2 by substituting XY ← X′_j Y′_j, M ← R_j M, p ← X_j Y_j, z ← z, ε ← ε, δ ← δ, β ← β, ε′ ← ε′ and c ← 16 δ_1 (c + 1). Condition 1. in Lemma 3.2 is implied by Claim 3.11. Conditions 2. and 3. are implied by Claim 3.12. Also we have srec^{z,μ}_{(1+ε′)δ/β,δ}(f) > 32 δ_1 (c + 1)/ε^2, by our choice of c. Hence condition 4. must be false, and therefore err_f(X′_j Y′_j M′) = err_f(X′_j Y′_j R_j M′) > ε. This shows condition 2. of this Claim.

Claim 3.11. Let R denote the space of R_j. There exist functions u_{x_j}, u_{y_j} : R × M → [0,1], for all (x_j, y_j) ∈ X × Y, and a real number q > 0 such that

Pr[X′_j Y′_j R_j M′ = x_j y_j r_j m] = q · μ(x_j, y_j) · u_{x_j}(r_j, m) · u_{y_j}(r_j, m).

Proof. Note that X_j Y_j is independent of R_j. Now consider a private-coin two-way protocol Π_1 with input X_j Y_j as follows. Alice generates R_j and sends it to Bob. Alice and Bob then generate (X_{-j})_{x_j r_j} and (Y_{-j})_{y_j r_j}, respectively. Then they run the protocol Π.
Thus, from Lemma 2.9, Pr[X_j Y_j R_j M = x_j y_j r_j m] = μ(x_j, y_j) · v_{x_j}(r_j, m) · v_{y_j}(r_j, m), where v_{x_j}, v_{y_j} : R × M → [0,1], for all (x_j, y_j) ∈ X × Y. Note that conditioning on T^{(r)} = 1 corresponds to choosing a subset, say S, of R × M. Let 1/q = Σ_{x_j y_j r_j m : (r_j, m) ∈ S} μ(x_j, y_j) v_{x_j}(r_j, m) v_{y_j}(r_j, m). Then Pr[X′_j Y′_j R_j M′ = x_j y_j r_j m] = q · μ(x_j, y_j) v_{x_j}(r_j, m) v_{y_j}(r_j, m) for (r_j, m) ∈ S, and Pr[X′_j Y′_j R_j M′ = x_j y_j r_j m] = 0 otherwise. Now define u_{x_j}(r_j, m) = v_{x_j}(r_j, m) and u_{y_j}(r_j, m) = v_{y_j}(r_j, m) for (r_j, m) ∈ S, and define them to be 0 otherwise. The claim follows.
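The product structure asserted by Claim 3.11 ultimately rests on the classical rectangle property of deterministic protocols (this is what Lemma 2.9 packages): the inputs that produce a given transcript always form a rectangle. A toy check, with an invented three-message protocol on two-bit inputs:

```python
from itertools import product

# Rectangle property behind Claim 3.11 (packaged by Lemma 2.9): in a deterministic
# protocol, the input pairs producing a given transcript form a rectangle.
# The three-message protocol below is invented purely for illustration.
def transcript(x, y):
    m1 = x[0]           # Alice speaks, depending only on her input
    m2 = y[0] ^ m1      # Bob speaks, depending on his input and the history
    m3 = x[1] ^ m2      # Alice speaks again
    return (m1, m2, m3)

inputs = list(product([0, 1], repeat=2))
for m in {transcript(x, y) for x in inputs for y in inputs}:
    pairs = [(x, y) for x in inputs for y in inputs if transcript(x, y) == m]
    A = {x for x, _ in pairs}
    B = {y for _, y in pairs}
    # Rectangle check: any x reaching m combined with any y reaching m reaches m.
    assert all(transcript(x, y) == m for x in A for y in B)
```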

Claim 3.12. If Pr[T^{(r)} = 1] > 2^{-δ_1 t}, then there exists a coordinate j ∉ C such that

S(X′_j Y′_j ‖ X_j Y_j) ≤ 8 δ_1 = ε^2/4,    (15)

and

I(X′_j ; R_j M′ | Y′_j) + I(Y′_j ; R_j M′ | X′_j) ≤ 16 δ_1 (c + 1).    (16)

Proof. This follows using Claim III.6 in [18]. We include a proof in the Appendix for completeness.

Conclusion and open problems

We provide a strong direct product result for two-way public-coin communication complexity in terms of an important and widely used lower bound method, the smooth rectangle bound. Some natural questions that arise are:

1. Is the smooth rectangle bound a tight lower bound on the two-way public-coin communication complexity of all relations? If yes, this would imply a strong direct product result for the two-way public-coin communication complexity of all relations, settling a major open question in this area. To start with, we can ask: is the smooth rectangle bound a polynomially tight lower bound for the two-way public-coin communication complexity of all relations?

2. Or, on the other hand, can we exhibit a relation for which the smooth rectangle bound is (asymptotically) strictly smaller than its two-way public-coin communication complexity?

3. Can we show similar direct product results in terms of possibly stronger lower bound methods, such as the partition bound and information complexity?

4. It will be interesting to obtain new optimal lower bounds for interesting functions and relations using the smooth rectangle bound, implying strong direct product results for them.

Acknowledgement. We thank Prahladh Harsha for helpful discussions.

References

[1] Laszlo Babai, Peter Frankl, and Janos Simon. Complexity classes in communication complexity theory. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science, SFCS '86, Washington, DC, USA, 1986. IEEE Computer Society.

[2] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity.
In Proceedings of the 43rd Symposium on Foundations of Computer Science, FOCS '02, Washington, DC, USA, 2002. IEEE Computer Society.

[3] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC '10, pages 67–76, New York, NY, USA, 2010. ACM.

[4] Paul Beame, Toniann Pitassi, Nathan Segerlind, and Avi Wigderson. A strong direct product theorem for corruption and the multiparty communication complexity of disjointness. Computational Complexity, 15(4):391–432, December 2006.

[5] Mark Braverman. Interactive information complexity. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing, STOC '12, New York, NY, USA, 2012. ACM.

[6] Mark Braverman and Anup Rao. Information equals amortized communication. In Proceedings of the 52nd Symposium on Foundations of Computer Science, FOCS '11, Washington, DC, USA, 2011. IEEE Computer Society.

[7] Mark Braverman and Omri Weinstein. A discrepancy lower bound for information complexity. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, volume 7408 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2012.

[8] Amit Chakrabarti and Oded Regev. An optimal lower bound on the communication complexity of gap-hamming-distance. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC '11, pages 51–60, New York, NY, USA, 2011. ACM.

[9] Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, and Andrew Yao. Informational complexity and the direct sum problem for simultaneous message complexity. In Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, 2001.

[10] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. John Wiley & Sons, New York, NY, USA, 1991.

[11] Dmitry Gavinsky. Classical interaction cannot replace a quantum message. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC '08, pages 95–102, New York, NY, USA, 2008. ACM.

[12] Prahladh Harsha and Rahul Jain. A strong direct product theorem for the tribes function via the smooth-rectangle bound. Preprint, available on arXiv.

[13] Thomas Holenstein. Parallel repetition: simplifications and the no-signaling case. In Proceedings of the thirty-ninth annual ACM Symposium on Theory of Computing, STOC '07, pages 411–419, New York, NY, USA, 2007. ACM.

[14] Rahul Jain. New strong direct product results in communication complexity. Electronic Colloquium on Computational Complexity (ECCC), 18:24, 2011.

[15] Rahul Jain and Hartmut Klauck. New results in the simultaneous message passing model via information theoretic techniques.
In Proceedings of the 24th Annual IEEE Conference on Computational Complexity, CCC '09, Washington, DC, USA, 2009. IEEE Computer Society.

[16] Rahul Jain and Hartmut Klauck. The partition bound for classical communication complexity and query complexity. In Proceedings of the 2010 IEEE 25th Annual Conference on Computational Complexity, CCC '10, Washington, DC, USA, 2010. IEEE Computer Society.

[17] Rahul Jain, Hartmut Klauck, and Ashwin Nayak. Direct product theorems for classical communication complexity via subdistribution bounds: extended abstract. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC '08, New York, NY, USA, 2008. ACM.

[18] Rahul Jain, Attila Pereszlényi, and Penghui Yao. A direct product theorem for the two-party bounded-round public-coin communication complexity. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS '12, pages 167–176, Washington, DC, USA, 2012. IEEE Computer Society.

[19] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. Privacy and interaction in quantum communication complexity and a theorem about the relative entropy of quantum states. In Proceedings of the 43rd Symposium on Foundations of Computer Science, FOCS '02, Washington, DC, USA, 2002. IEEE Computer Society.

[20] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. Prior entanglement, message compression and privacy in quantum communication. In Proceedings of the 20th Annual IEEE Conference on Computational Complexity, Washington, DC, USA, 2005. IEEE Computer Society.

[21] Rahul Jain, Jaikumar Radhakrishnan, and Pranab Sen. A direct sum theorem in communication complexity via message compression. In Proceedings of the 30th International Conference on Automata, Languages and Programming, ICALP '03, Berlin, Heidelberg, 2003. Springer-Verlag.

[22] T. S. Jayram, Ravi Kumar, and D. Sivakumar. Two applications of information complexity. In Proc. 35th ACM Symp. on Theory of Computing (STOC), 2003.

[23] Iordanis Kerenidis, Sophie Laplante, Virginie Lerays, Jérémie Roland, and David Xiao. Lower bounds on information complexity via zero-communication protocols and applications. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS '12, Washington, DC, USA, 2012. IEEE Computer Society.

[24] Hartmut Klauck. Rectangle size bounds and threshold covers in communication complexity. In Proceedings of the 2003 IEEE 18th Annual Conference on Computational Complexity, CCC '03, pages 118–134, Washington, DC, USA, 2003. IEEE Computer Society.

[25] Hartmut Klauck. A strong direct product theorem for disjointness. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC '10, pages 77–86, New York, NY, USA, 2010. ACM.

[26] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1996.

[27] Troy Lee, Adi Shraibman, and Robert Špalek. A direct product theorem for discrepancy. In Proceedings of the 2008 IEEE 23rd Annual Conference on Computational Complexity, CCC '08, pages 71–80, Washington, DC, USA, 2008. IEEE Computer Society.

[28] Nati Linial and Adi Shraibman. Lower bounds in communication complexity based on factorization norms.
In Proceedings of the thirty-ninth annual ACM Symposium on Theory of Computing, STOC '07, New York, NY, USA, 2007. ACM.

[29] Ran Raz. A parallel repetition theorem. In Proceedings of the twenty-seventh annual ACM Symposium on Theory of Computing, STOC '95, New York, NY, USA, 1995. ACM.

[30] Ran Raz. Exponential separation of quantum and classical communication complexity. In Proceedings of the thirty-first annual ACM Symposium on Theory of Computing, STOC '99, New York, NY, USA, 1999. ACM.

[31] Alexander A. Razborov. On the distributional complexity of disjointness. Theoretical Computer Science, 106(2), 1992.

[32] Oded Regev and Bo'az Klartag. Quantum one-way communication can be exponentially stronger than classical communication. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC '11, pages 31–40, New York, NY, USA, 2011. ACM.

[33] Alexander A. Sherstov. The communication complexity of gap Hamming distance. Electronic Colloquium on Computational Complexity (ECCC), 18:63, 2011.

[34] Alexander A. Sherstov. Strong direct product theorems for quantum communication and query complexity. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC '11, pages 41–50, New York, NY, USA, 2011. ACM.

[35] Emanuele Viola. The communication complexity of addition. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, SODA '13, 2013.

[36] Andrew C. Yao. Lower bounds by probabilistic arguments. In Proceedings of the 24th Annual Symposium on Foundations of Computer Science, SFCS '83, Washington, DC, USA, 1983. IEEE Computer Society.

[37] Andrew Chi-Chih Yao. Some complexity questions related to distributive computing (preliminary report). In Proceedings of the eleventh annual ACM Symposium on Theory of Computing, STOC '79, New York, NY, USA, 1979. ACM.

[38] Andrew Chi-Chih Yao. Theory and applications of trapdoor functions. In Proceedings of the 23rd Symposium on Foundations of Computer Science, FOCS '82, pages 80–91, Washington, DC, USA, 1982. IEEE Computer Society.

Proof of Lemma 2.7: Let G = {a : Pr[A′ = a]/Pr[A = a] ≥ 1 + εr}. Then

2ε ≥ Σ_a |Pr[A′ = a] − Pr[A = a]| ≥ Σ_{a ∈ G} |Pr[A′ = a] − Pr[A = a]| = Σ_{a ∈ G} (Pr[A′ = a] − Pr[A = a]) ≥ εr · Pr_{a←A}[a ∈ G].

Thus Pr_{a←A}[a ∈ G] ≤ 2/r. The second inequality follows immediately.

Proof of Lemma 2.11: Let c = srec^{z,λ}_{(1+ε′)δ/β,δ}(f). Let g be such that rec^{z,λ}_{(1+ε′)δ/β}(g) = c and Pr_{(x,y)←λ}[f(x,y) ≠ g(x,y)] ≤ δ. If D^λ_ε(f) ≥ c − log(4/ε) then we are done using Fact 2.8. So let us assume for contradiction that D^λ_ε(f) < c − log(4/ε). This implies that there exists a deterministic protocol Π for f with communication c − log(4/ε) bits and distributional error under λ bounded by ε. Since Pr_{(x,y)←λ}[f(x,y) ≠ g(x,y)] ≤ δ, the protocol Π has distributional error at most ε + δ for g. Let M represent the message transcript of Π and let O represent the protocol's output. We assume that the last log |Z| bits of M contain O. We have:

1. Pr_{m←M}[Pr[M = m] ≤ 2^{-c}] ≤ ε/4, since the total number of message transcripts in Π is at most 2^{c − log(4/ε)}.

2. Pr_{m←M}[O = z on m] > β − ε, since Pr_{(x,y)←λ}[f(x,y) = {z}] = β and the distributional error of Π under λ is bounded by ε for f.

3. Pr_{m←M}[Pr_{(x,y)←(XY)_m}[(x,y,O) ∉ g | M = m] ≥ (ε + δ)/(β − 2ε)] ≤ β − 2ε, since the distributional error of Π under λ is bounded by ε + δ for g.
Using all of the above we obtain a message transcript m such that Pr[M = m] > 2^{-c}, the output on m is z, and

Pr_{(x,y)←(XY | M = m)}[(x,y,O) ∉ g] ≤ (ε + δ)/(β − 2ε) < (1 + ε′)δ/β.

This, and the fact that the support of (XY | M = m) is a rectangle, implies that rec^{z,λ}_{(1+ε′)δ/β}(g) < c, contradicting the definition of c. Hence it must be that D^λ_ε(f) ≥ c − log(4/ε), which using Fact 2.8 shows the desired.

Proof of Claim 3.12: The following calculations are helpful for achieving Eq. (15).

δ_1 t > S(X′Y′ ‖ XY) ≥ Σ_{i ∉ C} S(X′_i Y′_i ‖ X_i Y_i),    (17)

where the first inequality follows from the assumption that Pr[T^{(r)} = 1] > 2^{-δ_1 t}, and the last inequality follows from Fact 2.2. The following calculations are helpful for Eq. (16).

δ_1 t > S(X′Y′D′U′ ‖ XYDU)
≥ E_{(d,u,x_C,y_C)←D′U′X′_C Y′_C} [ S((X′Y′)_{d,u,x_C,y_C} ‖ (XY)_{d,u,x_C,y_C}) ]    (18)
≥ Σ_{i ∉ C} E_{(d,u,x_{C∪[i-1]},y_{C∪[i-1]})←D′U′X′_{C∪[i-1]}Y′_{C∪[i-1]}} [ S((X′_i Y′_i)_{d,u,x_{C∪[i-1]},y_{C∪[i-1]}} ‖ (X_i Y_i)_{d,u,x_{C∪[i-1]},y_{C∪[i-1]}}) ]    (19)
= Σ_{i ∉ C} E_{(d_i,u_i,r_i)←D′_i U′_i R_i} [ S((X′_i Y′_i)_{d_i,u_i,r_i} ‖ (X_i Y_i)_{d_i,u_i,r_i}) ]    (20)
= (1/2) Σ_{i ∉ C} E_{(r_i,x_i)←R_i X′_i} [ S((Y′_i)_{r_i,x_i} ‖ (Y_i)_{x_i}) ] + (1/2) Σ_{i ∉ C} E_{(r_i,y_i)←R_i Y′_i} [ S((X′_i)_{r_i,y_i} ‖ (X_i)_{y_i}) ].    (21)

Above, Eq. (18) and Eq. (19) follow from Fact 2.2; Eq. (20) is from the definition of R_i. Eq. (21) follows since D_i is independent of R_i, and with probability half D_i is 0, in which case U_i = X_i, and with probability half D_i is 1, in which case U_i = Y_i. By Fact 2.4,

Σ_{i ∉ C} ( I(X′_i ; R_i | Y′_i) + I(Y′_i ; R_i | X′_i) ) ≤ 2 δ_1 t.    (22)

We also need the following calculations, which exhibit that the information carried by the messages about the sender's input is small.

δ_1 c t ≥ |M| ≥ I(X′Y′ ; M′ | D′U′X′_C Y′_C)
= Σ_{i ∉ C} I(X′_i Y′_i ; M′ | D′U′X′_{C∪[i-1]} Y′_{C∪[i-1]})
= Σ_{i ∉ C} I(X′_i Y′_i ; M′ | D′_i U′_i R_i)
= (1/2) Σ_{i ∉ C} ( I(X′_i ; M′ | R_i Y′_i) + I(Y′_i ; M′ | R_i X′_i) ).    (23)

Above, the first equality follows from the chain rule for mutual information, the second equality follows from the definition of R_i, and the third equality follows since with probability half D_i is 0, in which case U_i = X_i, and with probability half D_i is 1, in which case U_i = Y_i.

Combining Eqs. (17), (22) and (23), and making standard use of Markov's inequality, we can get a coordinate j ∉ C such that

S(X′_j Y′_j ‖ X_j Y_j) ≤ 8 δ_1,
I(X′_j ; R_j | Y′_j) + I(Y′_j ; R_j | X′_j) ≤ 16 δ_1,
I(X′_j ; M′ | R_j Y′_j) + I(Y′_j ; M′ | R_j X′_j) ≤ 16 δ_1 c.

The first inequality is exactly Eq. (15). Eq. (16) follows by adding the last two inequalities (using the chain rule for mutual information).
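The "standard use of Markov's inequality" in the last step can be made explicit: the three budgets δ_1 t, 2δ_1 t and δ_1 c t are spread over at least t/2 coordinates outside C, so the coordinates violating the thresholds 8δ_1, 16δ_1 and 16δ_1 c make up fractions below 1/4, 1/4 and 1/8 respectively, and some good j survives. A toy sketch with invented per-coordinate shares:

```python
import random

# Extracting a good coordinate via Markov's inequality (all numbers invented).
rng = random.Random(1)
t, delta1, c = 1000, 0.01, 50
n = t // 2                      # number of coordinates outside C (at least t/2)

def random_split(total, parts):
    """Nonnegative per-coordinate shares summing exactly to the given budget."""
    cuts = sorted(rng.uniform(0, total) for _ in range(parts - 1))
    return [hi - lo for lo, hi in zip([0.0] + cuts, cuts + [total])]

s = random_split(delta1 * t, n)       # shares of Eq. (17)'s budget delta1*t
a = random_split(2 * delta1 * t, n)   # shares of Eq. (22)'s budget 2*delta1*t
b = random_split(delta1 * c * t, n)   # shares of Eq. (23)'s budget delta1*c*t

bad = {i for i in range(n)
       if s[i] > 8 * delta1 or a[i] > 16 * delta1 or b[i] > 16 * delta1 * c}

# Markov: the three conditions rule out fewer than 1/4 + 1/4 + 1/8 of the
# coordinates, so a good coordinate j always survives, whatever the shares.
assert len(bad) < n
j = min(set(range(n)) - bad)
assert s[j] <= 8 * delta1 and a[j] <= 16 * delta1 and b[j] <= 16 * delta1 * c
```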


More information

Lecture 16 Oct 21, 2014

Lecture 16 Oct 21, 2014 CS 395T: Sublinear Algorithms Fall 24 Prof. Eric Price Lecture 6 Oct 2, 24 Scribe: Chi-Kit Lam Overview In this lecture we will talk about information and compression, which the Huffman coding can achieve

More information

On Quantum vs. Classical Communication Complexity

On Quantum vs. Classical Communication Complexity On Quantum vs. Classical Communication Complexity Dmitry Gavinsky Institute of Mathematics, Praha Czech Academy of Sciences Introduction Introduction The setting of communication complexity is one of the

More information

Communication is bounded by root of rank

Communication is bounded by root of rank Electronic Colloquium on Computational Complexity, Report No. 84 (2013) Communication is bounded by root of rank Shachar Lovett June 7, 2013 Abstract We prove that any total boolean function of rank r

More information

Tight Bounds for Distributed Streaming

Tight Bounds for Distributed Streaming Tight Bounds for Distributed Streaming (a.k.a., Distributed Functional Monitoring) David Woodruff IBM Research Almaden Qin Zhang MADALGO, Aarhus Univ. STOC 12 May 22, 2012 1-1 The distributed streaming

More information

Affine extractors over large fields with exponential error

Affine extractors over large fields with exponential error Affine extractors over large fields with exponential error Jean Bourgain Zeev Dvir Ethan Leeman Abstract We describe a construction of explicit affine extractors over large finite fields with exponentially

More information

1 Randomized Computation

1 Randomized Computation CS 6743 Lecture 17 1 Fall 2007 1 Randomized Computation Why is randomness useful? Imagine you have a stack of bank notes, with very few counterfeit ones. You want to choose a genuine bank note to pay at

More information

ROUNDS IN COMMUNICATION COMPLEXITY REVISITED

ROUNDS IN COMMUNICATION COMPLEXITY REVISITED ROUNDS IN COMMUNICATION COMPLEXITY REVISITED NOAM NISAN AND AVI WIDGERSON Abstract. The k-round two-party communication complexity was studied in the deterministic model by [14] and [4] and in the probabilistic

More information

Poly-logarithmic independence fools AC 0 circuits

Poly-logarithmic independence fools AC 0 circuits Poly-logarithmic independence fools AC 0 circuits Mark Braverman Microsoft Research New England January 30, 2009 Abstract We prove that poly-sized AC 0 circuits cannot distinguish a poly-logarithmically

More information

On the power of lower bound methods for one-way quantum communication complexity

On the power of lower bound methods for one-way quantum communication complexity On the power of lower bound methods for one-way quantum communication complexity Shengyu Zhang The Chinese University of Hong Kong syzhang@cse.cuhk.edu.hk Abstract. This paper studies the three known lower

More information

EXPONENTIAL SEPARATION OF QUANTUM AND CLASSICAL ONE-WAY COMMUNICATION COMPLEXITY

EXPONENTIAL SEPARATION OF QUANTUM AND CLASSICAL ONE-WAY COMMUNICATION COMPLEXITY EXPONENTIAL SEPARATION OF QUANTUM AND CLASSICAL ONE-WAY COMMUNICATION COMPLEXITY ZIV BAR-YOSSEF, T. S. JAYRAM, AND IORDANIS KERENIDIS Abstract. We give the first exponential separation between quantum

More information

Higher Lower Bounds for Near-Neighbor and Further Rich Problems

Higher Lower Bounds for Near-Neighbor and Further Rich Problems Higher Lower Bounds for Near-Neighbor and Further Rich Problems Mihai Pǎtraşcu mip@mit.edu Mikkel Thorup mthorup@research.att.com December 4, 2008 Abstract We convert cell-probe lower bounds for polynomial

More information

Lecture 21: Quantum communication complexity

Lecture 21: Quantum communication complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 21: Quantum communication complexity April 6, 2006 In this lecture we will discuss how quantum information can allow for a

More information

Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity

Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity Separating Deterministic from Nondeterministic NOF Multiparty Communication Complexity (Extended Abstract) Paul Beame 1,, Matei David 2,, Toniann Pitassi 2,, and Philipp Woelfel 2, 1 University of Washington

More information

Product theorems via semidefinite programming

Product theorems via semidefinite programming Product theorems via semidefinite programming Troy Lee Department of Computer Science Rutgers University Rajat Mittal Department of Computer Science Rutgers University Abstract The tendency of semidefinite

More information

Partitions and Covers

Partitions and Covers University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw

More information

1 Estimating Frequency Moments in Streams

1 Estimating Frequency Moments in Streams CS 598CSC: Algorithms for Big Data Lecture date: August 28, 2014 Instructor: Chandra Chekuri Scribe: Chandra Chekuri 1 Estimating Frequency Moments in Streams A significant fraction of streaming literature

More information

There are no zero-hard problems in multiparty communication complexity

There are no zero-hard problems in multiparty communication complexity There are no zero-hard problems in multiparty communication complexity László Babai and Denis Pankratov University of Chicago DRAFT: 2015-04-29 7:00pm Abstract We study the feasibility of generalizing

More information

PRGs for space-bounded computation: INW, Nisan

PRGs for space-bounded computation: INW, Nisan 0368-4283: Space-Bounded Computation 15/5/2018 Lecture 9 PRGs for space-bounded computation: INW, Nisan Amnon Ta-Shma and Dean Doron 1 PRGs Definition 1. Let C be a collection of functions C : Σ n {0,

More information

How to Compress Interactive Communication

How to Compress Interactive Communication How to Compress Interactive Communication Boaz Barak Mark Braverman Xi Chen Anup Rao March 1, 2013 Abstract We describe new ways to simulate 2-party communication protocols to get protocols with potentially

More information

Certifying Equality With Limited Interaction

Certifying Equality With Limited Interaction Certifying Equality With Limited Interaction Joshua Brody 1, Amit Chakrabarti 2, Ranganath Kondapally 2, David P. Woodruff 3, and Grigory Yaroslavtsev 4 1 Swarthmore College Swarthmore, PA, USA joshua.e.brody@gmail.com

More information

Structure of protocols for XOR functions

Structure of protocols for XOR functions Electronic Colloquium on Computational Complexity, Revision 1 of Report No. 44 (2016) Structure of protocols for XOR functions Hamed Hatami School of Computer Science McGill University, Montreal hatami@cs.mcgill.ca

More information

Strengths and Weaknesses of Quantum Fingerprinting

Strengths and Weaknesses of Quantum Fingerprinting Strengths and Weaknesses of Quantum Fingerprinting Dmitry Gavinsky University of Calgary Julia Kempe CNRS & LRI Univ. de Paris-Sud, Orsay Ronald de Wolf CWI, Amsterdam Abstract We study the power of quantum

More information

Lecture 16: Communication Complexity

Lecture 16: Communication Complexity CSE 531: Computational Complexity I Winter 2016 Lecture 16: Communication Complexity Mar 2, 2016 Lecturer: Paul Beame Scribe: Paul Beame 1 Communication Complexity In (two-party) communication complexity

More information

Cell-Probe Lower Bounds for the Partial Match Problem

Cell-Probe Lower Bounds for the Partial Match Problem Cell-Probe Lower Bounds for the Partial Match Problem T.S. Jayram Subhash Khot Ravi Kumar Yuval Rabani Abstract Given a database of n points in {0, 1} d, the partial match problem is: In response to a

More information

Multi-Party Quantum Communication Complexity with Routed Messages

Multi-Party Quantum Communication Complexity with Routed Messages Multi-Party Quantum Communication Complexity with Routed Messages Seiichiro Tani Masaki Nakanishi Shigeru Yamashita Abstract This paper describes a general quantum lower bounding technique for the communication

More information

Report on PIR with Low Storage Overhead

Report on PIR with Low Storage Overhead Report on PIR with Low Storage Overhead Ehsan Ebrahimi Targhi University of Tartu December 15, 2015 Abstract Private information retrieval (PIR) protocol, introduced in 1995 by Chor, Goldreich, Kushilevitz

More information

Breaking the Rectangle Bound Barrier against Formula Size Lower Bounds

Breaking the Rectangle Bound Barrier against Formula Size Lower Bounds Breaking the Rectangle Bound Barrier against Formula Size Lower Bounds Kenya Ueno The Young Researcher Development Center and Graduate School of Informatics, Kyoto University kenya@kuis.kyoto-u.ac.jp Abstract.

More information

Matrix Rank in Communication Complexity

Matrix Rank in Communication Complexity University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Mohan Yang Date: January 18, 2012 LECTURE 3 Matrix Rank in Communication Complexity This lecture

More information

How many rounds can Random Selection handle?

How many rounds can Random Selection handle? How many rounds can Random Selection handle? Shengyu Zhang Abstract The construction of zero-knowledge proofs can be greatly simplified if the protocol is only required be secure against the honest verifier.

More information

Query-to-Communication Lifting for BPP

Query-to-Communication Lifting for BPP 58th Annual IEEE Symposium on Foundations of Computer Science Query-to-Communication Lifting for BPP Mika Göös Computer Science Department Harvard University Cambridge, MA, USA Email: mika@seas.harvard.edu

More information

Lecture 22. m n c (k) i,j x i x j = c (k) k=1

Lecture 22. m n c (k) i,j x i x j = c (k) k=1 Notes on Complexity Theory Last updated: June, 2014 Jonathan Katz Lecture 22 1 N P PCP(poly, 1) We show here a probabilistically checkable proof for N P in which the verifier reads only a constant number

More information

CS Introduction to Complexity Theory. Lecture #11: Dec 8th, 2015

CS Introduction to Complexity Theory. Lecture #11: Dec 8th, 2015 CS 2401 - Introduction to Complexity Theory Lecture #11: Dec 8th, 2015 Lecturer: Toniann Pitassi Scribe Notes by: Xu Zhao 1 Communication Complexity Applications Communication Complexity (CC) has many

More information

Information Value of the Game

Information Value of the Game Information Value of the Game Mark Braverman Young Kun Ko Abstract We introduce a generalization of the standard framework for studying the difficulty of twoprover games. Specifically, we study the model

More information

Analyzing Linear Mergers

Analyzing Linear Mergers Analyzing Linear Mergers Zeev Dvir Ran Raz Abstract Mergers are functions that transform k (possibly dependent) random sources into a single random source, in a way that ensures that if one of the input

More information

Probabilistically Checkable Arguments

Probabilistically Checkable Arguments Probabilistically Checkable Arguments Yael Tauman Kalai Microsoft Research yael@microsoft.com Ran Raz Weizmann Institute of Science ran.raz@weizmann.ac.il Abstract We give a general reduction that converts

More information

Cell-Probe Proofs and Nondeterministic Cell-Probe Complexity

Cell-Probe Proofs and Nondeterministic Cell-Probe Complexity Cell-obe oofs and Nondeterministic Cell-obe Complexity Yitong Yin Department of Computer Science, Yale University yitong.yin@yale.edu. Abstract. We study the nondeterministic cell-probe complexity of static

More information

Towards a Reverse Newman s Theorem in Interactive Information Complexity

Towards a Reverse Newman s Theorem in Interactive Information Complexity Towards a Reverse Newman s Theorem in Interactive Information Complexity Joshua Brody 1, Harry Buhrman 2,3, Michal Koucký 4, Bruno Loff 2, Florian Speelman 2, and Nikolay Vereshchagin 5 1 Aarhus University

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

Differentially Private Multi-party Computation

Differentially Private Multi-party Computation ifferentially Private Multi-party Computation Peter Kairouz, Sewoong Oh, Pramod Viswanath epartment of Electrical and Computer Engineering University of Illinois at Urbana-Champaign {kairouz2, pramodv}@illinois.edu

More information

Combinatorial Auctions Do Need Modest Interaction

Combinatorial Auctions Do Need Modest Interaction Combinatorial Auctions Do Need Modest Interaction Sepehr Assadi University of Pennsylvania Motivation A fundamental question: How to determine efficient allocation of resources between individuals? Motivation

More information

Communication Complexity 16:198:671 2/15/2010. Lecture 6. P (x) log

Communication Complexity 16:198:671 2/15/2010. Lecture 6. P (x) log Communication Complexity 6:98:67 2/5/200 Lecture 6 Lecturer: Nikos Leonardos Scribe: Troy Lee Information theory lower bounds. Entropy basics Let Ω be a finite set and P a probability distribution on Ω.

More information

On-line Scheduling to Minimize Max Flow Time: An Optimal Preemptive Algorithm

On-line Scheduling to Minimize Max Flow Time: An Optimal Preemptive Algorithm On-line Scheduling to Minimize Max Flow Time: An Optimal Preemptive Algorithm Christoph Ambühl and Monaldo Mastrolilli IDSIA Galleria 2, CH-6928 Manno, Switzerland October 22, 2004 Abstract We investigate

More information

arxiv: v5 [cs.cc] 10 Jul 2014

arxiv: v5 [cs.cc] 10 Jul 2014 The space complexity of recognizing well-parenthesized expressions in the streaming model: the Index function revisited Rahul Jain Ashwin Nayak arxiv:1004.3165v5 [cs.cc] 10 Jul 014 March 6, 014 Abstract

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

A direct product theorem for discrepancy

A direct product theorem for discrepancy A direct product theorem for discrepancy Troy Lee Department of Computer Science Rutgers University Robert Špalek Google, Inc. Adi Shraibman Department of Mathematics Weizmann Institute of Science Abstract

More information

Majority is incompressible by AC 0 [p] circuits

Majority is incompressible by AC 0 [p] circuits Majority is incompressible by AC 0 [p] circuits Igor Carboni Oliveira Columbia University Joint work with Rahul Santhanam (Univ. Edinburgh) 1 Part 1 Background, Examples, and Motivation 2 Basic Definitions

More information

On Randomized One-Round Communication Complexity. Ilan Kremer Noam Nisan Dana Ron. Hebrew University, Jerusalem 91904, Israel

On Randomized One-Round Communication Complexity. Ilan Kremer Noam Nisan Dana Ron. Hebrew University, Jerusalem 91904, Israel On Randomized One-Round Communication Complexity Ilan Kremer Noam Nisan Dana Ron Institute of Computer Science Hebrew University, Jerusalem 91904, Israel Email: fpilan,noam,danarg@cs.huji.ac.il November

More information

CMSC 858F: Algorithmic Lower Bounds: Fun with Hardness Proofs Fall 2014 Introduction to Streaming Algorithms

CMSC 858F: Algorithmic Lower Bounds: Fun with Hardness Proofs Fall 2014 Introduction to Streaming Algorithms CMSC 858F: Algorithmic Lower Bounds: Fun with Hardness Proofs Fall 2014 Introduction to Streaming Algorithms Instructor: Mohammad T. Hajiaghayi Scribe: Huijing Gong November 11, 2014 1 Overview In the

More information

Extractors and the Leftover Hash Lemma

Extractors and the Leftover Hash Lemma 6.889 New Developments in Cryptography March 8, 2011 Extractors and the Leftover Hash Lemma Instructors: Shafi Goldwasser, Yael Kalai, Leo Reyzin, Boaz Barak, and Salil Vadhan Lecturer: Leo Reyzin Scribe:

More information

Last time, we described a pseudorandom generator that stretched its truly random input by one. If f is ( 1 2

Last time, we described a pseudorandom generator that stretched its truly random input by one. If f is ( 1 2 CMPT 881: Pseudorandomness Prof. Valentine Kabanets Lecture 20: N W Pseudorandom Generator November 25, 2004 Scribe: Ladan A. Mahabadi 1 Introduction In this last lecture of the course, we ll discuss the

More information

Rank minimization via the γ 2 norm

Rank minimization via the γ 2 norm Rank minimization via the γ 2 norm Troy Lee Columbia University Adi Shraibman Weizmann Institute Rank Minimization Problem Consider the following problem min X rank(x) A i, X b i for i = 1,..., k Arises

More information

Nonnegative Rank vs. Binary Rank

Nonnegative Rank vs. Binary Rank CHICAGO JOURNAL OF THEORETICAL COMPUTER SCIENCE 06, Article, pages 3 http://cjtcs.cs.uchicago.edu/ Nonnegative Rank vs. Binary Rank Thomas Watson Received February 3, 04; Revised August 5, 05, and on February

More information

EXPONENTIAL SEPARATION FOR ONE-WAY QUANTUM COMMUNICATION COMPLEXITY, WITH APPLICATIONS TO CRYPTOGRAPHY

EXPONENTIAL SEPARATION FOR ONE-WAY QUANTUM COMMUNICATION COMPLEXITY, WITH APPLICATIONS TO CRYPTOGRAPHY EXPONENTIAL SEPARATION FOR ONE-WAY QUANTUM COMMUNICATION COMPLEXITY, WITH APPLICATIONS TO CRYPTOGRAPHY DMITRY GAVINSKY, JULIA KEMPE, IORDANIS KERENIDIS, RAN RAZ, AND RONALD DE WOLF Abstract. We give an

More information

Randomized Communication Complexity for Linear Algebra Problems over Finite Fields

Randomized Communication Complexity for Linear Algebra Problems over Finite Fields Randomized Communication Complexity for Linear Algebra Problems over Finite Fields Xiaoming Sun 1 and Chengu Wang 2 1 Institute of Computing Technology, Chinese Academy of Sciences Beijing, China sunxiaoming@ict.ac.cn

More information