Upper and Lower Bounds on the Power of Advice


Arkadev Chattopadhyay    Jeff Edmonds    Faith Ellen    Toniann Pitassi

March 1, 2016

Abstract

Proving superpolylogarithmic lower bounds for dynamic data structures has remained an open problem despite years of research. Pǎtraşcu proposed an exciting approach for breaking this barrier via a two-player communication model in which one player gets private advice at the beginning of the protocol. He gave reductions from the problem of solving an asymmetric version of set-disjointness in his model to a diverse collection of natural dynamic data structure problems in the cell probe model. He also conjectured that, for any hard problem in the standard two-party communication model, the asymmetric version of the problem is hard in his model, provided not too much advice is given.

In this paper, we prove several surprising results about his model. We show that there exist Boolean functions requiring linear randomized communication complexity in the two-party model, for which the asymmetric versions in his model have deterministic protocols with exponentially smaller complexity. For set-disjointness, which also requires linear randomized communication complexity in the two-party model, we give a deterministic protocol for the asymmetric version in his model with a quadratic improvement in complexity. These results demonstrate that Pǎtraşcu's conjecture, as stated, is false. In addition, we show that the randomized and deterministic communication complexities of problems in his model differ by no more than a logarithmic multiplicative factor. We also prove lower bounds in some restricted versions of this model for natural functions such as set-disjointness and inner product. All of our upper bounds conform to these restrictions. Moreover, a special case of one of these lower bounds implies a new proof of a strong lower bound on the tradeoff between the query time and the amortized update time of dynamic data structures with non-adaptive query algorithms.
1 Introduction

In the cell probe model [25, 9, 34], the complexity of an algorithm is measured by the number of (fixed size) memory cells it accesses. Lower bounds in the cell probe model have been obtained for numerous static data structure problems, yielding lower bounds for these problems in the unit cost random access machine. Obtaining lower bounds for dynamic data structures in the cell probe model has been a challenge. In 1989, Fredman and Saks [10] introduced the chronogram method and used it to prove an Ω(log n / log log n) lower bound on the worst case time per operation for the partial sums problem.

Tata Institute of Fundamental Research, arkadev.c@tifr.res.in
York University, jeff@cs.yorku.ca
University of Toronto, faith@cs.toronto.edu
University of Toronto, toni@cs.toronto.edu

In 1998, Alstrup, Husfeldt and Rauhe [1] got the same bound for the dynamic marked ancestor problem. Reductions from these problems to a variety of other dynamic data structure problems have also been obtained [1, 12, 13, 14]. In 2004, Pǎtraşcu and Demaine [30] introduced a beautiful information theoretic technique to prove Ω(log n) lower bounds for the partial sums problem and dynamic connectivity in undirected graphs. Later, Pǎtraşcu [29] used a reduction from set-disjointness in an asymmetric two-party communication model to prove an Ω(log n/(log log n)^2) lower bound for the dynamic marked ancestor problem. More recently, Larsen proved the first Ω(log^2 n) lower bounds, for the dynamic range counting problem [21] and dynamic polynomial evaluation (over a large field) [22]. Despite these advances, it remains a longstanding open problem to prove polynomial (or even super-polylogarithmic) lower bounds for any dynamic data structure problem.

1.1 Pǎtraşcu's Conjecture

Pǎtraşcu [28] listed a diverse collection of natural dynamic data structure problems that are conjectured to require superpolylogarithmic time per operation, including determining the existence of paths in dynamic directed graphs and finding the length of shortest paths in dynamic undirected graphs. He proposed an exciting new approach for obtaining polynomial lower bounds for all of these problems using a new communication model that we call the A→B (B↔C) model. It augments the standard two-party communication model between two players, Bob and Charlie, by providing advice (given by Alice) to one of the players (Bob). For any Boolean function f : X × Y → {0, 1}, Pǎtraşcu defined an asymmetric communication problem SEL_f^k : {1,..., k} × X^k × Y → {0, 1}, where SEL_f^k(i, x_1,..., x_k, y) = f(x_i, y). In the A→B (B↔C) model, there are two players, Bob and Charlie, who, with advice from Alice, compute SEL_f^k(i, x_1,..., x_k, y) as follows: Alice receives x_1,..., x_k and y, Bob receives y and i, and Charlie receives x_1,..., x_k and i.
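In code, the problem being computed is a one-liner; the sketch below (function name and indexing convention are ours, purely illustrative) also records who holds which inputs in the model:

```python
def sel(f, i, xs, y):
    """SEL_f^k(i, x_1, ..., x_k, y) = f(x_i, y), with i 1-based as in the text.

    In the A->B (B<->C) model, Alice holds (xs, y), Bob holds (y, i),
    and Charlie holds (xs, i); no single player sees all three pieces."""
    return f(xs[i - 1], y)
```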
Alice first sends some advice privately to Bob and then remains silent. Thereafter, Bob and Charlie can communicate back and forth, alternating arbitrarily, until they have computed the output of the function. The last bit that is sent is the output of the protocol, which is supposed to be the value of the function. This can also be viewed as a restricted version of a three-player number-on-the-forehead problem, in which Alice has i on her forehead, Bob has x_1,..., x_k on his forehead, and Charlie has y on his forehead.

Pǎtraşcu presented simple reductions from the problem of computing SEL_DISJ^k in the A→B (B↔C) model, where DISJ denotes the set-disjointness problem, to many dynamic problems in the cell probe model. These reductions prove that, if SEL_DISJ^k cannot be solved by a protocol in which Alice gives o(ntw) bits of advice and Bob and Charlie communicate a total of o(tw) bits, then the worst case time per operation of the dynamic problems is Ω(t) in the cell probe model with w bit words. He conjectured that there exist positive constants δ < 1 and γ > 1+δ such that SEL_DISJ^k cannot be solved for k = Θ(n^γ) if Alice gives o(n^{1+δ}) bits of advice and Bob and Charlie communicate a total of o(n^δ) bits. If his conjecture is true, then all of the dynamic problems presented in [28] require n^{Ω(1)} time per operation in the cell probe model with O(log n) bit words. More generally, he stated the following conjecture, which does not specify whether the communication protocols involved are deterministic or randomized.

Conjecture 1 (Pǎtraşcu) Let f : {0, 1}^n × {0, 1}^n → {0, 1} be any function. Consider a protocol

π for computing SEL_f^k in the A→B (B↔C) model. If Alice sends o(k) bits, then the cost of communication between Bob and Charlie is Ω(c), where c is the 2-party communication complexity of f.

The intuition is that, if Alice sends o(k) bits of advice, then, for many of the instances (x_1, y),..., (x_k, y), she is providing very little information. This suggests that, in the worst case, solving one of these instances should be essentially as hard as computing f in the standard two-party model. Furthermore, the generality of this conjecture, namely that it makes no assumptions about the structure of f, invites the possibility of an information theoretic round elimination argument.

1.2 Refutations

To our surprise, this intuition is not correct. While it is true that Alice cannot provide much information about the x_i's, it turns out that she can provide a succinct message that will help Charlie learn y. This is the main intuition behind all of our upper bounds. For example, it is easy to disprove Pǎtraşcu's conjecture for deterministic protocols by considering the equality function, EQ, where EQ(x, y) = 1 if and only if x = y. It has a very simple deterministic protocol in which Alice sends Bob the minimum j ∈ {1,..., k} such that y = x_j. If there is no such j, she sends him 0. Bob forwards this message to Charlie, who can determine that the output should be 1 if and only if he receives j ≠ 0 and x_j = x_i. Here, Alice teaches y to Charlie (via Bob) using a very short message. We exploit this intuition to prove a much stronger result, using notions from learning theory and recent results about sign matrices. Specifically, we show that, even if a Boolean function f has large randomized complexity in the two-party model, SEL_f^k can have small deterministic complexity in the A→B (B↔C) model.

Theorem 2 There exists a Boolean function f with two-party randomized communication complexity Ω(n) such that SEL_f^k has a deterministic protocol in the A→B (B↔C) model in which the total number of bits communicated is O(log^2 k).
Note that when k ≤ n^{O(1)}, the total amount of communication is O(log^2 n). Interestingly, we prove the upper bound using the harder side of Yao's min-max principle. Although it is standard to use the min-max principle for proving lower bounds, we are not aware of its application to prove upper bounds, especially for communication protocols.

A natural hope would be that Pǎtraşcu's conjecture is still true for certain specific Boolean functions with Ω(n) two-party randomized complexity, such as set-disjointness. Our next result shows that this is not the case for set-disjointness. We directly design a protocol for set-disjointness, in which Alice reveals a carefully chosen subset of y's bits so that, on the remaining bits, determining DISJ(x_i, y) is easy, for each i ∈ {1,..., k}, because either x_i has few 1's or a large fraction of the positions of 1's in x_i are also positions of 1's in y.

Theorem 3 There is a deterministic protocol for SEL_DISJ^k in the A→B (B↔C) model, in which Alice sends at most √n log k bits, Bob sends at most 1 + √n log k bits, and Charlie sends at most √n log n bits.

Note that when k ≤ n^{O(1)}, the total amount of communication is O(√n log n). It is worth remarking that Theorem 3 does not eliminate the possibility of proving strong lower bounds for dynamic data structure problems via the A→B (B↔C) model. To obtain polynomial lower bounds for the dynamic problems listed above, it suffices to prove that, for every protocol in which Alice sends o(k) bits of advice, Bob and Charlie must communicate Ω(n^δ) bits to compute SEL_DISJ^k, for some constant 0 < δ ≤ 1/2. Theorem 2 and Theorem 3 show that such a lower bound argument has to crucially use the structure of the set-disjointness function.

We also show that the randomized and deterministic communication complexities of computing SEL_f^k in the A→B (B↔C) model do not differ by much. Furthermore, for any Boolean function f with two-party randomized communication complexity R, we show that SEL_f^k has deterministic communication complexity O((R + log n + log k) log k) in the A→B (B↔C) model. This immediately shows that problems which, in the 2-party model, have efficient randomized protocols, but are hard deterministically, give rise to easy asymmetric problems in the A→B (B↔C) model.

1.3 Restricted Lower Bounds

Finally, we provide lower bounds in the A→B (B↔C) model for some restricted classes of protocols, which include those protocols used for our upper bounds in Theorem 2 and Theorem 3. In those protocols, Alice sends far fewer bits of advice than she is allowed to. Moreover, after Alice's message is sent, Bob and Charlie engage in a very limited form of interaction. Our lower bounds show that each of these restrictions, by itself, does not allow improvements in our upper bounds. For analyzing protocols where Alice's advice is less than √n bits, we convert the problem into a direct product problem with √n instances. Then we obtain our lower bounds using recent strong direct product theorems. For analyzing restricted interactions between Bob and Charlie, we present an information theoretic argument.
We show that, if there is a limited interaction protocol in which Bob and Charlie communicate few bits, then x_1,..., x_k can be compressed to substantially fewer than kn bits, which is impossible, in general.

While limited interaction protocols are a natural restriction of general ones, there is an additional motivation to study them. In particular, they are very related to restricted dynamic data structure algorithms. Consider Pǎtraşcu's three phase dynamic problem [28]. In the first phase, data gets inserted and the algorithm pre-processes this efficiently. The second phase consists of a series of updates which are each processed in amortized time t_u. The third phase gets a query and the algorithm outputs the answer in time t_q. Suppose that the third phase is non-adaptive in the sense that, for each query, there is a fixed set of cells that are probed, which does not depend on the results of the queries. All other phases have no restrictions. Can one prove lower bounds for such algorithms? After the preliminary version of our work [7] appeared, Brody and Larsen [6] proved strong lower bounds for such algorithms, which are independent of the pre-processing time allowed in the first phase of the algorithm. We show that these lower bounds follow from our lower bounds on restricted protocols. More precisely, we observe that every non-adaptive query algorithm yields a very restrictive, non-interactive protocol in the A→B (B↔C) model: in these protocols, Bob sends no messages at all. Our lower bounds for less restricted protocols then imply their non-adaptive lower bounds.

1.4 Organization

In Section 2, we introduce notation and define necessary concepts from two-party communication complexity. In Section 3.1, we prove that randomization does not help for solving SEL_f^k in the A→B (B↔C) model. Then, in Section 3.2, we present a function f with Ω(n) randomized two-party complexity such that SEL_f^k can be computed deterministically with only (log k)^{O(1)} bits of communication. In Section 3.3, we prove our upper bound for set-disjointness. In Section 4, we prove lower bounds in restricted settings and, in Section 5, we apply our lower bounds to prove strong polynomial lower bounds for nonadaptive dynamic data structure problems. We conclude in Section 6 with some open problems.

2 Preliminaries

Communication complexity was first studied for the two-party model in which the input is partitioned between two players, who compute a Boolean function of their inputs [36]. At the end of the computation, both players know the value of the function. For any function f : X × Y → Z, we use D(f) to denote the deterministic complexity of f, which is the minimum, over all deterministic protocols π that compute f correctly on all inputs, of the maximum number of bits communicated during any execution of the protocol. If µ : X × Y → [0, 1] is a probability distribution and 0 < ɛ < 1, we use D^ɛ_µ(f) to denote the ɛ-error distributional complexity of f for distribution µ, which is the minimum, over all deterministic protocols that compute a function g differing from f on a set of inputs with probability at most ɛ, of the maximum number of bits communicated during any execution of the protocol. Note that D^ɛ_µ(f) ≤ D(f) ≤ n + 1 for any f : X × Y → Z, µ : X × Y → [0, 1], and 0 < ɛ < 1, since one player can send its input to the other player, who responds with the answer. The computation of a randomized two-party protocol can be expressed as a function of the players' inputs, x and y, and a public (shared) sequence r of random bits that is provided to both players.
A protocol π for f has error probability ɛ if max{ Pr[π(x, y, r) does not compute f(x, y)] : x ∈ X, y ∈ Y } = ɛ, where the probability is taken over all choices of r. The ɛ-randomized complexity of f, which we denote by R_ɛ(f), is the minimum, over all randomized protocols for f with error probability at most ɛ, of the maximum number of bits communicated during any execution of the protocol. Yao [35] gave the following relationship between randomized and distributional communication complexities. It is often called Yao's min-max principle.

Theorem 4 For any function f and any 0 < ɛ < 1, R_ɛ(f) = max_µ { D^ɛ_µ(f) }.

In fact, the proof of Theorem 4 [20] applies to any reasonable non-uniform model of computation with public coins. There are simple Boolean functions that have very high deterministic complexity in the 2-party model, but have efficient 2-party randomized protocols. For example, consider the equality function, EQ : {0, 1}^n × {0, 1}^n → {0, 1}, where EQ(x, y) = 1 if x = y and EQ(x, y) = 0 if x ≠ y.
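The claim that EQ is easy with public randomness rests on the classic fingerprinting idea: the players compare parities of their bits on shared random subsets of positions, so each round catches a difference with probability 1/2. A minimal sketch, with a seeded generator standing in for the public random string (all names and parameters are ours, purely illustrative):

```python
import random

def random_subset_parity_eq(x, y, reps=16, seed=1):
    """Public-coin protocol for EQ on 0/1 lists: for each shared random
    subset r of positions, exchange the parities of the bits on r (2 bits
    per round). If x == y it never errs; if x != y, each round detects the
    difference with probability 1/2, so the error is 2**-reps."""
    rng = random.Random(seed)   # stands in for the shared random string
    n = len(x)
    for _ in range(reps):
        r = [rng.randint(0, 1) for _ in range(n)]
        px = sum(a & b for a, b in zip(x, r)) % 2
        py = sum(a & b for a, b in zip(y, r)) % 2
        if px != py:
            return 0
    return 1
```

With constant reps this uses O(1) communication, independent of n, which is the gap with D(EQ) ∈ Θ(n) discussed next.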

D(EQ) ∈ Θ(n), but R_ɛ(EQ) ∈ O(1) for any constant 0 < ɛ < 1/2. Another example is the greater-than function, GT : {0, 1}^n × {0, 1}^n → {0, 1}, defined by GT(x, y) = 1 if x > y and GT(x, y) = 0 if x ≤ y, where x and y are interpreted as n-bit integers. D(GT) ∈ Θ(n), but R_ɛ(GT) ∈ O(log n) for any constant 0 < ɛ < 1/2 [27]. Thus, in the context of two-player games, randomization can be substantially more powerful than determinism.

The set-disjointness problem on n-bit strings, denoted by DISJ : {0, 1}^n × {0, 1}^n → {0, 1}, is defined by

DISJ(x, y) = ∧_{i=1}^{n} ¬(x[i] ∧ y[i]).

In other words, if x and y are viewed as the characteristic vectors of two subsets of {1,..., n}, then DISJ(x, y) = 1 if and only if the two subsets are disjoint, i.e., for all i ∈ {1,..., n}, either x[i] = 0 or y[i] = 0.

Babai, Frankl and Simon [2] were the first to prove a randomized lower bound of Ω(√n) for set-disjointness. The lower bound was improved to Θ(n) by the celebrated result of Kalyanasundaram and Schnitger [15] and later simplified by Razborov [31]. Subsequently, Bar-Yossef et al. [3] gave an elegant information theoretic proof of the bound, which has been very influential in the current development of information complexity.

Newman [26] proved that any randomized two-party communication protocol (with public randomness) in which each player has an input in {0, 1}^n can be simulated by a two-party protocol that uses O(log n) random bits. Implicit in Newman's proof is a more general result, which holds for any nonuniform model of computation, such as communication protocols, Boolean circuits, decision trees and non-uniform Turing machines:

Theorem 5 If there is a randomized computation for a function with domain U and error probability at most ɛ < 1/2, then there is a randomized computation for that function with the same cost and error probability O(ɛ) that uses only O(log log |U|) random bits.

Proof: Let F(u, r) denote the randomized computation of a function f with input u from domain U and the (infinite) public sequence r of random bits.
Suppose F has error probability at most ɛ, i.e. for all inputs u ∈ U, the probability that F(u, r) computes f(u) is at least 1 − ɛ, where the probability is taken over the choices, r, for the public binary sequence. We show that there exist δ ∈ O(ɛ) and t ∈ O(log |U|) binary sequences r_1,..., r_t such that, for each input u, if we choose a sequence r at random from r_1,..., r_t, then F(u, r) computes f(u) with probability at least 1 − δ.

Suppose r_1,..., r_t are chosen independently at random from the space of binary sequences. For any input u ∈ U and any i ∈ {1,..., t}, Pr[F(u, r_i) computes f(u)] ≥ 1 − ɛ, so

E[ #{ i ∈ {1,..., t} : F(u, r_i) computes f(u) } ] ≥ (1 − ɛ)t.

By the Chernoff bound [18], for all 0 < δ′ < 1,

Pr[ #{ i ∈ {1,..., t} : F(u, r_i) computes f(u) } < (1 − δ′)(1 − ɛ)t ] < e^{−(1−ɛ)t(δ′)^2/2}.

Let δ′ = 1/2, let δ = (1 + ɛ)/2 ∈ O(ɛ), and let t = 8(1 − ɛ)^{−1} ln |U| ∈ O(log |U|), since ɛ < 1/2. Then (1 − δ′)(1 − ɛ) = 1 − δ and (1 − ɛ)t(δ′)^2/2 = ln |U|, so

Pr[ #{ i ∈ {1,..., t} : F(u, r_i) computes f(u) } < (1 − δ)t ] < 1/|U|.

The union bound implies that

Pr[ there exists u ∈ U such that #{ i ∈ {1,..., t} : F(u, r_i) computes f(u) } < (1 − δ)t ] < 1.

Hence, there exist choices of r_1,..., r_t such that, for every input u ∈ U, #{ i ∈ {1,..., t} : F(u, r_i) computes f(u) } ≥ (1 − δ)t. On any input u ∈ U, the new computation F′ chooses i ∈ {1,..., t} uniformly at random and performs the computation F(u, r_i). Then Pr[F′(u, i) computes f(u)] ≥ 1 − δ, so F′ computes f with error probability δ ∈ O(ɛ).

3 Upper Bounds in the A→B (B↔C) model

For any protocol π in the A→B (B↔C) model, we define CC_{A→B}(π) to be the worst case number of bits sent by Alice and CC_{B↔C}(π) to be the worst case number of bits communicated between Bob and Charlie. Similarly, CC_{B→C}(π) and CC_{C→B}(π) denote the worst case numbers of bits sent by Bob to Charlie and by Charlie to Bob, respectively.

3.1 Alice can Derandomize

We begin by showing that every randomized protocol for SEL_f^k in the A→B (B↔C) model can be efficiently derandomized.

Theorem 6 Consider any Boolean function f : {0, 1}^n × {0, 1}^n → {0, 1}. Let π be a randomized protocol for SEL_f^k in the A→B (B↔C) model with CC_{A→B}(π) = m, CC_{B→C}(π) = l, CC_{C→B}(π) = q, and error probability at most 1/2 − ɛ, for some constant 0 < ɛ < 1/2. Then, there exists a deterministic protocol π′ for SEL_f^k such that CC_{A→B}(π′) ∈ O((m + log k + log n)(log k)/ɛ^2), CC_{B→C}(π′) ∈ O((l + log k + log n)(log k)/ɛ^2), and CC_{C→B}(π′) ∈ O(q(log k)/ɛ^2).

Proof: By Theorem 5, we may assume that π uses only h ∈ O(log k + log n) random bits. Let t ≥ (1 + 2ɛ)(ln k)/ɛ^2 be an odd integer. Choose t strings r_1,..., r_t independently at random from {0, 1}^h. Let x = (x_1,..., x_k) ∈ ({0, 1}^n)^k, y ∈ {0, 1}^n, and i ∈ {1,..., k}. Then, for each j ∈ {1,..., t}, Pr[π(x, y, i, r_j) outputs f(x_i, y)] ≥ 1/2 + ɛ. Let δ = 1 − 1/(2ɛ + 1). Then (1 − δ)(1/2 + ɛ) = 1/2, so by the Chernoff bound,

Pr[ #{ j ∈ {1,..., t} : π(x, y, i, r_j) outputs f(x_i, y) } < t/2 ] < e^{−(1/2+ɛ)tδ^2/2} ≤ e^{−ln k} = 1/k.
Hence, there is a nonzero probability that, for all i ∈ {1,..., k}, π(x, y, i, r_j) outputs f(x_i, y) for the majority of j ∈ {1,..., t}. Thus, given x and y, Alice can find a sequence of t ∈ Θ((log k)/ɛ^2) strings r_1,..., r_t ∈ {0, 1}^h for which this is true. She sends these strings to Bob, together with the messages a_1,..., a_t she sends in π(x, y, i, r_j) for all j ∈ {1,..., t}. Bob forwards the strings r_1,..., r_t to Charlie. Then, for all j ∈ {1,..., t}, Bob and Charlie run π(x, y, i, r_j) with Alice's message a_j and take the output that is produced most often.
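The existence argument in Theorems 5 and 6 — sample random seeds, then fix a set that works for every input — can be played out on a toy example. Everything below (the toy computation F, its bad seeds, and the parameters) is ours, purely for illustration:

```python
import itertools
import random

def fix_seeds(F, f, domain, seed_bits, t, trials=2000):
    """Search for t seeds r_1..r_t such that on EVERY input u a majority of
    F(u, r_j) agree with f(u). The union-bound step of the proofs above
    guarantees such seeds exist, so random sampling finds them quickly."""
    all_seeds = list(itertools.product((0, 1), repeat=seed_bits))
    rng = random.Random(0)
    for _ in range(trials):
        seeds = [rng.choice(all_seeds) for _ in range(t)]
        if all(sum(F(u, r) == f(u) for r in seeds) > t / 2 for u in domain):
            return seeds
    return None

def f(u):
    return u[0] ^ u[1]        # the function being computed (toy choice)

def F(u, r):
    bad = (u[0], u[1], 1)     # the single seed on which F errs: error 1/8
    return 1 - f(u) if r == bad else f(u)
```

With `seed_bits=3` and `t=3`, a valid triple of seeds is found almost immediately: the per-input failure probability is small enough that some choice works for all inputs simultaneously.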

Any two-party protocol for computing a Boolean function f : {0, 1}^n × {0, 1}^n → {0, 1} is a protocol for computing SEL_f^k in the A→B (B↔C) model in which Alice sends nothing. Thus, we immediately get the following corollary:

Corollary 7 Let f : {0, 1}^n × {0, 1}^n → {0, 1} be any Boolean function such that R_ɛ(f) ∈ (log n)^{O(1)}, for some constant 0 < ɛ < 1/2. If k ∈ n^{O(1)}, then there exists a deterministic protocol for SEL_f^k in the A→B (B↔C) model where Alice sends O(log^2 n) bits to Bob and Bob and Charlie communicate (log n)^{O(1)} bits.

It follows from Corollary 7 that both SEL_EQ^k and SEL_GT^k have efficient deterministic protocols in the A→B (B↔C) model. These functions refute Pǎtraşcu's conjecture for deterministic protocols.

3.2 Alice as a Teacher

Next, we show that there exists a Boolean function with very large randomized complexity in the two-party model, for which SEL_f^k has efficient deterministic protocols in the A→B (B↔C) model. We need some definitions from computational learning theory. For any set S of Boolean functions over {0, 1}^n, we associate a Boolean matrix M_S, whose rows are indexed by {0, 1}^n and whose columns are indexed by S, such that M_S[x, f] = f(x). A randomized algorithm L is said to learn S with confidence δ and accuracy ɛ from m random examples drawn from a distribution µ on {0, 1}^n if, for each f ∈ S and for x_1,..., x_m ∈ {0, 1}^n chosen independently from the distribution µ, given (x_1, f(x_1)),..., (x_m, f(x_m)), L outputs a Boolean hypothesis function h : {0, 1}^n → {0, 1} that, with probability at least 1 − δ, is ɛ-close to f, i.e. if x is chosen from µ, then Pr[h(x) ≠ f(x)] ≤ ɛ.

The Vapnik-Chervonenkis (VC) dimension, vc(M), of a matrix M is the largest number d such that M has a d × 2^d sub-matrix all of whose columns are distinct, i.e., each vector in {0, 1}^d appears exactly once as a column in the sub-matrix. The following result, known as the VC Theorem [16], shows the relevance of VC dimension to learning.
Theorem 8 Let S be a set of Boolean functions over {0, 1}^n and let µ be an arbitrary distribution on {0, 1}^n. Then there exists a randomized algorithm L that learns S with confidence δ and accuracy ɛ from m random examples drawn from µ, where

m ∈ O( (1/ɛ) log(1/δ) + (vc(M_S)/ɛ) log(1/ɛ) ).

For any Boolean function f : {0, 1}^n × {0, 1}^n → {0, 1}, let M_f denote the matrix, whose rows and columns are indexed by {0, 1}^n, such that M_f[x, y] = f(x, y). If, for each y ∈ {0, 1}^n, we define the Boolean function f_y : {0, 1}^n → {0, 1} such that f_y(x) = f(x, y) and we let S_f = { f_y : y ∈ {0, 1}^n }, then M_{S_f} = M_f. Using an elegant argument, Kremer, Nisan and Ron [19] showed that, if M_f has small VC-dimension, then f has small distributional communication complexity under product distributions (i.e. under distributions that can be expressed as the product of two distributions over {0, 1}^n). We exploit this connection to learning theory to prove the following result.
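The VC dimension just defined can be computed by brute force for small matrices, which is handy for checking examples; a sketch under the shattering definition given above (function name ours):

```python
from itertools import combinations

def vc_dim(M):
    """Brute-force VC dimension of a 0/1 matrix (rows = inputs, columns =
    functions): the largest d such that some d rows are shattered, i.e.
    the columns realize all 2**d distinct patterns on those rows."""
    n_rows, best = len(M), 0
    for d in range(1, n_rows + 1):
        shattered = False
        for rows in combinations(range(n_rows), d):
            # patterns realized by the columns, restricted to these rows
            patterns = {tuple(col[r] for r in rows) for col in zip(*M)}
            if len(patterns) == 2 ** d:
                shattered = True
                break
        if not shattered:
            return best   # shattering is monotone, so we can stop early
        best = d
    return best
```

For instance, the identity matrix (the matrix M_EQ restricted to a few rows and columns) has VC dimension 1: no pair of rows ever exhibits the pattern (1, 1).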

Theorem 9 Let f : {0, 1}^n × {0, 1}^n → {0, 1} be a Boolean function and let 0 < ɛ < 1/2 be a constant. Then, there exists a randomized protocol π for SEL_f^k in the A→B (B↔C) model with error probability at most ɛ such that

CC_{A→B}(π), CC_{B↔C}(π) ∈ O( (1/ɛ) vc(M_f) log(1/ɛ) log k ).

Proof: Using Yao's min-max principle (Theorem 4) in the A→B (B↔C) model, our task reduces to showing that, for every distribution µ on {1,..., k} × ({0, 1}^n)^k × {0, 1}^n, there exists a deterministic protocol π_µ for SEL_f^k with error probability at most ɛ and CC_{A→B}(π_µ), CC_{B↔C}(π_µ) having the desired bound. We do this by first constructing a randomized protocol that has the required error probability over its internal coin tosses and over µ. A standard averaging argument then yields the desired deterministic protocol.

Let S_f denote the set of functions { f_y : y ∈ {0, 1}^n }. For any inputs x = (x_1,..., x_k) ∈ ({0, 1}^n)^k and y ∈ {0, 1}^n, Alice can determine the conditional distribution µ_{x,y} induced on {1,..., k}. By Theorem 8, there is a randomized algorithm L that learns the function f_y ∈ S_f with confidence and accuracy ɛ/2 from m random examples drawn from µ_{x,y}, where

m ∈ O( (2/ɛ) log(2/ɛ) + (2 vc(M_{S_f})/ɛ) log(2/ɛ) ).

Alice draws m samples i_1,..., i_m from µ_{x,y} and sends Bob a message containing (i_1, f(x_{i_1}, y)),..., (i_m, f(x_{i_m}, y)). This requires communicating at most m(1 + log k) bits. Bob transmits this message to Charlie. In learning theoretic terms, Alice, the teacher, is trying to teach f_y to the learning algorithm Charlie. Charlie uses the randomized algorithm L to compute a hypothesis h consistent with Alice's m examples such that the probability that h(x_i) ≠ f(x_i, y) is at most ɛ. Note that this probability is over the random coin tosses used by Alice to sample points and over the distribution µ_{x,y} for i. Finally, Charlie completes the protocol by sending h(x_i) to Bob.
By a standard averaging argument, Alice's coin tosses can be fixed such that the resulting deterministic protocol has error probability at most ɛ for distribution µ.

The above theorem shows that if M_f has at most polylogarithmic VC-dimension, then SEL_f^k has very efficient protocols in the A→B (B↔C) model. Using earlier results of Ben-David et al. [5] and Linial and Shraibman [24], Sherstov [33] showed (implicitly in the proof of Theorem 3.5) that there exists a function f with high randomized communication complexity in the two-party model such that M_f has low VC-dimension.

Theorem 10 For any constant 0 < ɛ < 1, there are functions f such that M_f has VC-dimension O(1) and R_ɛ(f) ∈ Ω(n).

Now, we have everything in place to prove our first main result.

Theorem 2 There exists a Boolean function f with two-party randomized communication complexity Ω(n) such that, for k ≤ 2^{(log n)^{O(1)}}, SEL_f^k has a deterministic protocol in the A→B (B↔C) model, in which Alice sends Bob O(log^2 k) bits and then Bob and Charlie communicate a total of O(log^2 k) bits.

Proof: By Theorem 10, there is a function f such that R_ɛ(f) ∈ Ω(n) and vc(M_f) = O(1). It follows from Theorem 9 that SEL_f^k has a randomized protocol π in the A→B (B↔C) model in which Alice sends O(log k) bits of advice and Bob and Charlie communicate O(log k) bits. Finally, applying Theorem 6, we derandomize π to obtain a deterministic protocol π′ such that CC_{A→B}(π′) = O(log^2 k) and CC_{B↔C}(π′) = O(log^2 k).

This disproves Conjecture 1, even for randomized protocols.

3.3 An Upper Bound for Set-Disjointness

We construct a protocol for SEL_DISJ^k with o(n) communication complexity in the A→B (B↔C) model. Throughout the construction, it is helpful to view the inputs x_1,..., x_k, and y as subsets of {1,..., n}.

Theorem 3 There is a deterministic protocol for SEL_DISJ^k in the A→B (B↔C) model, in which Alice sends at most √n log k bits, Bob sends at most 1 + √n log k bits, and Charlie sends at most √n log n bits.

Proof: Given x_1,..., x_k, and y, Alice repeatedly picks a set from among x_1,..., x_k that is disjoint from y and contains at least √n elements that are not in the union of the sets she has already picked. Then Alice sends the indices of these sets to Bob. Note that each time Alice picks a set, the number of elements in the union of the sets she has picked increases by at least √n. Since these sets are all subsets of {1,..., n}, Alice picks at most √n sets. A set index can be represented using log k bits, so Alice sends at most √n log k bits to Bob. Bob forwards the information he receives from Alice to Charlie.

Charlie computes the union of these sets and removes them from x_i, since none of them are in y. Let x′ denote the resulting set, so x′ ∩ y = x_i ∩ y. If x′ contains at least √n elements, then x′ is not disjoint from y, since, otherwise, Alice would have picked more sets and sent more indices. In this case, Charlie sends 0 to Bob and the protocol terminates. If x′ contains fewer than √n elements, then Charlie sends 1 to Bob, followed by each of the elements in x′.
Since each element can be represented using log n bits, Charlie sends at most √n log n bits to Bob. In this case, Bob computes x′ ∩ y = x_i ∩ y and sends the answer to Charlie. Thus Bob sends at most 1 + √n log k bits.

4 Lower Bounds in Restricted Models

An interesting fact is that our upper bounds do not use the full power of the A→B (B↔C) model. First, Alice sends far fewer bits than she is allowed to. Second, Bob, the receiver of Alice's advice, is merely forwarding it to Charlie without processing it in any way. Third, the algorithms in Sections 3.2 and 3.3 have limited interaction between Bob and Charlie. We now discuss the limitations that these restrictions place on the power of the A→B (B↔C) model. In Section 4.1, we prove that our upper bound for set-disjointness cannot be substantially improved, unless we allow Alice to send more than √n bits of advice, even if the players interact arbitrarily. In Section 4.2, we complement this by showing that the upper bound for set-disjointness cannot be improved if Bob and Charlie have limited interaction.

4.1 Lower Bounds via Strong Direct Product Theorems

For any Boolean function f : {0, 1}^n × {0, 1}^n → {0, 1}, let f^{(k)} : ({0, 1}^n)^k × ({0, 1}^n)^k → {0, 1}^k denote the function such that, for all x_1,..., x_k, y_1,..., y_k ∈ {0, 1}^n,

f^{(k)}(x_1,..., x_k, y_1,..., y_k) = ( f(x_1, y_1),..., f(x_k, y_k) ).

Suppose that every c-bit communication protocol for f has probability of success σ < 1. Then a strong direct product theorem for f states that any ck-bit protocol for f^{(k)} has success probability that is exponentially small in k. There is a rich history of both positive and negative results for strong direct product theorems in complexity theory, including Yao's famous XOR Lemma (see for example [11]). Shaltiel [32] initiated the study of strong direct product theorems in communication complexity and proved a strong direct product theorem for functions where we have lower bounds via the discrepancy method over product distributions. This includes functions such as the inner product function. Lee, Shraibman, and Špalek [23] strengthened Shaltiel's result by proving a strong direct product theorem for functions that have lower bounds via the discrepancy method over any distribution. There is no known lower bound for set-disjointness via the discrepancy method, although a weaker form of a strong direct product theorem (with suboptimal parameters) was obtained by Beame, Pitassi, Segerlind and Wigderson [4]. Finally, Klauck [17] proved the following optimal strong direct product theorem for set-disjointness.

Theorem 11 There exist constants 0 < β < 1 and α > 0 such that, for all k ≥ 1, every randomized protocol which computes DISJ^{(k)} : ({0, 1}^n)^k × ({0, 1}^n)^k → {0, 1}^k using at most βkn bits of communication has error probability greater than 1 − 2^{−αk}.

Using this theorem, we obtain the following lower bound for asymmetric set-disjointness in the A→B (B↔C) model. A similar lower bound can also be obtained for any Boolean function that has a strong direct product theorem.
Theorem 12 There exist constants 0 < β < 1 and α > 0 such that, in any deterministic protocol for SEL_DISJ^k with k = √n in the A→B (B↔C) model, if Alice sends at most α√n bits, then Bob and Charlie must communicate at least β√n bits.

Proof: Let α and β be constants that satisfy Theorem 11 and let k = √n. To obtain a contradiction, suppose that there is a deterministic protocol for SEL_DISJ^k, where Alice sends αk bits of advice to Bob, and then Bob and Charlie communicate c < βk bits. Using this protocol, for every distribution µ′ on ({0, 1}^{√n})^k × ({0, 1}^{√n})^k, we construct a deterministic kc-bit protocol for DISJ^{(k)} : ({0, 1}^{√n})^k × ({0, 1}^{√n})^k → {0, 1}^k with error probability at most ɛ = 1 − 2^{−αk}, i.e., D^ɛ_{µ′}(DISJ^{(k)}) ≤ kc. Then Yao's min-max principle (Theorem 4) implies that R_ɛ(DISJ^{(k)}) ≤ kc. This contradicts Theorem 11, the direct product theorem for set-disjointness.

Consider any distribution µ′ : ({0, 1}^{√n})^k × ({0, 1}^{√n})^k → [0, 1]. Given inputs x′_1,..., x′_k, y′_1,..., y′_k ∈ {0, 1}^{√n}, we create inputs x_1,..., x_k, y ∈ {0, 1}^n for SEL_DISJ^k as follows: y = y′_1 ⋯ y′_k and, for each i ∈ {1,..., k}, x_i = 0^{(i−1)√n} x′_i 0^{(k−i)√n}. In particular, x_i is all 0's except for its i-th block of √n bits, which is x′_i. Let µ be the resulting distribution on ({0, 1}^n)^k × {0, 1}^n. Alice's αk-bit message partitions the space ({0, 1}^n)^k × {0, 1}^n into 2^{αk} equivalence classes. Let C be an equivalence class with maximal weight under distribution µ. Then µ(C) ≥ 2^{−αk}. Since the A→B (B↔C) protocol is deterministic, it answers correctly for every input ((x_1,..., x_k), y) ∈ C and i ∈ {1,..., k}.

Consider the following two-party protocol for DISJ^(k): Given x'_1, ..., x'_k, y'_1, ..., y'_k ∈ {0,1}^√n, Bob and Charlie compute the answers to SEL^k_1 DISJ(i, (x_1, ..., x_k), y), for i = 1, ..., k, pretending, each time, that Alice sent the message for equivalence class C. They concatenate these one-bit answers to get a k-bit answer to DISJ^(k)(x'_1, ..., x'_k, y'_1, ..., y'_k). Since Bob and Charlie communicate at most c bits to compute each one-bit answer, they communicate at most ck < βk^2 bits in total. Let C' ⊆ ({0,1}^√n)^k × ({0,1}^√n)^k consist of all inputs (x'_1, ..., x'_k, y'_1, ..., y'_k) from which inputs ((x_1, ..., x_k), y) in C are produced. Then the two-party protocol correctly computes DISJ^(k) for all inputs in C' and µ(C') ≥ 2^(−α√n). Thus D^ε_µ(DISJ^(k)) ≤ ck.

This lower bound matches our upper bound to within factors of log n and log k.

4.2 Lower Bounds via Compression

In all our upper bounds, there is limited interaction between Bob and Charlie. We say that π is a 1.5 round (m, l, q)-protocol if it proceeds in the following way: First, as usual, Alice sends m bits A = A(X, y) to Bob. Then there are two rounds of communication between Bob and Charlie. In the first round, Bob communicates l bits B = B_1(y, A) to Charlie that do not depend on i. (For example, Bob could forward l bits of Alice's message to Charlie.) In the second round, Charlie communicates q bits C(x_1, ..., x_k, i, B) back to Bob. Finally, using his knowledge of i, Bob computes the answer B_2(y, i, A, C) = f(X_i, y). Note that there is no restriction on Alice's advice. The crucial restriction, beyond the fact that there are only two rounds of communication after the advice, is that Bob's communication to Charlie is independent of i. We say that this is only half a round of communication.

Interestingly, 1.5 round protocols have non-trivial power. The proof of Theorem 2, which refutes Pǎtraşcu's conjecture, employs a 1.5 round (O((log n)^2), O((log n)^2), 1)-protocol. For set-disjointness, Theorem 3 gives a 1.5 round (√(nk log k), √(nk log k), √(n/k) log n)-protocol.
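The message flow of a 1.5 round (m, l, q)-protocol can be captured by a small simulator (our own illustrative Python, not from the paper), shown here instantiated with the naive protocol in which Bob forwards y and Charlie answers, so l = n and q = 1:

```python
def disj(x, y):
    return int(not any(a & b for a, b in zip(x, y)))

def run_protocol(A, B1, C, B2, xs, i, y):
    a = A(xs, y)           # Alice -> Bob: m bits; Alice sees all of X and y
    b = B1(y, a)           # Bob -> Charlie: l bits; must NOT depend on i
    c = C(xs, i, b)        # Charlie -> Bob: q bits; Charlie sees X, i, and b
    return B2(y, i, a, c)  # Bob, who knows i, outputs f(x_i, y)

# Naive instantiation: no advice, Bob forwards y (l = n), Charlie replies
# with the single bit DISJ(x_i, y) (q = 1).
naive = dict(
    A=lambda xs, y: None,
    B1=lambda y, a: y,
    C=lambda xs, i, b: disj(xs[i], b),
    B2=lambda y, i, a, c: c,
)
```

The crucial restriction is visible in the signature of `B1`: it never receives i, so Bob's half round cannot adapt to the selected coordinate.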
It is fun to verify that functions like equality and greater-than can all be solved cheaply, without even using the 0.5 round of communication from Bob to Charlie, i.e., they both have deterministic (O(log n), 0, O(log n))-protocols. In this section, we show the following limitations of 1.5 round protocols.

Theorem 13 Let k/log k ∈ ω(n) and m ≥ n. For 1 ≤ q and 1 ≤ l ≤ n, every 1.5 round (m, l, q)-protocol for computing SEL^k_1 DISJ has l(61nm/k + q) ≥ 0.008n. If the protocol is randomized with error probability at most 1/2 − ε, then it has (l + log k)(61nm/k + q) ∈ Ω(ε^4 n / log^2 k).

Note that l(61nm/k + q) ≥ 0.008n implies that either m ≥ k/(61n) or l(q + 1) ≥ 0.008n. Since every 1.5 round (m, 0, q_0)-protocol immediately gives a 1.5 round (m, 1, q_0)-protocol, it follows that every 1.5 round (m, 0, q_0)-protocol for computing SEL^k_1 DISJ has 61nm/k + q_0 ≥ 0.008n, so either m ∈ Ω(k) or q_0 ∈ Ω(n). If the protocol is randomized with error probability at most 1/2 − ε, then either m ∈ Ω(ε^4 k / log^2 k) or q ∈ Ω(ε^4 n / log^2 k).

The upper bound in Theorem 3 shows that Theorem 13 is tight to within logarithmic factors, when k = n^(2+δ) for any constant δ > 0. The lower bound in Theorem 12 is incomparable, since it restricts the amount of advice Alice can send.

Our next lower bound is for the well known inner-product function, IP(x, y) = Σ_{i=1}^{n} x_i y_i mod 2. The inner product function is one of the hardest functions in the standard two-party communication model. For example, Chor and Goldreich [8, 20] showed that, even for protocols with an inverse-subexponential advantage over random guessing, inner product requires Ω(n) bits of communication

in the two-party model. It is thus also a natural target for proving lower bounds on the cost of 1.5 round protocols in the A→B(B↔C) model.

Theorem 14 Let k ∈ [ω(n), 2^(o(√n))]. Consider any 1.5 round (m, l, q)-protocol for computing SEL^k_1 IP. If Alice communicates m ≤ (1 − δ)k bits for some δ > 0, then q + l ≥ (δ − o(1))n. If the protocol is randomized with error probability at most 1/2 − ε, then either m ∈ Ω(ε^2 k / log k) or q + l ∈ Ω(ε^2 n / log k).

Note that, if Alice communicates k bits, she can send the answer for each i ∈ {1, ..., k}. Our lower bound for inner-product, l + q ∈ Ω(n), is stronger than our lower bound for set-disjointness, lq ∈ Ω(n). Recall that l and q are the number of bits communicated by Bob and Charlie, respectively. The naive algorithm in which Bob sends y to Charlie has l + q = n + 1, which matches our lower bound for inner product. The algorithm for set-disjointness in Section 3.3 has lq ∈ O(n log k log n), which is only logarithmic factors more than our lower bound.

The main idea for proving both of these theorems is to find an encoding of x_1, ..., x_k using 1.5 round protocols. If the cost of the protocol is small, our encoding compresses kn bits of information to fewer bits. However, this is impossible, since x_1, ..., x_k has entropy kn. We begin by proving Theorem 14, since compression can be done cleanly for the inner-product function. Implementing compression for set-disjointness is more involved.

Proof of Theorem 14: Assume that π is a deterministic 1.5 round (m, l, q)-protocol for computing SEL^k_1 IP, in which Alice communicates m ≤ (1 − δ)k bits. Our goal is to give a scheme for encoding x_1, ..., x_k, where each x_i is chosen uniformly from {0,1}^n. Fix x_1, ..., x_k. Because Alice does not know i and Bob's message B = B_1(y, A(x_1, ..., x_k, y)) to Charlie cannot depend on i, this message depends only on y. Hence, by averaging, there exists a message B_fixed that Bob sends for at least 2^(n−l) many y's. Thus, there exists a set, Y, of n − l many linearly independent vectors such that Bob sends B_fixed on each of them.
Let Y' be a set of l additional linearly independent vectors such that Y ∪ Y' forms a basis of the vector space {0,1}^n. For each partial basis Y, we choose Y' in some fixed way. Our encoding of x_1, ..., x_k contains the following: (a) the set Y, which can be represented using (n − l)n bits; (b) Alice's message A(x_1, ..., x_k, y) for each y ∈ Y, which can be represented using (n − l)m bits; (c) for each index i ∈ {1, ..., k}, the q-bit message C(x_1, ..., x_k, B_fixed, i) sent from Charlie to Bob; and (d) extra information E consisting of the inner product of each x_i with each y ∈ Y', which can be represented using lk bits.

A key point is that Charlie's message does not depend on y ∈ Y. This is because Charlie does not see y and, for all y ∈ Y, Charlie receives the same message, B_fixed, from Bob.

Given any such encoding of x_1, ..., x_k, decoding can be done as follows: First, simulate Bob in protocol π, for each i ∈ {1, ..., k} and each y ∈ Y. This is possible because y, i, Alice's message A(x_1, ..., x_k, y), Bob's message B_fixed, and Charlie's message C(x_1, ..., x_k, B_fixed, i) are all known. Because π is a correct protocol for SEL^k_1 IP, the output of the protocol is IP(x_i, y). From {IP(x_i, y) | i = 1, ..., k, y ∈ Y} and the inner products in E, x_1, ..., x_k can be obtained by solving a system of linear equations of full rank.

Because no encoding of x_1, ..., x_k can use less than kn bits, it follows that (n − l)n + (n − l)m + kq + lk ≥ kn. Thus, q + l ≥ n − (n − l)(n + m)/k. By assumption,

m ≤ (1 − δ)k. Therefore q + l ≥ n − (n − l)(n/k + 1 − δ) ≥ (δ − n/k)n, which is in (δ − o(1))n, since k ∈ ω(n).

We now extend the result to handle randomized protocols. Theorem 6 converts a randomized (m_r, l_r, q_r)-protocol into a deterministic (m_d, l_d, q_d)-protocol, with m_d ∈ O((m_r + log k + log n)(log k)/ε^2), l_d ∈ O((l_r + log k + log n)(log k)/ε^2), and q_d ∈ O(q_r (log k)/ε^2). Setting δ = 0.5 gives that m_d ≥ 0.5k or q_d + l_d ≥ (0.5 − o(1))n. In the first case, m_r ∈ Ω(ε^2 m_d / log k − log k − log n), which is Ω(ε^2 k / log k), since k ∈ ω(n). In the second case, q_r + l_r ∈ Ω(ε^2 (q_d + l_d)/log k − log k − log n), which is Ω(ε^2 n / log k), since k ∈ ω(n) and (log k)^2 ∈ o(n).

This proof can be easily adapted to prove a similar lower bound for the indexing function, IN : {0,1}^n × {1, ..., n} → {0,1}, where IN(x, y) = x_y. This function outputs the y-th bit of string x.

Theorem 15 Consider any (m, 0, q)-protocol for computing SEL^k_1 IN. If m ≤ (1 − δ)k, for some δ > 0, then q ≥ δn. If the protocol is randomized with error probability at most 1/2 − ε, then either m ∈ Ω(ε^2 k / log k) or q ∈ Ω(ε^2 n / log k).

Proof: The indexing function can be treated as a Boolean function with domain {0,1}^n × {0,1}^n, with the promise that y is an n-bit string with Hamming weight 1. In this case, IN(x, y) is the inner product of x and y. Then the proof is a simple specialization of the proof of Theorem 14. Since l = 0, Bob sends the same message (which consists of no bits) for every string y ∈ {0,1}^n with Hamming weight 1. These n strings are linearly independent. Let Y denote this set of strings. The encoding of x_1, ..., x_k is the same, except for (a) and (d), which are not needed, since Y is fixed and Y' is empty. It follows that nm + kq ≥ kn, so m ≤ (1 − δ)k implies that q ≥ n − nm/k ≥ n − (1 − δ)n = δn. The remainder of the proof, including the extension to randomized protocols, is the same as in the proof of Theorem 14.

The inner-product of x_i with any known non-zero vector y provides 1 bit of information about x_i.
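The decoding step in the two proofs above recovers each x_i from its inner products with a basis of {0,1}^n by solving a full-rank linear system over GF(2). A minimal sketch of that step (Gauss-Jordan elimination; the function names are ours):

```python
def ip(x, y):
    # Inner product mod 2.
    return sum(a & b for a, b in zip(x, y)) % 2

def solve_gf2(basis, bits):
    # Recover x from bits[j] = IP(x, basis[j]), where the n vectors in
    # `basis` are linearly independent over GF(2).  Gauss-Jordan reduces
    # the augmented matrix [basis | bits] to [identity | x].
    n = len(basis)
    rows = [list(v) + [b] for v, b in zip(basis, bits)]
    for col in range(n):
        piv = next(r for r in range(col, n) if rows[r][col])  # exists: full rank
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return tuple(row[n] for row in rows)
```

With the standard basis, this specializes to the indexing setting of Theorem 15: IP(x, e_j) is simply x_j.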
However, the fact that DISJ(x_i, y) = 0 does not provide much information about x_i. This is the main source of complication when trying to apply the same approach to prove a lower bound for set-disjointness. On the other hand, the fact that DISJ(x_i, y) = 1 provides a lot of information about x_i: all indices at which y is 1 are indices at which x_i is 0. Therefore, to encode x_1, ..., x_k efficiently, we would like to choose a convenient set Y of y's such that DISJ(x_i, y) = 1 for many i ∈ {1, ..., k}. Unfortunately, if we choose vectors x and y uniformly at random from {0,1}^n, then DISJ(x, y) = 0 with very high probability. Hence, we have to work with a restricted set of vectors. This makes it delicate to find the set Y.

Let Γ_x consist of the vectors in {0,1}^n with Hamming weight σ_x and let Γ_y consist of the vectors in {0,1}^n with Hamming weight σ_y. We will consider only x_1, ..., x_k ∈ Γ_x and y ∈ Γ_y. The following fact will be used in the proof of Claim 20 to show that, when σ_x σ_y is appropriately set, 85% of the vectors in Γ_y do not intersect with any given vector in Γ_x.

Fact 16 Suppose σ_x, σ_y ≤ n/4. For each x ∈ Γ_x, if y is chosen at random from Γ_y, then Pr_y[DISJ(x, y) = 1] ≥ exp(−4σ_x σ_y / n).

Proof: Fix x ∈ Γ_x. If y is chosen at random from Γ_y, the probability that x and y are disjoint is (here C(a, b) denotes the binomial coefficient a choose b)

C(n − σ_x, σ_y) / C(n, σ_y) = (1 − σ_x/n)(1 − σ_x/(n − 1)) ··· (1 − σ_x/(n − σ_y + 1)) ≥ (1 − σ_x/(n − σ_y + 1))^{σ_y} > (1 − 4σ_x/(3n))^{σ_y},

since σ_y ≤ n/4. Now 1 − t ≥ exp(−3t) for 0 ≤ t ≤ 1/3 and 4σ_x/(3n) ≤ 1/3, since σ_x ≤ n/4. Therefore (1 − 4σ_x/(3n))^{σ_y} ≥ (exp(−4σ_x/n))^{σ_y} = exp(−4σ_x σ_y / n).

Let Cover(x_i, y) ⊆ {1, ..., n} denote the indices of x_i that one learns are zero from learning DISJ(x_i, y). If DISJ(x_i, y) = 1, then Cover(x_i, y) = {j | y_j = 1} and, if DISJ(x_i, y) = 0, then Cover(x_i, y) = ∅. For any Y ⊆ Γ_y, let

Cover((x_1, ..., x_k), Y) = {(i, j) | there exists y ∈ Y such that DISJ(x_i, y) = 1 and y_j = 1} = ∪_{i ∈ {1,...,k}} ∪_{y ∈ Y} {i} × Cover(x_i, y).

Next, we present the main lemma that enables x_1, ..., x_k to be encoded efficiently.

Lemma 17 Consider any deterministic 1.5 round (m, l, q)-protocol, where 1 ≤ l ≤ n/20. Let σ_y = 5l, let σ_x = 0.008n/l, and fix x_1, ..., x_k ∈ Γ_x. Then there exists a message B_fixed and a set Y ⊆ Γ_y of size at most 30n such that, for each y ∈ Y, Bob sends B_fixed to Charlie and |Cover((x_1, ..., x_k), Y)| ≥ kn/2.

Proving this lemma needs technical work. Before we do that, let us see how the lower bound for disjointness follows from Lemma 17 via compression. We will also need the following simple fact:

Fact 18 Let v_1, ..., v_k, σ be positive integers such that v_1 + ··· + v_k ≤ r and σ ≤ min{v_i | 1 ≤ i ≤ k}. Then

∏_{i=1}^{k} C(v_i, σ) ≤ C(r/k, σ)^k.

Proof: We will make use of the inequality of arithmetic and geometric means: If a_1, ..., a_k are non-negative real numbers, then (a_1 + ··· + a_k)/k ≥ (a_1 a_2 ··· a_k)^{1/k}. Let j be any integer such that 0 ≤ j < σ and let a_i = v_i − j for all i ∈ {1, ..., k}. Then

∏_{i=1}^{k} (v_i − j) ≤ (r/k − j)^k,

so, by the definition of binomial coefficients,

∏_{i=1}^{k} C(v_i, σ) = (1/σ!)^k ∏_{i=1}^{k} ∏_{j=0}^{σ−1} (v_i − j) ≤ (1/σ!)^k ∏_{j=0}^{σ−1} (r/k − j)^k = C(r/k, σ)^k.
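Both facts are easy to sanity-check numerically. The sketch below (our own check, not part of the paper) compares the exact disjointness probability of Fact 16 with its exponential lower bound, and the product of binomials of Fact 18 with its AM-GM bound (for simplicity, only when r/k is an integer):

```python
import math

def pr_disjoint(n, sx, sy):
    # Exact probability that a uniformly random weight-sy vector in {0,1}^n
    # avoids a fixed weight-sx vector: C(n - sx, sy) / C(n, sy).
    return math.comb(n - sx, sy) / math.comb(n, sy)

def fact16_lower_bound(n, sx, sy):
    # Fact 16: for sx, sy <= n/4, pr_disjoint(n, sx, sy) >= exp(-4*sx*sy/n).
    return math.exp(-4 * sx * sy / n)

def fact18_holds(vs, r, sigma):
    # Fact 18: if sum(vs) <= r and sigma <= min(vs), then
    #     prod_i C(v_i, sigma) <= C(r/k, sigma)^k.
    k = len(vs)
    assert sum(vs) <= r and sigma <= min(vs) and r % k == 0
    lhs = math.prod(math.comb(v, sigma) for v in vs)
    rhs = math.comb(r // k, sigma) ** k
    return lhs <= rhs
```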

Proof of Theorem 13: Let σ_y = 5l and σ_x = 0.008n/l. Fix any 1.5 round deterministic (m, l, q)-protocol π for SEL^k_1 DISJ and any x = x_1, ..., x_k ∈ Γ_x. If l > n/20, then l(61nm/k + q) ≥ l > 0.05n ≥ 0.008n. Thus, we may assume that 1 ≤ l ≤ n/20. Then it is easy to verify that σ_x, σ_y ≤ n/4. Let B_fixed and Y be the message and subset of Γ_y guaranteed by Lemma 17. Our encoding of x_1, ..., x_k contains the following: (a) the set Y, which can be represented using 30n·n bits; (b) Bob's fixed message B_fixed, which can be represented using l = 0.2σ_y bits; (c) Alice's message A(x_1, ..., x_k, y) for each y ∈ Y, which can be represented using 30n·m bits; (d) for each i ∈ {1, ..., k}, the q-bit message C(x_1, ..., x_k, B_fixed, i) sent from Charlie to Bob; and (e) the remaining information E about x_1, ..., x_k that is not learned from Cover((x_1, ..., x_k), Y).

For i ∈ {1, ..., k}, let S_i = {j : (i, j) ∉ Cover((x_1, ..., x_k), Y)}. Observe that the indices for which x_i has value one lie in S_i. Let v_i = |S_i|. Thus, the number of possibilities for the locations of the ones of x_1, ..., x_k is C(v_1, σ_x) C(v_2, σ_x) ··· C(v_k, σ_x). As |Cover((x_1, ..., x_k), Y)| > 0.5kn, we have v_1 + ··· + v_k < 0.5kn. Invoking Fact 18, information E can be transmitted in at most k log C(0.5n, σ_x) bits.

Given any such encoding of x_1, ..., x_k, decoding can be done as follows: First, simulate Bob in protocol π, for each i ∈ {1, ..., k} and each y ∈ Y. This is possible because y, i, Alice's message A(x_1, ..., x_k, y), Bob's message B_fixed, and Charlie's message C(x_1, ..., x_k, B_fixed, i) are all known. Because π is a correct protocol for SEL^k_1 DISJ, the output of the protocol is DISJ(x_i, y). From this, compute the indices in Cover((x_1, ..., x_k), Y) where x_1, ..., x_k have zeroes. By definition, E communicates the remaining information about x_1, ..., x_k, so it is possible to decode correctly.

Because no encoding of x_1, ..., x_k can use less than its entropy H(x_1, ..., x_k), we have:

30n·n + l + 30n·m + kq + H(E) ≥ H(x_1, ..., x_k).

Note that H(x_1, ..., x_k) = k log C(n, σ_x). Thus,
H(x_1, ..., x_k) − H(E) ≥ k log( C(n, σ_x) / C(0.5n, σ_x) ).

Using the observation that C(n, σ_x)/C(0.5n, σ_x) ≥ 2^{σ_x}, we obtain H(x_1, ..., x_k) − H(E) ≥ kσ_x. Thus we have 30n·n + l + 30n·m + kq ≥ kσ_x. Then, using the fact that l ≤ n ≤ nm, we have

30n(n + m) + nm + kq ≥ kσ_x.

Since m ≥ n, l = 0.2σ_y, and σ_x σ_y = 0.04n, it follows that

l(61nm/k + q) ≥ l((30n(n + m) + nm)/k + q) ≥ lσ_x = 0.2σ_y σ_x = 0.008n.

As in the proof of Theorem 14, Theorem 6 can be used to extend the lower bound to randomized protocols, giving (l + log k + log n)(61n(m + log k + log n)/k + q) ∈ Ω(ε^4 n / log^2 k). Since k/log k ∈ ω(n), it follows that log k + log n ∈ Θ(log k) and 61n(log k + log n)/k ∈ o(1). But q ≥ 1, so 61n(m + log k + log n)/k + q ∈ Θ(61nm/k + q). Hence (l + log k)(61nm/k + q) ∈ Ω(ε^4 n / log^2 k).

All that remains is to prove the existence of the set, Y(x_1, ..., x_k), promised by Lemma 17, for each x_1, ..., x_k. We will do so in two stages, using the probabilistic method. First, we will construct an intermediate set, Y_0(x_1, ..., x_k), with some nice properties. This will allow us to obtain our final desired set by picking elements from Y_0 at random.

Lemma 19 Consider any deterministic 1.5 round (m, l, q)-protocol, where 1 ≤ l ≤ n/20. Let σ_y = 5l and σ_x = 0.008n/l. Then, for each x_1, ..., x_k, there exists a set Y_0(x_1, ..., x_k) such that: Bob sends the same message for each y ∈ Y_0(x_1, ..., x_k); |Y_0(x_1, ..., x_k)| ≥ 10(0.8)^{σ_y} |Γ_y|; and |{i ∈ {1, ..., k} | Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1] > 0.2}| > 0.7k.

Proof: Fix x_1, ..., x_k. Choose B_fixed at random, where the probability of choosing each message is proportional to the number of y's for which Bob sends the message. Let Y_0 denote the set of y ∈ Γ_y for which Bob sends B_fixed. Bob sends at most 2^l different messages. Hence, Pr_{B_fixed}[ |Y_0| < |Γ_y|/(4·2^l) ] < 1/4. It follows that, for l sufficiently large and, hence, for σ_y = 5l sufficiently large, |Γ_y|/(4·2^l) ≥ (1/4)(0.87)^{σ_y} |Γ_y| > 10(0.8)^{σ_y} |Γ_y|. Therefore, Pr_{B_fixed}[ |Y_0| < 10(0.8)^{σ_y} |Γ_y| ] < 1/4.

The following claim shows that, for a typical random message B_fixed, for most y ∈ Y_0 and for most i ∈ {1, ..., k}, the sets represented by y and x_i are disjoint.

Claim 20 E_{B_fixed}[ |{i ∈ {1, ..., k} | Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1] ≤ 0.2}| ] ≤ 0.2k.

To prove the claim, consider any i ∈ {1, ..., k}. Let D_i be the event that Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1] ≤ 0.2. Let a = Pr[D_i]. We will bound a from above by computing Pr_{y ∈ Γ_y}[DISJ(x_i, y) = 1] in two ways. First, note that choosing y at random from Γ_y is the same as first choosing B_fixed at random and then choosing y ∈ Y_0 at random. Hence,

Pr_{y ∈ Γ_y}[DISJ(x_i, y) = 1] = Pr[D_i]·Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1 | D_i] + Pr[¬D_i]·Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1 | ¬D_i] ≤ a·0.2 + (1 − a)·1 = 1 − 0.8a.
Since σ_x, σ_y ≤ n/4 and σ_x σ_y = 0.04n, Fact 16 implies that Pr_{y ∈ Γ_y}[DISJ(x_i, y) = 1] ≥ e^{−0.16} > 0.85. Hence 1 − 0.8a ≥ 0.85, which implies that a ≤ 0.15/0.8 < 0.2. This is true for all i ∈ {1, ..., k}. The claim now follows from the linearity of expectation.

Applying Markov's inequality to this claim gives Pr_{B_fixed}[ |{i ∈ {1, ..., k} | Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1] ≤ 0.2}| ≥ 0.3k ] ≤ 0.2k/(0.3k) = 2/3. As 1/4 + 2/3 < 1, there is a non-zero probability that both |Y_0| ≥ 10(0.8)^{σ_y} |Γ_y| and |{i ∈ {1, ..., k} | Pr_{y ∈ Y_0}[DISJ(x_i, y) = 1] > 0.2}| > 0.7k.

Let us now show why choosing Y_0 using Lemma 19 helps us construct Y. We will need one more fact that formalizes the following natural intuition: if we take a sufficiently large subset of Γ_y, then the distribution of the ones in the vectors of this subset is fairly well spread out among {1, ..., n}. For any such set S ⊆ Γ_y, let C(S) = {i ∈ {1, ..., n} | Pr_{y ∈ S}[y_i = 1] ≥ 1/(2n)}.


More information

Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length

Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length Anna Gál UT Austin panni@cs.utexas.edu Andrew Mills UT Austin amills@cs.utexas.edu March 8, 20 Abstract Locally decodable

More information

Partitions and Covers

Partitions and Covers University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw

More information

Cell-Probe Lower Bounds for Prefix Sums and Matching Brackets

Cell-Probe Lower Bounds for Prefix Sums and Matching Brackets Cell-Probe Lower Bounds for Prefix Sums and Matching Brackets Emanuele Viola July 6, 2009 Abstract We prove that to store strings x {0, 1} n so that each prefix sum a.k.a. rank query Sumi := k i x k can

More information

On the tightness of the Buhrman-Cleve-Wigderson simulation

On the tightness of the Buhrman-Cleve-Wigderson simulation On the tightness of the Buhrman-Cleve-Wigderson simulation Shengyu Zhang Department of Computer Science and Engineering, The Chinese University of Hong Kong. syzhang@cse.cuhk.edu.hk Abstract. Buhrman,

More information

Simultaneous Communication Protocols with Quantum and Classical Messages

Simultaneous Communication Protocols with Quantum and Classical Messages Simultaneous Communication Protocols with Quantum and Classical Messages Oded Regev Ronald de Wolf July 17, 2008 Abstract We study the simultaneous message passing model of communication complexity, for

More information

Lower Bounds for Dynamic Connectivity (2004; Pǎtraşcu, Demaine)

Lower Bounds for Dynamic Connectivity (2004; Pǎtraşcu, Demaine) Lower Bounds for Dynamic Connectivity (2004; Pǎtraşcu, Demaine) Mihai Pǎtraşcu, MIT, web.mit.edu/ mip/www/ Index terms: partial-sums problem, prefix sums, dynamic lower bounds Synonyms: dynamic trees 1

More information

Problem Set. Problems on Unordered Summation. Math 5323, Fall Februray 15, 2001 ANSWERS

Problem Set. Problems on Unordered Summation. Math 5323, Fall Februray 15, 2001 ANSWERS Problem Set Problems on Unordered Summation Math 5323, Fall 2001 Februray 15, 2001 ANSWERS i 1 Unordered Sums o Real Terms In calculus and real analysis, one deines the convergence o an ininite series

More information

Lecture 20: Lower Bounds for Inner Product & Indexing

Lecture 20: Lower Bounds for Inner Product & Indexing 15-859: Information Theory and Applications in TCS CMU: Spring 201 Lecture 20: Lower Bounds for Inner Product & Indexing April 9, 201 Lecturer: Venkatesan Guruswami Scribe: Albert Gu 1 Recap Last class

More information

arxiv: v2 [cs.ds] 3 Oct 2017

arxiv: v2 [cs.ds] 3 Oct 2017 Orthogonal Vectors Indexing Isaac Goldstein 1, Moshe Lewenstein 1, and Ely Porat 1 1 Bar-Ilan University, Ramat Gan, Israel {goldshi,moshe,porately}@cs.biu.ac.il arxiv:1710.00586v2 [cs.ds] 3 Oct 2017 Abstract

More information

2 Natural Proofs: a barrier for proving circuit lower bounds

2 Natural Proofs: a barrier for proving circuit lower bounds Topics in Theoretical Computer Science April 4, 2016 Lecturer: Ola Svensson Lecture 6 (Notes) Scribes: Ola Svensson Disclaimer: These notes were written for the lecturer only and may contain inconsistent

More information

1 Distributional problems

1 Distributional problems CSCI 5170: Computational Complexity Lecture 6 The Chinese University of Hong Kong, Spring 2016 23 February 2016 The theory of NP-completeness has been applied to explain why brute-force search is essentially

More information

STAT 801: Mathematical Statistics. Hypothesis Testing

STAT 801: Mathematical Statistics. Hypothesis Testing STAT 801: Mathematical Statistics Hypothesis Testing Hypothesis testing: a statistical problem where you must choose, on the basis o data X, between two alternatives. We ormalize this as the problem o

More information

Definition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series.

Definition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series. 2.4 Local properties o unctions o several variables In this section we will learn how to address three kinds o problems which are o great importance in the ield o applied mathematics: how to obtain the

More information

Chebyshev Polynomials, Approximate Degree, and Their Applications

Chebyshev Polynomials, Approximate Degree, and Their Applications Chebyshev Polynomials, Approximate Degree, and Their Applications Justin Thaler 1 Georgetown University Boolean Functions Boolean function f : { 1, 1} n { 1, 1} AND n (x) = { 1 (TRUE) if x = ( 1) n 1 (FALSE)

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

Space-Bounded Communication Complexity

Space-Bounded Communication Complexity Space-Bounded Communication Complexity Joshua Brody 1, Shiteng Chen 2, Periklis A. Papakonstantinou 2, Hao Song 2, and Xiaoming Sun 3 1 Computer Science Department, Aarhus University, Denmark 2 Institute

More information

CSC 5170: Theory of Computational Complexity Lecture 9 The Chinese University of Hong Kong 15 March 2010

CSC 5170: Theory of Computational Complexity Lecture 9 The Chinese University of Hong Kong 15 March 2010 CSC 5170: Theory of Computational Complexity Lecture 9 The Chinese University of Hong Kong 15 March 2010 We now embark on a study of computational classes that are more general than NP. As these classes

More information

Clique vs. Independent Set

Clique vs. Independent Set Lower Bounds for Clique vs. Independent Set Mika Göös University of Toronto Mika Göös (Univ. of Toronto) Clique vs. Independent Set rd February 5 / 4 On page 6... Mika Göös (Univ. of Toronto) Clique vs.

More information

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications International Mathematical Forum, 4, 9, no. 9, 41-47 On a Closed Formula or the Derivatives o e x) and Related Financial Applications Konstantinos Draais 1 UCD CASL, University College Dublin, Ireland

More information

A Tight Lower Bound for Dynamic Membership in the External Memory Model

A Tight Lower Bound for Dynamic Membership in the External Memory Model A Tight Lower Bound for Dynamic Membership in the External Memory Model Elad Verbin ITCS, Tsinghua University Qin Zhang Hong Kong University of Science & Technology April 2010 1-1 The computational model

More information

Randomness. What next?

Randomness. What next? Randomness What next? Random Walk Attribution These slides were prepared for the New Jersey Governor s School course The Math Behind the Machine taught in the summer of 2012 by Grant Schoenebeck Large

More information

Beyond Hellman s Time-Memory Trade-Offs with Applications to Proofs of Space

Beyond Hellman s Time-Memory Trade-Offs with Applications to Proofs of Space Beyond Hellman s Time-Memory Trade-Os with Applications to Proos o Space Hamza Abusalah 1, Joël Alwen 1, Bram Cohen 2, Danylo Khilko 3, Krzyszto Pietrzak 1, and Leonid Reyzin 4 1 Institute o Science and

More information

PRGs for space-bounded computation: INW, Nisan

PRGs for space-bounded computation: INW, Nisan 0368-4283: Space-Bounded Computation 15/5/2018 Lecture 9 PRGs for space-bounded computation: INW, Nisan Amnon Ta-Shma and Dean Doron 1 PRGs Definition 1. Let C be a collection of functions C : Σ n {0,

More information

Multiparty Communication Complexity for Set Disjointness: A Lower Bound. Sammy Luo, Brian Shimanuki

Multiparty Communication Complexity for Set Disjointness: A Lower Bound. Sammy Luo, Brian Shimanuki Multiparty Communication Complexity or Set Disjointness: A Lower Bound Sammy Luo, Brian Shimanuki May 6, 2016 Abstract The standard model o communication complexity involves two parties each starting with

More information

Lecture 4: Probability and Discrete Random Variables

Lecture 4: Probability and Discrete Random Variables Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1

More information

Handout 5. α a1 a n. }, where. xi if a i = 1 1 if a i = 0.

Handout 5. α a1 a n. }, where. xi if a i = 1 1 if a i = 0. Notes on Complexity Theory Last updated: October, 2005 Jonathan Katz Handout 5 1 An Improved Upper-Bound on Circuit Size Here we show the result promised in the previous lecture regarding an upper-bound

More information

Communication is bounded by root of rank

Communication is bounded by root of rank Electronic Colloquium on Computational Complexity, Report No. 84 (2013) Communication is bounded by root of rank Shachar Lovett June 7, 2013 Abstract We prove that any total boolean function of rank r

More information

Lecture 16 Oct 21, 2014

Lecture 16 Oct 21, 2014 CS 395T: Sublinear Algorithms Fall 24 Prof. Eric Price Lecture 6 Oct 2, 24 Scribe: Chi-Kit Lam Overview In this lecture we will talk about information and compression, which the Huffman coding can achieve

More information

On Multi-Partition Communication Complexity

On Multi-Partition Communication Complexity On Multi-Partition Communication Complexity Pavol Ďuriš Juraj Hromkovič Stasys Jukna Martin Sauerhoff Georg Schnitger Abstract. We study k-partition communication protocols, an extension of the standard

More information

Roberto s Notes on Differential Calculus Chapter 8: Graphical analysis Section 1. Extreme points

Roberto s Notes on Differential Calculus Chapter 8: Graphical analysis Section 1. Extreme points Roberto s Notes on Dierential Calculus Chapter 8: Graphical analysis Section 1 Extreme points What you need to know already: How to solve basic algebraic and trigonometric equations. All basic techniques

More information

Multiparty Communication Complexity of Disjointness

Multiparty Communication Complexity of Disjointness Multiparty Communication Complexity of Disjointness Aradev Chattopadhyay and Anil Ada School of Computer Science McGill University, Montreal, Canada achatt3,aada@cs.mcgill.ca We obtain a lower bound of

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

PROPERTY TESTING LOWER BOUNDS VIA COMMUNICATION COMPLEXITY

PROPERTY TESTING LOWER BOUNDS VIA COMMUNICATION COMPLEXITY PROPERTY TESTING LOWER BOUNDS VIA COMMUNICATION COMPLEXITY Eric Blais, Joshua Brody, and Kevin Matulef February 1, 01 Abstract. We develop a new technique for proving lower bounds in property testing,

More information

CSC 2429 Approaches to the P vs. NP Question and Related Complexity Questions Lecture 2: Switching Lemma, AC 0 Circuit Lower Bounds

CSC 2429 Approaches to the P vs. NP Question and Related Complexity Questions Lecture 2: Switching Lemma, AC 0 Circuit Lower Bounds CSC 2429 Approaches to the P vs. NP Question and Related Complexity Questions Lecture 2: Switching Lemma, AC 0 Circuit Lower Bounds Lecturer: Toniann Pitassi Scribe: Robert Robere Winter 2014 1 Switching

More information

PERFECTLY secure key agreement has been studied recently

PERFECTLY secure key agreement has been studied recently IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1 + o(1))2 (

More information

Lecture 21: P vs BPP 2

Lecture 21: P vs BPP 2 Advanced Complexity Theory Spring 206 Prof. Dana Moshkovitz Lecture 2: P vs BPP 2 Overview In the previous lecture, we began our discussion of pseudorandomness. We presented the Blum- Micali definition

More information

On Randomized One-Round Communication Complexity. Ilan Kremer Noam Nisan Dana Ron. Hebrew University, Jerusalem 91904, Israel

On Randomized One-Round Communication Complexity. Ilan Kremer Noam Nisan Dana Ron. Hebrew University, Jerusalem 91904, Israel On Randomized One-Round Communication Complexity Ilan Kremer Noam Nisan Dana Ron Institute of Computer Science Hebrew University, Jerusalem 91904, Israel Email: fpilan,noam,danarg@cs.huji.ac.il November

More information

Notes on Complexity Theory Last updated: December, Lecture 27

Notes on Complexity Theory Last updated: December, Lecture 27 Notes on Complexity Theory Last updated: December, 2011 Jonathan Katz Lecture 27 1 Space-Bounded Derandomization We now discuss derandomization of space-bounded algorithms. Here non-trivial results can

More information

Lecture 10 + additional notes

Lecture 10 + additional notes CSE533: Information Theorn Computer Science November 1, 2010 Lecturer: Anup Rao Lecture 10 + additional notes Scribe: Mohammad Moharrami 1 Constraint satisfaction problems We start by defining bivariate

More information

VALUATIVE CRITERIA BRIAN OSSERMAN

VALUATIVE CRITERIA BRIAN OSSERMAN VALUATIVE CRITERIA BRIAN OSSERMAN Intuitively, one can think o separatedness as (a relative version o) uniqueness o limits, and properness as (a relative version o) existence o (unique) limits. It is not

More information

Lecture 22. m n c (k) i,j x i x j = c (k) k=1

Lecture 22. m n c (k) i,j x i x j = c (k) k=1 Notes on Complexity Theory Last updated: June, 2014 Jonathan Katz Lecture 22 1 N P PCP(poly, 1) We show here a probabilistically checkable proof for N P in which the verifier reads only a constant number

More information

Better Than Advertised: Improved Collision-Resistance Guarantees for MD-Based Hash Functions

Better Than Advertised: Improved Collision-Resistance Guarantees for MD-Based Hash Functions Better Than Advertised: Improved Collision-Resistance Guarantees or MD-Based Hash Functions Mihir Bellare University o Caliornia San Diego La Jolla, Caliornia mihir@eng.ucsd.edu Joseph Jaeger University

More information

FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY. FLAC (15-453) Spring l. Blum TIME COMPLEXITY AND POLYNOMIAL TIME;

FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY. FLAC (15-453) Spring l. Blum TIME COMPLEXITY AND POLYNOMIAL TIME; 15-453 TIME COMPLEXITY AND POLYNOMIAL TIME; FORMAL LANGUAGES, AUTOMATA AND COMPUTABILITY NON DETERMINISTIC TURING MACHINES AND NP THURSDAY Mar 20 COMPLEXITY THEORY Studies what can and can t be computed

More information

On the Sensitivity of Cyclically-Invariant Boolean Functions

On the Sensitivity of Cyclically-Invariant Boolean Functions On the Sensitivity of Cyclically-Invariant Boolean Functions Sourav Charaborty University of Chicago sourav@csuchicagoedu Abstract In this paper we construct a cyclically invariant Boolean function whose

More information

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky. Lecture 10

CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky. Lecture 10 CS 282A/MATH 209A: Foundations of Cryptography Prof. Rafail Ostrovsky Lecture 10 Lecture date: 14 and 16 of March, 2005 Scribe: Ruzan Shahinian, Tim Hu 1 Oblivious Transfer 1.1 Rabin Oblivious Transfer

More information

Communication vs. Computation

Communication vs. Computation Communication vs. Computation Prahladh Harsha Yuval Ishai Joe Kilian Kobbi Nissim S. Venkatesh October 18, 2005 Abstract We initiate a study of tradeoffs between communication and computation in well-known

More information

Finite Dimensional Hilbert Spaces are Complete for Dagger Compact Closed Categories (Extended Abstract)

Finite Dimensional Hilbert Spaces are Complete for Dagger Compact Closed Categories (Extended Abstract) Electronic Notes in Theoretical Computer Science 270 (1) (2011) 113 119 www.elsevier.com/locate/entcs Finite Dimensional Hilbert Spaces are Complete or Dagger Compact Closed Categories (Extended bstract)

More information