Interactive information and coding theory


Mark Braverman

Abstract. We give a high-level overview of recent developments in interactive information and coding theory. These include developments involving interactive noiseless coding and interactive error-correction. The overview is primarily focused on developments related to complexity-theoretic applications, although the broader context and agenda are also set out. As the present paper is an extended abstract, the vast majority of proofs and technical details are omitted, and can be found in the respective publications and preprints.

Mathematics Subject Classification (2010). Primary 94A15; Secondary 68Q99.

Keywords. Coding theory, communication complexity, information complexity, interactive computation.

Work supported in part by an NSF CAREER award (CCF ), NSF CCF , NSF CCF , a Turing Centenary Fellowship, and a Packard Fellowship in Science and Engineering. © Mark Braverman. This paper has appeared in the Proceedings of the International Congress of Mathematicians (ICM 2014).

1. Introduction

1.1. A high-level overview of information and coding theory. We begin with a very high-level overview of information and coding theory. This is an enormous field of study, with subareas dealing with questions ranging from foundations of probability and statistics to applied wireless transmission systems. We will focus only on some of the very basic foundational aspects, which were set forth by Shannon in the late 1940s, or shortly after. The goal will be to try and translate those to interactive communication settings, of the type that is used in theoretical computer science. This program is only very partially complete, but some of the early results are promising. While our overview of information and coding theory in this section focuses on fairly simple facts, we present those in some detail nonetheless, as they will be used as a scaffold for the interactive coding discussion. A thorough introduction to modern information theory is given in [15].

Noiseless coding. Classical information theory studies the setting where one terminal (Alice) wants to transmit information over a channel to another terminal (Bob). Two of the most important original contributions by Shannon are the Noiseless Coding (or Source Coding) Theorem and the Noisy Coding (or Channel Coding) Theorem.

The Source Coding Theorem asserts that the cost of Alice transmitting $n$ i.i.d. copies of a discrete random variable $X$ to Bob over a noiseless channel scales as Shannon's entropy $H(X)$ as $n \to \infty$:
$$H(X) = \sum_{x \in \mathrm{supp}(X)} \Pr[X = x] \log \frac{1}{\Pr[X = x]}. \qquad (1)$$
(All logs in this paper are base-2.) If we denote by $X^n$ the concatenation of $n$ independent samples from $X$, and by $C(Y)$ the (expected) number of bits needed for Alice to transmit a sample of random variable $Y$ to Bob, then the Source Coding Theorem asserts that
$$\lim_{n \to \infty} \frac{C(X^n)}{n} = H(X). \qquad (2)$$
(In fact, Shannon's Source Coding Theorem asserts that, due to concentration, the worst-case communication cost scales as $H(X)$ as well, if we allow negligible error. We ignore this stronger statement at the present level of abstraction.)

This fact can be viewed as the operational definition of entropy, i.e. one that is grounded in reality. Whereas definition (1) may appear artificial, (2) implies that it is the right one, since it connects to the natural quantity $C(X^n)$. Another indirect piece of evidence indicating that $H(X)$ is a natural quantity is its additivity property:
$$H(X^n) = n \cdot H(X), \qquad (3)$$
and more generally, if $XY$ is the concatenation of random variables $X$ and $Y$, then $H(XY) = H(X) + H(Y)$ whenever $X$ and $Y$ are independent. Note that it is not hard to see that (3) fails to hold for $C(X)$, making $H(X)$ a nicer quantity to deal with than $C(X)$. Huffman coding (11) below blurs the distinction between the two, as they only differ by at most one additive bit, but we will return to this point later in the analogous distinction between communication complexity and information complexity.
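The quantities in (1)-(3) are easy to experiment with numerically. The following is a small self-contained sketch (an illustration, not from the paper; the source distribution and block lengths are arbitrary choices) that computes $H(X)$ and shows the expected number of bits per symbol of a Huffman code for $X^k$ approaching $H(X)$ as the block length $k$ grows, in the spirit of (2).

```python
# A small numeric illustration of (1)-(2): compute H(X) for a fixed distribution
# and compare it with the expected per-symbol length of a Huffman code for the
# block X^k as the block length k grows.
import heapq
from itertools import product
from math import log2

def entropy(p):
    """Shannon entropy of a distribution given as {symbol: probability}."""
    return sum(q * log2(1 / q) for q in p.values() if q > 0)

def huffman_lengths(p):
    """Codeword lengths of an optimal prefix-free (Huffman) code for p."""
    heap = [(q, i, (sym,)) for i, (sym, q) in enumerate(p.items())]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in p}
    while len(heap) > 1:
        q1, _, syms1 = heapq.heappop(heap)
        q2, i2, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:          # each merge adds one bit to these codewords
            lengths[s] += 1
        heapq.heappush(heap, (q1 + q2, i2, syms1 + syms2))
    return lengths

X = {'a': 0.7, 'b': 0.2, 'c': 0.1}       # an arbitrary (non-dyadic) example source
print("H(X) =", round(entropy(X), 3))
for k in (1, 2, 4):
    blocks = {}
    for word in product(X, repeat=k):    # distribution of X^k (k independent copies)
        q = 1.0
        for s in word:
            q *= X[s]
        blocks[word] = q
    lens = huffman_lengths(blocks)
    per_symbol = sum(blocks[w] * lens[w] for w in blocks) / k
    print(f"k={k}: expected bits per symbol = {per_symbol:.3f}")
```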

Noisy coding. So far we assumed a noiseless channel: bits sent over the channel by Alice are received by Bob unaltered. If the channel is noisy, that is, messages sent over the channel may get corrupted, then clearly some redundancy in transmission is necessary. Abstractly, the task of coding is the task of converting the message being sent into symbols to be transmitted over the channel, in a way that allows the original message to be recovered from what has been transmitted by the channel. The important considerations for how good a code is are the type (and amount) of errors it can withstand while still accomplishing the transmission successfully, and the rate by which the error-correcting encoding enlarges the message being transmitted.

Shannon's Noisy-Channel Coding Theorem was the first to address the noisy coding scenario theoretically. The most important insight from that theorem is that, at least in the limit, the ability of a channel to conduct information, defined formally as Shannon's channel capacity, can be decoupled from the content being transmitted over the channel. Informally, for a memoryless channel $\mathcal{C}$ one can define its capacity $\mathrm{cap}(\mathcal{C})$ as the answer to the question "how many bits of information is one utilization of $\mathcal{C}$ (i.e. one transmission over $\mathcal{C}$) worth?". For any $X$, if we denote by $C_{\mathcal{C}}(X^n)$ the expected number of utilizations of channel $\mathcal{C}$ needed to transmit $n$ independent samples of $X$ (except with negligible error), then
$$\lim_{n \to \infty} \frac{C_{\mathcal{C}}(X^n)}{n} = \frac{H(X)}{\mathrm{cap}(\mathcal{C})}. \qquad (4)$$
This means, conveniently, that one can study properties of channels separately from properties of what is being transmitted over the channels. The information-theoretic quantities needed to express $\mathrm{cap}(\mathcal{C})$ are conditional entropy and mutual information. While these are standard basic notions in information theory, we will define them here, to keep the exposition accessible.

For a pair of random variables $X$ and $Y$, the conditional entropy $H(X \mid Y)$ can be thought of as the amount of uncertainty remaining in $X$ for someone who knows $Y$:
$$H(X \mid Y) := H(XY) - H(Y) = \mathbb{E}_{y \sim Y} H(X \mid Y = y). \qquad (5)$$
In the extreme case where $X$ and $Y$ are independent, we have $H(X \mid Y) = H(X)$. In the other extreme, when $X = Y$, we have $H(X \mid X) = 0$. The mutual information $I(X; Y)$ between two variables $X$ and $Y$ measures the amount of information revealing $Y$ reveals about $X$, i.e. the reduction in $X$'s entropy as a result of conditioning on $Y$. Thus
$$I(X; Y) = H(X) - H(X \mid Y) = H(X) + H(Y) - H(XY) = I(Y; X). \qquad (6)$$
Conditional mutual information is defined similarly to conditional entropy:
$$I(X; Y \mid Z) := H(X \mid Z) - H(X \mid YZ) = I(Y; X \mid Z). \qquad (7)$$
A very important property of conditional mutual information is the chain rule:
$$I(XY; Z \mid W) = I(X; Z \mid W) + I(Y; Z \mid WX) = I(Y; Z \mid W) + I(X; Z \mid WY). \qquad (8)$$
An informal interpretation of (8) is that $XY$ reveal about $Z$ what $X$ reveals about $Z$, plus what $Y$ reveals about $Z$ to someone who already knows $X$.

Abstractly, a memoryless channel (i.e. one where each utilization of the channel is independent of other utilizations) can be viewed as a set of pairs of variables $(X, \mathcal{C}(X))$, where $X$ is the signal the sender inputs to the channel, and $\mathcal{C}(X)$ is the output of the channel received on input $X$ from the sender. If the channel is noiseless then $\mathcal{C}(X) = X$. Under this notation, the channel capacity of $\mathcal{C}$ is equal to
$$\mathrm{cap}(\mathcal{C}) = \sup_{Y} I(Y; \mathcal{C}(Y)). \qquad (9)$$
In other words, it is the supremum over all input distributions of the amount of information preserved by the channel.
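For concreteness, the following sketch (again not from the paper; the joint distribution is randomly generated purely for illustration) computes these quantities for a small joint distribution by direct summation and checks one instance of the chain rule (8).

```python
# Entropy, conditional entropy, and (conditional) mutual information of a small
# joint distribution, computed by direct summation; the last line checks the
# chain rule (8) in the form I(XY; Z) = I(X; Z) + I(Y; Z | X).
import random
from itertools import product
from math import log2

def marginal(joint, idx):
    """Marginal distribution of the coordinates listed in idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def H(joint, idx):
    """Entropy of the coordinates listed in idx."""
    return sum(p * log2(1 / p) for p in marginal(joint, idx).values() if p > 0)

def I(joint, a, b, cond=()):
    """Conditional mutual information I(A; B | C) via entropy identities."""
    c = tuple(cond)
    return H(joint, a + c) + H(joint, b + c) - H(joint, a + b + c) - H(joint, c)

random.seed(0)
# A random joint distribution over three bits (X, Y, Z) = coordinates (0, 1, 2).
weights = {xyz: random.random() for xyz in product((0, 1), repeat=3)}
total = sum(weights.values())
joint = {xyz: w / total for xyz, w in weights.items()}

X, Y, Z = (0,), (1,), (2,)
print("H(X|Y) =", H(joint, X + Y) - H(joint, Y))               # definition (5)
print(I(joint, X + Y, Z), "==", I(joint, X, Z) + I(joint, Y, Z, X))
```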

The scenario just discussed is obviously a very simple one, but even in more elaborate settings the issues surrounding coding transmissions over a noisy channel (at least when the noise is random) are very well understood. For example, for the binary symmetric channel $BSC_\varepsilon$ that accepts bits $b \in \{0,1\}$ and outputs $b$ with probability $1-\varepsilon$ and $\neg b$ with probability $\varepsilon$, the capacity is
$$\mathrm{cap}(BSC_\varepsilon) = 1 - H(\varepsilon) := 1 - (\varepsilon \log 1/\varepsilon + (1-\varepsilon) \log 1/(1-\varepsilon)). \qquad (10)$$
One caveat is that mathematically striking characterizations such as the above only become possible in the limit, where the size of the message we are trying to transmit over the channel, i.e. the block length, grows to infinity. What happens for fixed block lengths, which we discuss next, is of course important for both practical and theoretical reasons, and it will be even more so in the interactive regime.

For noiseless coding in the one-way regime, it turns out that while $H(X)$ does not exactly equal the expected number of bits $C(X)$ needed to transmit a single sample from $X$, it is very close to it. For example, classical Huffman coding [25] implies that
$$H(X) \le C(X) < H(X) + 1, \qquad (11)$$
where the hard direction of (11) is the upper bound. The upper bound showing that $C(X) < H(X) + 1$ is a compression result, showing how to encode a message with low average information content (i.e. entropy) into a message with a low communication cost (i.e. number of bits in the transmission). Note that this result is much less clean than the limit result (2): in the amortized case the equality is exact, while in the one-shot case a gap is created. This gap is inevitable, if only for integrality reasons, but as we will see later, it becomes crucial in the interactive case.

Adversarial noise and list-decoding. So far we only discussed channels affected by randomized errors. A variant of the noisy regime where the situation appears mathematically much less clear is one where the errors on the channel are introduced adversarially. For example, an adversarial $\varepsilon$-error-rate binary channel receives a string $S \in \{0,1\}^n$ of $n$ bits, and outputs a string $S'$ such that the Hamming distance $d_H(S, S') < \varepsilon n$, i.e. $S'$ differs from $S$ in at most an $\varepsilon$-fraction of positions. A coding scheme for this setting is a pair of encoding and decoding functions $E: \{0,1\}^m \to \{0,1\}^n$ and $D: \{0,1\}^n \to \{0,1\}^m$, respectively, such that for each $X \in \{0,1\}^m$ and each $S'$ with $d_H(E(X), S') < \varepsilon n$, $X$ is recovered correctly from $S'$, i.e. $D(S') = X$. Clearly we want $m$ to be as large as possible as a function of $n$. It turns out that such an encoding scheme is possible with $m = \Omega_\varepsilon(n)$ for each $\varepsilon < 1/4$ (and $\varepsilon < 1/2$ if the binary alphabet is replaced with an alphabet $\Sigma$ of size $|\Sigma| = O_\varepsilon(1)$; the $O_\varepsilon(1)$ notation means a function that is bounded by a constant for each fixed $\varepsilon$). Unlike in the random-noise case, the exact optimal rate of the code, i.e. the largest achievable ratio $m/n$, is unknown for the adversarial model. Clearly, the limit cannot exceed $\mathrm{cap}(BSC_\varepsilon)$, but it is bound to be lower, since correcting adversarial errors is much harder than correcting randomized ones. A priori it is not even obvious that the adversarial channel capacity is a positive constant when $\varepsilon < 1/4$.
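As a sanity check that positive rate is achievable at all against adversarial errors, the following sketch (an illustration, not from the paper; the parameters are arbitrary small choices) greedily builds a binary code of minimum distance $d$; the counting argument behind this greedy procedure is essentially the Gilbert-Varshamov bound mentioned below.

```python
# A greedy construction of a binary code with minimum Hamming distance d
# (the argument behind the Gilbert-Varshamov bound): scan all n-bit strings and
# keep a string whenever it is at distance >= d from every codeword kept so far.
from itertools import product
from math import log2

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_code(n, d):
    code = []
    for w in product((0, 1), repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

n, d = 12, 5                       # distance 5: corrects any 2 adversarial errors
code = greedy_code(n, d)
print(len(code), "codewords, rate ~", round(log2(len(code)) / n, 2))
```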

Despite much work in the field [45, 38], even the basic binary channel capacity problem remains open, with a notorious gap between the Gilbert-Varshamov lower bound and the Linear Programming upper bound [44].

A clear limitation of any error-correcting code, even over a large constant-size alphabet $\Sigma$, is that no decoding is possible when $\varepsilon \ge 1/2$: for two valid codewords $X_1, X_2$ and any encoding function $E$, there is a string $S'$ such that $d_H(E(X_1), S') \le n/2$ and $d_H(E(X_2), S') \le n/2$, making decoding $S'$ an impossible task (note that over a large constant-size alphabet, with high probability one can recover from random errors of rate exceeding $1/2$). It turns out, however, that for any $\varepsilon < 1$, for $|\Sigma| = O_\varepsilon(1)$, it is possible to come up with a constant-rate list-decoding scheme: one where the decoding function $D(S')$ outputs a list of size $s = O_\varepsilon(1)$ of possible $X_1, \ldots, X_s$ such that these are the only possible $X$'s satisfying $d_H(E(X), S') < (1-\varepsilon)n$. List-decodable codes, first introduced in the 1950s [16, 47], have played an important role in a number of areas of theoretical computer science, a partial survey of which can be found in [23, 24].

1.2. Interactive computation models in complexity theory. In theoretical computer science, interactive communication models are studied within the area of communication complexity. While communication complexity can be viewed as a direct extension of the study of the non-interactive communication models which were discussed in the previous section, its development has been largely disjoint from the development of information theory, and the areas did not reconnect until fairly recently. This may be partially explained by the combinatorial nature of the tools most prevalent in theoretical computer science.

Communication complexity was introduced by Yao in [48], and is the subject of the text [30]. It has found numerous applications for unconditional lower bounds in a variety of models of computation, including Turing machines, streaming, sketching, data structure lower bounds, and VLSI layout, to name a few. In the basic (two-party) setup, the two parties Alice and Bob are given inputs $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$, respectively, and are required to compute a function $F(X, Y)$ of these inputs (i.e. both parties should know the answer at the end of the communication), while communicating over a noiseless binary channel. The parties are computationally unbounded, and their only goal is to minimize the number of bits transmitted in the process of computing $F(X, Y)$. In a typical setup $F$ is a function $F: \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$. Examples of functions commonly discussed and used include the Equality function $EQ_n(X, Y) := 1_{X=Y}(X, Y)$, and the Disjointness function
$$\mathrm{Disj}_n(X, Y) := \bigwedge_{i=1}^{n} \neg(X_i \wedge Y_i). \qquad (12)$$
We will return to these functions later in our discussion.

Of course, the (non-interactive) transmission problem can be viewed as just a special case of computing the function $P_X: \mathcal{X} \times \{\bot\} \to \mathcal{X}$, which maps $(X, \bot)$ to $X$. However, there are two important distinctions between the flavors of typical information theory results and communication complexity.
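In code, the two running examples just defined are simply the following (a sketch for concreteness only; the encoding of inputs as 0/1 tuples is an arbitrary choice).

```python
# The two running examples of communication problems, as plain Python functions
# on n-bit vectors (represented as tuples of 0/1). Communication complexity asks
# how few bits Alice (holding x) and Bob (holding y) must exchange to evaluate them.
def eq(x, y):
    """Equality: 1 iff the two inputs are identical."""
    return int(x == y)

def disj(x, y):
    """Disjointness: 1 iff there is no coordinate where both inputs are 1."""
    return int(all(not (xi and yi) for xi, yi in zip(x, y)))

print(eq((0, 1, 1), (0, 1, 1)), disj((0, 1, 0), (1, 0, 1)))  # 1 1
```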

Firstly, information theory is often concerned with coding results where the block length, i.e. the number of copies of the communication task to be performed, goes to infinity. Recall, for example, that Shannon's Source Coding Theorem (2) gave Shannon's entropy as a closed-form expression for the amortized transmission cost of sending a growing number of samples of $X$ (this is often but not always the case; for example, the Huffman coding result (11) is not of this type). On the other hand, communication complexity more commonly studies the communication cost of computing a single copy of $F$. Secondly, as in the examples above, communication complexity often studies functions whose output is only a single bit or a small number of bits, thus counting-style direct lower bound proofs rarely apply. Tools that have been successfully applied in communication complexity over the years include combinatorics, linear algebra, discrepancy theory, and only later classical information theory.

To make our discussion of communication complexity more technical, we will focus on the two-party setting. We briefly discuss the multi-party setting, which also has many important applications but is generally much less well understood, in the last section of the paper. The basic notion in communication complexity is that of a communication protocol. A communication protocol over a binary channel formalizes a conversation, where each message only depends on the input to the speaker and the conversation so far:

Definition 1.1. A (deterministic) protocol $\pi$ for $F: \mathcal{X} \times \mathcal{Y} \to \{0,1\}$ is defined as a finite rooted binary tree, whose nodes correspond to partial communication transcripts, such that the two edges coming out of each vertex are labeled with a 0 and 1. Each leaf $l$ is labeled by an output value $f_l \in \{0,1\}$. Each internal node $v$ is labeled by a player's name and either by a function $a_v: \mathcal{X} \to \{0,1\}$ or $b_v: \mathcal{Y} \to \{0,1\}$ corresponding to the next message of Alice or Bob, respectively. The protocol $\pi(X, Y)$ is executed on a pair of inputs $(X, Y)$ by starting from the root of the tree. At each internal node labeled by $a_v$ the protocol follows the child $a_v(X)$ (corresponding to Alice sending a message), and similarly at each internal node labeled by $b_v$ the protocol follows $b_v(Y)$. When a leaf $l$ is reached the protocol outputs $f_l$.

By a slight abuse of notation, $\pi(X, Y)$ will denote both the transcript and the output of the protocol; which is the case will be clear from the context. The communication cost of a protocol is the depth of the corresponding protocol tree. A protocol succeeds on input $(X, Y)$ if $\pi(X, Y) = F(X, Y)$. Its communication cost on this pair of inputs is the depth of the leaf reached by the execution. The communication complexity $CC(F)$ of a function $F$ is the lowest attainable communication cost of a protocol that successfully computes $F$. In the case of deterministic communication we require the protocol to succeed on all inputs.

A deterministic communication protocol $\pi$ induces a partition of the input space $\mathcal{X} \times \mathcal{Y}$ into sets $S_l$ according to the leaf $l$ that $\pi(X, Y)$ reaches. Since at each step the next move of the protocol depends only on either $X$ or $Y$ alone, each $S_l$ is a combinatorial rectangle of the form $S_l = S_l^X \times S_l^Y$. This key combinatorial property is at the heart of many combinatorial communication complexity lower bounds.
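To make Definition 1.1 concrete, here is a toy rendering of a protocol tree and its execution (an illustrative sketch, not from the paper), instantiated with a depth-2 protocol in which Alice announces her bit and Bob completes the computation of the AND of the two input bits.

```python
# A toy protocol tree in the sense of Definition 1.1: internal nodes carry the
# speaker and that speaker's next-message function; leaves carry the output f_l.
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, speaker, message_fn, children):
        self.speaker = speaker        # "Alice" or "Bob"
        self.message_fn = message_fn  # maps the speaker's own input to a bit
        self.children = children      # children[0] and children[1]

def run(node, x, y):
    """Execute the protocol on inputs (x, y); return (transcript, output)."""
    transcript = ""
    while isinstance(node, Node):
        bit = node.message_fn(x if node.speaker == "Alice" else y)
        transcript += str(bit)
        node = node.children[bit]
    return transcript, node.value

# Depth-2 protocol computing AND of two bits: Alice announces x; Bob then
# announces 0 if x == 0 and announces y otherwise; the last bit is the output.
tree = Node("Alice", lambda x: x, [
    Node("Bob", lambda y: 0, [Leaf(0), Leaf(1)]),   # Alice sent 0
    Node("Bob", lambda y: y, [Leaf(0), Leaf(1)]),   # Alice sent 1
])

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x, y), run(tree, x, y))
```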

To give an example of such a simple combinatorial proof, consider the rank bound. Let $N = |\mathcal{X}|$, $M = |\mathcal{Y}|$, and consider the $N \times M$ matrix $M_F$ over $\mathbb{R}$ whose $(X, Y)$-th entry is $F(X, Y)$. Each protocol $\pi$ with leaf set $L$ of size $|L|$ induces a partition of $\mathcal{X} \times \mathcal{Y}$ into combinatorial rectangles $\{S_l\}_{l \in L}$. Let $M_l$ be the matrix whose entries are equal to $M_{X,Y}$ for $(X, Y) \in S_l$ and are 0 elsewhere. Since $\{S_l\}_{l \in L}$ is a partition of $\mathcal{X} \times \mathcal{Y}$, we have $M_F = \sum_{l \in L} M_l$. Assuming $\pi$ is always correct, each $M_l$ is monochromatic, i.e. either all-0 or all-1 on $S_l$, depending on the value of $f_l$. Thus $\mathrm{rank}(M_l) \le 1$, and
$$\mathrm{rank}(M_F) \le \sum_{l \in L} \mathrm{rank}(M_l) \le |L|. \qquad (13)$$
In fact, a stronger bound of $|L| - 1$ holds unless $M_F$ is the trivial all-1 matrix. Thus any protocol computing $F$ must have communication cost of at least $\log(\mathrm{rank}(M_F) + 1)$, and it follows that the communication complexity of $F$ is at least $\log(\mathrm{rank}(M_F) + 1)$. As an example of an application, if $F = EQ_n$ is the Equality function, then $M_{EQ_n} = I_{2^n}$ is the identity matrix, and thus $CC(EQ_n) \ge n + 1$. In other words, the trivial protocol where Alice sends Bob her input $X$ ($n$ bits), and Bob responds whether $X = Y$ (1 bit), is optimal.

As in many other areas of theoretical computer science, there is much to be gained from randomization. For example, in practice, the Equality function does not require linear communication, as Alice and Bob can just hash their inputs and compare the hash keys. The shorter protocol may return a false positive, but it is correct with high probability, and reduces the communication complexity from $n + 1$ to $O(\log n)$. More generally, a randomized protocol is a protocol that tosses coins (i.e. accesses random bits), and produces the correct answer with high probability. The distributional setting, where there is a prior probability distribution $\mu$ on the inputs and the players need to output the correct answer with high probability with respect to $\mu$, is closely related to the randomized setting, as will be seen below.

In the randomized setting there are two possible types of random coins. Public coins are generated at random and are accessible to both Alice and Bob at no communication cost. Private coins are coins generated privately by Alice and Bob, and are only accessible by the player who generated them. If Alice wants to share her coins with Bob, she needs to use the communication channel. In the context of communication complexity the public-coin model is clearly more powerful than the private-coin one. Fortunately, the gap between the two is not very large [35], and can be mostly ignored. For convenience reasons, we will focus on the public-coin model.

The definition of a randomized public-coin communication protocol $\pi_R$ is identical to Definition 1.1, except that a public random string $R$ is chosen at the beginning of the execution of the randomized $\pi_R$, and all functions at the nodes of $\pi_R$ may depend on $R$ in addition to the respective input $X$ or $Y$. We still require the answer $f_l$ to be unequivocally determined by the leaf $l$ alone. The communication cost $\|\pi_R\|$ of $\pi_R$ is still its worst-case communication cost (for historic reasons; an average-case notion would also have been meaningful to discuss here).
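The hashing idea for Equality can be sketched as follows (an illustration of the public-coin model, not taken from the paper). With public coins the description of the hash function is free, so $k+1$ bits of communication suffice for error $2^{-k}$; the $O(\log n)$ bound quoted above corresponds to hashing into $O(\log n)$-bit fingerprints that are sent explicitly, as is needed, for instance, with private coins.

```python
# A public-coin randomized protocol for Equality: with a shared random k x n
# 0/1 matrix R, Alice sends the k inner-product hash bits of her input and Bob
# compares them with his own; unequal inputs collide with probability 2^-k.
import random

def hash_bits(z, R):
    return tuple(sum(r * b for r, b in zip(row, z)) % 2 for row in R)

def randomized_eq(x, y, k=20, rng=random):
    R = [[rng.randrange(2) for _ in range(len(x))] for _ in range(k)]  # public coins
    alice_message = hash_bits(x, R)                # Alice sends k bits
    return int(alice_message == hash_bits(y, R))   # Bob answers with 1 bit

x = tuple(random.randrange(2) for _ in range(1000))
y = x[:-1] + (1 - x[-1],)                          # differs from x in one position
print(randomized_eq(x, x), randomized_eq(x, y))    # 1, and 0 except w.p. 2^-20
```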

The randomized communication complexity of $F$ with error $\varepsilon > 0$ is given by
$$R_\varepsilon(F) := \min_{\pi_R:\ \forall X,Y\ \Pr_R[\pi_R(X,Y) = F(X,Y)] \ge 1 - \varepsilon} \|\pi_R\|. \qquad (14)$$
For a distribution $\mu$ on $\mathcal{X} \times \mathcal{Y}$, the distributional communication complexity $D_{\mu,\varepsilon}(F)$ is defined as the cost of the best protocol that achieves expected error $\varepsilon$ with respect to $\mu$. Note that in this case fixing the public randomness $R$ to a uniformly random value does not change (on average) the expected success probability of $\pi_R$ with respect to $\mu$. Therefore, without loss of generality, we may require $\pi$ to be deterministic:
$$D_{\mu,\varepsilon}(F) := \min_{\pi:\ \mu\{X,Y:\ \pi(X,Y) = F(X,Y)\} \ge 1 - \varepsilon} \|\pi\|. \qquad (15)$$
It is easy to see that for all $\mu$, $D_{\mu,\varepsilon}(F) \le R_\varepsilon(F)$. By an elegant minimax argument [49], a partial converse is also true: for each $F$ and $\varepsilon$, there is a distribution against which the distributional communication complexity is as high as the randomized one:
$$R_\varepsilon(F) = \max_\mu D_{\mu,\varepsilon}(F). \qquad (16)$$
For this reason, we will be able to discuss distributional and randomized communication complexity interchangeably.

How can one prove lower bounds for the randomized setting? This setting is much less restrictive than the deterministic one, making lower bounds more challenging. Given a function $F$, one can guess the hard distribution $\mu$, and then try to lower bound the distributional communication complexity $D_{\mu,\varepsilon}(F)$, that is, show that there is no low-communication protocol $\pi$ that computes $F$ with error $\varepsilon$ with respect to $\mu$. Such a protocol $\pi$ of cost $k = \|\pi\|$ still induces a partition $\{S_l\}_{l \in L}$ of the inputs according to the leaf they reach, with $|L| \le 2^k$ and each $S_l$ a combinatorial rectangle. However, it is no longer the case that when we consider the corresponding submatrix $M_l$ of $M_F$ it must be monochromatic: the output of $\pi$ is allowed to be wrong on a fraction of $S_l$, and thus for some inputs the output of $\pi$ on $S_l$ may disagree with the value of $F$. Still, it should be true that for most leaves the value of $F$ on $S_l$ is strongly biased one way or the other, since the contribution of $S_l$ to the error is
$$e(S_l) = \min\left( \mu(S_l \cap F^{-1}(0)),\ \mu(S_l \cap F^{-1}(1)) \right). \qquad (17)$$
In particular, a fruitful lower bound strategy is to show that all large rectangles with respect to $\mu$ have $e(S_l)/\mu(S_l) \gg \varepsilon$, and thus there must be many smaller rectangles, giving a lower bound on $|L| \le 2^{\|\pi\|}$, and hence on $\|\pi\|$.

One simple instantiation of this strategy is the discrepancy bound: for a distribution $\mu$, the discrepancy $\mathrm{Disc}_\mu(F)$ of $F$ with respect to $\mu$ is the maximum over all combinatorial rectangles $R$ of
$$\mathrm{Disc}_\mu(R, F) := \left| \mu(F^{-1}(0) \cap R) - \mu(F^{-1}(1) \cap R) \right|.$$
In other words, if $F$ has low discrepancy with respect to $\mu$ then only very small rectangles (as measured by $\mu$) can be unbalanced. With some calculations, it can be shown that for all $\varepsilon > 0$ (see [30] and references therein),
$$D_{\mu, \frac{1}{2} - \varepsilon}(F) \ge \log_2\left( 2\varepsilon / \mathrm{Disc}_\mu(F) \right). \qquad (18)$$
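As a toy illustration of the discrepancy bound (not from the paper; the two-bit inner product and the uniform prior are arbitrary choices), the following brute-force computation enumerates all combinatorial rectangles of a 4-by-4 matrix and plugs the resulting discrepancy into (18). For this toy instance the bound is only 1 bit; for the $n$-bit inner product function the discrepancy decays exponentially in $n$, which by (18) yields a linear lower bound.

```python
# Brute-force the discrepancy of a small function under the uniform distribution
# by enumerating all combinatorial rectangles S x T, then evaluate the bound (18).
from itertools import chain, combinations, product
from math import log2

def subsets(universe):
    return chain.from_iterable(combinations(universe, r) for r in range(len(universe) + 1))

def discrepancy(F, xs, ys):
    mu = 1.0 / (len(xs) * len(ys))                   # uniform prior on inputs
    best = 0.0
    for S in subsets(xs):
        for T in subsets(ys):
            bias = sum(mu if F(x, y) == 1 else -mu for x in S for y in T)
            best = max(best, abs(bias))
    return best

ip2 = lambda x, y: (x[0] * y[0] + x[1] * y[1]) % 2   # two-bit inner product
bits2 = list(product((0, 1), repeat=2))
disc = discrepancy(ip2, bits2, bits2)
eps = 0.25
print("Disc =", disc, "  bound (18) =", log2(2 * eps / disc), "bits")
```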

Note that (18) not only says that if the discrepancy is low then the communication complexity is high, but also that it remains high even if we are only trying to gain a tiny advantage over random guessing in computing $F$! An example of a natural function to which the discrepancy method can be applied is the $n$-bit Inner Product function $IP_n(X, Y) = \langle X, Y \rangle \bmod 2$. This simple discrepancy method can be generalized to a richer family of corruption bounds that can be viewed as combinatorial generalizations of the discrepancy bound. More on this method can be found in the survey [31].

One of the early successes of applying combinatorial methods in communication complexity was the proof that the randomized communication complexity of the set disjointness problem (12) is linear: $R_{1/4}(\mathrm{Disj}_n) = \Theta(n)$. The first proof of this fact was given in the 1980s [26], and a much simpler proof was discovered soon after [41]. The proofs exhibit a specific distribution $\mu$ of inputs on which the distributional communication complexity $D_{\mu,1/4}(\mathrm{Disj}_n)$ is $\Omega(n)$. Note that the uniform distribution would not be a great fit, since uniformly drawn sets are non-disjoint with a very high probability. It turns out that the following family of distributions $\mu$ is hard: select each coordinate pair $(X_i, Y_i)$ i.i.d. from a distribution on $\{(0,0), (0,1), (1,0)\}$ (e.g. uniformly). This generates a distribution on pairs of disjoint sets. Now, with probability $1/2$, choose a uniformly random coordinate $i \in_U [n]$ and set $(X_i, Y_i) = (1, 1)$. Thus, under $\mu$, $X$ and $Y$ are disjoint with probability $1/2$.

Treating communication complexity as a generalization of one-way communication and applying information-theoretic machinery to it is a very natural approach (perhaps the most natural, given the success of information theory in communication theory). Interestingly, however, this is not how the field has evolved. In fact, the fairly recent survey [31] was able to present the vast majority of communication complexity results to its date without dealing with information theory at all. It is hard to speculate why this might be the case. One possible explanation is that the mathematical machinery needed to tackle the (much more complicated) interactive case from the information-theoretic angle wasn't available until the 1990s; another possible explanation is that linear algebra, linear programming duality, and combinatorics (the main tools in communication complexity lower bounds) are traditionally more central to theoretical computer science research and education than information theory.

A substantial amount of literature exists on communication complexity within the information theory community; see for example [36, 37] and references therein. The flavor of the results is usually different from the ones discussed above. In particular, there is much more focus on bounded-round communication, and significantly less focus on techniques for obtaining lower bounds on the communication complexity of specific functions such as the disjointness function. The most relevant work to our current discussion is a relatively recent line of work by Ishwar and Ma, which studied interactive amortized communication and obtained characterizations closely related to the ones discussed below [32, 33].
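Returning to the hard distribution for $\mathrm{Disj}_n$ described above, a sampler for it takes only a few lines (a sketch for concreteness, not from the paper).

```python
# Sample an input pair (X, Y) from the hard distribution for set disjointness:
# each coordinate is drawn i.i.d. uniformly from {(0,0), (0,1), (1,0)}, and then
# with probability 1/2 a uniformly random coordinate is overwritten with (1,1).
import random

def sample_hard_disjointness_input(n, rng=random):
    pairs = [rng.choice([(0, 0), (0, 1), (1, 0)]) for _ in range(n)]
    if rng.random() < 0.5:                       # intersecting case
        pairs[rng.randrange(n)] = (1, 1)
    x, y = zip(*pairs)
    return x, y

x, y = sample_hard_disjointness_input(8)
print(x, y, "disjoint" if all(not (a and b) for a, b in zip(x, y)) else "intersecting")
```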

Within the theoretical computer science literature, in the context of communication complexity, information-theoretic tools were explicitly introduced in [13] in the early 2000s for the simultaneous message model (i.e. 2 non-interactive rounds of communication). Building on this work, [1] developed tools for applying information-theoretic reasoning to fully interactive communication, in particular giving an alternative (arguably, more intuitive) proof of the $\Omega(n)$ lower bound on the communication complexity of $\mathrm{Disj}_n$.

The motivating questions for [13], as well as for subsequent works developing information complexity, were the direct sum [17] and direct product questions for (randomized) communication complexity. In general, a direct sum theorem quantifies the cost of solving a problem $F^n$ consisting of $n$ sub-problems in terms of $n$ and the cost of each sub-problem $F$. The value of such results to lower bounds is clear: a direct sum theorem, together with a lower bound on the (easier-to-reason-about) sub-problem, yields a lower bound on the composite problem (a process also known as hardness amplification). For example, the Karchmer-Wigderson program for boolean formula lower bounds can be completed via a (currently open) direct sum result for a certain communication model [27]. Direct product results further sharpen direct sum theorems by showing a threshold phenomenon, where solving $F^n$ with insufficient resources is shown to be impossible to achieve except with an exponentially small success probability. Classic results in complexity theory, such as Raz's Parallel Repetition Theorem [39], can be viewed as direct product results.

In the next section, we will formally introduce information complexity, first as a generalization of Shannon's entropy to interactive tasks. We will then discuss its connections to the direct sum and product questions for randomized communication complexity, and to recent progress towards resolving these questions.

2. Noiseless coding and information complexity

Interactive information complexity. In this section we will work towards developing information complexity as the analogue of Shannon's entropy for interactive computation. It will sometimes be convenient to work with general interactive two-party tasks rather than just functions. A task $T(X, Y)$ is any action on inputs $(X, Y)$ that can be performed by a protocol. $T(X, Y)$ can be thought of as a set of distributions of outputs that are acceptable given an input $(X, Y)$. Thus computing $F(X, Y)$ correctly with probability $1 - \varepsilon$ is an example of a task, but there are examples of tasks that do not involve function or relation computation; for example, Alice and Bob may need to sample strings $A$ and $B$, respectively, distributed according to $(A, B) \sim \mu_{(X,Y)}$. For the purposes of the discussion, it suffices to think about $T$ as the task of computing a function with some success probability. The communication complexity of a task $T$ is then defined analogously to the communication complexity of functions: it is the least amount of communication needed to successfully perform the task $T(X, Y)$ by a communication protocol $\pi(X, Y)$.

The information complexity of a task $T$ is defined as the least amount of information Alice and Bob need to exchange (i.e. reveal to each other) about their inputs to successfully perform $T$. This amount is expressed using mutual information (specifically, conditional mutual information (7)).

We start by defining the information cost of a protocol $\pi$. Given a prior distribution $\mu$ on inputs $(X, Y)$, the information cost is
$$\mathsf{IC}(\pi, \mu) := I(Y; \Pi \mid X) + I(X; \Pi \mid Y), \qquad (19)$$
where $\Pi$ is the random variable representing a realization of the protocol's transcript, including the public randomness it used. In other words, (19) represents the sum of the amount of information Alice learns about $Y$ by participating in the protocol and the amount of information Bob learns about $X$ by participating. Note that the prior distribution $\mu$ may drastically affect $\mathsf{IC}(\pi, \mu)$. For example, if $\mu$ is a singleton distribution supported on one input $(x_0, y_0)$, then $\mathsf{IC}(\pi, \mu) = 0$ for all $\pi$, since $X$ and $Y$ are already known to Bob and Alice respectively under the prior distribution $\mu$.

Definition (19), which will be justified shortly, generalizes Shannon's entropy in the non-interactive regime. Indeed, in the transmission case, Bob has no input, thus $X \sim \mu$, $Y = \bot$, and $\Pi$ allows Bob to reconstruct $X$; thus $\mathsf{IC}(\pi, \mu) = I(X; \Pi) = H(X) - H(X \mid \Pi) = H(X)$. The information complexity of a task $T$ can now be defined similarly to communication complexity in (15):
$$\mathsf{IC}(T, \mu) := \inf_{\pi \text{ successfully performs } T} \mathsf{IC}(\pi, \mu). \qquad (20)$$
One notable distinction between (15) and (20) is that the latter takes an infimum instead of a minimum. This is because while the number of communication protocols of a given communication cost is finite, this is not true of information cost. One can have a sequence $\pi_1, \pi_2, \ldots$ of protocols of ever-increasing communication cost, but whose information cost $\mathsf{IC}(\pi_n, \mu)$ converges to $\mathsf{IC}(T, \mu)$ only in the limit. Moreover, as we will discuss later, this phenomenon is already observed in a task $T$ as simple as computing the conjunction of two bits.

Our discussion of information complexity will be focused on the slightly-simpler-to-reason-about distributional setting, where inputs are distributed according to some prior $\mu$. In (20), if $T$ is the task of computing a function $F$ with error $\varepsilon$ w.r.t. $\mu$, the distribution $\mu$ is used twice: first in the definition of success, and then in measuring the amount of information learned. It turns out that it is possible to define worst-case information complexity [7] as the information complexity with respect to the worst-possible prior distribution, in the spirit of the minimax relationship (16). In particular, the direct sum property of information complexity which we will discuss below holds for prior-free information complexity as well.
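As a worked example of definition (19) (an illustration, not from the paper), the following computes the information cost of the trivial protocol for the two-bit AND, in which Alice announces $X$ and Bob replies with $AND(X, Y)$, under the uniform product prior; the protocol communicates 2 bits but reveals only 1.5 bits of information.

```python
# Information cost (19) of a concrete protocol, computed by enumeration.
# Protocol: Alice sends her bit X, then Bob sends AND(X, Y); transcript = (X, X AND Y).
# Prior: X, Y independent uniform bits.
from itertools import product
from math import log2

def H(dist):
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def cmi(joint, a, b, c):
    """I(A; B | C) = H(AC) + H(BC) - H(ABC) - H(C)."""
    return (H(marginal(joint, a + c)) + H(marginal(joint, b + c))
            - H(marginal(joint, a + b + c)) - H(marginal(joint, c)))

# Joint distribution of (X, Y, transcript); coordinates 0, 1, 2.
joint = {}
for x, y in product((0, 1), repeat=2):
    transcript = (x, x & y)
    joint[(x, y, transcript)] = 0.25

X, Y, PI = (0,), (1,), (2,)
ic = cmi(joint, Y, PI, X) + cmi(joint, X, PI, Y)
print("IC =", ic, "bits, versus 2 bits of communication")  # IC = 1.5
```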

Information complexity as defined here has been extensively studied in a sequence of recent works [2, 6, 7, 28, 12, 19], and the study is still very much in progress. In particular, it is surprisingly simple to show that information complexity is additive for tasks over independent pairs of inputs. Let $T_1$ and $T_2$ be two tasks over pairs of inputs $(X_1, Y_1)$, $(X_2, Y_2)$, and let $\mu_1, \mu_2$ be distributions on pairs $(X_1, Y_1)$ and $(X_2, Y_2)$, respectively. Denote by $T_1 \otimes T_2$ the task composed of successfully performing both $T_1$ and $T_2$ on the respective inputs $(X_1, Y_1)$ and $(X_2, Y_2)$. Then information complexity is additive over these two tasks:

Theorem 2.1. $\mathsf{IC}(T_1 \otimes T_2, \mu_1 \times \mu_2) = \mathsf{IC}(T_1, \mu_1) + \mathsf{IC}(T_2, \mu_2)$.

Proof. (Sketch; a complete proof of a slightly more general statement can be found in [7].) The easy direction of this theorem is the "$\le$" direction. Take two protocols $\pi_1$ and $\pi_2$ that perform $T_1$ and $T_2$ respectively, and consider the concatenation $\pi = (\pi_1, \pi_2)$ (which clearly performs $T_1 \otimes T_2$). Consider what Alice learns from an execution of $\pi$ with prior $\mu_1 \times \mu_2$. A straightforward calculation using, for example, repeated application of the chain rule (8) yields
$$I(Y_1 Y_2; \Pi_1 \Pi_2 \mid X_1 X_2) = I(Y_1; \Pi_1 \mid X_1) + I(Y_2; \Pi_2 \mid X_2),$$
and similarly for what Bob learns. Therefore $\mathsf{IC}(\pi, \mu_1 \times \mu_2) = \mathsf{IC}(\pi_1, \mu_1) + \mathsf{IC}(\pi_2, \mu_2)$. By passing to the limit as $\mathsf{IC}(\pi_1, \mu_1) \to \mathsf{IC}(T_1, \mu_1)$ and $\mathsf{IC}(\pi_2, \mu_2) \to \mathsf{IC}(T_2, \mu_2)$ we obtain the "$\le$" direction.

The "$\ge$" direction is more interesting, even if the proof is not much more complicated. In this direction we are given a protocol $\pi$ for solving $T_1 \otimes T_2$ with information cost $I = \mathsf{IC}(\pi, \mu_1 \times \mu_2)$, and we need to construct out of it two protocols for $T_1$ and $T_2$ of information costs $I_1$ and $I_2$ that add up to $I_1 + I_2 \le I$. We describe the protocol $\pi_1(X_1, Y_1)$ below:

$\pi_1(X_1, Y_1)$: Bob samples a pair $(X_2, Y_2) \sim \mu_2$, and sends $X_2$ to Alice; Alice and Bob execute $\pi((X_1, X_2), (Y_1, Y_2))$, and output the portion relevant to $T_1$ in the performance of $T_1 \otimes T_2$.

It is not hard to see that the tuple $(X_1, Y_1, X_2, Y_2)$ is distributed according to $\mu_1 \times \mu_2$, and hence, by the assumption on $\pi$, $\pi_1$ successfully performs $T_1$. Note that there is a slight asymmetry in $\pi_1$: $X_2$ is known to both Alice and Bob, while $Y_2$ is only known to Bob. For the purpose of correctness, the protocol would have worked the same if Bob also sent $Y_2$ to Alice, but it is not hard to give an example where the information cost of $\pi_1$ in that case is too high. The information cost of $\pi_1$ is thus given by the sum of what Bob learns about $X_1$ and what Alice learns about $Y_1$ (note that $(X_2, Y_2)$ are not part of the input):
$$I_1 = I(X_1; \Pi \mid X_2 Y_1 Y_2) + I(Y_1; \Pi \mid X_1 X_2).$$
The protocol $\pi_2(X_2, Y_2)$ is defined similarly to $\pi_1$, in a skew-symmetric way:

$\pi_2(X_2, Y_2)$: Alice samples a pair $(X_1, Y_1) \sim \mu_1$, and sends $Y_1$ to Bob; Alice and Bob execute $\pi((X_1, X_2), (Y_1, Y_2))$, and output the portion relevant to $T_2$ in the performance of $T_1 \otimes T_2$.

We get that $\pi_2$ again successfully performs $T_2$, and its information cost is
$$I_2 = I(X_2; \Pi \mid Y_1 Y_2) + I(Y_2; \Pi \mid X_1 X_2 Y_1).$$
Putting $I_1$ and $I_2$ together we get:
$$I_1 + I_2 = I(X_1; \Pi \mid X_2 Y_1 Y_2) + I(Y_1; \Pi \mid X_1 X_2) + I(X_2; \Pi \mid Y_1 Y_2) + I(Y_2; \Pi \mid X_1 X_2 Y_1)$$
$$= I(X_2; \Pi \mid Y_1 Y_2) + I(X_1; \Pi \mid X_2 Y_1 Y_2) + I(Y_1; \Pi \mid X_1 X_2) + I(Y_2; \Pi \mid X_1 X_2 Y_1)$$
$$= I(X_1 X_2; \Pi \mid Y_1 Y_2) + I(Y_1 Y_2; \Pi \mid X_1 X_2) = I.$$
Once again, passing to the limit gives us the "$\ge$" direction, and completes the proof.
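In code, the reduction used in the harder direction is just a sampling wrapper around the two-copy protocol (a schematic sketch, not from the paper; `pi` and `sample_mu2` stand for an arbitrary two-copy protocol and a sampler for $\mu_2$).

```python
# Schematic version of the protocol pi_1 from the proof: a wrapper that embeds a
# single T1-instance into the two-copy protocol by letting Bob sample the second
# copy's inputs himself. `pi` is any callable playing the role of the two-copy
# protocol; `sample_mu2` samples a pair (X2, Y2) from mu_2.
import random

def pi_1(x1, y1, pi, sample_mu2, rng=random):
    x2, y2 = sample_mu2(rng)          # Bob privately samples (X2, Y2) ~ mu_2 ...
    # ... and sends X2 to Alice; this message is independent of Bob's input Y1.
    out1, out2 = pi((x1, x2), (y1, y2))
    return out1                       # keep only the answer for the first copy

# Example use, with a trivial "protocol" that solves two independent ANDs:
two_and = lambda a, b: (a[0] & b[0], a[1] & b[1])
print(pi_1(1, 1, two_and, lambda rng: (rng.randrange(2), rng.randrange(2))))  # 1
```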

If we denote an $n$-fold repetition of a task $T$ by $T^n$, then repeatedly applying Theorem 2.1 yields
$$\mathsf{IC}(T^n, \mu^n) = n \cdot \mathsf{IC}(T, \mu). \qquad (21)$$
Thus information complexity is additive and has the direct sum property: the cost of $n$ copies of $T$ scales as $n$ times the cost of one copy. This fact can be viewed as an extension of the property $H(X^n) = n \cdot H(X)$ to interactive problems, but what does it teach us about communication complexity?

Direct sum and interactive compression. Let us return to the communication complexity setting, fixing $T$ to be the task of computing a function $F(X, Y)$ with some error at most $\varepsilon > 0$ over a distribution $\mu$ (the case $\varepsilon = 0$ seems to be different from $\varepsilon > 0$). We will denote by $F^n_\varepsilon$ the task of computing $n$ copies of $F$ on independent inputs distributed according to $\mu^n$, with error at most $\varepsilon$ on each copy (note that computing $F$ correctly with error at most $\varepsilon$ on all copies simultaneously is a harder task). The direct sum question for communication complexity asks whether
$$D_{\mu^n}(F^n_\varepsilon) = \Omega(n \cdot D_\mu(F_\varepsilon))? \qquad (22)$$
While this question remains open, information complexity sheds light on it by linking it to problems in interactive coding theory. As discussed below, information complexity appears to be the best tool for either proving or disproving (22), as well as for establishing the right direct sum theorem in case (22) is false.

It is an easy observation that the information cost of a protocol $\pi$ is always bounded by its length $\|\pi\|$, and therefore information complexity is always bounded by communication complexity. Therefore, by (21),
$$\frac{1}{n} D_{\mu^n}(F^n_\varepsilon) \ge \frac{1}{n} \mathsf{IC}(F^n_\varepsilon, \mu^n) = \mathsf{IC}(F_\varepsilon, \mu). \qquad (23)$$
It turns out that the converse is also true in the limit, as $n \to \infty$ [6]:
$$\lim_{n \to \infty} \frac{1}{n} D_{\mu^n}(F^n_\varepsilon) = \mathsf{IC}(F_\varepsilon, \mu). \qquad (24)$$
Equation (24) can be viewed as the interactive version of the Source Coding Theorem (2). In particular, it gives an operational characterization of information complexity exclusively in terms of communication complexity.

A promising attack route (that works to date have followed) on the direct sum question for communication complexity is to try and prove a relationship of the type $\mathsf{IC}(F_\varepsilon, \mu) \gtrsim D_\mu(F_\varepsilon)$ (as discussed above, the converse is trivially true). Indeed, if we could prove that $\mathsf{IC}(F_\varepsilon, \mu) = \Omega(D_\mu(F_\varepsilon))$, by (23) it would imply that $\frac{1}{n} D_{\mu^n}(F^n_\varepsilon) = \Omega(D_\mu(F_\varepsilon))$ and prove (22). One equivalent way to interpret the attempts to prove $\mathsf{IC}(F_\varepsilon, \mu) \gtrsim D_\mu(F_\varepsilon)$ is in terms of a search for an interactive analogue of Huffman coding (11) (where it does hold that $H(X) > C(X) - 1$).

(One-way) Huffman coding shows how to encode a low-entropy (uninformative) signal into a short one. Its interactive version seeks to simulate a low-information-cost (uninformative) protocol $\pi$ with a low-communication protocol $\pi'$. Until very recently, we did not know whether such a general compression scheme exists. Just this year, the first example of a relation whose information and communication complexities are exponentially separated was given in a striking work by Ganor, Kol, and Raz [19]. This result, in particular, shows a protocol $\pi$ for a sampling problem that has information cost $I$, but which cannot be simulated by a protocol $\pi'$ with communication cost $< 2^{\Omega(I)}$.

Note that (23), which follows from Theorem 2.1, can be further sharpened as follows. If there is a protocol $\pi_n$ for solving $F^n_\varepsilon$ ($n$ copies of $F$) with communication cost $C_n$, then there is a protocol $\pi_1$ for solving a single copy of $F_\varepsilon$ whose communication cost is still at most $C := C_n$, and whose information cost is at most $I \le C_n/n$. To prove a lower bound on $C_n$, we can assume that it is too small, and then show how to convert $\pi_1$ into a protocol $\pi'$ for $F_\varepsilon$ that uses $< D_\mu(F_\varepsilon)$ communication. This brings us to the following general interactive coding/compression question:

Problem 2.2. (Interactive compression problem). Given a protocol $\pi$ whose communication cost is $C$ and whose information cost is $I$, what is the smallest amount of communication needed to (approximately) simulate $\pi$?

To prove the strongest possible direct sum theorem we need $\pi$ to be compressed all the way down to $O(I)$ bits of communication (the strongest possible interactive compression result); however, partial interactive compression results lead to weaker (but still non-trivial) direct sum theorems. At present, the two strongest compression results, which partially resolve Problem 2.2, compress $\pi$ to $\tilde{O}(\sqrt{C \cdot I})$ communication [2] and $2^{O(I)}$ communication [7], respectively. (Here, the $\tilde{O}(\cdot)$ notation hides poly-logarithmic factors.) Note that these results are incomparable, since $C$, which is always at least $I$, can be much (e.g. double-exponentially) larger than $I$. These results lead to direct sum theorems for randomized communication complexity. As the compression introduces an additional small amount of error, the first result implies, for any constant $\rho > 0$,
$$D_{\mu^n}(F^n_\varepsilon) = \Omega(\sqrt{n} \cdot D_\mu(F_{\varepsilon+\rho})), \qquad (25)$$
and the second one implies
$$D_{\mu^n}(F^n_\varepsilon) = \Omega(n \cdot \log(D_\mu(F_{\varepsilon+\rho}))). \qquad (26)$$
The recent result of Ganor et al. [19] rules out the strongest possible direct sum theorem for relations. Since the hard-to-compress protocol in their example has a very high communication complexity (on the order of $2^{2^I}$), it is still possible that any protocol can be compressed to $O(I \cdot \log^{O(1)}(C))$ communication, leading to a direct sum theorem with $n/\log^{O(1)} n$ instead of just $n$.

We should also note that the direct sum situation with functions (as opposed to relations) remains open.

Why is interactive compression so much harder than non-interactive compression? The main difference between the interactive and non-interactive compression settings is that in the interactive setting each message of the protocol may convey an average of only $I/C \ll 1$ bits of information. There are many ways to compress communication in the relevant setting, but all of them incur an average loss of $\Omega(1)$ bits per round (Huffman coding being one example of this phenomenon). This is prohibitively expensive in the interactive case if the number of rounds of interaction $r$ is equal to $C$. Therefore, inevitably, to compress interactive communication one has to compress multiple rounds in one message. This problem disappears when $I \gg r$, and this is what makes the "$\le$" direction of (24) go through when $n$ is sufficiently large.

Direct product for communication complexity. Next, we turn our attention to the more difficult direct product problem for communication complexity. The direct sum question talks about the amount of resources needed to achieve a certain probability of success on $n$ copies of $F$. What if that amount of resources is not provided? For example, (23) implies that unless $n \cdot \mathsf{IC}(F_\varepsilon, \mu)$ bits of communication are allowed in the computation of $F^n_\varepsilon$, the computation of some copy of $F$ will have $< 1 - \varepsilon$ success probability. What does this tell us about the success probability of all copies simultaneously? It only tells us that the probability of the protocol succeeding on all copies simultaneously is bounded by $1 - \varepsilon$. This is a very weak bound, since solving the $n$ copies independently leads to a success probability of $(1 - \varepsilon)^n$, which is exponentially small for a constant $\varepsilon$. How can this gap be reconciled? In particular, can one show that Alice and Bob cannot pool the errors from all $n$ copies on the same instances, thus keeping the success probability for each coordinate, as well as the global success probability, close to $1 - \varepsilon$? The direct product problem precisely addresses this question.

Let us denote by $\mathrm{suc}(F, \mu, C)$ the highest success probability (w.r.t. $\mu$) in computing $F$ that can be attained using communication $C$. Thus $\mathrm{suc}(F, \mu, C) \ge 1 - \varepsilon$ is equivalent to $D_\mu(F_\varepsilon) \le C$. Somewhat informally phrased, the direct product question asks whether
$$\mathrm{suc}(F^n, \mu^n, o(n \cdot C)) < \mathrm{suc}(F, \mu, C)^{\Omega(n)}? \qquad (27)$$
As with the direct sum question, the direct product question appears "obvious": one would expect that the best we can do is just execute the best protocol for one copy of $F$ $n$ times independently. This would lead to a success probability of $\mathrm{suc}(F, \mu, o(C))^n$.

A prominent setting within complexity theory where a question similar to the direct product question arose is that of parallel repetition for two-prover games [39]. Parallel repetition is used in the context of probabilistically checkable proofs (PCP) and hardness amplification. Hardness amplification is accomplished here by taking a hard task $T$ (e.g. a verification procedure where the success probability of an unauthorized prover is at most $1 - \varepsilon$), and creating a task $T^n$ by taking $n$ independent instances of $T$. It has been shown [39] that as $n$ grows, the success probability goes to 0.

Unfortunately, it does not go to 0 as $(1 - \varepsilon)^n$. Indeed, as shown by a counterexample constructed by Raz [40], the best rate one can hope for is $(1 - \varepsilon^2)^n$. The reason for this, pointed out by an earlier example of Feige and Verbitsky [18], is that the answers can be arranged to align errors together, so that when the provers fail, they fail on a lot more than $\varepsilon n$ coordinates at the same time. This is possible when answers are allowed to be correlated.

The direct product question (27) for communication complexity combines features from the direct sum question (thus hinting that information complexity is to play a role here as well), and from the parallel repetition setup (since we want a success probability dropping exponentially in $n$). The direct sum discussion already suggests that for $\mathrm{suc}(F, \mu, C) = 1 - \varepsilon$, the best scaling of the amount of communication one can hope for is as $n \cdot I$, where $I = \mathsf{IC}(F_\varepsilon, \mu)$. This is because, as $n \to \infty$, the per-copy communication cost of computing $F$ with error $\varepsilon$ scales as $I$. Thus, if we denote by $\mathrm{suc}_i(F, \mu, I) \ge \mathrm{suc}(F, \mu, I)$ the best success probability one can attain at solving $F$ while incurring an information cost of at most $I$, the direct product question for information asks whether
$$\mathrm{suc}(F^n, \mu^n, o(n \cdot I)) < \mathrm{suc}_i(F, \mu, I)^{\Omega(n)}? \qquad (28)$$
Note that the success probability on the left-hand side is still with respect to communication. A statement such as this with respect to information cost is bound to be false: information cost being an average-case quantity, one can attain an information-cost-$I_n$ protocol by doing nothing with probability $1 - \delta$, and incurring an information cost of $I_n/\delta \approx n \cdot I$ with probability $\delta$, where $\delta$ can be taken only polynomially (and not exponentially) small.

In a sequence of two papers, the second being very recent [11, 12], (28) was shown to be true up to polylogarithmic factors for boolean functions. To simplify parameters, suppose $\mathrm{suc}_i(F, \mu, I) < 2/3$. Then there are constants $c_1, c_2$ such that if $T \log T < c_1 \cdot n \cdot I$, then
$$\mathrm{suc}(F^n, \mu^n, T) < 2^{-c_2 n}. \qquad (29)$$
The proof of (29) is quite involved and combines ideas from the proofs of direct sum theorems and of parallel repetition theorems.

Exact communication complexity bounds. One of the great successes of information theory as it applies to (classical, one-way) communication problems is its ability to give precise answers to fairly complicated asymptotic communication problems, for example ones involving complicated dependencies between terminals or complicated channels. For example, the capacity of the binary symmetric channel $BSC_{0.2}$ is precisely $1 - H(0.2) \approx 0.278$, which means that to transmit $n$ bits over such a channel, we will need about $3.596 n$ utilizations of the channel (i.e. we will need to send $3.596 n$ bits down the channel). Using combinatorial techniques, in most cases, such precision is inaccessible in the two-party setting, since the techniques often lose constant factors by design. In contrast, information complexity extends the precision benefits of one-way information theory to the interactive setting.
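The numbers quoted for $BSC_{0.2}$ are easy to reproduce (a quick numeric check, not part of the paper).

```python
# Verify the quoted capacity of BSC_0.2 and the corresponding blow-up factor:
# cap(BSC_eps) = 1 - H(eps), and transmitting n bits takes ~ n / cap utilizations.
from math import log2

def binary_entropy(eps):
    return eps * log2(1 / eps) + (1 - eps) * log2(1 / (1 - eps))

cap = 1 - binary_entropy(0.2)
print(round(cap, 3), round(1 / cap, 3))   # ~0.278 and ~3.596 channel uses per bit
```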

We give one specific example of an exact communication complexity bound. Recall that the disjointness problem $\mathrm{Disj}_n(X, Y)$ takes two $n$-bit vectors $X, Y$ and checks whether there is a location with $X_i = Y_i = 1$. Thus $\mathrm{Disj}_n$ is just the (negated) disjunction of $n$ independent copies of the two-bit $AND(X_i, Y_i)$ function. Using techniques similar to the proof of Theorem 2.1, one can show that the communication complexity of disjointness is tightly linked to the information complexity of $AND$. Note that disjointness becomes trivial if many coordinates $(X_j, Y_j)$ of the input are $(1,1)$. However, any distribution of inputs where $\mu((X_j, Y_j) = (1,1))$ is on the order of $1/n$ (and thus tends to 0) will not be trivial. More formally, denote by $0^+$ a function $f(n)$ of $n$ such that $f(n) = o(1)$ and $f(n) \ge 2^{-O(n)}$. For example, one can take $f(n) = 1/n$. Then with some work one shows [9] that
$$R_{0^+}(\mathrm{Disj}_n) = \left( \inf_{\mu:\ \mu(1,1) = 0} \mathsf{IC}(AND_0, \mu) \right) \cdot n \pm o(n). \qquad (30)$$
(Note that even when $\mu(1,1) = 0$, and thus $AND(X, Y) = 0$ on $\mathrm{supp}(\mu)$, the task $AND_0$ requires the protocol to always be correct, even on the $(1,1)$ input. Otherwise, $\mathsf{IC}(AND_0, \mu)$ would trivially be 0.)

Thus, understanding the precise asymptotics of the communication complexity of $\mathrm{Disj}_n$ boils down to understanding the (0-error) information complexity of the two-bit $AND$ function. It turns out that one can give an explicit information-theoretically optimal family of protocols for $AND$, and calculate the quantity in (30) explicitly, obtaining $R_{0^+}(\mathrm{Disj}_n) = C_{DISJ} \cdot n \pm o(n)$ for an explicitly computable constant $C_{DISJ} \approx 0.4827$. Interestingly, even in the case of such a simple function as the two-bit $AND$, the information complexity is not attained by any particular protocol, but rather by an infinite family of communication protocols! Moreover, if we denote by $\mathsf{IC}_r(AND_0)$ the information cost of $AND$ where the infimum in (20) is only taken over protocols of length $r$, then it turns out that $\mathsf{IC}_r(AND_0) = \mathsf{IC}(AND_0) + \Theta(1/r^2)$, implying that an asymptotically optimal protocol is only achieved with a super-constant number of rounds [9]. We do not yet know how general this $1/r^2$ gap phenomenon is, and which communication tasks admit a minimum in (20).

3. Interactive error-correcting codes

Adversarial error-correction. The discussion so far focused on coding for interactive computing over a noiseless binary channel. In this section we will focus on error-correction problems where the channel contains random or adversarial noise. The first regime we would like to consider is that of adversarial noise. In this regime Alice and Bob are trying to perform a task $T$ over a channel in which an adversary is allowed to corrupt a constant fraction of the messages. Both the regime of a binary channel and that of a channel with a constant-size alphabet $\Sigma$ (i.e. where symbols $\sigma \in \Sigma$ are being transmitted over the channel) are interesting. The one-way case has been extensively studied for several decades, as discussed in the introduction. If the task $T$ is just a simple transmission task, then the theory of (worst-case) error-correcting codes [34, 44] applies. While there are many open problems in coding theory, the overall picture is fairly well understood. In particular, constructions of good positive-rate, constant-distance codes exist (i.e. codes


More information

1 Distributional problems

1 Distributional problems CSCI 5170: Computational Complexity Lecture 6 The Chinese University of Hong Kong, Spring 2016 23 February 2016 The theory of NP-completeness has been applied to explain why brute-force search is essentially

More information

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions Columbia University - Crypto Reading Group Apr 27, 2011 Inaccessible Entropy and its Applications Igor Carboni Oliveira We summarize the constructions of PRGs from OWFs discussed so far and introduce the

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Towards Optimal Deterministic Coding for Interactive Communication

Towards Optimal Deterministic Coding for Interactive Communication Towards Optimal Deterministic Coding for Interactive Communication Ran Gelles Bernhard Haeupler Gillat Kol Noga Ron-Zewi Avi Wigderson Abstract We show an efficient, deterministic interactive coding scheme

More information

Computational Tasks and Models

Computational Tasks and Models 1 Computational Tasks and Models Overview: We assume that the reader is familiar with computing devices but may associate the notion of computation with specific incarnations of it. Our first goal is to

More information

IP = PSPACE using Error Correcting Codes

IP = PSPACE using Error Correcting Codes Electronic Colloquium on Computational Complexity, Report No. 137 (2010 IP = PSPACE using Error Correcting Codes Or Meir Abstract The IP theorem, which asserts that IP = PSPACE (Lund et. al., and Shamir,

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

Lecture 20: Lower Bounds for Inner Product & Indexing

Lecture 20: Lower Bounds for Inner Product & Indexing 15-859: Information Theory and Applications in TCS CMU: Spring 201 Lecture 20: Lower Bounds for Inner Product & Indexing April 9, 201 Lecturer: Venkatesan Guruswami Scribe: Albert Gu 1 Recap Last class

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

Randomness. What next?

Randomness. What next? Randomness What next? Random Walk Attribution These slides were prepared for the New Jersey Governor s School course The Math Behind the Machine taught in the summer of 2012 by Grant Schoenebeck Large

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

FORMULATION OF THE LEARNING PROBLEM

FORMULATION OF THE LEARNING PROBLEM FORMULTION OF THE LERNING PROBLEM MIM RGINSKY Now that we have seen an informal statement of the learning problem, as well as acquired some technical tools in the form of concentration inequalities, we

More information

Arthur-Merlin Streaming Complexity

Arthur-Merlin Streaming Complexity Weizmann Institute of Science Joint work with Ran Raz Data Streams The data stream model is an abstraction commonly used for algorithms that process network traffic using sublinear space. A data stream

More information

Majority is incompressible by AC 0 [p] circuits

Majority is incompressible by AC 0 [p] circuits Majority is incompressible by AC 0 [p] circuits Igor Carboni Oliveira Columbia University Joint work with Rahul Santhanam (Univ. Edinburgh) 1 Part 1 Background, Examples, and Motivation 2 Basic Definitions

More information

Communication vs information complexity, relative discrepancy and other lower bounds

Communication vs information complexity, relative discrepancy and other lower bounds Communication vs information complexity, relative discrepancy and other lower bounds Iordanis Kerenidis CNRS, LIAFA- Univ Paris Diderot 7 Joint work with: L. Fontes, R. Jain, S. Laplante, M. Lauriere,

More information

Lecture 4: Codes based on Concatenation

Lecture 4: Codes based on Concatenation Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes

More information

Noisy-Channel Coding

Noisy-Channel Coding Copyright Cambridge University Press 2003. On-screen viewing permitted. Printing not permitted. http://www.cambridge.org/05264298 Part II Noisy-Channel Coding Copyright Cambridge University Press 2003.

More information

Lecture 26: Arthur-Merlin Games

Lecture 26: Arthur-Merlin Games CS 710: Complexity Theory 12/09/2011 Lecture 26: Arthur-Merlin Games Instructor: Dieter van Melkebeek Scribe: Chetan Rao and Aaron Gorenstein Last time we compared counting versus alternation and showed

More information

THE UNIVERSITY OF CHICAGO COMMUNICATION COMPLEXITY AND INFORMATION COMPLEXITY A DISSERTATION SUBMITTED TO

THE UNIVERSITY OF CHICAGO COMMUNICATION COMPLEXITY AND INFORMATION COMPLEXITY A DISSERTATION SUBMITTED TO THE UNIVERSITY OF CHICAGO COMMUNICATION COMPLEXITY AND INFORMATION COMPLEXITY A DISSERTATION SUBMITTED TO THE FACULTY OF THE DIVISION OF THE PHYSICAL SCIENCES IN CANDIDACY FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

More information

Lecture 18: Quantum Information Theory and Holevo s Bound

Lecture 18: Quantum Information Theory and Holevo s Bound Quantum Computation (CMU 1-59BB, Fall 2015) Lecture 1: Quantum Information Theory and Holevo s Bound November 10, 2015 Lecturer: John Wright Scribe: Nicolas Resch 1 Question In today s lecture, we will

More information

Asymmetric Communication Complexity and Data Structure Lower Bounds

Asymmetric Communication Complexity and Data Structure Lower Bounds Asymmetric Communication Complexity and Data Structure Lower Bounds Yuanhao Wei 11 November 2014 1 Introduction This lecture will be mostly based off of Miltersen s paper Cell Probe Complexity - a Survey

More information

15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions

15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions 15-251: Great Theoretical Ideas In Computer Science Recitation 9 : Randomized Algorithms and Communication Complexity Solutions Definitions We say a (deterministic) protocol P computes f if (x, y) {0,

More information

Lecture 22: Quantum computational complexity

Lecture 22: Quantum computational complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 22: Quantum computational complexity April 11, 2006 This will be the last lecture of the course I hope you have enjoyed the

More information

Entropy. Probability and Computing. Presentation 22. Probability and Computing Presentation 22 Entropy 1/39

Entropy. Probability and Computing. Presentation 22. Probability and Computing Presentation 22 Entropy 1/39 Entropy Probability and Computing Presentation 22 Probability and Computing Presentation 22 Entropy 1/39 Introduction Why randomness and information are related? An event that is almost certain to occur

More information

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy Coding and Information Theory Chris Williams, School of Informatics, University of Edinburgh Overview What is information theory? Entropy Coding Information Theory Shannon (1948): Information theory is

More information

Fourier analysis of boolean functions in quantum computation

Fourier analysis of boolean functions in quantum computation Fourier analysis of boolean functions in quantum computation Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical Physics, University of Cambridge

More information

Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback

Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback Klim Efremenko UC Berkeley klimefrem@gmail.com Ran Gelles Princeton University rgelles@cs.princeton.edu Bernhard

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3

More information

Lecture Lecture 9 October 1, 2015

Lecture Lecture 9 October 1, 2015 CS 229r: Algorithms for Big Data Fall 2015 Lecture Lecture 9 October 1, 2015 Prof. Jelani Nelson Scribe: Rachit Singh 1 Overview In the last lecture we covered the distance to monotonicity (DTM) and longest

More information

An exponential separation between quantum and classical one-way communication complexity

An exponential separation between quantum and classical one-way communication complexity An exponential separation between quantum and classical one-way communication complexity Ashley Montanaro Centre for Quantum Information and Foundations, Department of Applied Mathematics and Theoretical

More information

A Mathematical Theory of Communication

A Mathematical Theory of Communication A Mathematical Theory of Communication Ben Eggers Abstract This paper defines information-theoretic entropy and proves some elementary results about it. Notably, we prove that given a few basic assumptions

More information

Intro to Information Theory

Intro to Information Theory Intro to Information Theory Math Circle February 11, 2018 1. Random variables Let us review discrete random variables and some notation. A random variable X takes value a A with probability P (a) 0. Here

More information

Competing Provers Protocols for Circuit Evaluation

Competing Provers Protocols for Circuit Evaluation THEORY OF COMPUTING www.theoryofcomputing.org Competing Provers Protocols for Circuit Evaluation Gillat Kol Ran Raz April 21, 2014 Abstract: Let C be a fan-in 2) Boolean circuit of size s and depth d,

More information

The sample complexity of agnostic learning with deterministic labels

The sample complexity of agnostic learning with deterministic labels The sample complexity of agnostic learning with deterministic labels Shai Ben-David Cheriton School of Computer Science University of Waterloo Waterloo, ON, N2L 3G CANADA shai@uwaterloo.ca Ruth Urner College

More information

A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS)

A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS) A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS) MARIYA BESSONOV, DIMA GRIGORIEV, AND VLADIMIR SHPILRAIN ABSTRACT. We offer a public-key encryption protocol

More information

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions

3F1: Signals and Systems INFORMATION THEORY Examples Paper Solutions Engineering Tripos Part IIA THIRD YEAR 3F: Signals and Systems INFORMATION THEORY Examples Paper Solutions. Let the joint probability mass function of two binary random variables X and Y be given in the

More information

Notes 10: Public-key cryptography

Notes 10: Public-key cryptography MTH6115 Cryptography Notes 10: Public-key cryptography In this section we look at two other schemes that have been proposed for publickey ciphers. The first is interesting because it was the earliest such

More information

Recoverabilty Conditions for Rankings Under Partial Information

Recoverabilty Conditions for Rankings Under Partial Information Recoverabilty Conditions for Rankings Under Partial Information Srikanth Jagabathula Devavrat Shah Abstract We consider the problem of exact recovery of a function, defined on the space of permutations

More information

Lecture 1 : Data Compression and Entropy

Lecture 1 : Data Compression and Entropy CPS290: Algorithmic Foundations of Data Science January 8, 207 Lecture : Data Compression and Entropy Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will study a simple model for

More information

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY BURTON ROSENBERG UNIVERSITY OF MIAMI Contents 1. Perfect Secrecy 1 1.1. A Perfectly Secret Cipher 2 1.2. Odds Ratio and Bias 3 1.3. Conditions for Perfect

More information

Improved Direct Product Theorems for Randomized Query Complexity

Improved Direct Product Theorems for Randomized Query Complexity Improved Direct Product Theorems for Randomized Query Complexity Andrew Drucker Nov. 16, 2010 Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 1/28 Big picture Usually,

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

Other Topics in Quantum Information

Other Topics in Quantum Information p. 1/23 Other Topics in Quantum Information In a course like this there is only a limited time, and only a limited number of topics can be covered. Some additional topics will be covered in the class projects.

More information

Partitions and Covers

Partitions and Covers University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Dong Wang Date: January 2, 2012 LECTURE 4 Partitions and Covers In previous lectures, we saw

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security CPSC 467b: Cryptography and Computer Security Michael J. Fischer Lecture 10 February 19, 2013 CPSC 467b, Lecture 10 1/45 Primality Tests Strong primality tests Weak tests of compositeness Reformulation

More information

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy Haykin_ch05_pp3.fm Page 207 Monday, November 26, 202 2:44 PM CHAPTER 5 Information Theory 5. Introduction As mentioned in Chapter and reiterated along the way, the purpose of a communication system is

More information

CISC 876: Kolmogorov Complexity

CISC 876: Kolmogorov Complexity March 27, 2007 Outline 1 Introduction 2 Definition Incompressibility and Randomness 3 Prefix Complexity Resource-Bounded K-Complexity 4 Incompressibility Method Gödel s Incompleteness Theorem 5 Outline

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

Probabilistically Checkable Arguments

Probabilistically Checkable Arguments Probabilistically Checkable Arguments Yael Tauman Kalai Microsoft Research yael@microsoft.com Ran Raz Weizmann Institute of Science ran.raz@weizmann.ac.il Abstract We give a general reduction that converts

More information

Lecture 11: Continuous-valued signals and differential entropy

Lecture 11: Continuous-valued signals and differential entropy Lecture 11: Continuous-valued signals and differential entropy Biology 429 Carl Bergstrom September 20, 2008 Sources: Parts of today s lecture follow Chapter 8 from Cover and Thomas (2007). Some components

More information

From Secure MPC to Efficient Zero-Knowledge

From Secure MPC to Efficient Zero-Knowledge From Secure MPC to Efficient Zero-Knowledge David Wu March, 2017 The Complexity Class NP NP the class of problems that are efficiently verifiable a language L is in NP if there exists a polynomial-time

More information

Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory

Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory 1 The intuitive meaning of entropy Modern information theory was born in Shannon s 1948 paper A Mathematical Theory of

More information

Lecture 16. Error-free variable length schemes (contd.): Shannon-Fano-Elias code, Huffman code

Lecture 16. Error-free variable length schemes (contd.): Shannon-Fano-Elias code, Huffman code Lecture 16 Agenda for the lecture Error-free variable length schemes (contd.): Shannon-Fano-Elias code, Huffman code Variable-length source codes with error 16.1 Error-free coding schemes 16.1.1 The Shannon-Fano-Elias

More information

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN A Thesis Presented to The Academic Faculty by Bryan Larish In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

More information

PRGs for space-bounded computation: INW, Nisan

PRGs for space-bounded computation: INW, Nisan 0368-4283: Space-Bounded Computation 15/5/2018 Lecture 9 PRGs for space-bounded computation: INW, Nisan Amnon Ta-Shma and Dean Doron 1 PRGs Definition 1. Let C be a collection of functions C : Σ n {0,

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

Lecture 6: Quantum error correction and quantum capacity

Lecture 6: Quantum error correction and quantum capacity Lecture 6: Quantum error correction and quantum capacity Mark M. Wilde The quantum capacity theorem is one of the most important theorems in quantum hannon theory. It is a fundamentally quantum theorem

More information

Hardness of MST Construction

Hardness of MST Construction Lecture 7 Hardness of MST Construction In the previous lecture, we saw that an MST can be computed in O( n log n+ D) rounds using messages of size O(log n). Trivially, Ω(D) rounds are required, but what

More information

List Decoding of Reed Solomon Codes

List Decoding of Reed Solomon Codes List Decoding of Reed Solomon Codes p. 1/30 List Decoding of Reed Solomon Codes Madhu Sudan MIT CSAIL Background: Reliable Transmission of Information List Decoding of Reed Solomon Codes p. 2/30 List Decoding

More information

IMPROVING THE ALPHABET-SIZE IN EXPANDER BASED CODE CONSTRUCTIONS

IMPROVING THE ALPHABET-SIZE IN EXPANDER BASED CODE CONSTRUCTIONS IMPROVING THE ALPHABET-SIZE IN EXPANDER BASED CODE CONSTRUCTIONS 1 Abstract Various code constructions use expander graphs to improve the error resilience. Often the use of expanding graphs comes at the

More information

Impossibility Results for Universal Composability in Public-Key Models and with Fixed Inputs

Impossibility Results for Universal Composability in Public-Key Models and with Fixed Inputs Impossibility Results for Universal Composability in Public-Key Models and with Fixed Inputs Dafna Kidron Yehuda Lindell June 6, 2010 Abstract Universal composability and concurrent general composition

More information

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview

AN INTRODUCTION TO SECRECY CAPACITY. 1. Overview AN INTRODUCTION TO SECRECY CAPACITY BRIAN DUNN. Overview This paper introduces the reader to several information theoretic aspects of covert communications. In particular, it discusses fundamental limits

More information

Cryptography and Security Final Exam

Cryptography and Security Final Exam Cryptography and Security Final Exam Solution Serge Vaudenay 29.1.2018 duration: 3h no documents allowed, except one 2-sided sheet of handwritten notes a pocket calculator is allowed communication devices

More information

6.895 PCP and Hardness of Approximation MIT, Fall Lecture 3: Coding Theory

6.895 PCP and Hardness of Approximation MIT, Fall Lecture 3: Coding Theory 6895 PCP and Hardness of Approximation MIT, Fall 2010 Lecture 3: Coding Theory Lecturer: Dana Moshkovitz Scribe: Michael Forbes and Dana Moshkovitz 1 Motivation In the course we will make heavy use of

More information

Lecture 14: IP = PSPACE

Lecture 14: IP = PSPACE IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Basic Course on Computational Complexity Lecture 14: IP = PSPACE David Mix Barrington and Alexis Maciel August 3, 2000 1. Overview We

More information

Universal Semantic Communication II: A Theory of Goal-Oriented Communication

Universal Semantic Communication II: A Theory of Goal-Oriented Communication Universal Semantic Communication II: A Theory of Goal-Oriented Communication Brendan Juba Madhu Sudan MIT CSAIL {bjuba,madhu}@mit.edu February 13, 2009 Abstract We continue the investigation of the task

More information

Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions

Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions Password Cracking: The Effect of Bias on the Average Guesswork of Hash Functions Yair Yona, and Suhas Diggavi, Fellow, IEEE Abstract arxiv:608.0232v4 [cs.cr] Jan 207 In this work we analyze the average

More information