The Communication Complexity of Correlation. Prahladh Harsha, Rahul Jain, David McAllester, Jaikumar Radhakrishnan

1 The Communication Complexity of Correlation. Prahladh Harsha, Rahul Jain, David McAllester, Jaikumar Radhakrishnan

2 Transmitting Correlated Variables. (X, Y): a pair of correlated random variables.

3 Transmitting Correlated Variables (contd). Two parties: Alice and Bob.

4 Transmitting Correlated Variables (contd). Input (to Alice): x ←_R X.

5 Transmitting Correlated Variables (contd). Alice sends a message z to Bob.

6 Transmitting Correlated Variables (contd). Output (from Bob): y ←_R Y|X=x (i.e., the conditional distribution).

7 Transmitting Correlated Variables (contd). Question: What is the minimum "expected" number of bits |Z| that Alice needs to send Bob? (Call it T(X : Y).)

8 Transmitting Correlated Variables (contd). Easy to check: T(X : Y) ≥ I[X : Y] (the mutual information).

9 Correlated variables: an example. W = (i, b): a random variable uniformly distributed over [n] × {0, 1}. X and Y: two n-bit strings such that X[i] = Y[i] = b; the remaining 2(n − 1) bits are chosen independently and uniformly.

10 Correlated variables: an example (contd). Exercise: I[X : Y] = o(1), but T(X : Y) = Θ(log n).
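The exercise can be checked numerically for small n by building the joint distribution of (X, Y) explicitly. The sketch below is not from the slides (the helper names are mine) and uses brute force, so n is kept tiny; the printed values of I[X : Y] should be small compared with log n.

```python
# A brute-force check (a sketch) of the exercise above: build the exact joint
# distribution of (X, Y) for small n and compute I[X : Y]; the values should be
# small compared to log n, consistent with I[X : Y] = o(1).
import itertools
import math

def mutual_information(joint):
    """I[X:Y] in bits for a joint distribution given as {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def example_joint(n):
    """W = (i, b) uniform over [n] x {0,1}; X[i] = Y[i] = b; other bits uniform."""
    joint = {}
    strings = list(itertools.product((0, 1), repeat=n))
    for i in range(n):
        for b in (0, 1):
            for x in strings:
                if x[i] != b:
                    continue
                for y in strings:
                    if y[i] != b:
                        continue
                    # Pr[W = (i, b)] * Pr[rest of X] * Pr[rest of Y]
                    p = (1.0 / (2 * n)) * 2.0 ** (-(n - 1)) * 2.0 ** (-(n - 1))
                    joint[(x, y)] = joint.get((x, y), 0.0) + p
    return joint

for n in (2, 3, 4, 5):
    print(n, round(mutual_information(example_joint(n)), 4),
          "vs log n =", round(math.log2(n), 4))
```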

11 Information Theory Preliminaries. (X, Y): a pair of random variables.
1. Entropy: H[X] ≜ Σ_x p(x) log(1/p(x)), where p(x) = Pr[X = x]; the minimum expected number of bits to encode X (up to ±1).
2. Conditional Entropy: H[Y|X] ≜ Σ_x Pr[X = x] · H[Y|X=x].
3. Joint Entropy: H[XY] = H[X] + H[Y|X].
4. Mutual Information: I[X : Y] ≜ H[X] + H[Y] − H[XY] = H[Y] − H[Y|X].

12 Information Theory Preliminaries (Contd).
1. Independence: X and Y are independent if Pr[X = x, Y = y] = Pr[X = x] · Pr[Y = y] for all x, y; equivalently, I[X : Y] = 0.
2. Markov Chain: X - Z - Y is called a Markov chain if X and Y are conditionally independent given Z (i.e., I[X : Y | Z] = 0).
3. Data Processing Inequality: X - Z - Y ⇒ I[X : Z] ≥ I[X : Y].
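As a quick numerical sanity check of these facts (not from the slides; the chain X - Z - Y built from two binary symmetric channels with crossover rates 0.1 and 0.2 is an arbitrary illustration), the sketch below computes mutual informations via I = H[A] + H[B] − H[AB] and confirms the data processing inequality.

```python
# A numerical sanity check (a sketch) of the identities above on a small
# Markov chain X - Z - Y: X is a uniform bit, Z is X flipped with probability
# 0.1, and Y is Z flipped with probability 0.2 (arbitrary choices).
import math

def H(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, coords):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[c] for c in coords)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, a, b):
    """I between coordinates a and b, computed as H[A] + H[B] - H[AB]."""
    return H(marginal(joint, (a,))) + H(marginal(joint, (b,))) - H(marginal(joint, (a, b)))

def flip(bit, eps):
    """Output distribution of a binary symmetric channel with crossover eps."""
    return {bit: 1 - eps, 1 - bit: eps}

# Joint distribution of (X, Z, Y).
joint = {}
for x in (0, 1):
    for z, pz in flip(x, 0.1).items():
        for y, py in flip(z, 0.2).items():
            joint[(x, z, y)] = joint.get((x, z, y), 0.0) + 0.5 * pz * py

print("I[X:Z] =", round(mutual_info(joint, 0, 1), 4))
print("I[X:Y] =", round(mutual_info(joint, 0, 2), 4))  # data processing: at most I[X:Z]
```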

13 Transmitting Correlated Variables. x -> Alice --z--> Bob -> y. Input (to Alice): x ←_R X. Output (from Bob): y ←_R Y|X=x (i.e., the conditional distribution). Question: What is the minimum "expected" number of bits |z| that Alice needs to send Bob? (Call it T(X : Y).)

14 Transmitting Correlated Variables (contd). T(X : Y) = min_Z H[Z], where the minimum is over all Markov chains X - Z - Y (i.e., over all Z such that X and Y are independent given Z).

15 Asymptotic Version (with error). x_1, x_2, ..., x_m -> Alice --z--> Bob -> ỹ_1, ỹ_2, ..., ỹ_m. Input: x_1, x_2, ..., x_m, i.i.d. samples from X.

16 Asymptotic Version (with error, contd). Output: ỹ_1, ỹ_2, ..., ỹ_m such that ‖((X_1, Y_1), ..., (X_m, Y_m)) − ((X_1, Ỹ_1), ..., (X_m, Ỹ_m))‖_1 ≤ λ.

17 Asymptotic Version (with error, contd). T_λ(X^m : Y^m) = min E[|Z|]. Common Information: C(X : Y) ≜ lim inf_{λ→0} [ lim_{m→∞} T_λ(X^m : Y^m) / m ].

18 Asymptotic Version (with error). Theorem (Wyner 1975): C(X : Y) = min_Z I[XY : Z], where the minimum is taken over all Z such that X - Z - Y.

19 Asymptotic Version (with error, contd). For instance, in the example above, C(X : Y) = 2 − o(1); i.e., one can send significantly less in the asymptotic case (2 bits on average) than in the one-shot case (Θ(log n) bits).
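A back-of-the-envelope calculation (mine, not from the slides) of why about 2 bits suffice in this example: take Z = W = (i, b), which does make X - W - Y a Markov chain. Given X and Y, the posterior on W is uniform over the positions where the two strings agree, of which there are about (n + 1)/2 on average, so

```latex
\[
I[XY : W] = H[W] - H[W \mid XY]
          \;\approx\; (\log n + 1) - \log\frac{n+1}{2}
          \;=\; 2 - \log\frac{n+1}{n}
          \;=\; 2 - o(1).
\]
```

Wyner's theorem therefore gives C(X : Y) ≤ I[XY : W] ≈ 2 via this particular W; the matching lower bound is what makes the equality on the slide tight.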

20 Asymptotic Version. x_1, x_2, ..., x_m -> Alice --z--> Bob -> ỹ_1, ỹ_2, ..., ỹ_m. Output: ỹ_1, ỹ_2, ..., ỹ_m such that ‖((X_1, Y_1), ..., (X_m, Y_m)) − ((X_1, Ỹ_1), ..., (X_m, Ỹ_m))‖_1 ≤ λ. T_λ(X^m : Y^m) = min E[|Z|].

21 Asymptotic Version with Shared Randomness. Shared random string R. x_1, x_2, ..., x_m -> Alice --z--> Bob -> ỹ_1, ỹ_2, ..., ỹ_m. Output: ỹ_1, ỹ_2, ..., ỹ_m such that ‖((X_1, Y_1), ..., (X_m, Y_m)) − ((X_1, Ỹ_1), ..., (X_m, Ỹ_m))‖_1 ≤ λ. T^R_λ(X^m : Y^m) = min E[|Z|].

22 Asymptotic Version with Shared Randomness (contd). Theorem (Winter 2002): lim inf_{λ→0} [ lim_{m→∞} T^R_λ(X^m : Y^m) / m ] = I[X : Y].

23 Main Result: One-shot Version. Shared random string R. x -> Alice --z--> Bob -> y. Input (to Alice): x ←_R X. Output (from Bob): y ←_R Y|X=x (i.e., the conditional distribution). Communication: T^R(X : Y) = E[|Z|].

24 Main Result: One-shot Version (contd). Theorem (Main Result): I[X : Y] ≤ T^R(X : Y) ≤ I[X : Y] + 2 log I[X : Y] + O(1).

25 Main Result: One-shot Version (contd). A characterization of mutual information (up to lower-order logarithmic terms).

26 One-shot vs. Asymptotic

27 One-shot vs. Asymptotic: Typical Sets. For large n, n i.i.d. samples of X fall in "typical sets". Typical sets: all elements are (roughly) equally probable, and the number of elements is ≈ 2^{nH[X]}. Asymptotic statements are arguably properties of typical sets, as opposed to properties of the underlying distributions.

28 One-shot vs. Asymptotic: Typical Sets, Applications (contd). Asymptotic versions are not strong enough for applications.
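A small empirical illustration (mine, not from the slides) of the typical-set phenomenon for a biased coin: the per-sample surprisal −(1/n) log₂ Pr[sample] concentrates around H[X] as n grows, which is the sense in which typical elements are roughly equiprobable, each with probability about 2^{−nH[X]}.

```python
# An empirical look (a sketch) at typicality: for a biased coin,
# -(1/n) * log2 Pr[observed n-sample] concentrates around H[X] as n grows.
import math
import random

def bernoulli_entropy(p):
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def empirical_rate(p, n, rng):
    """-(1/n) log2 of the probability of one n-sample from Bernoulli(p)."""
    logp = 0.0
    for _ in range(n):
        logp += math.log2(p) if rng.random() < p else math.log2(1 - p)
    return -logp / n

rng = random.Random(0)
p = 0.2
print("H[X] =", round(bernoulli_entropy(p), 4))
for n in (10, 100, 1000, 10000):
    rates = [empirical_rate(p, n, rng) for _ in range(20)]
    print(n, "average -(1/n) log2 Pr:", round(sum(rates) / len(rates), 4))
```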

29 Outline. Introduction; Proof of Main Result; Rejection Sampling Procedure; Applications of Main Result; Communication Complexity: Direct Sum Result.

30 Generating one distribution from another. P, Q: two distributions such that Supp(P) ⊆ Supp(Q). Rejection Sampling Procedure: q_1, q_2, q_3, q_4, ..., q_i, ... -> Proc -> i*. Input: an infinite stream of independently drawn samples from Q. Output: an index i* such that q_{i*} is a sample from P.

31 Generating one distribution from another (contd). Question: What is the minimum expected length of the index (i.e., E[ℓ(i*)]) over all such procedures?

32 Naive procedure. Sample according to P to obtain an item x. Wait till item x appears in the stream and output the corresponding index.

33 Naive procedure (contd). E[ℓ(i*)] ≈ Σ_x p(x) log(1/q(x)).
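A sketch of the naive procedure (the helper names and the distributions P, Q below are illustrative, not from the slides). For each item x, the index of its first occurrence in the Q-stream is geometric with mean 1/q(x), so the log of the index is about log(1/q(x)) on average, which is where the cross-entropy bound above comes from; note that Σ_x p(x) log(1/q(x)) can be much larger than S(P‖Q).

```python
# A sketch of the naive procedure: sample x ~ P, then scan the Q-stream until x
# appears. The average of log2(index) is at most (and within an additive
# constant of) sum_x p(x) log2(1/q(x)).
import math
import random

def sample(dist, rng):
    r, acc = rng.random(), 0.0
    for item, prob in dist.items():
        acc += prob
        if r < acc:
            return item
    return item  # guard against floating-point round-off

def naive_rejection_sampling(P, Q, rng):
    """Return (item, index): item ~ P, index of its first occurrence in the Q-stream."""
    x = sample(P, rng)
    i = 1
    while sample(Q, rng) != x:
        i += 1
    return x, i

P = {"a": 0.5, "b": 0.5}   # illustrative distributions (not from the slides)
Q = {"a": 0.9, "b": 0.1}
rng = random.Random(1)
indices = [naive_rejection_sampling(P, Q, rng)[1] for _ in range(20000)]
print("average log2(index):", round(sum(math.log2(i) for i in indices) / len(indices), 3))
print("sum_x p(x) log2(1/q(x)):", round(sum(p * math.log2(1 / Q[x]) for x, p in P.items()), 3))
```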

34 Relative Entropy. P, Q: two distributions. S(P‖Q) = Σ_x p(x) log(p(x)/q(x)).

35 Relative Entropy (contd). Properties: Asymmetric.

36 Relative Entropy (contd). S(P‖Q) < ∞ ⟺ Supp(P) ⊆ Supp(Q).

37 Relative Entropy (contd). S(P‖Q) = 0 ⟺ P ≡ Q.

38 Relative Entropy (contd). S(P‖Q) ≥ 0.

39 Rejection Sampling Lemma. Lemma (Rejection Sampling Lemma): There exists a rejection sampling procedure that generates P from Q such that E[ℓ(i*)] ≤ S(P‖Q) + 2 log S(P‖Q) + O(1).

40 Proof of Main Result. Fact: I[X : Y] = E_{x ← X}[ S(Y|X=x ‖ Y) ].

41 Proof of Main Result (contd). Proof. Common random string: a sequence of samples from the marginal distribution of Y. On input x, Alice runs the rejection sampling procedure to generate Y|X=x from Y and sends Bob the resulting index. Then E[|Z|] ≤ E_{x ← X}[ S(Y|X=x ‖ Y) + 2 log S(Y|X=x ‖ Y) + O(1) ] ≤ I[X : Y] + 2 log I[X : Y] + O(1), using the Fact and the concavity of log.

42 Greedy Approach to Rejection Sampling. q_1, q_2, q_3, q_4, ..., q_i, ... -> Proc -> i*. Input: an infinite stream of independently drawn samples from Q. Output: an index i* such that q_{i*} is a sample from P.

43 Greedy Approach to Rejection Sampling (contd). Greedy Approach: At each iteration, fill distribution P with the best possible sub-distribution of Q, while maintaining that the accumulated mass never exceeds P (pointwise); see the pseudocode and the sketch below.

44 Greedy Approach
1. Set p_1(x) ← p(x).   [p_i(x) = probability mass for item x that still needs to be satisfied]
2. Set s_0 ← 0.   [s_i = Pr[Greedy stops before examining the (i+1)-th sample]]
3. For i = 1, 2, 3, ...:
   3.1 Examine sample q_i.
   3.2 If q_i = x, output i with probability min{ p_i(x) / ((1 − s_{i−1}) q(x)), 1 }.
   3.3 Updates: for all x, set p_{i+1}(x) ← p_i(x) − (1 − s_{i−1}) q(x) · min{ p_i(x) / ((1 − s_{i−1}) q(x)), 1 } = p_i(x) − α_{i,x}.
       Set s_i ← s_{i−1} + Σ_x α_{i,x}.

45 Greedy Approach (contd; pseudocode as above). Clearly, the distribution generated is P.
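A runnable sketch of the greedy procedure (mine; the variable names and the test distributions are illustrative). The residuals p_i(x) and the stopping probability s_{i−1} are deterministic quantities, so the code simply maintains them alongside the stream; the empirical output frequencies should come out close to P.

```python
# A sketch of the greedy rejection sampling procedure above: it consumes a
# stream of samples from Q and returns (item, index), where the item is
# distributed according to P and the index is what would be sent.
import random

def sample(dist, rng):
    r, acc = rng.random(), 0.0
    for item, prob in dist.items():
        acc += prob
        if r < acc:
            return item
    return item  # guard against floating-point round-off

def greedy_rejection_sampling(P, Q, rng):
    residual = dict(P)   # p_i(x): mass of x not yet covered
    s = 0.0              # s_{i-1}: probability the procedure has already stopped
    i = 0
    while True:
        i += 1
        x = sample(Q, rng)                                    # examine sample q_i
        budget = (1.0 - s) * Q[x]
        r = residual.get(x, 0.0)
        accept = 1.0 if r >= budget else r / budget           # step 3.2
        if rng.random() < accept:
            return x, i
        # step 3.3: update residuals and the stopping probability
        total = 0.0
        for y in Q:
            alpha = min(residual.get(y, 0.0), (1.0 - s) * Q[y])   # alpha_{i,y}
            residual[y] = residual.get(y, 0.0) - alpha
            total += alpha
        s += total

P = {"a": 0.05, "b": 0.90, "c": 0.05}
Q = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
rng = random.Random(0)
counts = {}
for _ in range(20000):
    item, index = greedy_rejection_sampling(P, Q, rng)
    counts[item] = counts.get(item, 0) + 1
print({x: round(c / 20000, 3) for x, c in sorted(counts.items())})  # should be close to P
```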

46 Expected Index Length

47 Expected Index Length. s_i = Pr[Greedy stops before examining sample q_{i+1}]. α_{i,x} = Pr[Greedy outputs x in iteration i].

48 Expected Index Length (contd). Note that p(x) = Σ_i α_{i,x}.

49 Expected Index Length (contd). Suppose α_{i+1,x} > 0, i.e., the previous iterations were not sufficient to provide the required probability p(x) to item x. Hence,

50 Expected Index Length (contd). Σ_{j=1}^{i} (1 − s_{j−1}) q(x) < p(x), and so i (1 − s_i) q(x) < p(x).

51 Expected Index Length (contd). Σ_{j=1}^{i} (1 − s_{j−1}) q(x) < p(x), and so i (1 − s_i) q(x) < p(x), i.e., i < (1/(1 − s_i)) · (p(x)/q(x)).

52 Expected Index Length.
E[ℓ(i)] ≈ E[log i]
  = Σ_x Σ_i α_{i,x} log i
  ≤ Σ_x Σ_i α_{i,x} log( (1/(1 − s_{i−1})) · (p(x)/q(x)) )
  = Σ_x Σ_i α_{i,x} ( log(1/(1 − s_{i−1})) + log(p(x)/q(x)) )
  = Σ_x p(x) log(p(x)/q(x)) + Σ_x Σ_i α_{i,x} log(1/(1 − s_{i−1}))
  = S(P‖Q) + Σ_i α_i log(1/(1 − s_{i−1}))      [where α_i ≜ Σ_x α_{i,x}]
  ≤ S(P‖Q) + ∫_0^1 log(1/(1 − s)) ds
  = S(P‖Q) + O(1).

53 Expected Index Length (contd; calculation as above). Actually, ℓ(i) = log i + 2 log log i (to encode the index in a prefix-free way), hence the extra log term in the final result.
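For completeness (this step is compressed on the slide): since log(1/(1 − s)) is increasing in s and α_i = s_i − s_{i−1}, the sum is a lower Riemann sum for the corresponding integral, which is an absolute constant:

```latex
\[
  \sum_i \alpha_i \log\frac{1}{1 - s_{i-1}}
  \;\le\; \int_0^1 \log\frac{1}{1-s}\, ds
  \;=\; \log e
  \;=\; O(1)
  \qquad\text{(logs in base 2).}
\]
```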

54 Greedy Approach (Contd). Clearly, the greedy approach generates the target distribution P. Furthermore, it can be shown that the expected index length satisfies E[ℓ(i)] ≤ S(P‖Q) + 2 log S(P‖Q) + O(1).

55 Outline. Introduction; Proof of Main Result; Rejection Sampling Procedure; Applications of Main Result; Communication Complexity: Direct Sum Result.

56 Two-Party Communication Complexity Model [Yao]. f : X × Y → Z. Two parties: Alice and Bob.

57 Two-Party Communication Complexity Model [Yao] (contd). Alice has input x; Bob has input y.

58 Two-Party Communication Complexity Model [Yao] (contd). Alice sends a message m_1.

59 Two-Party Communication Complexity Model [Yao] (contd). Bob replies with a message m_2.

60 Two-Party Communication Complexity Model [Yao] (contd). They continue exchanging messages m_3, m_4, ..., m_k.

61 Two-Party Communication Complexity Model [Yao] (contd). At the end, Bob outputs f(x, y): a k-round protocol computing f.

62 Two-Party Communication Complexity Model [Yao] (contd). Question: How many bits must Alice and Bob exchange to compute f?

63 Direct Sum Question. Alice and Bob exchange messages m_1, m_2, m_3, m_4, ..., m_k.

64 Direct Sum Question (contd). Alice gets (x_1, x_2, ..., x_t) and Bob gets (y_1, y_2, ..., y_t); at the end of the protocol they must output (f(x_1, y_1), f(x_2, y_2), ..., f(x_t, y_t)).

65 Direct Sum Question (contd). Direct Sum Question: Does the number of bits communicated increase t-fold?

66 Direct Sum Question (contd). Direct Product Question: Keeping the number of bits communicated fixed, does the success probability fall exponentially in t?

67 Communication Complexity Measures. Randomized communication complexity: R^k_ɛ(f) = min_Π max_{(x,y)} (number of bits communicated), where Π ranges over k-round public-coins randomized protocols that compute f correctly with probability at least 1 − ɛ on each input (x, y).

68 Communication Complexity Measures (contd). Distributional communication complexity: for a distribution µ on the inputs X × Y, D^{µ,k}_ɛ(f) = min_Π max_{(x,y)} (number of bits communicated), where Π ranges over "deterministic" k-round protocols for f with average error at most ɛ under µ.

69 Communication Complexity Measures (contd). Theorem (Yao's minmax principle): R^k_ɛ(f) = max_µ D^{µ,k}_ɛ(f).

70 Direct Sum Results. R^k_ɛ(f) vs. R^k_ɛ(f^t), and D^{µ,k}_ɛ(f) vs. D^{µ^t,k}_ɛ(f^t).

71 Direct Sum Results (contd). 1. [Chakrabarti, Shi, Wirth and Yao 2001] Direct sum result for the Equality function in the simultaneous message passing model.

72 Direct Sum Results (contd). 2. [Jain, Radhakrishnan and Sen 2005] Extended the above result to all functions in the simultaneous message passing model and the one-way communication model.

73 Direct Sum Results (contd). 3. [Jain, Radhakrishnan and Sen 2003] For bounded-round communication models, for any f and any product distribution µ, D^{µ^t,k}_ɛ(f^t) ≥ t · ( (δ²/(2k)) · D^{µ,k}_{ɛ+2δ}(f) − 2 ).

74 Improved Direct Sum Result. Theorem: For any function f and any product distribution µ, D^{µ^t,k}_ɛ(f^t) ≥ (t/2) · ( δ · D^{µ,k}_{ɛ+δ}(f) − O(k) ).

75 Improved Direct Sum Result (contd). Applying Yao's minmax principle, R^k_ɛ(f^t) ≥ max_{product µ} (t/2) · ( δ · D^{µ,k}_{ɛ+δ}(f) − O(k) ).

76 Information Cost. Private-coins protocol Π: Alice has input X, Bob has input Y; M is the transcript.

77 Information Cost (contd). Information cost of Π with respect to µ: IC^µ(Π) = I[XY : M].

78 Information Cost (contd). For a function f, let IC^{µ,k}_ɛ(f) = min_Π IC^µ(Π), where Π ranges over k-round private-coins protocols for f with error at most ɛ under µ.

79 Three Lemmata. Lemma (Direct Sum for Information Cost): For µ a product distribution, IC^{µ^t,k}_ɛ(f^t) ≥ t · IC^{µ,k}_ɛ(f).

80 Three Lemmata (contd). Lemma (IC lower bounds distributional complexity): IC^{µ,k}_ɛ(f) ≤ D^{µ,k}_ɛ(f).

81 Three Lemmata (contd). Lemma ((Improved) Message Compression): D^{µ,k}_{ɛ+δ}(f) ≤ (1/δ) [ 2 · IC^{µ,k}_ɛ(f) + O(k) ].

82 Three Lemmata (contd). The three lemmata imply the improved direct sum result.
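Spelling out how the three lemmata combine (just the one-line chain made explicit; µ is a product distribution throughout):

```latex
\begin{align*}
D^{\mu^t,k}_{\epsilon}(f^t)
  &\ge IC^{\mu^t,k}_{\epsilon}(f^t)
     && \text{(second lemma, applied to $f^t$ and $\mu^t$)}\\
  &\ge t \cdot IC^{\mu,k}_{\epsilon}(f)
     && \text{(first lemma)}\\
  &\ge \frac{t}{2}\Bigl(\delta\, D^{\mu,k}_{\epsilon+\delta}(f) - O(k)\Bigr)
     && \text{(rearranging the third lemma).}
\end{align*}
```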

83 Direct Sum for Information Cost. For µ a product distribution, IC^{µ^t,k}_ɛ(f^t) ≥ t · IC^{µ,k}_ɛ(f).

84 Direct Sum for Information Cost (contd). Let a private-coins protocol Π achieve I = IC^{µ^t,k}_ɛ(f^t): Alice has input X_1, X_2, ..., X_t, Bob has input Y_1, Y_2, ..., Y_t, and M is the transcript.

85 Direct Sum for Information Cost (contd). Chain rule: I[XY : M] ≥ Σ_{i=1}^{t} I[X_i Y_i : M].

86 Direct Sum for Information Cost (contd). Claim: For each i, I[X_i Y_i : M] ≥ IC^{µ,k}_ɛ(f).

87 Direct Sum for Information Cost (contd). Proof: On input X_i and Y_i, Alice and Bob fill in the other components themselves (using their private coins, according to the product distribution µ) and run the above protocol.

88 IC lower bounds distributional complexity. IC^{µ,k}_ɛ(f) ≤ D^{µ,k}_ɛ(f).

89 IC lower bounds distributional complexity (contd). Proof. Let Π be a protocol that achieves D^{µ,k}_ɛ(f), and let M be its transcript. Then D^{µ,k}_ɛ(f) ≥ E[|M|] ≥ H[M] ≥ I[XY : M] ≥ IC^{µ,k}_ɛ(f).

90 Message Compression. To prove: D^{µ,k}_{ɛ+δ}(f) ≤ (1/δ) [ 2 · IC^{µ,k}_ɛ(f) + O(k) ]. Private-coins protocol Π: messages m_1, m_2, m_3, m_4, ..., m_k, information cost I.

91 Message Compression (contd). The private-coins protocol Π (messages m_1, m_2, ..., m_k, information cost I) is simulated by a public-coins protocol Π' (messages z_1, z_2, ..., z_k, from which m_1, m_2, ..., m_k can be recovered) with Σ_{i=1}^{k} E[|Z_i|] ≤ 2I + O(k).

92 Message Compression (contd). I = I[XY : M] = I[XY : M_1 M_2 ... M_k] = Σ_{i=1}^{k} I[XY : M_i | M_1 M_2 ... M_{i−1}].

93 Message Compression (contd). Sufficient to prove: for all i, E[|Z_i|] ≤ 2 · I[XY : M_i | M_1 M_2 ... M_{i−1}] + O(1).

94 Message Compression (Contd). To prove: for all i, E[|Z_i|] ≤ 2 · I[XY : M_i | M_1 M_2 ... M_{i−1}] + O(1). Alice (input X) and Bob (input Y) exchange messages m_1, m_2, ..., m_i, ..., m_k.

95 Message Compression (Contd). In the simulation, message m_1 is replaced by a compressed message z_1.

96 Message Compression (Contd). Similarly, m_2 is replaced by z_2, and so on.

97 Message Compression (Contd). Conditioned on all earlier messages, message M_i is independent of Bob's input Y. Hence, I_i ≜ I[XY : M_i | m_1 ... m_{i−1}] = I[X : M_i | m_1 ... m_{i−1}].

98 Message Compression (Contd). The Main Result implies that M_i can be generated on Bob's side with Alice sending only I_i + 2 log I_i + O(1) ≤ 2 I_i + O(1) bits.
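Putting the last two slides together (a sketch of the remaining bookkeeping; the careful version is in the paper): since 2 log t ≤ t + O(1) for every t > 0, summing the per-message bound over the k rounds gives

```latex
\[
  \sum_{i=1}^{k} \mathbb{E}[|Z_i|]
  \;\le\; \sum_{i=1}^{k} \bigl(I_i + 2\log I_i + O(1)\bigr)
  \;\le\; 2\sum_{i=1}^{k} I_i + O(k)
  \;=\; 2I + O(k),
\]
```

and aborting the simulation whenever the total length exceeds (2I + O(k))/δ adds at most δ to the error by Markov's inequality, which gives the Message Compression lemma D^{µ,k}_{ɛ+δ}(f) ≤ (1/δ)[ 2 · IC^{µ,k}_ɛ(f) + O(k) ].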

99 Summarizing Results. A characterization of mutual information and relative entropy in terms of communication complexity (modulo lower-order log terms). An improved direct sum result for communication complexity. Thank You.
