Lossy Distributed Source Coding
1 Lossy Distributed Source Coding

John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 2012

Figure 1: The general multiterminal lossy source coding problem. Encoder m observes X_m^N and sends a message S_m ∈ {1,...,2^{NR_m}}; the decoder, which also observes side information Y^N, forms reconstructions Ẑ_1^N,...,Ẑ_L^N obeying E[d(Z_{l,n}, Ẑ_{l,n})] ≤ D_l, where (X_{1,n},...,X_{M,n}, Y_n, Z_{1,n},...,Z_{L,n}) are i.i.d. in n according to p_{X_1,...,X_M,Y,Z_1,...,Z_L}.

The general multiterminal lossy source coding problem depicted in Fig. 1 is not solved; however, we have inner and outer bounds. As we will discuss in Section 1.3, these inner and outer bounds do not match on even very simple variants of this structure. Important special cases of this problem include the case L = M and Z_m = X_m (the lossy variant of Slepian-Wolf), as well as the case L = 1, which is known as the CEO problem.

1.1 Berger Tung Inner Bound

The Berger Tung inner bound uses the familiar random binning technique. A series of conditional distributions for a collection of message random variables U_1,...,U_M and reconstructions Ẑ_1,...,Ẑ_L are created according to the Markov chain relationships (X_{\m}, U_{\m}, Y, Z_{1:L}) ↔ X_m ↔ U_m and (X_{1:M}, Z_{1:L}) ↔ (Y, U_{1:M}) ↔ Ẑ_{1:L} that also obey the distortion constraints

E[d_l(Z_l, Ẑ_l)] ≤ D_l,  l ∈ {1,...,L}   (1)

Encoder m generates a 2^{N R̂_m} × N matrix with elements i.i.d. according to p_{U_m}, where the ith row is denoted by U_m^N(i). Each codeword is randomly assigned to one of 2^{NR_m} bins B_m(j), j ∈ {1,...,2^{NR_m}}. The bins and codebooks are shared with the decoder. Encoder m operates by selecting a codeword that is jointly typical with its observations, (U_m^N(i), X_m^N) ∈ T_ε^N(U_m, X_m), then transmitting the bin index j_m of the bin to which this codeword was assigned. If there is no such codeword, it simply transmits the first bin index.
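As a purely schematic illustration of this binning bookkeeping, the following sketch implements the codeword-to-bin assignment, the encoder's typical-codeword search, and the decoder's within-bin search. The codewords, the "typicality" tests, and the alphabet sizes here are toy stand-ins of our own invention, not the information-theoretic objects above.

```python
import random

def make_codebook_and_bins(num_codewords, num_bins, rng):
    """Assign each codeword index i in {0,...,num_codewords-1} to a uniformly
    random bin, mirroring the random partition into bins B_m(j)."""
    bins = {j: set() for j in range(num_bins)}
    bin_of = {}
    for i in range(num_codewords):
        j = rng.randrange(num_bins)
        bins[j].add(i)
        bin_of[i] = j
    return bins, bin_of

def encode(x, codewords, bin_of, is_jointly_typical):
    """Send the bin index of some codeword jointly typical with x; if none
    exists, fall back to the first bin index, as in the notes."""
    for i, u in enumerate(codewords):
        if is_jointly_typical(u, x):
            return bin_of[i]
    return 0

def decode(j, bins, codewords, looks_consistent):
    """Search bin j for a codeword consistent with the side information;
    return codeword index 0 on failure, as in the notes."""
    for i in bins[j]:
        if looks_consistent(codewords[i]):
            return i
    return 0

rng = random.Random(0)
codewords = list(range(16))                 # stand-ins for the rows U_m^N(i)
bins, bin_of = make_codebook_and_bins(16, 4, rng)
# toy 'joint typicality': codeword u matches source x exactly when u == x
j = encode(5, codewords, bin_of, lambda u, x: u == x)
i_hat = decode(j, bins, codewords, lambda u: u == 5)
```

With the toy typicality test there is exactly one matching codeword, so the decoder's within-bin search recovers it from the bin index alone; in the actual construction it is the side information Y^N that singles out the intended codewords with high probability.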
The decoder operates by looking in the bins for (i_1,...,i_M) ∈ B_1(j_1) × ··· × B_M(j_M) giving a collection of codewords jointly typical with each other and with the side information: (U_1^N(i_1),...,U_M^N(i_M), Y^N) ∈ T_ε^N(U_1,...,U_M, Y). If it can find no such collection of codewords, it selects i_m = 1 for all m ∈ {1,...,M}. The following error events are useful in analyzing the rates necessary to achieve the desired distortions:

E_m^1: There does not exist an i_m ∈ {1,...,2^{N R̂_m}} such that (U_m^N(i_m), X_m^N) ∈ T_ε^N(U_m, X_m).
E^2: For every m ∈ {1,...,M} there exists an i_m ∈ {1,...,2^{N R̂_m}} such that (U_m^N(i_m), X_m^N) ∈ T_ε^N(U_m, X_m), however (U_{1:M}^N(i), X_{1:M}^N, Y^N, Z_{1:L}^N) ∉ T_ε^N(U_{1:M}, X_{1:M}, Y, Z_{1:L}).

E_A^3: For every m ∈ {1,...,M} there exists an i_m ∈ {1,...,2^{N R̂_m}} such that (U_m^N(i_m), X_m^N) ∈ T_ε^N(U_m, X_m) and (U_{1:M}^N(i), X_{1:M}^N, Y^N, Z_{1:L}^N) ∈ T_ε^N(U_{1:M}, X_{1:M}, Y, Z_{1:L}). However, for the set A ⊆ {1,...,M}, (U_A^N(i'_A), U_{A^c}^N(i_{A^c}), Y^N) ∈ T_ε^N(U_A, U_{A^c}, Y) for some i'_m ≠ i_m, m ∈ A, with i'_m ∈ B_m(j_m) for each m ∈ A.

The first error event involves the same style of analysis as in the rate distortion proof:

P[E_m^1] = ∏_{i=1}^{2^{N R̂_m}} P[(U_m^N(i), X_m^N) ∉ T_ε^N(U_m, X_m)] = (1 − P[(U_m^N(1), X_m^N) ∈ T_ε^N(U_m, X_m)])^{2^{N R̂_m}}   (2)

≤ (1 − (1−ε) 2^{−N(I(U_m;X_m)+εc)})^{2^{N R̂_m}} ≤ exp(−(1−ε) 2^{N R̂_m} 2^{−N(I(X_m;U_m)+εc)})   (3)

From this we observe that the probability of error will be arbitrarily small as N → ∞ provided that

R̂_m > I(X_m; U_m)   (4)

Error E^2 has a probability that is arbitrarily small as N → ∞ via an extension of the Markov Lemma. Finally, turning our attention to the third collection of error events, we have

P[E_A^3] = Σ_{i'_A : i'_m ≠ i_m, m ∈ A} P[(U_A^N(i'_A), U_{A^c}^N(i_{A^c}), Y^N) ∈ T_ε^N(U_A, U_{A^c}, Y), i'_m ∈ B_m(j_m) ∀ m ∈ A | (U_{1:M}^N(i), X_{1:M}^N, Y^N) ∈ T_ε^N]
= Σ_{i'_A : i'_m ≠ i_m, m ∈ A} P[(U_A^N(i'_A), U_{A^c}^N(i_{A^c}), Y^N) ∈ T_ε^N(U_A, U_{A^c}, Y) | (U_{1:M}^N(i), X_{1:M}^N, Y^N) ∈ T_ε^N] · P[i'_m ∈ B_m(j_m) ∀ m ∈ A]
≤ ∏_{m∈A} (2^{N R̂_m} − 1) · ∏_{m∈A} 2^{−NR_m} · 2^{−N(I(U_A;(Y,U_{A^c})) − εc)}
≤ 2^{N(Σ_{m∈A}(R̂_m − R_m) − I(U_A;(Y,U_{A^c})) + εc)}   (5)

This can be made arbitrarily small as N → ∞ if

Σ_{m∈A} (R̂_m − R_m) − I(U_A; (Y, U_{A^c})) < 0   (6)

Putting the two sets of inequalities (4) and (6) together, we find that the requirements on the rates are

Σ_{i∈A} R_i > I(X_A; U_A | Y, U_{A^c})   (7)

When its union over all possible choices of auxiliary random variables U_{1:M} obeying the constraints is taken, this region may not be convex, but any two achievable strategies can be time shared to get a convex combination in (R, D) space, hence we can take the convex hull.
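Collecting (1) and (7), the Berger Tung inner bound can be stated compactly in display form (a restatement of what was derived above, in our notation):

```latex
% (R_1,\dots,R_M) is achievable with distortions (D_1,\dots,D_L) if there exist
% auxiliary variables U_{1:M} with p(u_{1:M}|x_{1:M}) = \prod_m p(u_m|x_m) and
% reconstructions \hat{Z}_{1:L} = g(Y, U_{1:M}) such that
\begin{align*}
\sum_{i \in A} R_i &> I(X_A; U_A \mid Y, U_{A^c}), && \forall\, A \subseteq \{1,\dots,M\},\\
\mathbb{E}[d_l(Z_l, \hat{Z}_l)] &\le D_l, && l \in \{1,\dots,L\}.
\end{align*}
```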
One way to show this convex hull in the math is to introduce a time sharing variable Q, as shown in the text.

1.2 Berger Tung Outer Bound

The Berger Tung outer bound involves the same rate and distortion expressions as the Berger Tung inner bound, but changes the message Markov chain conditions to (X_{\m}, Y, Z_{1:L}) ↔ X_m ↔ U_m. (Note that the other messages U_{\m} are absent in the condition.)

1.3 Tightness & Other Bounds

As noted in the text, both the Berger Tung inner and the Berger Tung outer bounds have cases where they are not tight. There is a gap between the Berger Tung inner bound and the Berger Tung outer bound for the scalar Gaussian squared error CEO problem, and the Berger Tung inner bound was shown to give the rate distortion region in this case (Oohama 05 and Prabhakaran, Tse, Ramchandran 04). As discussed in Section 2.3, the Berger Tung inner bound is also tight for two Gaussian sources and squared error distortions. Wagner and Anantharam have given a tighter outer bound than the Berger Tung outer bound (2008). These problems are an area of active work in the community.
2 Multiple Descriptions Problem

Figure 2: The still generally unsolved multiple descriptions problem. The encoder observes X^N and sends two descriptions S_1 ∈ {1,...,2^{NR_1}} and S_2 ∈ {1,...,2^{NR_2}}. The decoder receiving only S_1 forms X̂_1 with E[d(X_n, X̂_{1,n})] ≤ D_1, the decoder receiving only S_2 forms X̂_2 with E[d(X_n, X̂_{2,n})] ≤ D_2, and the decoder receiving both forms X̂_0 with E[d(X_n, X̂_{0,n})] ≤ D_0.

Surprisingly, this problem is not solved. We presented a pair of achievable regions and discussed when one of them is tight.

2.1 El Gamal Cover Region

The El Gamal Cover region splits each of the two descriptions S_1, S_2 up into two parts S_1 = (S_10, S_11) and S_2 = (S_20, S_22). The parts S_10 and S_20 are discarded by the decoders receiving only one description, and are concatenated into S_0 = (S_10, S_20) by the decoder receiving both descriptions.

Codebook Generation: Generate a 2^{NR_11} × N matrix with elements i.i.d. according to p_{X̂_1}, and denote the ith row of this by X̂_1^N(i). Similarly, generate a 2^{NR_22} × N matrix with elements i.i.d. according to p_{X̂_2}, and denote the jth row of this by X̂_2^N(j). For each (i,j) ∈ {1,...,2^{NR_11}} × {1,...,2^{NR_22}} generate a 2^{NR_0} × N matrix, with kth row X̂_0^N(i,j,k), and with elements distributed according to the conditional distribution

∏_{n=1}^N p_{X̂_0 | X̂_1, X̂_2}(x̂_{0,n}(i,j,k) | x̂_{1,n}(i), x̂_{2,n}(j))   (8)

Share these matrices with the encoder and decoders.

Encoder: Select an (i, j, k) such that (X̂_1^N(i), X̂_2^N(j), X̂_0^N(i,j,k), X^N) ∈ T_ε^N(X̂_1, X̂_2, X̂_0, X). Split the bits in the index k up into two parts k = (k_1, k_2), and transmit the messages S_1 = (i, k_1) and S_2 = (j, k_2).

Decoder: The decoder receiving only S_1 = (i, k_1) discards k_1 and reproduces X̂_1^N(i). The decoder receiving only S_2 = (j, k_2) discards k_2 and reproduces X̂_2^N(j). The decoder receiving both S_1, S_2 reproduces X̂_0^N(i,j,k).

Error events:

E_x: X^N ∉ T_ε^N(X)
E_1: for every i, (X̂_1^N(i), X^N) ∉ T_ε^N(X̂_1, X)
E_2: for every j, (X̂_2^N(j), X^N) ∉ T_ε^N(X̂_2, X)
E_12: for every (i, j), (X̂_1^N(i), X̂_2^N(j), X^N) ∉ T_ε^N(X̂_1, X̂_2, X)
E_0: there exist (i, j) such that (X̂_1^N(i), X̂_2^N(j), X^N) ∈ T_ε^N(X̂_1, X̂_2, X), but for every k ∈ {1,...,2^{NR_0}}, (X̂_1^N(i), X̂_2^N(j), X̂_0^N(i,j,k), X^N) ∉ T_ε^N(X̂_1, X̂_2, X̂_0, X)

Properties of the typical set imply P[E_x] → 0 as N → ∞.
A now standard proof, bounding the probability that two independent sequences with marginals matching a joint distribution wind up in the typical set, shows that if R_11 > I(X̂_1; X) and if R_22 > I(X̂_2; X), then P[E_1] and P[E_2] go to 0 as N → ∞.
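The estimate underlying this argument is the standard joint typicality bound: if X̂_1^N(i) is generated i.i.d. from p_{X̂_1}, independently of X^N, then for x^N ∈ T_ε^N(X),

```latex
(1-\epsilon)\, 2^{-N\left(I(\hat{X}_1; X) + \epsilon c\right)}
\;\le\; \Pr\!\left[\big(\hat{X}_1^N(i), x^N\big) \in T_\epsilon^N(\hat{X}_1, X)\right]
\;\le\; 2^{-N\left(I(\hat{X}_1; X) - \epsilon c\right)}
```

so the probability that none of the 2^{NR_11} independent rows is jointly typical with x^N decays doubly exponentially in N when R_11 > I(X̂_1; X).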
The novel idea in the proof is the bounding of P[E_12] via a clever application of Chebyshev's inequality. In this vein, for each x^N ∈ X^N, define the random set

G_{x^N} = { (i,j) ∈ {1,...,2^{NR_11}} × {1,...,2^{NR_22}} | (X̂_1^N(i), X̂_2^N(j), x^N) ∈ T_ε^N(X̂_1, X̂_2, X) }   (9)

and denote, naturally, the cardinality of this set by |G_{x^N}|. Noting that the event E_12 is equivalent to the event |G_{X^N}| = 0, and applying Chebyshev's inequality

P[|X − E[X]| ≥ ε] ≤ var(X)/ε²,   (10)

which implies, by selecting ε = E[X], that P[|X − E[X]| ≥ E[X]] ≤ var(X)/(E[X])², we have

P[E_12 | E_x^c] = P[|G_{X^N}| = 0 | E_x^c] ≤ P[ | |G_{X^N}| − E[|G_{X^N}|] | ≥ E[|G_{X^N}|] ] ≤ var(|G_{X^N}|) / (E[|G_{X^N}|])²   (11)

The mean and the variance can be bounded by viewing the cardinality as a sum of indicator functions. For x^N ∈ T_ε^N(X),

E[|G_{x^N}|] = Σ_{i,j} P[(X̂_1^N(i), X̂_2^N(j), x^N) ∈ T_ε^N(X̂_1, X̂_2, X)] = Σ_{i,j} Σ_{(x̂_1^N, x̂_2^N) : (x̂_1^N, x̂_2^N, x^N) ∈ T_ε^N(X̂_1, X̂_2, X)} p_{X̂_1^N(i)}(x̂_1^N) p_{X̂_2^N(j)}(x̂_2^N)   (12)

≥ (1 − ε) 2^{N(R_11 + R_22 − H(X̂_1) − H(X̂_2) + H(X̂_1, X̂_2 | X) − εc)}   (13)

while the variance can be bounded by expanding

var(|G_{x^N}|) = Σ_{i,j,i',j'} ( P[(i,j) ∈ G_{x^N}, (i',j') ∈ G_{x^N}] − P[(i,j) ∈ G_{x^N}] P[(i',j') ∈ G_{x^N}] )   (14)

The terms with i ≠ i' and j ≠ j' involve independent pairs of codewords and cancel, leaving only the terms sharing a codeword:

var(|G_{x^N}|) ≤ Σ_{i, j ≠ j'} P[(i,j) ∈ G_{x^N}] P[(i,j') ∈ G_{x^N} | (X̂_1^N(i), x^N) ∈ T_ε^N(X̂_1, X)]
+ Σ_{i ≠ i', j} P[(i,j) ∈ G_{x^N}] P[(i',j) ∈ G_{x^N} | (X̂_2^N(j), x^N) ∈ T_ε^N(X̂_2, X)]
+ Σ_{i,j} P[(i,j) ∈ G_{x^N}]   (15)

Bounding each of these probabilities with the standard typical set estimates (e.g. P[(i,j') ∈ G_{x^N} | (X̂_1^N(i), x^N) ∈ T_ε^N(X̂_1, X)] ≤ 2^{−N(H(X̂_2) − H(X̂_2 | X̂_1, X) − εc)}) gives

var(|G_{x^N}|) ≤ 2^{2N(R_11 + R_22 − H(X̂_1) − H(X̂_2) + H(X̂_1, X̂_2 | X) + εc)} ( 2^{−N(R_11 − I(X̂_1; X))} + 2^{−N(R_22 − I(X̂_2; X))} + 2^{−N(R_11 + R_22 − H(X̂_1) − H(X̂_2) + H(X̂_1, X̂_2 | X))} )   (19)

Substituting these bounds (13) and (19) into our bound (11), we have

P[ | |G_{x^N}| − E[|G_{x^N}|] | ≥ E[|G_{x^N}|] ] ≤ (1 − ε)^{−2} 2^{4Nεc} ( 2^{−N(R_11 − I(X̂_1; X))} + 2^{−N(R_22 − I(X̂_2; X))} + 2^{−N(R_11 + R_22 − H(X̂_1) − H(X̂_2) + H(X̂_1, X̂_2 | X))} )   (21)

which approaches 0 as N → ∞ for a fixed, but small enough, ε, since R_11 > I(X̂_1; X) and R_22 > I(X̂_2; X). Then selecting

R_11 + R_22 > H(X̂_1) + H(X̂_2) − H(X̂_1, X̂_2 | X)   (22)

in (13) will ensure that the associated E[|G_{x^N}|] will grow large and not shrink to zero as N → ∞. Finally, considering the last error event,

P[E_0] ≤ P[ ∀ k ∈ {1,...,2^{NR_0}}: (X̂_1^N(i), X̂_2^N(j), X̂_0^N(i,j,k), X^N) ∉ T_ε^N(X̂_1, X̂_2, X̂_0, X) | (X̂_1^N(i), X̂_2^N(j), X^N) ∈ T_ε^N(X̂_1, X̂_2, X) ]
= ∏_k (1 − P[(X̂_1^N(i), X̂_2^N(j), X̂_0^N(i,j,k), X^N) ∈ T_ε^N(X̂_1, X̂_2, X̂_0, X) | (X̂_1^N(i), X̂_2^N(j), X^N) ∈ T_ε^N(X̂_1, X̂_2, X)])   (23)
≤ (1 − (1 − ε) 2^{−N(I(X̂_0; X | X̂_1, X̂_2) + εc)})^{2^{NR_0}}   (24)
≤ exp(−(1 − ε) 2^{NR_0} 2^{−N(I(X̂_0; X | X̂_1, X̂_2) + εc)})   (25)

which can be made arbitrarily small if

R_0 > I(X̂_0; X | X̂_1, X̂_2)   (26)

Putting the requirements (22) and (26) together with the earlier ones, we have

R_11 > I(X̂_1; X)   (27)
R_22 > I(X̂_2; X)   (28)
R_11 + R_22 > H(X̂_1) + H(X̂_2) − H(X̂_1, X̂_2 | X)   (29)
R_0 > I(X̂_0; X | X̂_1, X̂_2)   (30)

which implies for the rates R_1 = R_01 + R_11 and R_2 = R_02 + R_22 the region

R_1 ≥ I(X̂_1; X)   (31)
R_2 ≥ I(X̂_2; X)   (32)
R_1 + R_2 ≥ I(X̂_1; X̂_2) + I((X̂_0, X̂_1, X̂_2); X)   (33)

where the last inequality was derived through

H(X̂_1) + H(X̂_2) − H(X̂_1, X̂_2 | X) + I(X̂_0; X | X̂_1, X̂_2) = I(X̂_1; X̂_2) + I((X̂_0, X̂_1, X̂_2); X)   (34)

Adding a time sharing variable Q to convexify the region, we have

R_1 ≥ I(X̂_1; X | Q)   (35)
R_2 ≥ I(X̂_2; X | Q)   (36)
R_1 + R_2 ≥ I(X̂_1; X̂_2 | Q) + I((X̂_0, X̂_1, X̂_2); X | Q)   (37)
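The identity (34) holds for any joint distribution, so it can be sanity checked numerically. The following sketch (our own illustration, with X̂_0, X̂_1, X̂_2, X abbreviated as coordinates X0, X1, X2, X) draws a random joint pmf over four binary variables and compares the two sides of the identity.

```python
import itertools
import math
import random

# Numerical sanity check of the identity behind the sum-rate bound (34):
#   H(X1h) + H(X2h) - H(X1h, X2h | X) + I(X0h; X | X1h, X2h)
#     = I(X1h; X2h) + I((X0h, X1h, X2h); X)
# Since it holds for every joint distribution, we test it on a random pmf
# over four binary variables, with coordinates ordered (X0h, X1h, X2h, X).

rng = random.Random(1)
support = list(itertools.product([0, 1], repeat=4))
weights = [rng.random() for _ in support]
total = sum(weights)
p = {s: w / total for s, w in zip(support, weights)}  # joint pmf

def H(idx):
    """Entropy in bits of the marginal over the coordinate subset idx."""
    marg = {}
    for s, w in p.items():
        key = tuple(s[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + w
    return -sum(w * math.log2(w) for w in marg.values() if w > 0)

X0, X1, X2, X = 0, 1, 2, 3
lhs = (H([X1]) + H([X2]) - (H([X1, X2, X]) - H([X]))       # - H(X1h,X2h|X)
       + (H([X, X1, X2]) - H([X1, X2]))                    # + H(X|X1h,X2h)
       - (H([X, X0, X1, X2]) - H([X0, X1, X2])))           # - H(X|X0h,X1h,X2h)
rhs = (H([X1]) + H([X2]) - H([X1, X2])                     # I(X1h;X2h)
       + H([X]) - (H([X, X0, X1, X2]) - H([X0, X1, X2])))  # I(X0h,X1h,X2h;X)
```

Expanding the conditional entropies and mutual informations on both sides shows they reduce to the same combination of joint entropies, which is exactly what the comparison verifies.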
2.2 Cases where EGC is Tight

No Excess Rate: Regarding the decoder obtaining both descriptions, it is clear that it cannot require less rate than a decoder receiving a single message requiring this distortion. When the rates are selected so that this bound is tight, that is, for those rates such that R_1 + R_2 = R(D_0), the EGC region is tight (Ahlswede).

Gaussian Squared Error: For X_n scalar Gaussian distributed and d the squared error, the EGC region is tight.

2.3 Zhang Berger Region

The Zhang Berger region grows the El Gamal Cover region by allowing a certain part of each of the two descriptions to be identical, with the remaining parts of the descriptions generated conditionally on this common part. The achievability construction is as follows.

Codebook Generation: Generate a 2^{NR_c} × N matrix with elements i.i.d. according to p_U, and denote the ith row of this matrix by U^N(i). For each i ∈ {1,...,2^{NR_c}} generate a 2^{NR_11} × N matrix whose rows X̂_1^N(i,j) are independent and distributed according to

∏_{n=1}^N p_{X̂_1 | U}(x̂_{1,n}(i,j) | u_n(i))   (38)

and a 2^{NR_22} × N matrix whose rows X̂_2^N(i,k) are independent and distributed according to

∏_{n=1}^N p_{X̂_2 | U}(x̂_{2,n}(i,k) | u_n(i))   (39)

For each (i,j,k) generate a 2^{NR_0} × N matrix whose rows X̂_0^N(i,j,k,l) are independent and distributed according to

∏_{n=1}^N p_{X̂_0 | X̂_1, X̂_2, U}(x̂_{0,n}(i,j,k,l) | x̂_{1,n}(i,j), x̂_{2,n}(i,k), u_n(i))   (40)

Share the codebook matrices with the encoders and decoders.

Encoder: Find (i,j,k,l) ∈ {1,...,2^{NR_c}} × {1,...,2^{NR_11}} × {1,...,2^{NR_22}} × {1,...,2^{NR_0}} such that (U^N(i), X̂_1^N(i,j), X̂_2^N(i,k), X̂_0^N(i,j,k,l), X^N) ∈ T_ε^N(U, X̂_1, X̂_2, X̂_0, X). The bits of the index l are broken up into two parts l = (l_1, l_2), and the two descriptions that are sent are S_1 = (i, j, l_1) and S_2 = (i, k, l_2).

Decoder: The decoder receiving only S_1 discards l_1 and produces X̂_1^N(i,j). The decoder receiving only S_2 discards l_2 and produces X̂_2^N(i,k). The decoder receiving both descriptions produces X̂_0^N(i,j,k,l).
While we did not go into the details of the proof, we noted that this rate region achieves some extra points that are not in the El Gamal Cover region for the binary source, Hamming distortion case.
3 Successive Refinements

Figure 3: The problem of successive refinement of information. The decoder receiving only S_1 ∈ {1,...,2^{NR_1}} forms X̂_1 with E[d(X_n, X̂_{1,n})] ≤ D_1, while the decoder receiving both S_1 and S_2 ∈ {1,...,2^{NR_2}} forms X̂_0 with E[d(X_n, X̂_{0,n})] ≤ D_0.

In successive refinements, we neglect one of the two decoders receiving an individual description in the multiple descriptions problem, and we obtain the problem shown in Fig. 3. The El Gamal Cover inner bound for the multiple descriptions problem can be specialized to this case by selecting X̂_2 = 0. The region so obtained is

R_1 ≥ I(X̂_1; X)   (41)
R_1 + R_2 ≥ I((X̂_1, X̂_0); X)   (42)

3.1 EGC Tightness: Converse Proof

Successive refinements represent a special case of the more general multiple descriptions problem for which the El Gamal Cover region is tight. We presented the proof in class. Suppose that R_1 and R_2 are the rates of the successive descriptions achieving distortions D_1 and D_0, respectively. Following the normal rate distortion converse,

N R_1 ≥ H(X̂_1^N) ≥ H(X̂_1^N) − H(X̂_1^N | X^N) = I(X̂_1^N; X^N) = H(X^N) − H(X^N | X̂_1^N)
= Σ_{n=1}^N H(X_n) − H(X_n | X̂_1^N, X^{n−1}) ≥ Σ_{n=1}^N H(X_n) − H(X_n | X̂_{1,n}) = Σ_{n=1}^N I(X_n; X̂_{1,n})   (43)

N(R_1 + R_2) ≥ H(X̂_1^N, X̂_0^N) ≥ H(X̂_1^N, X̂_0^N) − H(X̂_1^N, X̂_0^N | X^N) = I(X^N; (X̂_0^N, X̂_1^N)) = H(X^N) − H(X^N | X̂_0^N, X̂_1^N)
= Σ_{n=1}^N H(X_n) − H(X_n | X̂_0^N, X̂_1^N, X^{n−1}) ≥ Σ_{n=1}^N H(X_n) − H(X_n | X̂_{0,n}, X̂_{1,n}) = Σ_{n=1}^N I(X_n; (X̂_{0,n}, X̂_{1,n}))   (44)

Rearranging, we obtain the following inequalities

R_1 ≥ (1/N) Σ_{n=1}^N I(X_n; X̂_{1,n})   (45)
R_1 + R_2 ≥ (1/N) Σ_{n=1}^N I(X_n; (X̂_{0,n}, X̂_{1,n}))   (46)
D_1 ≥ (1/N) Σ_{n=1}^N E[d(X_n, X̂_{1,n})]   (47)
D_0 ≥ (1/N) Σ_{n=1}^N E[d(X_n, X̂_{0,n})]   (48)

showing that the vector (R_1, R_2, D_1, D_0) is elementwise superior to a point in the convex hull of the rate distortion region given by (41) and (42).

3.2 Successive Refinability of a Source and Distortion Measure

Regarding each of the two decoders individually in Figure 3, it is quite clear that they cannot require less rate than a decoder which must only reproduce the source at their associated distortion. Hence, if R(D) is the rate distortion function for the
source X and the distortion metric d(·,·), the rates in the successive refinements problem must obey the bounds R_1 ≥ R(D_1) and R_1 + R_2 ≥ R(D_0). A source is successively refinable if these bounds are achievable. That is, for a successively refinable source, the minimum average amount of extra information per symbol necessary to refine a description of the source at distortion D_1 to a description of the source at distortion D_0 is R(D_0) − R(D_1). A source is successively refinable if and only if for every D_0 ≤ D_1 there is a conditional distribution p_{X̂_0, X̂_1 | X} = p_{X̂_0 | X} p_{X̂_1 | X̂_0} (i.e. obeying the Markov chain X ↔ X̂_0 ↔ X̂_1) with p_{X̂_1 | X} and p_{X̂_0 | X} attaining the rate distortion bounds: I(X; X̂_1) = R(D_1) and I(X; X̂_0) = R(D_0).
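As a concrete illustration (our example, not from the notes above): the scalar Gaussian source under squared error is successively refinable, with R(D) = max(0, ½ log₂(σ²/D)), so the incremental rate needed to refine from distortion D_1 down to D_0 ≤ D_1 ≤ σ² is exactly R(D_0) − R(D_1) = ½ log₂(D_1/D_0).

```python
import math

# Successive refinability for the Gaussian source under squared error:
# the coarse description costs R(D1) bits/symbol and the refinement costs
# exactly R(D0) - R(D1) more, with no rate penalty for describing in stages.

def gaussian_rate_distortion(sigma2, D):
    """Rate-distortion function (bits/symbol) of an N(0, sigma2) source
    under squared error distortion."""
    return max(0.0, 0.5 * math.log2(sigma2 / D))

sigma2, D1, D0 = 1.0, 0.25, 0.0625
R1 = gaussian_rate_distortion(sigma2, D1)               # coarse description
increment = gaussian_rate_distortion(sigma2, D0) - R1   # refinement rate
```

Here R1 = ½ log₂(1/0.25) = 1 bit/symbol and the refinement costs ½ log₂(0.25/0.0625) = 1 more bit/symbol, matching R(D_0) = 2 bits/symbol for a one-shot description at distortion D_0.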
More informationInformation Masking and Amplification: The Source Coding Setting
202 IEEE International Symposium on Information Theory Proceedings Information Masking and Amplification: The Source Coding Setting Thomas A. Courtade Department of Electrical Engineering University of
More informationSubset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding
Subset Typicality Lemmas and Improved Achievable Regions in Multiterminal Source Coding Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California - Santa Barbara {kumar,eakyol,rose}@ece.ucsb.edu
More informationUniversal Incremental Slepian-Wolf Coding
Proceedings of the 43rd annual Allerton Conference, Monticello, IL, September 2004 Universal Incremental Slepian-Wolf Coding Stark C. Draper University of California, Berkeley Berkeley, CA, 94720 USA sdraper@eecs.berkeley.edu
More informationCoding into a source: an inverse rate-distortion theorem
Coding into a source: an inverse rate-distortion theorem Anant Sahai joint work with: Mukul Agarwal Sanjoy K. Mitter Wireless Foundations Department of Electrical Engineering and Computer Sciences University
More informationOn the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation
On the Capacity and Degrees of Freedom Regions of MIMO Interference Channels with Limited Receiver Cooperation Mehdi Ashraphijuo, Vaneet Aggarwal and Xiaodong Wang 1 arxiv:1308.3310v1 [cs.it] 15 Aug 2013
More informationOn Source-Channel Communication in Networks
On Source-Channel Communication in Networks Michael Gastpar Department of EECS University of California, Berkeley gastpar@eecs.berkeley.edu DIMACS: March 17, 2003. Outline 1. Source-Channel Communication
More informationCommunication Cost of Distributed Computing
Communication Cost of Distributed Computing Abbas El Gamal Stanford University Brice Colloquium, 2009 A. El Gamal (Stanford University) Comm. Cost of Dist. Computing Brice Colloquium, 2009 1 / 41 Motivation
More informationOn the Capacity Region of the Gaussian Z-channel
On the Capacity Region of the Gaussian Z-channel Nan Liu Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 74 nkancy@eng.umd.edu ulukus@eng.umd.edu
More informationThe Sensor Reachback Problem
Submitted to the IEEE Trans. on Information Theory, November 2003. 1 The Sensor Reachback Problem João Barros Sergio D. Servetto Abstract We consider the problem of reachback communication in sensor networks.
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationWyner-Ziv Coding over Broadcast Channels: Digital Schemes
Wyner-Ziv Coding over Broadcast Channels: Digital Schemes Jayanth Nayak, Ertem Tuncel, Deniz Gündüz 1 Abstract This paper addresses lossy transmission of a common source over a broadcast channel when there
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationSOURCE coding problems with side information at the decoder(s)
1458 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 3, MARCH 2013 Heegard Berger Cascade Source Coding Problems With Common Reconstruction Constraints Behzad Ahmadi, Student Member, IEEE, Ravi Ton,
More informationCapacity of a channel Shannon s second theorem. Information Theory 1/33
Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,
More information6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011
6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student
More informationAn Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel
Forty-Ninth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 8-3, An Achievable Rate Region for the 3-User-Pair Deterministic Interference Channel Invited Paper Bernd Bandemer and
More informationA Comparison of Superposition Coding Schemes
A Comparison of Superposition Coding Schemes Lele Wang, Eren Şaşoğlu, Bernd Bandemer, and Young-Han Kim Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA
More informationCoding Perspectives for Collaborative Estimation over Networks
Coding Perspectives for Collaborative Estimation over Networks John M. Walsh & Sivagnanasundaram Ramanan Department of Electrical and Computer Engineering Drexel University Philadelphia, PA jwalsh@ece.drexel.edu
More informationApproximating the Gaussian multiple description rate region under symmetric distortion constraints
Approximating the Gaussian multiple description rate region under symmetric distortion constraints Chao Tian AT&T Labs-Research Florham Park, NJ 0793, USA. tian@research.att.com Soheil Mohajer Suhas Diggavi
More informationELEC546 Review of Information Theory
ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random
More informationLECTURE 10. Last time: Lecture outline
LECTURE 10 Joint AEP Coding Theorem Last time: Error Exponents Lecture outline Strong Coding Theorem Reading: Gallager, Chapter 5. Review Joint AEP A ( ɛ n) (X) A ( ɛ n) (Y ) vs. A ( ɛ n) (X, Y ) 2 nh(x)
More informationECE 534 Information Theory - Midterm 2
ECE 534 Information Theory - Midterm Nov.4, 009. 3:30-4:45 in LH03. You will be given the full class time: 75 minutes. Use it wisely! Many of the roblems have short answers; try to find shortcuts. You
More informationCS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability
More informationOn the Duality between Multiple-Access Codes and Computation Codes
On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch
More informationOptimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels
Optimal Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels Bike Xie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract arxiv:0811.4162v4 [cs.it] 8 May 2009
More informationSolutions to Homework Set #1 Sanov s Theorem, Rate distortion
st Semester 00/ Solutions to Homework Set # Sanov s Theorem, Rate distortion. Sanov s theorem: Prove the simple version of Sanov s theorem for the binary random variables, i.e., let X,X,...,X n be a sequence
More informationAppendix B Information theory from first principles
Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes
More informationSUCCESSIVE refinement of information, or scalable
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 1983 Additive Successive Refinement Ertem Tuncel, Student Member, IEEE, Kenneth Rose, Fellow, IEEE Abstract Rate-distortion bounds for
More informationShannon s noisy-channel theorem
Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for
More informationDegrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages
Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege
More information3238 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 6, JUNE 2014
3238 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 6, JUNE 2014 The Lossy Common Information of Correlated Sources Kumar B. Viswanatha, Student Member, IEEE, Emrah Akyol, Member, IEEE, and Kenneth
More informationDigital Image Processing Lectures 25 & 26
Lectures 25 & 26, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2015 Area 4: Image Encoding and Compression Goal: To exploit the redundancies in the image
More informationApproximating the Gaussian Multiple Description Rate Region Under Symmetric Distortion Constraints
Approximating the Gaussian Multiple Description Rate Region Under Symmetric Distortion Constraints Chao Tian, Member, IEEE, Soheil Mohajer, Student Member, IEEE, and Suhas N. Diggavi, Member, IEEE Abstract
More informationEECS 750. Hypothesis Testing with Communication Constraints
EECS 750 Hypothesis Testing with Communication Constraints Name: Dinesh Krithivasan Abstract In this report, we study a modification of the classical statistical problem of bivariate hypothesis testing.
More informationSHARED INFORMATION. Prakash Narayan with. Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe
SHARED INFORMATION Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe 2/40 Acknowledgement Praneeth Boda Himanshu Tyagi Shun Watanabe 3/40 Outline Two-terminal model: Mutual
More informationLecture 1: The Multiple Access Channel. Copyright G. Caire 12
Lecture 1: The Multiple Access Channel Copyright G. Caire 12 Outline Two-user MAC. The Gaussian case. The K-user case. Polymatroid structure and resource allocation problems. Copyright G. Caire 13 Two-user
More informationLin Song Shuo Shao Jun Chen. 11 September 2013
and Result in Song Shuo Shao 11 September 2013 1 / 33 Two s and Result n S Encoder 1: f1 R 1 Decoder: Decoder: g 1 g1,2 d 1 1 1,2 d1,2 Encoder 2: f Encoder 2 R 2 Decoder: g 2 Decoder d 2 2 2 / 33 Two s
More information