Joint Source-Channel Coding


1 Winter School on Information Theory Joint Source-Channel Coding Giuseppe Caire, University of Southern California La Colle sur Loup, France, March 2007

2 Outline: Part I REVIEW OF BASIC RESULTS Capacity-cost and rate-distortion functions. Shannon separation theorem. Capacity of channels with state known to the transmitter. Rate-distortion with side information at the receiver. Gaussian Wyner-Ziv and Dirty-Paper codes: geometric intuition.

3 Outline: Part II GAUSSIAN SOURCE OVER GAUSSIAN BC Duality. Achievability schemes: Hybrid Digital Analog schemes. Towards an inner bound to the achievable region.

4 Outline: Part III GAUSSIAN SOURCE OVER GAUSSIAN FADING MIMO CHANNEL High-SNR regime: the diversity-multiplexing tradeoff. The distortion SNR exponent. HDA schemes. Finite block length code construction. On-going work: multi-layer schemes and schemes with Channel State Feedback.

5 Outline: Part IV PRACTICAL JOINT SOURCE CHANNEL CODING A conceptual structure of transform lossy source coding. Weakness of the separated approach: catastrophicity of conventional entropy coding. Achieving the sup-entropy with linear codes. The proposed scheme: general principles and a case study.

6 Part I: Review of basic results

7 Capacity-cost function Simple setting: memoryless stationary channel, $P^{(n)}(y|x) = \prod_{i=1}^{n} P_{Y|X}(y_i|x_i)$. Cost function: $c: \mathcal{X} \to \mathbb{R}_+$. Per-letter cost of an $n$-sequence, $c(\mathbf{x}) = \frac{1}{n}\sum_{i=1}^{n} c(x_i)$. Operational definition of capacity-cost function: $C(P)$ is the supremum of all rates $R$ such that $(f,g)$ codes with parameters $(n, 2^{nR})$ exist with $\limsup_{n} P_e^{(n)} < \epsilon$ for all $\epsilon > 0$, and $2^{-nR}\sum_{m=1}^{2^{nR}} c(f(m)) \le P$. Coding theorem: $$C(P) = \max_{X:\, E[c(X)] \le P} I(X;Y)$$ Notice: max with respect to $X$ means maximum over all joint distributions $P_{X,Y}(x,y)$ such that $P_{Y|X}(y|x)$ coincides with the channel transition probability and the marginal of $X$ satisfies $E[c(X)] \le P$.

8 Rate-distortion function Simple setting: memoryless stationary source, $P^{(k)}(s) = \prod_{i=1}^{k} P_S(s_i)$. Distortion function: $d: \mathcal{S}\times\hat{\mathcal{S}} \to \mathbb{R}_+$. Per-letter distortion of a pair of $k$-sequences, $d(\mathbf{s},\hat{\mathbf{s}}) = \frac{1}{k}\sum_{i=1}^{k} d(s_i,\hat{s}_i)$. Operational definition of rate-distortion function: $R(D)$ is the infimum of all rates $R$ such that $(f,g)$ codes with parameters $(k, 2^{kR})$ exist with $E[d(S^k, g(f(S^k)))] \le D$. Coding theorem: $$R(D) = \min_{\hat{S}:\, E[d(S,\hat{S})] \le D} I(S;\hat{S})$$ Notice: min with respect to $S,\hat{S}$ means minimum over all joint distributions $P_{S,\hat{S}}(s,\hat{s})$ such that the marginal of $S$ coincides with the source distribution and the constraint $E[d(S,\hat{S})] \le D$ is satisfied.

9 Separation theorem Consider a memoryless stationary source to be transmitted over a memoryless stationary channel. The source produces $W_s$ symbols per unit time, and the channel supports $W_c$ channel uses per unit time, where $\lambda = W_c/W_s$ is fixed. The channel is subject to an average input cost constraint $P$. A source-channel code with bandwidth ratio $\lambda$, input cost $P$ and distortion $D$ consists of a pair of mappings $\phi: \mathcal{S}^k \to \mathcal{X}^n$ and $\psi: \mathcal{Y}^n \to \hat{\mathcal{S}}^k$ such that $E[c(\phi(S^k))] \le P$ and $E[d(S^k, \psi(Y^n))] \le D$. The minimum $D$ for given $P$ and $\lambda$ is given by $$R(D) = \lambda C(P)$$

10 An achievability strategy: consider the concatenation of a $(k, 2^{kR_s})$ source code with a $(n, 2^{nR_c})$ channel code such that $\frac{n}{k} = \lambda$. (Notice: the number of transmitted information bits per source block is $kR_s = nR_c$.) Converse: for any $(\phi,\psi)$ source-channel code with $E[d(S^k,\psi(Y^n))] \le D$ and $E[c(\phi(S^k))] \le P$ we have $$kR(D) \le I(S^k; \psi(Y^n)) \le I(\phi(S^k); Y^n) = I(X^n;Y^n) \le nC(P)$$

11 Separation theorem: consequences The separation theorem, interpreted as a ubiquitous separation principle, is one of the theoretical pillars of today's digital era. A variety of information sources, analog or digital in nature, are converted into a common currency ("bits") and transmitted over a common network infrastructure (the Internet) that operates essentially by disregarding the nature of the source that originated the bits. Source coding is confined to the Application layer (top of the stack). Channel coding is confined to the Physical layer/Link layer (bottom of the stack). Advantages: VoIP and video streaming over the same data network infrastructure that was originally designed to deliver data.

12 Channel coding with side information at the encoder Consider a memoryless state-dependent channel $P_{Y|X,Z}(y|x,z)$, such that $P^{(n)}(y|x,z) = \prod_{i=1}^{n} P_{Y|X,Z}(y_i|x_i,z_i)$, and suppose the state sequence $Z^n$, i.i.d. $\sim P_Z$, is known non-causally to the transmitter but unknown to the receiver. We wish to send reliable information across the channel subject to an input cost constraint $P$. Coding theorem (Gelfand-Pinsker): $$C_{gp}(P) = \max_{Z,U,Y,f:\, E[c(f(Z,U))] \le P} \{ I(U;Y) - I(U;Z) \}$$ The maximization is with respect to $(Z,U,X,Y) \sim P_{Y|X,Z}\, P_{X|Z,U}\, P_{Z,U}$ and with respect to the deterministic function $f: \mathcal{Z}\times\mathcal{U} \to \mathcal{X}$.

13 Main ideas behind the achievability. Fix $P_{U|Z}(u|z)$ and let $P_U(u) = \sum_z P_{U|Z}(u|z) P_Z(z)$ denote the corresponding marginal of $U$. Also, fix $f: \mathcal{Z}\times\mathcal{U} \to \mathcal{X}$. Codebook generation: generate $\{u(i) \in \mathcal{U}^n : i = 1,\dots,2^{nR_1}\}$ randomly and i.i.d. $\sim P_U$. Random binning: for each codeword, generate randomly and independently with uniform probability an index $m \in \{1,\dots,2^{nR}\}$. Let $B(m)$ denote all the codewords associated to index $m$ (we say: in the $m$-th "bin"). Encoding: given $m$ and $Z^n$, find $u(i) \in B(m)$ such that $(Z^n, u(i)) \in A_\epsilon^n(Z,U)$. If this is not found, declare error. Then, send $X^n = f(Z^n, u(i))$. Input cost: since $Z^n$ and $u(i)$ are strongly jointly typical, the empirical cost of $X^n$ satisfies $$\frac{1}{n}\sum_{i=1}^{n} c(x_i) \approx E[c(f(Z,U))] \le P$$

14 Decoding: given $Y^n$, find the unique $u(\hat{i})$ such that $(Y^n, u(\hat{i})) \in A_\epsilon^n(Y,U)$. If there is no codeword or more than one codeword jointly typical with $Y^n$, declare error. Error probability analysis: $Z^n$ is strongly typical with high probability. A codeword $u(i)$ is found with high probability if the bins are not too small, i.e., if $$R_1 - R > I(Z;U) + \epsilon$$ By the Markov lemma, $(Y^n, u(i))$ are jointly strongly typical with high probability; hence, at least $u(i)$ is found by the decoder. This is unique if the codebook is not too large, i.e., if $$R_1 < I(Y;U) - \epsilon$$

15 It follows that the error probability vanishes as $n \to \infty$ if $$R < I(Y;U) - I(Z;U) - 2\epsilon$$

16 Rate-distortion with side information at the decoder Let $\{(S_i, Z_i)\}$ be an i.i.d. sequence, such that $P^{(k)}(s,z) = \prod_{i=1}^{k} P_{S,Z}(s_i, z_i)$. We wish to encode $S^k$ with distortion $D$ when the decoder has access to $Z^k$. Coding theorem (Wyner-Ziv): $$R_{wz}(D) = \min_{S,W,Z,f:\, E[d(S, f(Z,W))] \le D} \{ I(S;W) - I(Z;W) \}$$ The minimization is with respect to $(S,W,Z,\hat{S}) \sim P_{\hat{S}|Z,W}\, P_{W|S}\, P_{S,Z}$ and with respect to the deterministic function $f: \mathcal{Z}\times\mathcal{W} \to \hat{\mathcal{S}}$.

17 Main ideas behind the achievability. Fix $P_{W|S}(w|s)$ and let $P_W(w) = \sum_s P_{W|S}(w|s) P_S(s)$ denote the corresponding marginal of $W$. Also, fix $f: \mathcal{Z}\times\mathcal{W} \to \hat{\mathcal{S}}$. Codebook generation: generate $\{w(i) \in \mathcal{W}^k : i = 1,\dots,2^{kR_1}\}$ randomly and i.i.d. $\sim P_W$. Random binning: for each codeword, generate randomly and independently with uniform probability an index $m \in \{1,\dots,2^{kR}\}$. Let $B(m)$ denote all the codewords associated to index $m$ (we say: in the $m$-th "bin"). Encoding: given $S^k$, find $w(i)$ in the codebook such that $(S^k, w(i)) \in A_\epsilon^k(S,W)$. If this is not found, declare error. Then, send the index $m$ of the bin containing $w(i)$. Decoding: given $Z^k$ and $m$, find the unique $w(\hat{i}) \in B(m)$ such that $(Z^k, w(\hat{i})) \in A_\epsilon^k(Z,W)$. If there is no codeword or more than one codeword jointly typical with $Z^k$, declare error. Error probability analysis: $(S^k, Z^k)$ is jointly strongly typical with high probability.

18 A codeword $w(i)$ is found with high probability if the codebook is not too small, i.e., if $$R_1 > I(S;W) + \epsilon$$ By the Markov lemma, $(Z^k, w(i))$ are jointly strongly typical with high probability; hence, at least $w(i)$ is found in bin $B(m)$. This is unique if the bin is not too large, i.e., if $$R_1 - R < I(Z;W) - \epsilon$$ It follows that the error probability vanishes as $k \to \infty$ if $$R > I(S;W) - I(Z;W) + 2\epsilon$$

19 Furthermore, conditioned on the no-error event, $(S^k, Z^k, w(i))$ are strongly jointly typical. Then, we produce the source vector approximation $\hat{S}^k = f(Z^k, w(i))$ and, since this has typical statistics, its empirical distortion satisfies $$\frac{1}{k}\sum_{i=1}^{k} d(S_i, \hat{S}_i) \approx E[d(S, f(Z,W))]$$

20 No loss conditions for G-P Capacity-cost with state known to both transmitter and receiver: $$C(P) = \max_{Z,X:\, E[c(X)] \le P} I(X;Y|Z)$$ When is $C_{gp}(P) = C(P)$? For the G-P capacity-achieving $(Z,U,Y,f)$ (with $U \to (X,Z) \to Y$) it must be that $$I(U;Y) - I(U;Z) = I(U;Y|Z) = I(X;Y|Z)$$ which implies $I(U;Z|Y) = 0$, i.e., the Markov chain $U \to Y \to Z$.

21 No loss conditions for W-Z Rate-distortion with side information known to both encoder and decoder: $$R(D) = \min_{\hat{S}:\, E[d(S,\hat{S})] \le D} I(S;\hat{S}|Z)$$ When is $R_{wz}(D) = R(D)$? For the W-Z rate-distortion-achieving $(Z,W,\hat{S},f)$ (with $W \to S \to Z$) it must be that $$I(W;S) - I(W;Z) = I(S;W|Z) = I(S;\hat{S}|Z)$$ which implies $I(S;W|Z,\hat{S}) = 0$, i.e., the Markov chain $W \to (\hat{S},Z) \to S$.

22 The Gaussian case: geometric intuition Gaussian channel with Gaussian additive interference: Costa's "writing on dirty paper" $$Y = X + Z + N, \quad N \sim \mathcal{N}(0,\sigma^2),\ Z \sim \mathcal{N}(0,Q),\ E[X^2] \le P$$ Auxiliary RV: $U \triangleq X + \alpha Z$ with $\alpha = \frac{P}{P+\sigma^2}$ and $X \sim \mathcal{N}(0,P)$ independent of $Z$. Encoder mapping function: $f(u,z) = u - \alpha z$. It is easy to show that $C_{gp}(P) = \frac{1}{2}\log\left(1 + \frac{P}{\sigma^2}\right) = C(P)$ (no loss). In fact one can show that $U \to Y \to Z$, which in the Gaussian case is equivalent to $E[Z|Y,U] = E[Z|Y]$. In passing, we notice that this implies that the MMSE estimation of $Z$ is not improved by knowing $U$ in addition to $Y$. We shall see that this fact plays an important role in JSCC schemes!
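
As a quick numerical sanity check of the claim above, the following sketch (the values of $P$, $Q$, $\sigma^2$ are arbitrary assumptions) evaluates $I(U;Y) - I(U;Z)$ from the jointly Gaussian covariances of Costa's construction and compares it with the interference-free AWGN capacity:

```python
import numpy as np

# Assumed test values; with Costa's alpha = P/(P+sigma2), the Gelfand-Pinsker
# rate I(U;Y) - I(U;Z) should equal the AWGN capacity 0.5*log2(1+P/sigma2).
P, Q, sigma2 = 1.0, 4.0, 0.5
alpha = P / (P + sigma2)

# Jointly Gaussian (X, Z, N) independent; U = X + alpha*Z, Y = X + Z + N.
var_U = P + alpha**2 * Q
var_Y = P + Q + sigma2
cov_UY = P + alpha * Q            # E[UY] = E[X^2] + alpha*E[Z^2]
cov_UZ = alpha * Q                # E[UZ]

def gaussian_mi(var_a, var_b, cov_ab):
    """Mutual information (bits) between two jointly Gaussian scalars."""
    rho2 = cov_ab**2 / (var_a * var_b)
    return -0.5 * np.log2(1 - rho2)

R_gp = gaussian_mi(var_U, var_Y, cov_UY) - gaussian_mi(var_U, Q, cov_UZ)
C_awgn = 0.5 * np.log2(1 + P / sigma2)
print(R_gp, C_awgn)   # both ≈ 0.7925 bits/channel use
```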

23 The Gaussian case: geometric intuition The codebook $\mathcal{U}$ is partitioned into bins $B(m)$ such that each bin is a good vector quantizer for the Gaussian source $\alpha z$. Given $m$, find the vector $u \in B(m)$ such that $u - \alpha z \perp z$. This amounts to finding a good representation for $\alpha z$ in $B(m)$ (minimum-distance VQ). The vector $x = u - \alpha z$ is transmitted. Notice that $\|x\|^2 = \|u - \alpha z\|^2 \approx nP$. The receiver produces a scaled version of the channel output vector $y = x + z + n$. We have $$\alpha y = \alpha x + \alpha z + \alpha n + u - u = u - (1-\alpha)x + \alpha n$$ where $-(1-\alpha)x + \alpha n$ plays the role of an equivalent noise.

24 Finally, the receiver finds a codeword $\hat{u}$ in the codebook $\mathcal{U}$ such that $\|\hat{u} - \alpha y\|^2$ is close to $n(1-\alpha)P = n\frac{P\sigma^2}{P+\sigma^2}$ or, equivalently, decodes at minimum distance $$\hat{u} = \arg\min_u \|\alpha y - u\|^2$$ It follows that the codebook $\mathcal{U}$ must be a good channel code for the virtual Gaussian channel with input $u$ and additive noise $v \triangleq \alpha n - (1-\alpha)x$.

25 The Gaussian case: geometric intuition Gaussian W-Z: source and side information statistics $$Z = S + V, \quad S \sim \mathcal{N}(0,\sigma_s^2),\ V \sim \mathcal{N}(0,\sigma_v^2) \text{ independent.}$$ Equivalently, we have $S = bZ + V'$ with $b = \frac{\sigma_s^2}{\sigma_s^2+\sigma_v^2}$ and $V' \sim \mathcal{N}(0,\sigma_{v'}^2)$ independent of $Z$, where $\sigma_{v'}^2 = \frac{\sigma_s^2\sigma_v^2}{\sigma_s^2+\sigma_v^2}$ (notice: $bZ$ is the MMSE estimator of $S$ given $Z$ and $V'$ is the estimation error). Auxiliary RV: $W \triangleq \alpha S + Q$, with $\alpha = \sqrt{1 - D/\sigma_{v'}^2}$ and $Q \sim \mathcal{N}(0,D)$ independent of $S$. Decoder mapping function: $\hat{S} = f(Z,W) = b(1-\alpha^2)Z + \alpha W$. Notice: $f(Z,W)$ is the MMSE estimator of $S$ given $Z,W$. It is easy to show that $E[|S - f(Z,W)|^2] = D$ and that $$R_{wz}(D) = \left[\frac{1}{2}\log\frac{\sigma_{v'}^2}{D}\right]^+ = R(D) \quad \text{(no loss).}$$
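
The following Monte Carlo sketch (the test values of $\sigma_s^2$, $\sigma_v^2$, $D$ are assumptions) checks that the auxiliary construction above indeed meets the target distortion $D$:

```python
import numpy as np

# Monte Carlo check of the Gaussian Wyner-Ziv construction (test values).
rng = np.random.default_rng(0)
n = 1_000_000
sigma_s2, sigma_v2, D = 1.0, 0.5, 0.1

S = rng.normal(0, np.sqrt(sigma_s2), n)
Z = S + rng.normal(0, np.sqrt(sigma_v2), n)            # side information
b = sigma_s2 / (sigma_s2 + sigma_v2)                   # MMSE coefficient
sig_vp2 = sigma_s2 * sigma_v2 / (sigma_s2 + sigma_v2)  # var of S given Z

alpha = np.sqrt(1 - D / sig_vp2)
W = alpha * S + rng.normal(0, np.sqrt(D), n)           # W = alpha*S + Q
S_hat = b * (1 - alpha**2) * Z + alpha * W             # decoder f(Z, W)

print(np.mean((S - S_hat)**2))      # ≈ D = 0.1
print(0.5 * np.log2(sig_vp2 / D))   # R_wz(D) ≈ 0.87 bits/source sample
```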

26 The Gaussian case: geometric intuition The codebook $\mathcal{W}$ is a good vector quantizer for the scaled source $\alpha s$. Given $s$, find the vector $w$ such that $w - \alpha s \perp s$. This amounts to finding a good representation for $\alpha s$ in the codebook (minimum-distance VQ). The index $m$ of the bin $B(m)$ where $w$ belongs is sent to the decoder. The receiver finds a codeword $\hat{w} \in B(m)$ such that $\|\hat{w} - \alpha b z\|^2$ is close to $k(D + \alpha^2\sigma_{v'}^2) = k\sigma_{v'}^2$ or, equivalently, decodes at minimum distance $$\hat{w} = \arg\min_{w \in B(m)} \|\alpha b z - w\|^2$$ It follows that each bin $B(m)$ must be a good channel code for the virtual Gaussian channel with input $w$ and additive noise $q + \alpha v'$.

27 Part II: Gaussian source on Gaussian channels

28 Functional duality Dimensions: channel $n$ ↔ source $k$. Invariant blocks: scaling, adders. Dual blocks: 1. Multiplexing ↔ demultiplexing. 2. Channel decoding $Q_c$ ↔ source vector quantizer $Q_s$: min distance. 3. Channel encoding $C_c$ ↔ source reconstruction $C_s$: look-up table. 4. G-P decoding $Q$ + bin ↔ W-Z encoding $Q$ + bin: min distance + bin index. 5. G-P encoding bin + $C$ ↔ W-Z decoding bin + $C$: min distance from bin.

29 Dual signals: 1. $S$ ↔ $Y$. 2. $\hat{S}$ ↔ $X$. 3. $Z$ ↔ $Z$ (the state corresponds to itself). 4. $W$ ↔ $U$. Two source-channel coding schemes A and B are dual if the transmitter of scheme A can be obtained from the receiver of B via duality transformations and the receiver of scheme A can be obtained from the transmitter of B via duality transformations. The dual of a scheme for the case $k \le n$ (bandwidth expansion) is a scheme for the case $k \ge n$ (bandwidth compression).

30 Gaussian source over AWGN channel [Block diagram: $s$ (length $k$) → $Q_s$ → $m$ → $C_c$ → $x$ (length $n$) → channel with noise $z \sim \mathcal{N}(0,N)$ → $y$ → $Q_c$ → $\hat{m}$ → $C_s$ → $\hat{s}$ (length $k$).] Without loss of generality we let $\sigma_s^2 = 1$ and $\sigma_n^2 = N$. It follows that $C(P) = \frac{1}{2}\log(1 + P/N)$ and $R(D) = \left[\frac{1}{2}\log\frac{1}{D}\right]^+$. The separated source-channel coding scheme is self-dual. It can handle (optimally) both cases $k \le n$ and $k \ge n$.

31 For $k \to \infty$ we have the (optimal) distortion $$D_{opt}(P) = R^{-1}(\lambda C(P)) = \left(1 + \frac{P}{N}\right)^{-\lambda}$$ [Figure: reconstruction SNR $1/D$ (dB) versus $P/N$ (dB), Gaussian source over AWGN, bandwidth ratio 1.]

32 Analog transmission (AM) For $\lambda = 1$, analog AM achieves any point on the curve $D_{opt}(P)$ with a fixed transmitter. Encoder: scaling, $x = \sqrt{P}\, s$. Decoder: MMSE estimation... scaling again, $\hat{s} = \frac{\sqrt{P}}{P+N}\, y$. Distortion: MMSE, $D = 1 - \frac{P}{P+N} = \frac{N}{P+N} = (1 + P/N)^{-1}$. Also the analog AM scheme is self-dual. Interesting fact: in a broadcast setting with two users, at SNRs $P/N_1$ and $P/N_2$, the analog AM scheme achieves simultaneously the optimal distortion for both users.
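
A short simulation of the analog AM scheme (arbitrary test values of $P$ and $N$) confirming the MMSE distortion $(1+P/N)^{-1}$:

```python
import numpy as np

# Analog AM over AWGN, lambda = 1 (P and N are assumed test values).
rng = np.random.default_rng(1)
n, P, N = 1_000_000, 4.0, 1.0

s = rng.normal(0, 1, n)
y = np.sqrt(P) * s + rng.normal(0, np.sqrt(N), n)   # x = sqrt(P) s
s_hat = np.sqrt(P) / (P + N) * y                    # MMSE scaling
print(np.mean((s - s_hat)**2), 1 / (1 + P / N))     # both ≈ 0.2
```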

33 Suboptimality of separation: by concatenating an optimal successive refinement source code with an optimal broadcast code, we can achieve only $$D_1(P) = \left(1 + \frac{(1-\beta)P}{N_1 + \beta P}\right)^{-\lambda} \ge (1 + P/N_1)^{-\lambda}$$ and $$D_2(P) = \left[\left(1 + \frac{(1-\beta)P}{N_1 + \beta P}\right)\left(1 + \frac{\beta P}{N_2}\right)\right]^{-\lambda} \ge (1 + P/N_2)^{-\lambda}$$ Here is an example where separation is not optimal: relevant for DTV, DAB, video streaming or MP3 streaming over wireless channels.

34 Hybrid Digital Analog (HDA) scheme, $\lambda \ge 1$ [Block diagram: the source $s$ (length $k$) is quantized ($Q_s$ → $m$ → $C_c$ → $x_d$, length $n-k$); the quantization error $s - \hat{s}_d$ is scaled into the analog signal $x_a$ (length $k$); the transmitted codeword is $x = [x_a, x_d]$. The receiver decodes $m$ from $y_d$ ($Q_c$ → $C_s$ → $\hat{s}_d$), MMSE-estimates the quantization error from $y_a$ ($\hat{s}_a$), and outputs $\hat{s} = \hat{s}_a + \hat{s}_d$.]

35 Analysis The pair $Q_s, C_c$ must work at the rate-distortion limit for a source with block length $k$ and a channel with block length $n-k$. Hence, the quantization error has variance $$D_q = \left(1 + \frac{P}{N}\right)^{-(\lambda-1)}$$ It follows that $$x_a = \sqrt{\frac{P}{D_q}}\,(s - \hat{s}_d)$$ The resulting quadratic distortion is the MMSE of the estimated quantization error based on the observation of $y_a = x_a + z$. We have $$D = \frac{D_q}{1 + P/N} = \left(1 + \frac{P}{N}\right)^{-\lambda} \quad \text{OPTIMAL!}$$

36 Hybrid Digital Analog (HDA) scheme, $\lambda \le 1$ [Block diagram: the first $n$ source samples $s_a$ are scaled into the analog signal $x_a$ with power $aP$; the remaining $k-n$ samples $s_d$ are quantized ($Q_s$ → $m$ → $C_c$) into $x_d$ with power $(1-a)P$; the transmitted codeword is $x = x_a + x_d$ (length $n$). The receiver decodes the digital layer ($C_c$ → $Q_c$ → $C_s$ → $\hat{s}_d$), subtracts $x_d$, MMSE-estimates $\hat{s}_a$, and outputs $\hat{s} = [\hat{s}_a, \hat{s}_d]$.]

37 Analysis The pair $Q_s, C_c$ must work at the rate-distortion limit for a source of block length $k-n$ and the channel with block length $n$ and $\mathrm{SNR} = \frac{(1-a)P}{N + aP}$. Hence, the distortion achieved for $\hat{s}_d$ is given by $$D_d = \left(1 + \frac{(1-a)P}{N + aP}\right)^{-\frac{\lambda}{1-\lambda}}$$ The distortion achieved for the analog branch (MMSE estimation, after subtracting the decoded digital codeword) is given by $$D_a = \frac{N}{N + aP}$$ The overall average distortion is given by $$D = \lambda D_a + (1-\lambda) D_d$$

38 By optimizing with respect to $a$, we find again $$D = (1 + P/N)^{-\lambda} \quad \text{OPTIMAL!}$$ The optimal power allocation is given by $$a = \frac{N}{P}\left(\left(1 + \frac{P}{N}\right)^{\lambda} - 1\right)$$ and yields balanced analog and digital distortions: $D_a = D_d = D_{opt}(P)$.
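
The balanced-distortion claim is easy to verify numerically; the sketch below (test values assumed) evaluates $D_a$, $D_d$ and the average $D$ at the power allocation $a$ above:

```python
# Superposition HDA scheme, lambda <= 1 (P, N, lam are assumed test values):
# with a = (N/P)((1+P/N)^lam - 1) the analog and digital distortions balance
# at D_opt = (1+P/N)^(-lam).
P, N, lam = 10.0, 1.0, 0.5

a = (N / P) * ((1 + P / N) ** lam - 1)
D_a = N / (N + a * P)
D_d = (1 + (1 - a) * P / (N + a * P)) ** (-lam / (1 - lam))
D = lam * D_a + (1 - lam) * D_d
print(D_a, D_d, D, (1 + P / N) ** -lam)   # all ≈ 0.3015
```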

39 Hybrid Wyner-Ziv (HWZ) scheme, $\lambda \ge 1$ [Block diagram: the source $s$ (length $k$) is sent in analog form as $x_a$ and, in parallel, Wyner-Ziv encoded ($Q_s$ → $u$ → bin index lookup → $m$ → $C_c$ → $x_d$, length $n-k$); the transmitted codeword is $x = [x_a, x_d]$. The receiver forms the side information $\hat{s}_a$ from $y_a$, decodes $m$ from $y_d$ ($Q_c$), finds the auxiliary codeword $u$ in bin $B(m)$, and outputs $\hat{s} = (1-\alpha^2)\hat{s}_a + \alpha u$ (Wyner-Ziv decoder).]

40 Analysis The side information is created by sending $s$ via the upper analog branch: this yields $s = \hat{s}_a + v$, where $\hat{s}_a = \frac{\sqrt{P}}{N+P}\, y_a$ and where the MMSE estimation error $v$ is independent of $\hat{s}_a$ and has variance $$\sigma_v^2 = \frac{1}{k} E[\|s - \hat{s}_a\|^2] = 1 - \frac{P}{N+P} = \frac{N}{N+P} = (1 + P/N)^{-1}$$ The Wyner-Ziv source coder and the channel code must work at the rate-distortion limit for the source of length $k$ and the channel with $n-k$ channel uses. The Wyner-Ziv rate-distortion function is given by $$R_{wz}(D) = \frac{1}{2}\log\frac{\sigma_v^2}{D}$$

41 From the equality $R_{wz}(D) = (\lambda - 1)\, C(P)$ we obtain $$D = \left(1 + \frac{P}{N}\right)^{-\lambda} \quad \text{OPTIMAL!}$$
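
A short numeric check of the HWZ accounting (test values assumed): combining $\sigma_v^2 = (1+P/N)^{-1}$ with $R_{wz}(D) = (\lambda-1)C(P)$ recovers $(1+P/N)^{-\lambda}$:

```python
import numpy as np

# HWZ scheme, lambda >= 1 (P, N, lam are assumed test values).
P, N, lam = 4.0, 1.0, 2.0         # lam = n/k = 2 (bandwidth expansion)
sigma_v2 = 1 / (1 + P / N)        # MMSE error of the analog branch
C = 0.5 * np.log2(1 + P / N)      # capacity, bits/channel use
R_wz = (lam - 1) * C              # bits/source sample on the digital part
D = sigma_v2 * 2 ** (-2 * R_wz)   # invert R_wz(D) = 0.5*log2(sigma_v2/D)
print(D, (1 + P / N) ** -lam)     # both = 0.04
```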

42 Hybrid Costa-Coding (HCC) scheme, $\lambda \le 1$ [Block diagram: the first $n$ source samples $s_a$ are scaled into the analog layer $x_a$ with power $aP$; the remaining $k-n$ samples $s_d$ are quantized ($Q_s$ → $m$ → bin → $C$) and Costa-encoded against $x_a$, producing $u$; the transmitted signal is $x = (1-\alpha)x_a + u$ (modified Costa encoder). The receiver Costa-decodes $u$ ($Q_c$ → bin index lookup → $m$ → $C_s$ → $\hat{s}_d$), MMSE-estimates $\hat{s}_a$ from $y$, and outputs $\hat{s} = [\hat{s}_a, \hat{s}_d]$ (Costa decoder).]

43 On duality The digital-layer codeword is obtained as $x_d = u - \alpha x_a$, of power $(1-a)P$. The Costa inflation factor is given by $\alpha = \frac{(1-a)P}{N + (1-a)P}$. Instead of producing explicitly $x_d$ and then $x = x_a + x_d$, the encoded signal is obtained directly by letting $x = u + (1-\alpha)x_a$. This is the dual of the Wyner-Ziv decoder, which produces $\hat{s}$ as a linear combination of the auxiliary codeword $u$ and the side information $\hat{s}_a$.

44 Analysis Decoding of the Costa code requires $R(D_d) = \frac{\lambda}{1-\lambda}\, C((1-a)P)$, where $C((1-a)P)$ is the capacity of the AWGN channel with SNR $\frac{(1-a)P}{N}$, achievable by Costa coding. The distortion of the digital layer is given by $$D_d = \left(1 + \frac{(1-a)P}{N}\right)^{-\frac{\lambda}{1-\lambda}}$$ The analog branch produces an MMSE estimate $\hat{s}_a$ of $s_a$ by treating $x_d$ as additional Gaussian noise. This yields $$D_a = 1 - \frac{aP}{N+P} = \frac{N + (1-a)P}{N + P}$$

45 The overall average distortion is given by $$D = \lambda D_a + (1-\lambda) D_d$$ By optimizing with respect to $a$, we obtain $$D = (1 + P/N)^{-\lambda} \quad \text{OPTIMAL!}$$ The optimal power allocation is given by $$a = 1 - \frac{N}{P}\left(\left(1 + \frac{P}{N}\right)^{1-\lambda} - 1\right)$$ Again, this yields a balanced distortion in the analog and digital branches.

46 On the no loss condition Using $u = \alpha\sqrt{aP}\, s_a + x_d$ as a jointly Gaussian observation in addition to $y = \sqrt{aP}\, s_a + x_d + z$ in order to estimate $s_a$ yields no improvement. Since Costa coding is capacity lossless, the no-loss Markov chain condition $U \to Y \to S_a$ holds. Therefore, $u$ and $s_a$ are statistically independent given $y$.

47 SNR mismatch Towards the purpose of finding schemes for broadcasting a common Gaussian source to many receivers in different SNR conditions, we investigate the effect of SNR mismatch on the schemes presented before, which are optimal when working at the design SNR. [Figure: distortion (dB) versus actual channel SNR for the optimal (R,D) limit, the hybrid Costa coding scheme, and the superposition scheme.]

48 Broadcasting a common source: bandwidth compression The superposition HDA approach works best when SNR $\ge$ SNR$_{design}$, when the digital layer is successfully decoded and subtracted. The Costa-coding HDA approach works best when SNR $\le$ SNR$_{design}$, when the digital layer is not decoded and hence creates interference. This suggests the following broadcast strategy: allow for an analog layer, encode a digital layer using superposition (to be decoded by all users) and a refinement using Costa coding (to be decoded only by the strong user). Model: noise variances $N_1$ and $N_2 \le N_1$. User 2 is the strong user, and user 1 is the weak user. Average distortion achievable region: convex closure of all achievable distortion points $(D_1, D_2)$.

49 Broadcasting a common source: bandwidth compression [Block diagram, encoder: the first $n$ source samples $s_a$ are scaled into the analog layer $x_a$ with power $aP$; the remaining $k-n$ samples $s_d$ are quantized ($Q_s$ → $m$ → $C_c$) into $x_d$ with power $bP$; the quantization error $\tilde{s}_d = s_d - \hat{s}_d$ is quantized again ($Q_s$ → $m'$ → bin → $C$) and Costa-encoded against $x_a + x_d$ (modified Costa encoder), producing $x_d'$ with power $cP$; the transmitted signal is $x = x_a + x_d + x_d'$.]

50 [Block diagram, decoder (strong user): decode $x_d$ ($C_c$ → $Q_c$ → $m$ → $C_s$ → $\hat{s}_d$), subtract it, Costa-decode $u$ ($Q_c$ → bin index lookup → $m'$ → $C_s$ → $\hat{\tilde{s}}_d$, Costa decoder), MMSE-estimate $\hat{s}_a$, and output $\hat{s} = [\hat{s}_a,\ \hat{s}_d + \hat{\tilde{s}}_d]$.]

51 Analysis The first $n$ components of $s$, denoted by $s_a$, are scaled by $\sqrt{aP}$ and transmitted as the analog layer $x_a$. The second $k-n$ components of $s$, denoted by $s_d$, are quantized with distortion $D_q$. The corresponding index, $m$, is channel-encoded, producing the codeword $x_d$ with power $bP$. The quantization error $\tilde{s}_d = s_d - \hat{s}_d$ is quantized with distortion $D_{d,2}$ and the corresponding index $m'$ is Costa-encoded, treating $x_a + x_d$ as side information (known at the transmitter). The resulting Costa-encoded signal, $x_d' = u - \alpha(x_a + x_d)$, has power $cP$, such that $a + b + c = 1$.

52 Since the Costa-encoded signal must be decoded by the strong user, we let the Costa inflation factor be $$\alpha = \frac{cP}{N_2 + cP}$$ Quantization distortion (common part): since the quantization index $m$ must be decoded by both users, we have $$(1-\lambda)\, R(D_q) = \lambda\, C\!\left(\frac{bP}{N_1 + (a+c)P}\right)$$ yielding the quantization distortion of the first layer $$D_q = \left(\frac{N_1 + P}{N_1 + (a+c)P}\right)^{-\frac{\lambda}{1-\lambda}}$$ Strong user: $s_a$ is MMSE-estimated after subtracting the decoded $x_d$ and by treating the Costa-encoded signal $x_d'$ as additive Gaussian noise.

53 The resulting distortion is given by $$D_{a,2} = \frac{N_2 + cP}{N_2 + (a+c)P}$$ Finally, Costa decoding sees a channel with capacity $C(cP/N_2)$. Therefore, the distortion for the reconstruction of $s_d$ is obtained by imposing the condition $$(1-\lambda)\, R(D_{d,2}/D_q) = \lambda\, C(cP/N_2)$$ that yields $$D_{d,2} = \left(\frac{N_1 + P}{N_1 + (a+c)P}\right)^{-\frac{\lambda}{1-\lambda}} \left(1 + \frac{cP}{N_2}\right)^{-\frac{\lambda}{1-\lambda}}$$ Eventually, the average distortion achieved by the strong user is given by $$D_2 = \lambda D_{a,2} + (1-\lambda) D_{d,2}$$

54 Weak user: $s_a$ is MMSE-estimated after subtracting the decoded $x_d$ and by treating the Costa-encoded signal $x_d'$ as additive Gaussian noise. The resulting distortion is given by $$D_{a,1} = \frac{N_1 + cP}{N_1 + (a+c)P}$$ For the digital layer, $D_{d,1} = D_q$, since the Costa-encoded layer cannot be decoded by the weak user. The resulting distortion is given by $$D_1 = \lambda D_{a,1} + (1-\lambda) D_{d,1}$$ EXTREME POINTS OPTIMALITY: letting $b = 0$ and $a + c = 1$, and by optimizing with respect to $c$, we find the optimal individual distortion for the

55 strong user, $$D_{2,opt} = (1 + P/N_2)^{-\lambda}$$ obtained for $$c = \frac{N_2}{P}\left(\left(1 + \frac{P}{N_2}\right)^{1-\lambda} - 1\right)$$ Letting $c = 0$, $a + b = 1$, and optimizing with respect to $a$, we find the individually optimal distortion of the weak user, $D_{1,opt} = (1 + P/N_1)^{-\lambda}$, for $$a = \frac{N_1}{P}\left(\left(1 + \frac{P}{N_1}\right)^{\lambda} - 1\right)$$
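
The two extreme points are easy to verify numerically; the sketch below (test values assumed) evaluates the distortions of both users at the corresponding corner allocations:

```python
# Extreme points of the bandwidth-compression broadcast scheme
# (P, N1, N2, lam are assumed test values).
P, N1, N2, lam = 10.0, 2.0, 0.5, 0.5
exp_d = -lam / (1 - lam)                               # digital-layer exponent

# Strong-user extreme point: b = 0, a + c = 1 (then D_q = 1).
c = (N2 / P) * ((1 + P / N2) ** (1 - lam) - 1)
D_a2 = (N2 + c * P) / (N2 + P)
D_d2 = (1 + c * P / N2) ** exp_d
print(lam * D_a2 + (1 - lam) * D_d2, (1 + P / N2) ** -lam)   # both ≈ 0.2182

# Weak-user extreme point: c = 0, a + b = 1.
a = (N1 / P) * ((1 + P / N1) ** lam - 1)
D_a1 = N1 / (N1 + a * P)
D_d1 = ((N1 + P) / (N1 + a * P)) ** exp_d
print(lam * D_a1 + (1 - lam) * D_d1, (1 + P / N1) ** -lam)   # both ≈ 0.4082
```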

56 Best known scheme for bandwidth compression [Figure: achievable region in the $(\log_{10} D_1, \log_{10} D_2)$ plane; the proposed scheme versus Mittal and Phamdo.]

57 Brute-force application of duality [Block diagram, encoder (dual scheme for bandwidth expansion): the source $s$ is quantized ($C_s$/$Q_s$ → $m$); the quantization error is scaled into the analog signal $x_a$ (first $k$ components, power $P$); $m$ is channel-encoded ($C_c$) into $x_d$ with power $bP$; in parallel, a Wyner-Ziv encoder ($Q_s$ → $u$ → bin index lookup → $m'$ → $C_c$) produces $x_d'$ with power $cP$; the last $n-k$ components carry the superposition, so that $x = [x_a,\ x_d + x_d']$ (Wyner-Ziv encoder + broadcast superposition encoder).]

58 [Block diagram, decoder: from $y_a$, MMSE-estimate the quantization error and combine it with $\hat{s}_d$ ($Q_c$ → $m$ → $C_s$); from $y_d$, decode $x_d$ ($C_c$), subtract it, decode the Wyner-Ziv bin index $m'$ ($Q_c$), and find $u \in B(m')$ (Wyner-Ziv decoder) to refine $\hat{s}$.]

59 Reznic-Zamir-Feder scheme for bandwidth expansion The source is quantized, producing index $m$ and representation point $\hat{s}_d$. The quantization error $s - \hat{s}_d$ is scaled to obtain the analog signal $x_a$ with power $P$, transmitted in the first $k$ components of $x$. For the remaining $n-k$ components, the codeword $x_d$, encoding the quantization index $m$, is transmitted with power $bP$. In addition, the codeword $x_d'$ is superimposed with power $cP$, such that $b + c = 1$. This codeword encodes the index $m'$ output by a Wyner-Ziv source coder.

60 Analysis Quantization (common part): the message $m$ is decoded without errors, thus $\hat{s}_d$ is perfectly known. Since $m$ must also be decoded by the weak user, this imposes $$R(D_q) = (\lambda - 1)\, C\!\left(\frac{bP}{N_1 + cP}\right)$$ yielding $$D_q = \left(\frac{N_1 + P}{N_1 + cP}\right)^{-(\lambda-1)}$$ Strong user: it performs MMSE estimation of the quantization error from the observation $y_a = x_a + z_2 = \sqrt{\frac{P}{D_q}}(s - \hat{s}_d) + z_2$, and obtains an estimate of the source $\hat{s}_{a,2}$ with error variance $$D_{a,2} = \frac{D_q}{1 + P/N_2}$$

61 The side information model used by the Wyner-Ziv encoder is $s = \hat{s}_{a,2} + v$, where $v$ is an estimation noise with variance $D_{a,2}$. The coding rate of the Wyner-Ziv stage in order to achieve an overall distortion $D_2$ is given by $$R_{wz}(D_2) = \frac{1}{2}\log\frac{D_{a,2}}{D_2} = R(D_2/D_{a,2})$$ The strong user can subtract the decoded codeword $x_d$ from the received signal, in order to decode $x_d'$ in interference-free conditions. Imposing the decodability condition for the index $m'$, that is, $R_{wz}(D_2) = (\lambda-1)\, C(cP/N_2)$, we obtain $$D_2 = \frac{1}{1 + P/N_2}\left(1 + \frac{cP}{N_2}\right)^{-(\lambda-1)}\left(\frac{N_1 + P}{N_1 + cP}\right)^{-(\lambda-1)}$$

62 Weak user: it cannot decode the Wyner-Ziv index. Hence, it can achieve distortion $$D_1 = \frac{1}{1 + P/N_1}\left(\frac{N_1 + P}{N_1 + cP}\right)^{-(\lambda-1)}$$ by using only the knowledge of $\hat{s}_d$ and the MMSE estimation of the quantization error. EXTREME POINTS OPTIMALITY: for $c = 1$ we obtain the optimal condition for the strong user, that is, $D_2 = (1 + P/N_2)^{-\lambda}$. Letting $c = 0$ we can operate optimally for the weak user; in fact, we obtain $D_1 = (1 + P/N_1)^{-\lambda}$. The Reznic-Zamir-Feder scheme achieves a region obtained by letting $c$ vary in $[0,1]$. In the paper by Reznic, Zamir and Feder, an outer bound to the achievable region is obtained in terms of certain jointly Gaussian auxiliary random variables and by using the EPI.
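
A small numeric check of the two extreme points of the Reznic-Zamir-Feder region (test values assumed):

```python
# RZF scheme, bandwidth expansion (P, N1, N2, lam are assumed test values):
# c = 1 gives the strong user its optimum, c = 0 the weak user its optimum.
P, N1, N2, lam = 4.0, 2.0, 0.5, 2.0

def rzf(c):
    Dq = ((N1 + P) / (N1 + c * P)) ** -(lam - 1)
    Da2 = Dq / (1 + P / N2)
    D2 = Da2 * (1 + c * P / N2) ** -(lam - 1)
    D1 = Dq / (1 + P / N1)
    return D1, D2

print(rzf(1.0)[1], (1 + P / N2) ** -lam)   # strong user: both ≈ 0.0123
print(rzf(0.0)[0], (1 + P / N1) ** -lam)   # weak user: both ≈ 0.1111
```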

63 We checked that duality does not help (or does not seem to help in a straightforward way) to find a non-trivial outer bound for the case of compression $\lambda < 1$. OPEN PROBLEM!

64 Part III: Gaussian source on MIMO fading channels

65 Motivation We consider the transmission of a real analog source of bandwidth $W_s$ (samples per second) over a complex MIMO block-fading channel of bandwidth $W_c$ (channel uses per second). Performance criterion: end-to-end quadratic distortion $D(\rho)$ (MSE). This problem arises in (at least) two relevant cases: 1. Strict delay constraint, real-time (e.g., telephony) or streaming (e.g., video). 2. Multicast to a large number of static users.

66 The block-fading MIMO channel $$y_t = \sqrt{\frac{\rho}{M}}\, H x_t + w_t, \quad t = 1,\dots,T$$ $T$ is the duration (in channel uses) of the transmitted block. $H \in \mathbb{C}^{N\times M}$ is the channel matrix, assumed to be constant for all $t = 1,\dots,T$ but random, with i.i.d. elements $h_{i,j} \sim \mathcal{CN}(0,1)$. $x_t$ is the transmitted signal at time $t$; the transmitted codeword, $X = [x_1,\dots,x_T]$, is normalized such that $\mathrm{tr}(E[X^H X]) \le MT$. $\rho$ denotes the Signal-to-Noise Ratio (SNR). For simplicity, we restrict our discussion to the case $M \le N$ (the case $M > N$ follows **more or less** easily).

67 P(e) limit performance: SNR exponent Capacity is zero: to have $P_e(\rho) \to 0$ we need to increase the SNR. [Zheng-Tse, IT-2003]: diversity-multiplexing tradeoff. Consider a family of space-time coding schemes $\{C_{r_c}(\rho)\}$ of rate $R_c = r_c\log\rho$. The SNR exponent of the family is defined as the limit $$d(r_c) = -\lim_{\rho\to\infty}\frac{\log P_e(\rho)}{\log\rho}$$ The SNR exponent of the channel, $d^*(r_c)$, is the supremum, over all possible coding families, of $d(r_c)$. For $T \ge M + N - 1$, $d^*(r_c)$ is fully determined as the piecewise linear function joining the points $(r_c = j,\ d^*(j) = (N-j)(M-j))$, for $r_c \in [0,M]$ ($d^*(r_c) = 0$ for $r_c > M$).
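
The optimal DMT curve is just a piecewise linear interpolation of those points; a minimal sketch:

```python
import numpy as np

# Zheng-Tse optimal DMT: piecewise linear through (j, (M-j)(N-j)),
# j = 0,...,min(M,N).
def dmt(r, M, N):
    j = np.arange(min(M, N) + 1)
    return np.interp(r, j, (M - j) * (N - j))

print(dmt(0, 4, 4), dmt(1, 4, 4), dmt(2.5, 4, 4))   # 16.0, 9.0, 2.5
```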

68 M = N = 4 example [Figure: $d^*(r)$ for $M = N = 4$, $T \ge 7$: piecewise linear curve from $(0,16)$ down to $(4,0)$.]

69 Problem statement (1) i.i.d. real Gaussian source $\sim \mathcal{N}(0,1)$. A $K$-to-$(M\times T)$ source-channel encoder is a mapping $\mathcal{SC}: \mathbb{R}^K \to \mathbb{C}^{M\times T}$ that maps source blocks $s \in \mathbb{R}^K$ onto channel codewords $X$. A source-channel decoder is a mapping $\mathbb{C}^{M\times T} \to \mathbb{R}^K$ that maps the channel output $Y = [y_1,\dots,y_T]$ into an approximation $\hat{s}$ of the source block. The average quadratic distortion is defined by $$D(\rho) = \frac{1}{K}\, E[\|s - \hat{s}\|^2]$$ where the expectation is with respect to $s$, $H$ and the channel noise. The spectral efficiency of the encoder is defined as $\eta = K/T = W_s/W_c$.

70 Problem statement (2) Consider a family of source-channel coding schemes $\{\mathcal{SC}_\eta(\rho)\}$ of spectral efficiency $\eta$. We define the distortion SNR exponent of the family as the limit $$a(\eta) = -\lim_{\rho\to\infty}\frac{\log D(\rho)}{\log\rho}$$ The distortion SNR exponent of the channel, $a^*(\eta)$, is the supremum, over all possible coding families, of $a(\eta)$.

71 Main results (1) Theorem 1 [Exponent achievable by separation]. The distortion SNR exponent $$a_{sep}(\eta) = \frac{2\left(j\, d^*(j-1) - (j-1)\, d^*(j)\right)}{2 + \eta\left(d^*(j-1) - d^*(j)\right)}, \quad \eta \in \left[\frac{2(j-1)}{d^*(j-1)},\ \frac{2j}{d^*(j)}\right]$$ for $j = 1,\dots,M$, is achievable by a tandem source-channel coding scheme. Theorem 2 [Informed transmitter upper bound]. The optimal distortion SNR exponent $a^*(\eta)$ is upper-bounded by $$a_{ub}(\eta) = \sum_{i=1}^{M} \min\left\{\frac{2}{\eta},\ 2i - 1 + N - M\right\}$$

72 Main results (2) Theorem 3 [Hybrid scheme lower bound]. Hybrid digital-analog (HDA) space-time coding (see next!) achieves the following exponent: for $\eta \in \left[\frac{2(j-1)M}{M d^*(j-1) - M + j - 1},\ \frac{2jM}{M d^*(j) - M + j}\right]$, $j = 1,\dots,M-1$, $$a_{hybrid}(\eta) = 1 + \frac{\left(\frac{2}{\eta} - \frac{1}{M}\right)\left(j\, d^*(j-1) - (j-1)\, d^*(j) - 1\right)}{\frac{2}{\eta} - \frac{1}{M} + d^*(j-1) - d^*(j)}$$ for $\eta \in \left[\frac{2M(M-1)}{M(N-M) + M - 1},\ 2M\right]$, $$a_{hybrid}(\eta) = 1 + \frac{\left(\frac{2}{\eta} - \frac{1}{M}\right)\left(M(N-M+1) - 1\right)}{\frac{2}{\eta} - \frac{1}{M} + N - M + 1}$$ and $$a_{hybrid}(\eta) = \frac{2M}{\eta}, \quad \eta \ge 2M$$

73 Main results (3) Corollary 1 [Characterization of $a^*(\eta)$ for $\eta \ge 2M$]. For $\eta \ge 2M$, $a^*(\eta) = 2M/\eta$ and it is achieved by the HDA space-time coding scheme. Proof: for $\eta \ge 2M$, $a_{ub}(\eta) = a_{hybrid}(\eta)$. Corollary 2 [Characterization of $a^*(\eta)$ for $M = N = 1$]. For $M = N = 1$, $$a^*(\eta) = a_{hybrid}(\eta) = \begin{cases} 1 & \eta \le 2 \\ 2/\eta & \eta \ge 2 \end{cases} \qquad (1)$$ and it is achieved by the HDA coding scheme. Proof: for $M = N = 1$, $a_{ub}(\eta) = a_{hybrid}(\eta)$.

74 Scalar channel M = N = 1 [Figure: SNR exponents $a^*(\eta)$ and $a_{sep}(\eta)$ versus $\eta$ for the scalar channel $M = N = 1$.]

75 MIMO channel [Figure: $a_{ub}(\eta)$, $a_{hybrid}(\eta)$ and $a_{sep}(\eta)$ versus $\eta$ for a MIMO channel.]

76 MIMO channel M = N = 4 [Figure: SNR exponent versus $\eta$ for $M = N = 4$: $a_{ub}(\eta)$, $a_{hybrid}(\eta)$, $a_{sep}(\eta)$, and analog I-Q modulation.]

77 Proof of Theorem 1 (main ideas) A tandem source-channel coding scheme consists of the concatenation of a quantizer $Q$, of rate $R_s$ nat/source sample, with a space-time code of rate $R_c$ nat/channel use. Since $R_s K = R_c T$, we have $R_c = \eta R_s$. The end-to-end distortion is achievable: $$D_{sep}(R_s) \le D_Q(R_s) + \kappa P(e)$$ Using $D_Q(R_s) = D(R_s) = \exp(-2R_s) \doteq \rho^{-2r_s}$ and $P(e) \doteq \rho^{-d^*(r_c)}$, we obtain $$D_{sep}(\rho) \doteq \rho^{-\frac{2}{\eta}r_c} + \kappa\,\rho^{-d^*(r_c)}$$ which yields $\frac{2}{\eta}\, r_c = d^*(r_c)$ and $a_{sep}(\eta) = 2r_c/\eta$.
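
The fixed-point equation $\frac{2}{\eta}r_c = d^*(r_c)$ is easy to solve numerically; a minimal sketch ($M = N = 2$ chosen here as a test case):

```python
import numpy as np
from scipy.optimize import brentq

# Solve (2/eta) r = d*(r); the separated-scheme exponent is a_sep = 2r/eta.
def dmt(r, M, N):
    j = np.arange(min(M, N) + 1)
    return np.interp(r, j, (M - j) * (N - j))

M = N = 2
for eta in (0.5, 1.0, 2.0, 4.0):
    r = brentq(lambda r: (2 / eta) * r - dmt(r, M, N), 1e-9, M)
    print(eta, 2 * r / eta)   # a_sep(eta): 2.286, 1.6, 1.0, 0.667
```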

78 Proof of Theorem 2 (main ideas) We assume a tandem scheme that, for any realization of $H$, chooses the coding rate $R_c(H)$ equal to the capacity of the MIMO channel with matrix $H$, and the quantization rate $R_s = R_c(H)/\eta$. We use $R_c(H) = \log\det\left(I + \rho H H^H\right)$. We obtain $$D(\rho) \ge E\left[\frac{1}{\det(I + \rho H H^H)^{2/\eta}}\right]$$ The large-SNR behavior of this quantity can be analyzed by using the same technique used in Zheng and Tse (Wishart distribution, Varadhan's lemma).

79 Proof of Theorem 3: case $\eta \le 2M$ [Block diagram: $s_1,\dots,s_K$ → quantizer ($K r_s\log\rho$ bits) → space-time encoder → $X$; the reconstructions $\hat{s}_1,\dots,\hat{s}_K$ are subtracted to form the errors $e_1,\dots,e_K$, which feed an analog space-time encoder.]

80 Proof of Theorem 3: case $\eta > 2M$ [Block diagram: $s_1,\dots,s_{K_1}$ → quantizer ($K_1 r_s\log\rho$ bits) → space-time encoder; $s_{K_1+1},\dots,s_K$ → analog space-time encoder; the two outputs are superimposed with weights $\sqrt{1-\beta}$ and $\sqrt{\beta}$ to form $X$.]

81 HDA code construction: key observations The exponent $a_{hybrid}(\eta)$ is achievable for any source with finite variance and by any scheme with finite block length $K$, provided that: 1. the quantizer distortion satisfies $D_Q(\rho) \doteq \rho^{-2r_s}$ at rate $R_s = r_s\log\rho$; 2. the space-time code error probability satisfies $P_e(\rho) \doteq \rho^{-d^*(r_c)}$ at rate $R_c = r_c\log\rho$. These conditions are much simpler to satisfy. We propose to use cyclic division algebra codes [Belfiore et al., IT 05; Elia, Kumar et al., ISIT 05] and scalar quantization.

82 HDA scheme for bandwidth expansion [Block diagram: the $K$ source samples go through a tandem encoder ($Q$ → $m$ → $C$, $T_d$ channel uses); the reconstruction/quantization error is sent in analog form and multiplexed (MUX) with the digital codeword.]

83 HDA scheme for bandwidth compression [Block diagram: the $K$-sample source is demultiplexed; $K(1 - 2M/\eta)$ samples go through a tandem encoder ($Q$ → $m$ → $C$, $T$ channel uses), and $K(2M/\eta)$ samples are scaled and sent in analog form.]

84 Recent results Schemes based on multiple layers, and generalized dimension splitting and superposition: best known performance achieved so far... see Bhattad, Narayanan and Caire, Asilomar 06. In particular, for the case $N > M$ we show that a superposition scheme with a large number of layers and suitable power allocation achieves the optimal exponent for $\eta \ge \frac{2M}{N-M+1}$. Schemes with partial channel state information at the transmitter: we consider a CSIT feedback channel of fixed cardinality $K$ ($\log_2 K$ bits per feedback message). We show that very large improvements are possible even for moderate $K$, and when $K = O(\log\frac{1}{\eta})$ the optimal exponent $a^*(\eta) = \frac{2M}{\eta}$ is achievable. In general, the optimal scheme makes use of power control and rate allocation. Of course, adaptive rate and power is needed only for low $\eta$ (bandwidth expansion).

85 Part IV: Practical joint source-channel coding

86 Joint Source-Channel Coding: finite-length issues Even though separation is optimal (classical S → Tx → Ch → Rx → U configuration), blindly using independently optimized source and channel coding schemes may lead to poor performance at practical low complexity and block lengths that are not asymptotically large. We focus on practical finite-length schemes in the regime where separation is asymptotically optimal. JSCC has been addressed for toy sources and very special cases (e.g., Gaussian source over Gaussian channel under quadratic distortion, binary source over BSC with Hamming distortion, etc.). Next, we shall handle *real-life* sources and *general* channels.

87 Conceptual Structure of a Transform Coder [Block diagram: $s \in \mathbb{R}^K$ → transform $W(\cdot)$ → $z$ → quantizer $Q(\cdot)$ → $u \in F_2^{(P+1)\times K}$ → entropy coding → $b \in F_2^B$; a probability model estimator produces $\hat{\theta}$, sent as parameters for reconstruction (header).]

88 Probability model, ML estimation and entropy coding The entropy encoder is based on a parametric probability model $\{P_\theta^{(K)}(\cdot) : \theta\in\Theta,\ K = 1,2,\dots\}$. Maximum likelihood estimate: $$\hat{\theta} = \arg\max_{\theta\in\Theta} P_\theta^{(K)}(u)$$ Entropy coding: assign length $$B = -\log P_{\hat{\theta}}^{(K)}(u)$$

89 Operational Shannon limit For large $K$ and a consistent model estimator such that $\hat{\theta} \to \theta$, we have that $B \approx K H_\theta(U)$. When transmitting over a channel with capacity $C$, the best possible efficiency $\eta = K/N$ (source samples per channel use) is given by $\eta = C/H_\theta(U)$. The achieved distortion is $D_Q$. The point $(C/H_\theta(U), D_Q)$ on the efficiency-distortion plane is the Shannon limit for any system based on the quantizer $Q$. From well-known results (Ziv, Feder, Zamir), the optimal efficiency for the same distortion, $C/R(D_Q)$, is not too far.

90 Key Idea: Using a Single Linear Coding Stage For simplicity, we restrict attention to binary-input output-symmetric (BIOS) channels. We merge entropy coding and channel coding into a single non-catastrophic encoding operation that maps the redundant sequence $u$ linearly and directly into the channel codeword $x$. Since binary linear codes are particularly simple and well understood, we shall implement this linear mapping in layers, bit-plane by bit-plane. We consider $P+1$ linear codes $C_0,\dots,C_P$ with block lengths $N_0,\dots,N_P$ and generator matrices $G_0,\dots,G_P$. We obtain the codeword $x$ as the concatenation of $x^{(0)},\dots,x^{(P)}$, where $$x^{(p)} = u^{(p)} G_p$$
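
A toy sketch of the layered linear mapping (the block lengths $N_p$ and the random generator matrices are placeholders; the actual scheme uses the punctured turbo-code generator matrices described later):

```python
import numpy as np

# Each bit-plane u^(p) is mapped to x^(p) = u^(p) G_p over F_2; the channel
# codeword is the concatenation of the per-plane outputs.
rng = np.random.default_rng(2)
K, lengths = 16, [24, 12, 8]                    # N_p per plane (assumed)

planes = [rng.integers(0, 2, K) for _ in lengths]        # u^(0)..u^(2)
Gs = [rng.integers(0, 2, (K, Np)) for Np in lengths]     # generator matrices
x = np.concatenate([(u @ G) % 2 for u, G in zip(planes, Gs)])
print(x.shape)   # (44,) — the concatenated channel codeword
```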

91 Joint Source-Channel Decoding: soft-bits Let (without loss of generality) the $k$-th scalar quantizer be sign-magnitude: $$u_{0,k} = \begin{cases} 0 & z_k \ge 0 \\ 1 & z_k < 0 \end{cases}, \qquad (u_{1,k},\dots,u_{P,k}) = \arg\min_{v\in F_2^P}\left|\, |z_k| - \Delta_k\sum_{p=1}^{P} v_p\, 2^{-p} \right|$$ The corresponding reconstruction function is given by $$Q_k^{-1}(u_k) = (-1)^{u_{0,k}}\,\Delta_k \sum_{p=1}^{P} u_{p,k}\, 2^{-p}$$

92 We use a joint source-channel decoder based on Belief Propagation that estimates the posterior marginal probabilities $$\{P(u_{p,k}|y) : p = 0,\dots,P,\ k = 1,\dots,K\}$$ These are used to obtain the MMSE estimator $\hat{z}_k = E[z_k|y]$ of the transform coefficients. Assuming zero-mean quantization noise statistically independent of the channel output $y$, this takes on the appealingly simple form $$\hat{z}_k = \Delta_k \tanh\left(\frac{\lambda_{0,k}}{2}\right)\sum_{p=1}^{P}\frac{2^{-p}}{1 + e^{\lambda_{p,k}}}$$ where we define the a posteriori log-likelihood ratio (LLR) for symbol $u_{p,k}$ as $$\lambda_{p,k} = \log\frac{P(u_{p,k} = 0|y)}{P(u_{p,k} = 1|y)}$$
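
The soft-bit reconstruction formula is straightforward to implement; a minimal sketch (the per-coefficient scaling $\Delta_k$ is passed in as an assumption):

```python
import numpy as np

def soft_reconstruct(lam, delta):
    """MMSE estimate of transform coefficients from bit-plane LLRs.

    lam: (P+1, K) array; row 0 is the sign plane, rows 1..P the magnitudes.
    delta: (K,) per-coefficient quantizer scaling (an assumption here).
    """
    sign = np.tanh(lam[0] / 2)                  # E[(-1)^{u_0} | y]
    p1 = 1.0 / (1.0 + np.exp(lam[1:]))          # P(u_p = 1 | y)
    weights = 2.0 ** -np.arange(1, lam.shape[0])
    magnitude = (weights[:, None] * p1).sum(axis=0)
    return delta * sign * magnitude

# Example: 3 magnitude planes, 4 coefficients, highly reliable LLRs
# (sign bit = 0, all magnitude bits = 1).
lam = np.vstack([np.full(4, 8.0), np.full((3, 4), -8.0)])
print(soft_reconstruct(lam, np.ones(4)))        # ≈ +0.875 per coefficient
```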

93 Proposed JSCC Scheme [Block diagram: $s \in \mathbb{R}^K$ → transform $W(\cdot)$ → $z$ → quantizer $Q(\cdot)$ → $u \in F_2^{(P+1)\times K}$ → linear coding (with rate selection) → $x \in F_2^N$; a probability model estimator produces $\hat{\theta}$, sent as parameters for reconstruction (header).]

94 Optimality Theorem 1. Consider a binary source $V$ defined by the sequence of $K$-dimensional joint probability distributions $\{P_V^{(K)}(v) : K = 1,2,\dots\}$ over $F_2^K$. Define the sup-entropy rate $\bar{H}(V)$ of $V$ as the limsup in probability of the sequence of random variables $$\frac{1}{K}\log_2\frac{1}{P_V^{(K)}(v)}$$ that is, the infimum of all values $h$ for which $$P\left(\frac{1}{K}\log_2\frac{1}{P_V^{(K)}(v)} \ge h\right) \to 0 \quad \text{as } K\to\infty.$$ Consider a system that, for each length $K$, maps source sequences $v$ into binary codewords $c = vG$ of length $N$, and transmits $c$ over a BIOS channel with capacity $C$. Let $y$ denote the channel output and $\psi: y \mapsto \hat{v}$ be a suitable decoder. For any $\delta,\epsilon > 0$ and sufficiently large $K$ there exists a $K\times N$ matrix $G$ and a decoder $\psi$ such that $P(\hat{v} \ne v) \le \epsilon$ and $K/N \ge C/\bar{H}(V) - \delta$.

95 Consequence on Code Design Theorem 1 states that there exists a single sequence of encoding matrices, with increasing block length $K$ and efficiency arbitrarily close to $C/H$, such that the decoding error probability vanishes for all the source statistics with parameters $\theta\in\Theta$ such that $H_\theta(U) = H$. This allows us to design one set of coding matrices $\{G_p\}$ for each value $H \in [0,1]$ of the source entropy rate. In practice, we define a fine quantization grid on the interval $[0,1]$ and design a coding matrix for each quantized rate. When encoding the $p$-th bit-plane, we compute the conditional empirical entropy rate $$\hat{H}(U_p|U_{p+1},\dots,U_P) = -\frac{1}{K}\log_2 P_{\hat{\theta}}^{(K)}(u^{(p)}|u^{(p+1)},\dots,u^{(P)})$$ and choose the corresponding (pre-designed) coding matrix.

96 A Specific Example: JPEG2000-like source coder We can show (too long!!) that the bit-plane probability modeler of JPEG2000 reduces to a conditionally Markov model. [Block diagram: bit values from the upper bit-planes feed a context computer producing the context $\kappa$; together with a state shift register, a random bit generator with law $P_\theta(u|\kappa)$ produces $u_{p,k}$.]

97 Validation of the modeler and parameter representation The total output length is given by $B_{data} + B_{model}$. We have optimized the number of bits necessary to represent the estimated parameter $\hat{\theta}$, and verified that we can closely approach the Rissanen MDL bound. We have compared the compression-only performance of our modeler (with arithmetic coding and including the model redundancy) with that obtained by JPEG2000. [Table: compression performance of the proposed algorithm versus the JPEG2000 encoder on the Goldhill and Lena images; numerical values not recovered from the source.]

98 Factor Graph The factor graph corresponding to the conditional Markov chain of a bit-plane yields a trellis. [Factor graph: state variables $\pi_{p,1}, \pi_{p,2},\dots,\pi_{p,K}$ form a chain, with the bit variables $u_{p,1}, u_{p,2},\dots,u_{p,K}$ attached; function nodes collect the conditioning sets $S_{p,k}$ from the upper bit-planes.]

99 In our case: a 32-state time-varying trellis. [Trellis diagram: the table of state/context transitions $(\pi_P, \kappa)$ is omitted.]

100 Linear Coding using Punctured Turbo Codes Focus on the encoder of a generic $p$-th bit-plane and drop the index $p$. We consider a TC family with two identical component binary Recursive Convolutional Codes (RCC) of rate 1: $x(D) = \frac{b(D)}{a(D)}\, u(D)$. We use a tail-biting encoder, hence the mapping $(u_1,\dots,u_K) \to (x_1,\dots,x_K)$ is given by $x = u A^{-1} B$, where $A$ is the $K\times K$ circulant matrix with first row $(a_0, a_1,\dots,a_\mu, \underbrace{0,\dots,0}_{K-\mu-1})$ and $B$ is the circulant matrix with first row $(b_0, b_1,\dots,b_\mu, \underbrace{0,\dots,0}_{K-\mu-1})$.

101 Let $\Pi_1, \Pi_2$ denote two $K\times K$ permutation matrices (interleavers), and $R_0, R_1$ and $R_2$ denote three puncturing matrices, of dimension $K\times n_0$, $K\times n_1$ and $K\times n_2$, respectively. Then, a generator matrix for the TC with given RCC component, interleavers and puncturing is given by $$G = \left[\, R_0 \quad \Pi_1 A^{-1} B R_1 \quad \Pi_2 A^{-1} B R_2 \,\right]$$

102 Guidelines for Code Design Intuitively, $G$ should mimic as closely as possible the generator matrix of a random linear code. In fact, $G$ should map the statistically dependent and marginally non-uniform binary symbols of the input $u$ into channel symbols $x$ with the required (marginal) capacity-achieving uniform distribution. Lemma 1. $A$ defined above is non-singular if and only if $a(D)$ is not a divisor of zero in the ring $F_2[D]$ modulo $1 + D^K$. In particular, $A$ is invertible if and only if $a(D)$ and $1 + D^K$ are relatively prime. Since $A^{-1}$ is a circulant matrix and, for what was said before, we wish that its rows look as random as possible, we shall choose $a(D)$ to be a primitive polynomial of degree $\mu$. The existence of $A^{-1}$ is then guaranteed by the following Lemma 2. $A$ is invertible if and only if $2^\mu - 1$ does not divide $K$.

103 By choosing $a(D)$ primitive, the feedback shift register with coefficients given by $a(D)$ in the RCC encoder generates an m-sequence of period $2^\mu - 1$, with Hamming weight $2^{\mu-1}$. This has the following nice consequence: Lemma 3. If $2^\mu - 1$ does not divide $K$, the circulant matrix $A^{-1}$ has first row $\tau$ formed by the concatenation of $\lfloor K/(2^\mu - 1)\rfloor$ periods of the m-sequence generated by $a(D)$, plus $K \bmod (2^\mu - 1)$ extra symbols. In particular, for $K \gg 2^\mu - 1$ and large $\mu$ we have that the Hamming weight of $\tau$ is close to $K/2$. For example, consider $a(D) = 1 + D^3 + D^4$ (or $(23)_8$) and $K = 16$. The corresponding first row of $A^{-1}$ is equal to $\tau = [1,0,0,0,1,0,0,1,1,0,1,0,1,1,1,1]$ and has Hamming weight 9, so that $9/16 = 0.5625$. If we consider length $K = 64$, we would obtain $\tau$ with Hamming weight 33, so that $33/64 = 0.515625$, which is already quite close to $1/2$. Lemma 4. For $K \gg 2^\mu - 1$ and non-uniform i.i.d. encoder input $u(D)$ with $P(u_k = 1) = \rho \in (0,1)$, the circulation state is almost uniformly distributed
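
The example is easy to reproduce: the sketch below generates the m-sequence of $a(D) = 1 + D^3 + D^4$ via the recurrence $s_t = s_{t-3} \oplus s_{t-4}$ (the seed $[1,0,0,0]$ is an assumption consistent with the $\tau$ shown above) and checks the Hamming weights for $K = 16$ and $K = 64$:

```python
# Check of Lemma 3's example: first row tau of A^{-1} for a(D)=1+D^3+D^4,
# i.e., the m-sequence s_t = s_{t-3} XOR s_{t-4} of period 2^4 - 1 = 15.
def first_row_tau(K, taps=(3, 4), seed=(1, 0, 0, 0)):
    s = list(seed)
    while len(s) < K:
        s.append(s[-taps[0]] ^ s[-taps[1]])   # s_t = s_{t-3} ^ s_{t-4}
    return s[:K]

tau16 = first_row_tau(16)
print(tau16, sum(tau16))              # [1,0,0,0,1,0,0,1,1,0,1,0,1,1,1,1], 9
print(sum(first_row_tau(64)), 33/64)  # 33, 0.515625
```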

104 over the RCC encoder state space. Furthermore, the encoder state at any position in the trellis is also almost uniformly distributed. $b(D)$ is optimized by educated semi-exhaustive search. For given RCC generators, the permutations $\Pi_1$ and $\Pi_2$ were chosen at random, by trial and error. For $K$ not too small, the effect of the interleaver permutations on the end-to-end distortion of the proposed scheme is minimal. In fact, one significant advantage of the proposed JSCC scheme is that its performance is not dominated by the error-floor region. In this respect, the proposed JSCC scheme puts much less stress on the code design than a conventional SSCC scheme!

105 [Figure 1: Threshold effect for a Bernoulli source with $H(U) = 0.5$ transmitted over a BSC: BER versus the coding rate $R_c = K/(n_1+n_2)$.]

106 Successive decoding of the bit-planes [Block diagram: the bit-planes are turbo-decoded successively; the decoder for plane $p$ takes $y$, $\hat{\theta}$ and the previously decoded planes $\hat{u}^{(P)}, \hat{u}^{(P-1)},\dots,\hat{u}^{(p+1)}$, and outputs $\hat{u}^{(p)}$.]

107 Factor Graph of the $p$-th Decoder [Factor graph: inputs from previous layers' decisions $\hat{u}^{(p+1)},\dots,\hat{u}^{(P)}$ feed the source Markov chain (Markov source states $\pi$) attached to the $u^{(p)}$ variable nodes; two turbo-code permutations $\Pi_1$ and $\Pi_2$ connect them to the RCC inputs (permuted versions of $u^{(p)}$), the RCC states and the RCC outputs $x$, which connect to the BIOS channel transition probabilities and the symbols $y$ from the channel output (punctured symbols marked).]

108 Numerical Experiments: Goldhill Test Image over BSC [Figure: PSNR versus $\eta$ for the Goldhill test image over a BSC, JSCC versus SSCC; the vertical line marks $\eta = C/H(U)$; operating points a)-f).]

109 Numerical Experiments: Lena Test Image over BSC [Figure: PSNR versus $\eta$ for the Lena test image over a BSC, JSCC versus SSCC; the vertical line marks $\eta = C/H(U)$; operating points a)-f).]

110 Snapshots: Goldhill, conventional SSCC [Image snapshots a)-f).]

111 Snapshots: Goldhill, proposed JSCC [Image snapshots a)-f).]

112 Snapshots: Lena, conventional SSCC [Image snapshots a)-f).]

113 Snapshots: Lena, proposed JSCC [Image snapshots a)-f).]

114 Conclusions and current/future work Other families of linear codes; in particular, fountain codes. Other families of sources; in particular, audio, speech and video... (a long way to go!). Concatenation of the proposed scheme with DUDE... exploit the random-noise effect. Spectrally efficient transmission via superposition coding, mapping onto high-order modulations. Incorporate the proposed JSCC scheme into various Hybrid Digital Analog (HDA) schemes as an efficient way to implement the digital component (multicasting of a common source).


More information

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case

Feedback Capacity of the Gaussian Interference Channel to Within Bits: the Symmetric Case 1 arxiv:0901.3580v1 [cs.it] 23 Jan 2009 Feedback Capacity of the Gaussian Interference Channel to Within 1.7075 Bits: the Symmetric Case Changho Suh and David Tse Wireless Foundations in the Department

More information

Error Correction Methods

Error Correction Methods Technologies and Services on igital Broadcasting (7) Error Correction Methods "Technologies and Services of igital Broadcasting" (in Japanese, ISBN4-339-06-) is published by CORONA publishing co., Ltd.

More information

Source-Channel Coding Techniques in the Presence of Interference and Noise

Source-Channel Coding Techniques in the Presence of Interference and Noise Source-Channel Coding Techniques in the Presence of Interference and Noise by Ahmad Abou Saleh A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1. Overview. Ragnar Thobaben CommTh/EES/KTH

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1. Overview. Ragnar Thobaben CommTh/EES/KTH : Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Wednesday, June 1, 2016 09:15-12:00, SIP 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication

More information

Coding on a Trellis: Convolutional Codes

Coding on a Trellis: Convolutional Codes .... Coding on a Trellis: Convolutional Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 6th, 2008 Telecommunications Laboratory (TUC) Coding on a Trellis:

More information

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels

A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu

More information

Energy State Amplification in an Energy Harvesting Communication System

Energy State Amplification in an Energy Harvesting Communication System Energy State Amplification in an Energy Harvesting Communication System Omur Ozel Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland College Park, MD 20742 omur@umd.edu

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Multiple-Input Multiple-Output Systems

Multiple-Input Multiple-Output Systems Multiple-Input Multiple-Output Systems What is the best way to use antenna arrays? MIMO! This is a totally new approach ( paradigm ) to wireless communications, which has been discovered in 95-96. Performance

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Introduction to Wireless & Mobile Systems. Chapter 4. Channel Coding and Error Control Cengage Learning Engineering. All Rights Reserved.

Introduction to Wireless & Mobile Systems. Chapter 4. Channel Coding and Error Control Cengage Learning Engineering. All Rights Reserved. Introduction to Wireless & Mobile Systems Chapter 4 Channel Coding and Error Control 1 Outline Introduction Block Codes Cyclic Codes CRC (Cyclic Redundancy Check) Convolutional Codes Interleaving Information

More information

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1

Lecture 9: Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 : Diversity-Multiplexing Tradeoff Theoretical Foundations of Wireless Communications 1 Rayleigh Friday, May 25, 2018 09:00-11:30, Kansliet 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless

More information

Introduction to Convolutional Codes, Part 1

Introduction to Convolutional Codes, Part 1 Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes

More information

On the Performance of. Golden Space-Time Trellis Coded Modulation over MIMO Block Fading Channels

On the Performance of. Golden Space-Time Trellis Coded Modulation over MIMO Block Fading Channels On the Performance of 1 Golden Space-Time Trellis Coded Modulation over MIMO Block Fading Channels arxiv:0711.1295v1 [cs.it] 8 Nov 2007 Emanuele Viterbo and Yi Hong Abstract The Golden space-time trellis

More information

arxiv: v2 [cs.it] 28 May 2017

arxiv: v2 [cs.it] 28 May 2017 Feedback and Partial Message Side-Information on the Semideterministic Broadcast Channel Annina Bracher and Michèle Wigger arxiv:1508.01880v2 [cs.it] 28 May 2017 May 30, 2017 Abstract The capacity of the

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

On the Required Accuracy of Transmitter Channel State Information in Multiple Antenna Broadcast Channels

On the Required Accuracy of Transmitter Channel State Information in Multiple Antenna Broadcast Channels On the Required Accuracy of Transmitter Channel State Information in Multiple Antenna Broadcast Channels Giuseppe Caire University of Southern California Los Angeles, CA, USA Email: caire@usc.edu Nihar

More information

arxiv: v1 [cs.it] 5 Feb 2016

arxiv: v1 [cs.it] 5 Feb 2016 An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes Farhad Shirani and S. Sandeep Pradhan Dept. of Electrical Engineering and Computer Science Univ. of Michigan,

More information

On Multiple User Channels with State Information at the Transmitters

On Multiple User Channels with State Information at the Transmitters On Multiple User Channels with State Information at the Transmitters Styrmir Sigurjónsson and Young-Han Kim* Information Systems Laboratory Stanford University Stanford, CA 94305, USA Email: {styrmir,yhk}@stanford.edu

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

Network coding for multicast relation to compression and generalization of Slepian-Wolf

Network coding for multicast relation to compression and generalization of Slepian-Wolf Network coding for multicast relation to compression and generalization of Slepian-Wolf 1 Overview Review of Slepian-Wolf Distributed network compression Error exponents Source-channel separation issues

More information

Interactions of Information Theory and Estimation in Single- and Multi-user Communications

Interactions of Information Theory and Estimation in Single- and Multi-user Communications Interactions of Information Theory and Estimation in Single- and Multi-user Communications Dongning Guo Department of Electrical Engineering Princeton University March 8, 2004 p 1 Dongning Guo Communications

More information

Turbo Codes for Deep-Space Communications

Turbo Codes for Deep-Space Communications TDA Progress Report 42-120 February 15, 1995 Turbo Codes for Deep-Space Communications D. Divsalar and F. Pollara Communications Systems Research Section Turbo codes were recently proposed by Berrou, Glavieux,

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

EE 4TM4: Digital Communications II. Channel Capacity

EE 4TM4: Digital Communications II. Channel Capacity EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

ACOMMUNICATION situation where a single transmitter

ACOMMUNICATION situation where a single transmitter IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 1875 Sum Capacity of Gaussian Vector Broadcast Channels Wei Yu, Member, IEEE, and John M. Cioffi, Fellow, IEEE Abstract This paper

More information

On Compound Channels With Side Information at the Transmitter

On Compound Channels With Side Information at the Transmitter IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 52, NO 4, APRIL 2006 1745 On Compound Channels With Side Information at the Transmitter Patrick Mitran, Student Member, IEEE, Natasha Devroye, Student Member,

More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 4, No. 1, June 007, 95-108 Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System A. Moulay Lakhdar 1, R. Méliani,

More information

Random Access: An Information-Theoretic Perspective

Random Access: An Information-Theoretic Perspective Random Access: An Information-Theoretic Perspective Paolo Minero, Massimo Franceschetti, and David N. C. Tse Abstract This paper considers a random access system where each sender can be in two modes of

More information

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets

Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Multiaccess Channels with State Known to One Encoder: A Case of Degraded Message Sets Shivaprasad Kotagiri and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame,

More information

One Lesson of Information Theory

One Lesson of Information Theory Institut für One Lesson of Information Theory Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels

Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Submitted: December, 5 Abstract In modern communication systems,

More information

BASICS OF DETECTION AND ESTIMATION THEORY

BASICS OF DETECTION AND ESTIMATION THEORY BASICS OF DETECTION AND ESTIMATION THEORY 83050E/158 In this chapter we discuss how the transmitted symbols are detected optimally from a noisy received signal (observation). Based on these results, optimal

More information

Broadcasting over Fading Channelswith Mixed Delay Constraints

Broadcasting over Fading Channelswith Mixed Delay Constraints Broadcasting over Fading Channels with Mixed Delay Constraints Shlomo Shamai (Shitz) Department of Electrical Engineering, Technion - Israel Institute of Technology Joint work with Kfir M. Cohen and Avi

More information

Lossy Distributed Source Coding

Lossy Distributed Source Coding Lossy Distributed Source Coding John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 202 Lossy Distributed Source Coding Problem X X 2 S {,...,2 R } S 2 {,...,2 R2 } Ẑ Ẑ 2 E d(z,n,

More information

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus Turbo Compression Andrej Rikovsky, Advisor: Pavol Hanus Abstract Turbo codes which performs very close to channel capacity in channel coding can be also used to obtain very efficient source coding schemes.

More information

The Robustness of Dirty Paper Coding and The Binary Dirty Multiple Access Channel with Common Interference

The Robustness of Dirty Paper Coding and The Binary Dirty Multiple Access Channel with Common Interference The and The Binary Dirty Multiple Access Channel with Common Interference Dept. EE - Systems, Tel Aviv University, Tel Aviv, Israel April 25th, 2010 M.Sc. Presentation The B/G Model Compound CSI Smart

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Digital Image Processing Lectures 25 & 26

Digital Image Processing Lectures 25 & 26 Lectures 25 & 26, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2015 Area 4: Image Encoding and Compression Goal: To exploit the redundancies in the image

More information

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Shuangqing Wei, Ragopal Kannan, Sitharama Iyengar and Nageswara S. Rao Abstract In this paper, we first provide

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 62, NO. 11, NOVEMBER

IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 62, NO. 11, NOVEMBER IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 6, NO., NOVEMBER 04 3997 Source-Channel Coding for Fading Channels With Correlated Interference Ahmad Abou Saleh, Wai-Yip Chan, and Fady Alajaji, Senior Member,

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

Towards control over fading channels

Towards control over fading channels Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti

More information

CROSS LAYER CODING SCHEMES FOR BROADCASTING AND RELAYING. A Dissertation MAKESH PRAVIN JOHN WILSON

CROSS LAYER CODING SCHEMES FOR BROADCASTING AND RELAYING. A Dissertation MAKESH PRAVIN JOHN WILSON CROSS LAYER CODING SCHEMES FOR BROADCASTING AND RELAYING A Dissertation by MAKESH PRAVIN JOHN WILSON Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements

More information

On the Secrecy Capacity of Fading Channels

On the Secrecy Capacity of Fading Channels On the Secrecy Capacity of Fading Channels arxiv:cs/63v [cs.it] 7 Oct 26 Praveen Kumar Gopala, Lifeng Lai and Hesham El Gamal Department of Electrical and Computer Engineering The Ohio State University

More information