IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 62, NO. 11, NOVEMBER 2016

Nonasymptotic Noisy Lossy Source Coding

Victoria Kostina and Sergio Verdú

Abstract—This paper shows new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source.

Index Terms—Achievability, converse, finite blocklength regime, lossy data compression, noisy data compression, noisy source coding, noisy sources, strong converse, dispersion, memoryless sources, Shannon theory.

I. INTRODUCTION

CONSIDER a lossy compression setup in which the encoder has access only to a noise-corrupted version $X$ of a source $S$, and we are interested in minimizing, in some stochastic sense, the distortion $d(S, Z)$ between the true source $S$ and its rate-constrained representation $Z$ (see Fig. 1). This problem arises if the object to be compressed is the result of an uncoded transmission over a noisy channel, or if the observed data is subject to errors inherent to the measurement system. Examples include speech in a noisy environment and photographs corrupted by noise introduced by the image sensor. Since we are concerned with reproducing the original noiseless source rather than preserving the noise, the distortion measure is defined with respect to the clean source.

The noisy source coding setting was introduced by Dobrushin and Tsybakov [2], who showed that, when the goal is to minimize the average distortion, the noisy source coding problem is asymptotically equivalent to a certain surrogate noiseless source coding problem.

Manuscript received November 13, 2013; revised December 18, 2014; accepted March 5, 2016. Date of publication May 3, 2016; date of current version October 18, 2016. This work was supported in part by the Division of Computing and Communication Foundations through the National Science Foundation under Grant CCF and in part by the Division of Electrical, Communications and Cyber Systems through the Center for Science of Information, an NSF Science and Technology Center, under Grant CCF. This paper was presented at the 2013 IEEE Information Theory Workshop [1].

V. Kostina is with the California Institute of Technology, Pasadena, CA 91125 USA (vkostina@caltech.edu). S. Verdú is with Princeton University, Princeton, NJ 08544 USA (verdu@princeton.edu). Communicated by A. B. Wagner, Associate Editor for Shannon Theory.
Digital Object Identifier 10.1109/TIT

Specifically, for a stationary memoryless source with single-letter distribution $P_S$ observed through a stationary memoryless channel with single-letter transition probability kernel $P_{X|S}$, the noisy rate-distortion function under a separable distortion measure is given by

$$R(d) = \min_{\substack{P_{Z|X}:\ \mathbb{E}[d(S,Z)] \le d \\ S - X - Z}} I(X; Z) \tag{1}$$

$$= \min_{P_{Z|X}:\ \mathbb{E}[\bar d(X,Z)] \le d} I(X; Z), \tag{2}$$

where the surrogate per-letter distortion measure is

$$\bar d(a, b) = \mathbb{E}\left[d(S, b) \mid X = a\right], \tag{3}$$

and $S - X - Z$ means that $S$, $X$ and $Z$ form a Markov chain in this order. The expression for $R(d)$ in (2) implies that, in the limit of infinite blocklengths, the problem is equivalent to a conventional noiseless lossy source coding problem where the distortion measure is the conditional average of the original distortion measure given the noisy observation of the source. Berger [3, p. 79] used the surrogate distortion measure (3) to streamline the proof of (2). Witsenhausen [4] explored the capability of distortion measures defined through conditional expectations, such as (3), to treat various so-called indirect rate distortion problems. Sakrison [5] showed that if both the source and its noise-corrupted version take values in a separable Hilbert space and the fidelity criterion is mean-squared error, then asymptotically, an optimal code can be constructed by first obtaining a minimum mean-square estimate of the source sequence based on its noisy observation, and then compressing the estimated sequence as if it were noise-free. Wolf and Ziv [6] observed that Sakrison's result holds even nonasymptotically, namely, that the minimum average distortion achievable in one-shot noisy compression of the object $S$ can be written as

$$D(M) = \mathbb{E}\left[(S - \mathbb{E}[S|X])^2\right] + \inf_{f,\, c}\ \mathbb{E}\left[\left(c(f(X)) - \mathbb{E}[S|X]\right)^2\right], \tag{4}$$

where the infimum is over all encoders $f \colon \mathcal{X} \to \{1,\ldots,M\}$ and all decoders $c \colon \{1,\ldots,M\} \to \hat{\mathcal{M}}$, and $\mathcal{X}$ and $\hat{\mathcal{M}}$ are the alphabets of the channel output and the decoder output, respectively. It is important to note that the choice of the mean-squared error distortion is crucial for the validity of the additive decomposition in (4). For vector quantization of a Gaussian signal corrupted by an additive independent Gaussian noise under a weighted squared error distortion measure, Ayanoğlu [7] found explicit expressions for the optimum quantization rule.
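The additive decomposition (4) is easy to verify numerically. Below is a minimal Monte Carlo sketch, assuming a scalar Gaussian example ($S$ standard normal, $X = S + N$) and a Lloyd-designed $M$-point quantizer applied to the MMSE estimate; the noise variance, $M$, and sample size are illustrative choices, not from the paper.

```python
# Monte Carlo sketch of the Wolf-Ziv decomposition (4): S ~ N(0,1),
# X = S + N with N ~ N(0, 0.25).  The MMSE estimate is E[S|X] = a*X
# with a = 1/(1 + 0.25); an M-point quantizer is then designed on the
# estimate alone (estimate-then-compress).
import numpy as np

rng = np.random.default_rng(0)
n, var_noise, M = 200_000, 0.25, 4
S = rng.standard_normal(n)
X = S + np.sqrt(var_noise) * rng.standard_normal(n)
a = 1.0 / (1.0 + var_noise)          # MMSE coefficient for this model
S_hat = a * X                        # E[S|X]

# Lloyd's algorithm on the estimate S_hat (compress-the-estimate step).
centers = np.quantile(S_hat, (np.arange(M) + 0.5) / M)
for _ in range(50):
    idx = np.abs(S_hat[:, None] - centers[None, :]).argmin(axis=1)
    for m in range(M):
        if np.any(idx == m):
            centers[m] = S_hat[idx == m].mean()

Z = centers[np.abs(S_hat[:, None] - centers[None, :]).argmin(axis=1)]
total = np.mean((S - Z) ** 2)        # end-to-end distortion
mmse = np.mean((S - S_hat) ** 2)     # E[(S - E[S|X])^2]
quant = np.mean((Z - S_hat) ** 2)    # E[(c(f(X)) - E[S|X])^2]
print(f"total {total:.4f} = mmse {mmse:.4f} + quantization {quant:.4f}")
```

Because $S - \mathbb{E}[S|X]$ is orthogonal to every function of $X$, the printed total distortion matches the sum of the two terms up to Monte Carlo noise, which is exactly the content of (4).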

Wolf and Ziv's result in (4) was extended to waveform vector quantization under weighted quadratic distortion measures and to autoregressive vector quantization under the Itakura-Saito distortion measure by Ephraim and Gray [8], and by Fischer et al. [9], who studied a model in which both encoder and decoder have access to the history of their past input and output blocks, allowing exploitation of inter-block dependence. Thus, the cascade of the optimal estimator followed by the optimal compressor achieves the minimum average distortion in those settings as well.

For lossy compression of a discrete memoryless source (DMS) corrupted by discrete memoryless noise, Weissman and Merhav [10], [11] studied the best attainable tradeoff between the exponential rates of decay of the probabilities that the codeword length and the cumulative distortion exceed respective thresholds [10], and proposed a sequential coding scheme for sequential compression of a source sequence corrupted by noise [11]. The findings in [10] imply, in particular, that the noisy excess-distortion exponent cannot be obtained from that of a clean surrogate source. Weissman [12] went on to study universally attainable error exponents in noisy source coding.

Under the logarithmic loss distortion measure [13], the noisy source coding problem reduces to the information bottleneck problem [14]. Indeed, in the information bottleneck method, the goal is to minimize $I(X; Z)$ subject to the constraint that $I(S; Z)$ exceeds a certain threshold. Both problems are equivalent because the noisy rate-distortion function under logarithmic loss is given by (2) in which $\bar d(X, Z)$ is replaced by $H(S|Z)$. The solution to the information bottleneck problem thus becomes the answer to a fundamental limit, namely, the asymptotically minimum achievable noisy source coding rate under logarithmic loss.

In this paper, we give new nonasymptotic achievability and converse bounds for the noisy source coding problem, which generalize the noiseless source coding bounds in [15]. We evaluate those bounds numerically in special cases to illuminate their tightness for all but very small blocklengths, a regime for which Shannon's random selection of reproduction points ceases to be near-optimal. We give a refined asymptotic analysis of those bounds. Specifically, as in [15] and [16], we fix the tolerable probability $\epsilon$ of exceeding distortion $d$ and we study how fast the rate-distortion function can be approached as the length $k$ of the data block we are coding over increases.¹ This regime is different from the large deviations asymptotics studied in [10], in which a rate strictly greater than the rate-distortion function is fixed and the excess distortion probability vanishes exponentially with $k$. For noiseless source coding of stationary memoryless sources with separable distortion measure, we showed previously [15] (see also [16] for an alternative proof in the finite alphabet case) that the minimum number $M$ of representation points compatible with a given probability $\epsilon$ of exceeding distortion threshold $d$ at blocklength $k$ satisfies

$$\log M^*(k, d, \epsilon) = k R(d) + \sqrt{k V(d)}\, Q^{-1}(\epsilon) + O(\log k), \tag{5}$$

where $V(d)$ is the rate-dispersion function, explicitly identified in [15], and $Q^{-1}$ denotes the inverse of the complementary standard Gaussian cdf.

¹We refer the reader to [15] for a discussion of differences between the fidelity criteria of excess and average distortion in the nonasymptotic regime.

Fig. 1. Noisy source coding.
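As a quick illustration of how (5) is used, the following sketch evaluates the Gaussian approximation $kR(d) + \sqrt{kV(d)}\,Q^{-1}(\epsilon)$ for a biased-coin source under bit-error-rate distortion, whose $R(d) = h(p) - h(d)$ and $V(d) = \mathrm{Var}[\log_2(1/P_S(S))]$ are identified in [15]; the particular $p$, $d$, $\epsilon$ below are illustrative assumptions.

```python
# Per-symbol rate suggested by the Gaussian approximation (5) for a
# binary memoryless source with bias p under Hamming distortion.
import math

def h(p):  # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def Qinv(eps):  # inverse of the complementary Gaussian cdf, by bisection
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) < eps:
            hi = mid          # Q(mid) too small: move left
        else:
            lo = mid
    return (lo + hi) / 2

p, d, eps = 0.4, 0.11, 0.1
R = h(p) - h(d)                                   # bits per source symbol
V = p * (1 - p) * math.log2((1 - p) / p) ** 2     # rate-dispersion function
for k in (100, 500, 1000, 5000):
    print(k, round(R + math.sqrt(V / k) * Qinv(eps), 4))
```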
In this paper, we show that for noisy source coding of a discrete stationary memoryless source over a discrete stationary memoryless channel under a separable distortion measure, $V(d)$ in (5) is replaced by the noisy rate-dispersion function $\tilde V(d)$, which can be expressed as

$$\tilde V(d) = V(d) + \lambda^{*2}\, \mathrm{Var}\left[d(S, Z^*) \mid X, Z^*\right], \tag{6}$$

where $\lambda^* = -R'(d)$, $V(d)$ is the rate-dispersion function of the surrogate rate-distortion setup, $Z^*$ denotes the reproduction random variable that achieves the rate-distortion function, and the conditional variance of a random variable $U$ conditioned on $V$ is defined as

$$\mathrm{Var}[U \mid V] = \mathbb{E}\left[\left(U - \mathbb{E}[U|V]\right)^2\right]. \tag{7}$$

Note that $\tilde V(d)$ cannot be expressed solely as a function of the source distribution and the surrogate distortion measure. Thus, the noisy coding problem is, in general, not equivalent to the noiseless coding problem with the surrogate distortion measure. Essentially, the reason is that taking the expectation in (3) dismisses the randomness introduced by the noisy channel, which cannot be neglected in the analysis of the dispersion. Indeed, as seen from (6), the difference between $V(d)$ and $\tilde V(d)$ is due to the stochastic variability of the channel from $S$ to $X$.

The rest of the paper is organized as follows. After introducing the basic definitions in Section II, we proceed to show new general nonasymptotic converse and achievability bounds in Sections III and IV, respectively, along with their asymptotic analysis in Section V. Finally, the example of a binary source observed through an erasure channel is discussed in Section VI.

II. DEFINITIONS

Consider the one-shot setup in Fig. 1, where we are given the distribution $P_S$ on the alphabet $\mathcal{M}$ and the transition probability kernel $P_{X|S} \colon \mathcal{M} \to \mathcal{X}$ describing the statistics of the noise corrupting the original signal. We are also given the distortion measure $d \colon \mathcal{M} \times \hat{\mathcal{M}} \to [0, +\infty]$, where $\hat{\mathcal{M}}$ is the representation alphabet. An $(M, d, \epsilon)$ code is a pair of random mappings $P_{U|X} \colon \mathcal{X} \to \{1,\ldots,M\}$ and $P_{Z|U} \colon \{1,\ldots,M\} \to \hat{\mathcal{M}}$ such that $\mathbb{P}[d(S, Z) > d] \le \epsilon$.
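For very small alphabets, the one-shot definition above can be evaluated exactly by exhaustive search. The sketch below, with arbitrary illustrative distributions, enumerates all deterministic encoder-decoder pairs and reports the smallest achievable excess-distortion probability; it is meant only to make the definition concrete, not to suggest a practical algorithm.

```python
# Brute-force search over deterministic one-shot (M,d,eps) codes:
# encoders f: X -> {0,...,M-1}, decoders c: {0,...,M-1} -> Zhat.
import itertools

S_alpha, X_alpha, Z_alpha = [0, 1], [0, 1, 2], [0, 1]
P_S = {0: 0.5, 1: 0.5}
P_X_S = {0: {0: 0.8, 1: 0.1, 2: 0.1},          # P_{X|S}(x|s)
         1: {0: 0.1, 1: 0.1, 2: 0.8}}
dist = lambda s, z: 0.0 if s == z else 1.0     # Hamming distortion
M, d_level = 2, 0.0                            # allow zero distortion only

best = 1.0
for f in itertools.product(range(M), repeat=len(X_alpha)):
    for c in itertools.product(Z_alpha, repeat=M):
        excess = sum(P_S[s] * P_X_S[s][x]
                     for s in S_alpha for x in X_alpha
                     if dist(s, c[f[x]]) > d_level)
        best = min(best, excess)
print("minimum excess-distortion probability:", best)
```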

Define

$$R_{S,X}(d) = \inf_{P_{Z|X}:\ \mathbb{E}[\bar d(X,Z)] \le d} I(X; Z), \tag{8}$$

where $\bar d \colon \mathcal{X} \times \hat{\mathcal{M}} \to [0, +\infty]$ is given by

$$\bar d(x, z) = \mathbb{E}\left[d(S, z) \mid X = x\right]. \tag{9}$$

We assume that $R_{S,X}(d) < \infty$ for some $d$, and we denote

$$d_{\min} = \inf\left\{d :\ R_{S,X}(d) < \infty\right\}. \tag{10}$$

As in [15], assume that the infimum in (8) is achieved by some $P_{Z^*|X}$ such that the constraint is satisfied with equality. Noting that this assumption guarantees differentiability of $R_{S,X}(d)$, denote

$$\lambda^* = -R'_{S,X}(d). \tag{11}$$

Definition 1 (Noisy d-Tilted Information): For $d > d_{\min}$, the noisy d-tilted information in $s \in \mathcal{M}$ given observation $x \in \mathcal{X}$ and representation $z \in \hat{\mathcal{M}}$ is defined as

$$\tilde\jmath_{S,X}(s, x, z, d) = \imath_{X;Z^*}(x; z) + \lambda^*\, d(s, z) - \lambda^* d, \tag{12}$$

where $P_{Z^*|X}$ achieves the infimum in (8), and the information density is denoted by

$$\imath_{X;Z}(x; z) = \log \frac{dP_{Z|X=x}}{dP_Z}(z). \tag{13}$$

As we will see, the intuitive meaning of the noisy d-tilted information is the number of bits required to represent $s$ within distortion $d$ given observation $x$ and representation $z$. Taking the expectation of (12) with respect to $S$ conditioned on $X$, we obtain $\jmath_X(x, d)$, the d-tilted information in $x$ for the surrogate noiseless source coding problem:

$$\jmath_X(x, d) = \imath_{X;Z^*}(x; z) + \lambda^*\, \bar d(x, z) - \lambda^* d, \tag{14}$$

which holds for $P_{Z^*}$-a.e. $z$ [17, Lemma 1.4], [18, Th. 2.1], and which is a consequence of the fact that $P_{Z^*}$ achieves the minimum in the convex optimization problem (8). Comparing (12) and (14), we observe that

$$\tilde\jmath_{S,X}(s, x, z, d) = \jmath_X(x, d) + \lambda^*\, d(s, z) - \lambda^*\, \bar d(x, z). \tag{15}$$

From (12) and (15), we obtain

$$R_{S,X}(d) = \mathbb{E}\left[\tilde\jmath_{S,X}(S, X, Z^*, d)\right] \tag{16}$$
$$= \mathbb{E}\left[\jmath_X(X, d)\right]. \tag{17}$$

To conclude the introduction to noisy d-tilted information, we discuss its relationship to the variation in the rate-distortion function due to a small variation in the probability distribution $P_{SXZ}$. Assume that the alphabets $\mathcal{M}$, $\mathcal{X}$ and $\hat{\mathcal{M}}$ are finite. Fix $d$. In order to rigorously define a derivative of $R_{S,X}(d)$ with respect to $P_{SXZ}$, we consider a finite measure on $\mathcal{M} \times \mathcal{X} \times \hat{\mathcal{M}}$:

$$Q_{SXZ}(s, x, z) = Q_S(s)\, Q_{X|S}(x|s)\, Q_{Z|X}(z|x), \tag{18}$$

which is not necessarily a probability measure. Furthermore, using the probability measures

$$P_S(s) = \frac{Q_S(s)}{\sum_{s' \in \mathcal{M}} Q_S(s')}, \tag{19}$$
$$P_{X|S}(x|s) = \frac{Q_{X|S}(x|s)}{\sum_{x' \in \mathcal{X}} Q_{X|S}(x'|s)}, \tag{20}$$
$$P_{Z|X}(z|x) = \frac{Q_{Z|X}(z|x)}{\sum_{z' \in \hat{\mathcal{M}}} Q_{Z|X}(z'|x)}, \tag{21}$$

we introduce the following function of $Q_{SXZ}$:

$$F(Q_{SXZ}, d) = I(X; Z) + \lambda^*_{S,X}\left(\mathbb{E}[d(S, Z)] - d\right), \tag{22}$$

where the information measures are computed with respect to $P_S P_{X|S} P_{Z|X}$ defined in (19)-(21), and $\lambda^*_{S,X} = -R'_{S,X}(d)$. In particular, $F(P_S P_{X|S} P_{Z^*|X}, d) = R_{S,X}(d)$, where $P_{Z^*|X}$ attains $R_{S,X}(d)$. Denote the partial derivative with respect to the coordinate at $(s, x, z)$, evaluated at $P_{SXZ}$:

$$\dot R_{S,X}(s, x, z, d) = \left.\frac{\partial F(Q_{SXZ}, d)}{\partial Q_{SXZ}(s, x, z)}\right|_{Q_{SXZ} = P_{SXZ}}. \tag{23}$$

The following theorem demonstrates that the noisy d-tilted information exhibits properties similar to those of the d-tilted information listed in [18, Th. 2.2] (reproduced as Theorem 13 in Appendix A).

Theorem 1: Assume that $\mathcal{M}$, $\mathcal{X}$ and $\hat{\mathcal{M}}$ are finite sets, and let $d > d_{\min}$. Then,

$$\dot R_{S,X}(s, x, z, d) = \tilde\jmath_{S,X}(s, x, z, d) - R_{S,X}(d), \tag{24}$$
$$\mathrm{Var}\left[\dot R_{S,X}(S, X, Z^*, d)\right] = \mathrm{Var}\left[\tilde\jmath_{S,X}(S, X, Z^*, d)\right], \tag{25}$$

where the variances are with respect to $P_S P_{X|S} P_{Z^*|X}$.

Proof: See Appendix B.
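The optimization (8) is a standard rate-distortion computation once the surrogate distortion (9) is formed. The following sketch computes $\bar d$ from $P_S$ and $P_{X|S}$ and then runs the Blahut-Arimoto algorithm (a classical method, not one prescribed by the paper) for a fixed Lagrange multiplier; the alphabets, kernel and multiplier value are illustrative assumptions, and sweeping the multiplier traces out the $R_{S,X}(d)$ curve.

```python
# Compute R_{S,X}(d) of (8) numerically via the surrogate distortion (9).
import numpy as np

P_S = np.array([0.5, 0.5])
P_X_S = np.array([[0.9, 0.05, 0.05],       # rows: s, cols: x
                  [0.05, 0.05, 0.9]])
d_sz = np.array([[0.0, 1.0],               # d(s,z), rows: s, cols: z
                 [1.0, 0.0]])

P_X = P_S @ P_X_S                          # marginal of the observation
P_S_X = (P_S[:, None] * P_X_S) / P_X       # posterior P_{S|X}(s|x)
dbar = P_S_X.T @ d_sz                      # surrogate distortion (9), x-by-z

lam = 3.0                                  # Lagrange multiplier (nats)
q = np.full(d_sz.shape[1], 1.0 / d_sz.shape[1])   # output distribution
for _ in range(500):                       # Blahut-Arimoto iterations
    W = q[None, :] * np.exp(-lam * dbar)   # unnormalized P_{Z|X}
    W /= W.sum(axis=1, keepdims=True)
    q = P_X @ W

D = float(P_X @ (W * dbar).sum(axis=1))    # achieved E[dbar(X,Z)]
R = float((P_X[:, None] * W * np.log(W / q[None, :])).sum())
print(f"d = {D:.4f},  R_{{S,X}}(d) = {R / np.log(2):.4f} bits")
```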
III. CONVERSE BOUNDS

Our main converse result for the nonasymptotic noisy lossy compression setting is the lower bound on the excess-distortion probability as a function of the code size given in Theorem 2. For a fixed $P_{Z|X}$ and an auxiliary conditional distribution $P_{\bar X|Z}$, define the function

$$f_{\bar X|Z}(s, x, z) = \imath_{\bar X\|X|Z}(x; z) + \sup_{\lambda \ge 0}\ \lambda\left(d(s, z) - d\right) - \log M, \tag{26}$$

where

$$\imath_{\bar X\|X|Z}(x; z) = \log \frac{dP_{\bar X|Z=z}}{dP_X}(x). \tag{27}$$

Our converse result can now be stated as follows.

Theorem 2 (Converse): If an $(M, d, \epsilon)$ code exists, then the following inequality must hold:

$$\epsilon \ge \inf_{P_{Z|X}:\ \mathcal{X} \to \hat{\mathcal{M}}}\ \sup_{P_{\bar X|Z}:\ \hat{\mathcal{M}} \to \mathcal{X}}\ \sup_{\gamma > 0}\ \left\{\mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma\right] - \exp(-\gamma)\right\}, \tag{28}$$

where the middle supremum is over those $P_{\bar X|Z}$ such that the Radon-Nikodym derivative of $P_{\bar X|Z=z}$ with respect to $P_X$ at $x$ exists for $P_{Z|X} P_X$-a.e. $(z, x)$.

Proof: Let the encoder and decoder be the random transformations $P_{U|X}$ and $P_{Z|U}$, respectively, where $U$ takes values in $\{1,\ldots,M\}$. We have, for any $\gamma > 0$,

$$\mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma\right] = \mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma,\ d(S, Z) > d\right] + \mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma,\ d(S, Z) \le d\right] \tag{29}$$

$$\le \mathbb{P}[d(S, Z) > d] + \mathbb{P}\left[\imath_{\bar X\|X|Z}(X; Z) \ge \log M + \gamma,\ d(S, Z) \le d\right] \tag{30}$$
$$\le \epsilon + \mathbb{P}\left[\imath_{\bar X\|X|Z}(X; Z) \ge \log M + \gamma\right] \tag{31}$$
$$\le \epsilon + \exp(-\gamma)\,\frac{1}{M}\,\mathbb{E}\left[\exp\left(\imath_{\bar X\|X|Z}(X; Z)\right)\right] \tag{32}$$
$$\le \epsilon + \exp(-\gamma), \tag{33}$$

where (30) is by direct solution for the supremum in (26) (which equals $0$ if $d(s, z) \le d$ and $+\infty$ otherwise); (32) is by Markov's inequality; and (33) follows by writing out the expectation on the left side with respect to $P_{XUZ}$, upper-bounding

$$P_{U|X}(u|x) \le 1 \tag{35}$$

for every $(x, u) \in \mathcal{X} \times \{1,\ldots,M\}$, and noting

$$\mathbb{E}\left[\exp\left(\imath_{\bar X\|X|Z}(X; Z)\right)\right] \le \sum_{u=1}^{M} \int_{\hat{\mathcal{M}}} dP_{Z|U}(z|u) \int_{\mathcal{X}} dP_{\bar X|Z}(x|z) = M. \tag{34}$$

Finally, (28) follows by choosing $\gamma$ and $P_{\bar X|Z}$ that give the tightest bound and $P_{Z|X}$ that gives the weakest bound, in order to obtain a code-independent converse.

The following bound reduces the (intractable in high-dimensional spaces) infimization over all conditional probability distributions $P_{Z|X}$ in Theorem 2 to an infimization over the symbols of the output alphabet.

Corollary 1: Any $(M, d, \epsilon)$ code must satisfy

$$\epsilon \ge \sup_{\gamma > 0}\ \sup_{P_{\bar X|Z}:\ \hat{\mathcal{M}} \to \mathcal{X}}\ \left\{\mathbb{E}\left[\inf_{z \in \hat{\mathcal{M}}}\ \mathbb{P}\left[f_{\bar X|Z}(S, X, z) \ge \gamma \mid X\right]\right] - \exp(-\gamma)\right\}, \tag{36}$$

where the supremum is over those $P_{\bar X|Z}$ such that the Radon-Nikodym derivative of $P_{\bar X|Z=z}$ with respect to $P_X$ at $x$ exists for every $z \in \hat{\mathcal{M}}$ and $P_X$-a.e. $x$.

Proof: We weaken (28) using

$$\inf_{P_{Z|X}}\ \mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma\right] \ge \mathbb{E}\left[\inf_{P_{Z|X}}\ \mathbb{P}\left[f_{\bar X|Z}(S, X, Z) \ge \gamma \mid X\right]\right] \tag{37}$$
$$\ge \mathbb{E}\left[\inf_{z \in \hat{\mathcal{M}}}\ \mathbb{P}\left[f_{\bar X|Z}(S, X, z) \ge \gamma \mid X\right]\right], \tag{38}$$

where we used $S - X - Z$.

In our asymptotic analysis, we will apply Corollary 1 with the supremum over $\lambda$ in (26) replaced by a suboptimal fixed choice $\lambda'$, and with a suboptimal $P_{\bar X|Z}$. Note that if $\hat{\mathcal{M}} = \hat{\mathcal{S}}^k$, where $\hat{\mathcal{S}}$ is a finite set, and $P_{\bar X|Z}$ is chosen so that $P_{\bar X|Z=z}$ is the same for all $z$ of the same type, the computation of the weakened version of the bound in (36) has only polynomial-in-$k$ complexity.

Remark 1: If $\mathrm{supp}(P_{Z^*}) = \hat{\mathcal{M}}$ and $P_{X|S}$ is the identity mapping, so that $d(S, z) = \bar d(X, z)$ almost surely for every $z$, then Corollary 1 reduces to the noiseless converse in [15, Th. 7], by using (15) after weakening (36) with $P_{\bar X|Z} = P_{X|Z^*}$ and $\lambda' = \lambda^*$.

Remark 2: In many important special cases, the value of the probability in (36) is the same for all $z$, obviating the need to perform the optimization over $z$ in (36). Indeed, weakening (36) with $P_{\bar X|Z} = P_{X|Z^*}$ and $\lambda' = \lambda^*$ and using (15), we rewrite the expression inside the conditional (on $X = x$) probability in (36) as

$$\imath_{X;Z^*}(x; z) + \lambda^*\left(d(s, z) - d\right) = \jmath_X(x, d) + \lambda^*\left(d(s, z) - \bar d(x, z)\right), \tag{39}$$

which holds for all $z \in \hat{\mathcal{M}}$ if $\mathrm{supp}(P_{Z^*}) = \hat{\mathcal{M}}$. It follows that the conditional (on $X = x$) distribution of (39) is uniquely determined by the conditional distribution of the zero-mean random variable $d(S, z) - \bar d(x, z)$. Therefore, as long as, conditional on $X = x$ for all $x \in \mathrm{supp}(P_X)$, the distribution of $d(S, z) - \bar d(x, z)$ does not depend on the choice of $z \in \hat{\mathcal{M}}$, any choice of $z$ in (36) is as good as any other. The condition is met, for example, by lossy coding of a Gaussian memoryless source observed through an AWGN channel under the mean-squared error distortion, as well as by coding of an equiprobable source observed through a symmetric channel under a symbol error rate constraint.
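To make the structure of the weakened converse concrete: with the choices of Remark 2 and a blocklength-$k$ product setting, (36) lower-bounds $\epsilon$ by $\sup_{\gamma > 0}\{\mathbb{P}[\sum_i W_i \ge \log M + \gamma] - \exp(-\gamma)\}$, where the $W_i$ are i.i.d. copies of the per-letter quantity in (39). The sketch below estimates this bound by Monte Carlo; the distribution of $W$ is an arbitrary stand-in chosen only to make the sketch runnable (in an application it is induced by the actual source), so the printed number is purely illustrative.

```python
# Monte Carlo evaluation of the weakened converse implied by Remark 2:
#   eps >= sup_gamma { P[ sum_i W_i >= log M + gamma ] - exp(-gamma) }.
import numpy as np

rng = np.random.default_rng(1)
n, log_M, trials = 200, 150.0, 100_000          # illustrative values (nats)

# Stand-in for W = j_X(X,d) + lambda*(d(S,z) - dbar(X,z)); mean ~ R(d).
W = rng.normal(loc=0.8, scale=0.5, size=(trials, n)).sum(axis=1)

best = 0.0
for gamma in np.linspace(0.1, 20.0, 200):       # optimize over gamma
    lower = np.mean(W >= log_M + gamma) - np.exp(-gamma)
    best = max(best, lower)
print(f"converse: any code with log M = {log_M} has eps >= {best:.4f}")
```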

IV. ACHIEVABILITY BOUNDS

Continuing in the single-shot setup, in this section we show the existence of codes of a given size whose probability of exceeding a given distortion level is bounded explicitly.

Theorem 3 (Achievability): For any $P_{\bar Z}$ defined on $\hat{\mathcal{M}}$, there exists a deterministic $(M, d, \epsilon)$ code with

$$\epsilon \le \int_0^1 \mathbb{E}\left[\left(\mathbb{P}\left[\pi(X, \bar Z) > t \mid X\right]\right)^M\right] dt, \tag{40}$$

where $P_{X\bar Z} = P_X P_{\bar Z}$, and

$$\pi(x, z) = \mathbb{P}\left[d(S, z) > d \mid X = x\right]. \tag{41}$$

Proof: The proof invokes a random coding argument. Given $M$ codewords $c_1,\ldots,c_M$, the encoder $f$ and decoder $c$ achieving the minimum excess-distortion probability attainable with the given codebook operate as follows. Having observed $x \in \mathcal{X}$, the optimum encoder chooses

$$f(x) = \arg\min_{i \in \{1,\ldots,M\}}\ \pi(x, c_i), \tag{42}$$

with ties broken arbitrarily, and the decoder simply outputs $c(f(x)) = c_{f(x)}$. The excess-distortion probability achieved by this scheme is given by

$$\mathbb{P}\left[d(S, c(f(X))) > d\right] = \mathbb{E}\left[\pi(X, c_{f(X)})\right] \tag{43}$$
$$= \int_0^1 \mathbb{P}\left[\pi(X, c_{f(X)}) > t\right] dt \tag{44}$$
$$= \int_0^1 \mathbb{E}\left[\mathbb{P}\left[\pi(X, c_{f(X)}) > t \mid X\right]\right] dt. \tag{45}$$

Now, we notice that

$$1\left\{\pi(x, c_{f(x)}) > t\right\} = 1\left\{\min_{i \in \{1,\ldots,M\}} \pi(x, c_i) > t\right\} \tag{46}$$
$$= \prod_{i=1}^{M} 1\left\{\pi(x, c_i) > t\right\}, \tag{47}$$

and we average (45) with respect to the codewords $\bar Z_1,\ldots,\bar Z_M$ drawn i.i.d. from $P_{\bar Z}$, independently of any other random variable, so that $P_{X\bar Z_1\ldots\bar Z_M} = P_X P_{\bar Z} \times \cdots \times P_{\bar Z}$, to obtain

$$\int_0^1 \mathbb{E}\left[\prod_{i=1}^{M}\mathbb{P}\left[\pi(X, \bar Z_i) > t \mid X\right]\right] dt \tag{48}$$
$$= \int_0^1 \mathbb{E}\left[\left(\mathbb{P}\left[\pi(X, \bar Z) > t \mid X\right]\right)^M\right] dt. \tag{49}$$

Since there must exist a codebook achieving excess-distortion probability below or equal to the average over codebooks, (40) follows.

Remark 3: Notice that we have actually shown that the right side of (40) gives the exact minimum excess-distortion probability of random coding, averaged over codebooks drawn i.i.d. from $P_{\bar Z}$.

Remark 4: In the noiseless case, $S = X$, so almost surely

$$\pi(X, z) = 1\left\{d(X, z) > d\right\}, \tag{50}$$

and the bound in Theorem 3 reduces to the noiseless random coding bound in [15].

The bound in (40), which in itself can be difficult to compute or analyze, can be weakened to obtain the following result, stated in Corollary 2. Corollary 2 generalizes the bound in [15] that distills Shannon's method for noiseless lossy compression in the single-shot setup.

Corollary 2: For any $P_{Z|X}$, there exists an $(M, d, \epsilon)$ code with

$$\epsilon \le \inf_{\gamma > 0}\ \left\{\mathbb{P}[d(S, Z) > d] + \mathbb{P}\left[\imath_{X;Z}(X; Z) > \log M - \gamma\right] + e^{-\exp(\gamma)}\right\}, \tag{51}$$

where $P_{SXZ} = P_S P_{X|S} P_{Z|X}$.

Proof: Fix $\gamma$ and the transition probability kernel $P_{Z|X}$. Let $P_{\bar Z} = P_Z$ (i.e., $\bar Z$ is distributed as the marginal of $P_X P_{Z|X}$), and let $P_{X\bar Z} = P_X P_{\bar Z}$. We use the nonasymptotic covering lemma [19, Lemma 5] to establish

$$\mathbb{E}\left[\left(\mathbb{P}\left[\pi(X, \bar Z) > t \mid X\right]\right)^M\right] \le \mathbb{P}\left[\pi(X, Z) > t\right] + \mathbb{P}\left[\imath_{X;Z}(X; Z) > \log M - \gamma\right] + e^{-\exp(\gamma)}. \tag{52}$$

Applying (52) to (49) and noticing that

$$\int_0^1 \mathbb{P}\left[\pi(X, Z) > t\right] dt = \mathbb{E}\left[\pi(X, Z)\right] \tag{53}$$
$$= \mathbb{P}\left[d(S, Z) > d\right], \tag{54}$$

we obtain (51).

While an analysis of the bound in Corollary 2 for memoryless sources leads to a dispersion term which is larger than that in (6), it is an easy-to-compute bound which also leads to a simple proof of the achievability of the asymptote in (2). The following weakening of Theorem 3 is tighter than that in Corollary 2 and is amenable to a tight second-order analysis.

Theorem 4 (Achievability): For any $P_{\bar Z}$, there exists an $(M, d, \epsilon)$ code with

$$\epsilon \le \mathbb{P}\left[g_{\bar Z}(X, U) \ge \log\gamma\right] + e^{-M/\gamma}, \tag{55}$$

where $U$ is uniform on $[0, 1]$, independent of $X$, and³

$$g_{\bar Z}(x, t) = \inf_{P_{Z'}:\ \pi(x, Z') \le t\ \text{a.s.}}\ D(P_{Z'}\,\|\,P_{\bar Z}). \tag{56}$$

Proof: The bound in (40) implies that for an arbitrary $P_{\bar Z}$, there exists an $(M, d, \epsilon)$ code with

$$\epsilon \le \int_0^1 \mathbb{E}\left[\left(1 - \mathbb{P}\left[\pi(X, \bar Z) \le t \mid X\right]\right)^M\right] dt \le \int_0^1 \mathbb{E}\left[e^{-\frac{M}{\gamma}\min\left\{1,\ \gamma\,\mathbb{P}[\pi(X,\bar Z) \le t\,|\,X]\right\}}\right] dt \tag{57}$$
$$\le e^{-M/\gamma} + \int_0^1 \mathbb{E}\left[\left[1 - \gamma\,\mathbb{P}\left[\pi(X, \bar Z) \le t \mid X\right]\right]^+\right] dt, \tag{58}$$

where $[a]^+ = \max\{0, a\}$, and to obtain (58) we applied

$$(1 - p)^M \le e^{-Mp} \le e^{-\frac{M}{\gamma}\min\{1, \gamma p\}} \le e^{-M/\gamma} + [1 - \gamma p]^+. \tag{59}$$

To bound $[1 - \gamma\,\mathbb{P}[\pi(X, \bar Z) \le t \mid X = x]]^+$, we let $P_{Z'}$ be some distribution, chosen individually for each $x$ and $t$, such that $\pi(x, Z') \le t$ a.s., and we write

$$\left[1 - \gamma\,\mathbb{P}\left[\pi(x, \bar Z) \le t\right]\right]^+ \le \left[1 - \gamma\,\mathbb{E}\left[\exp\left(-\imath_{Z'\|\bar Z}(Z')\right)1\left\{\pi(x, Z') \le t\right\}\right]\right]^+ \tag{60}$$
$$= \left[1 - \gamma\,\mathbb{E}\left[\exp\left(-\imath_{Z'\|\bar Z}(Z')\right)\right]\right]^+ \tag{61}$$
$$\le \left[1 - \gamma\exp\left(-D(P_{Z'}\|P_{\bar Z})\right)\right]^+ \tag{62}$$
$$\le 1\left\{D(P_{Z'}\|P_{\bar Z}) > \log\gamma\right\}, \tag{63}$$

where (60) is by the change of measure

$$\imath_{Z'\|\bar Z}(z) = \log\frac{dP_{Z'}}{dP_{\bar Z}}(z), \tag{64}$$

(61) is by the choice of $P_{Z'}$, (62) is by Jensen's inequality, and (63) follows from

$$\gamma\exp\left(-D(P_{Z'}\|P_{\bar Z})\right) \ge 1\ \text{ if }\ D(P_{Z'}\|P_{\bar Z}) \le \log\gamma. \tag{65}$$

Taking the infimum over admissible $P_{Z'}$ and integrating over $t$, we obtain (55).

³We understand that $g_{\bar Z}(x, t) = +\infty$ if no $P_{Z'}$ with $\pi(x, Z') \le t$ a.s. exists.
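Remark 3 says that (40) is the exact ensemble-average performance of random coding, so for toy alphabets it can be evaluated directly. The sketch below does this for a binary source observed through a binary symmetric channel with an equiprobable random codebook; the distributions, $M$ and the $t$-grid are illustrative assumptions.

```python
# Numerical evaluation of the random-coding bound (40) of Theorem 3:
# estimate  int_0^1 E[ (P[pi(X,Zbar)>t | X])^M ] dt  for a toy discrete
# setting, with the inner probability computed exactly over the finite
# reproduction alphabet.
import numpy as np

rng = np.random.default_rng(2)
P_S = np.array([0.5, 0.5])
P_X_S = np.array([[0.9, 0.1], [0.1, 0.9]])   # binary symmetric noise
P_Zbar = np.array([0.5, 0.5])                # random-codebook distribution
M, n_mc = 8, 50_000                          # codebook size, MC samples

P_X = P_S @ P_X_S
P_S_X = (P_S[:, None] * P_X_S) / P_X         # posterior P_{S|X}
# pi(x,z) = P[d(S,z) > 0 | X=x] under Hamming distortion and d = 0:
pi = np.array([[P_S_X[:, x] @ (np.arange(2) != z) for z in range(2)]
               for x in range(2)])           # shape (x, z)

xs = rng.choice(2, size=n_mc, p=P_X)
vals = []
for t in np.linspace(0.0, 1.0, 101):         # numerical integral over t
    inner = (pi[xs] > t) @ P_Zbar            # P[pi(X,Zbar)>t | X]
    vals.append(np.mean(inner ** M))
print(f"random-coding bound (40): eps <= {float(np.mean(vals)):.4f}")
```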

V. ASYMPTOTIC ANALYSIS

In this section, we pass from the single-shot setup of Sections III and IV to a block setting by letting the alphabets be Cartesian products, $\mathcal{M} = \mathcal{S}^k$, $\mathcal{X} = \mathcal{A}^k$, $\hat{\mathcal{M}} = \hat{\mathcal{S}}^k$, and we study the second-order asymptotics in $k$ of $M^*(k, d, \epsilon)$, the minimum achievable number of representation points compatible with the excess-distortion constraint $\mathbb{P}[d(S^k, Z^k) > d] \le \epsilon$. We make the following assumptions.

(i) $P_{S^k X^k} = P_S P_{X|S} \times \cdots \times P_S P_{X|S}$, and

$$d(s^k, z^k) = \frac{1}{k}\sum_{i=1}^{k} d(s_i, z_i). \tag{66}$$

(ii) The alphabets $\mathcal{S}$, $\mathcal{A}$, $\hat{\mathcal{S}}$ are finite sets.

(iii) The distortion level satisfies $d_{\min} < d < d_{\max}$, where $d_{\min}$ is the minimum $d$ such that $R_{S,X}(d)$ is finite, as in (10), and $d_{\max} = \inf_{z \in \hat{\mathcal{S}}}\mathbb{E}[d(S, z)]$, where the expectation is with respect to the unconditional distribution of $S$.

(iv) Denoting by $P_{\tilde X}$ a perturbation of $P_X$ with $P_{S|X}$ kept fixed: in a neighborhood of $P_X$, $\mathrm{supp}(P_{\tilde Z^*}) = \mathrm{supp}(P_{Z^*})$, where $P_{\tilde Z^*}$ achieves $R_{S,\tilde X}(d) = I(\tilde X; \tilde Z^*)$, and $R_{S,\tilde X}(d)$ is twice continuously differentiable as a function of $P_{\tilde X}$.

The following result is obtained via a second-order analysis of Corollary 1 (Appendix C) and Theorem 4 (Appendix D).

Theorem 5 (Gaussian Approximation): For $0 < \epsilon < 1$, under assumptions (i)-(iv), the minimum number of representation points compatible with a given probability $\epsilon$ of exceeding distortion $d$ at blocklength $k$ satisfies

$$\log M^*(k, d, \epsilon) = k R(d) + \sqrt{k\tilde V(d)}\, Q^{-1}(\epsilon) + O(\log k), \tag{67}$$

where

$$\tilde V(d) = \mathrm{Var}\left[\tilde\jmath_{S,X}(S, X, Z^*, d)\right] \tag{68}$$

is the noisy rate-dispersion function. The rate-dispersion function of the surrogate noiseless problem is given by [15, eq. (83)]

$$V(d) = \mathrm{Var}\left[\jmath_X(X, d)\right], \tag{69}$$

where $\jmath_X(x, d)$ is defined in (14).

The following proposition implies that $\tilde V(d) > V(d)$ unless there is no noise.

Proposition 1: The noisy rate-dispersion function in (68) can be written as

$$\tilde V(d) = \mathrm{Var}\left[\jmath_X(X, d)\right] + \lambda^{*2}\,\mathrm{Var}\left[d(S, Z^*) \mid X, Z^*\right]. \tag{70}$$

Proof: Note that the expression in (70) is just an equivalent way to write the decomposition (6). To verify that the decomposition (6) indeed holds, write, using (68) and (15),

$$\tilde V(d) = \mathrm{Var}\left[\jmath_X(X, d) + \lambda^* d(S, Z^*) - \lambda^*\,\mathbb{E}\left[d(S, Z^*) \mid X, Z^*\right]\right] \tag{71}$$
$$= \mathrm{Var}\left[\jmath_X(X, d)\right] + \lambda^{*2}\,\mathrm{Var}\left[d(S, Z^*) \mid X, Z^*\right] + 2\lambda^*\,\mathrm{Cov}\left(\jmath_X(X, d),\ d(S, Z^*) - \mathbb{E}[d(S, Z^*)|X, Z^*]\right), \tag{72}$$

where the covariance is zero:

$$\mathbb{E}\left[\left(\jmath_X(X, d) - R(d)\right)\left(d(S, Z^*) - \mathbb{E}[d(S, Z^*)|X, Z^*]\right)\right] \tag{73}$$
$$= \mathbb{E}\left[\left(\jmath_X(X, d) - R(d)\right)\,\mathbb{E}\left[d(S, Z^*) - \mathbb{E}[d(S, Z^*)|X, Z^*]\ \middle|\ X, Z^*\right]\right] = 0. \tag{74}$$

Remark 5: Generalizing the observation made by Ingber and Kochman [16] in the noiseless lossy compression setting, we note using Theorem 1 that the noisy rate-dispersion function admits the following representation:

$$\tilde V(d) = \mathrm{Var}\left[\dot R_{S,X}(S, X, Z^*, d)\right], \tag{75}$$

where $\dot R_{S,X}$ is defined in (23).

VI. EXAMPLE: ERASED FAIR COIN FLIPS

A. Erased Fair Coin Flips

Let a binary equiprobable source be observed by the encoder through a binary erasure channel with erasure rate $\delta$. The goal is to minimize the bit error rate with respect to the source. For $\frac{\delta}{2} \le d \le \frac{1}{2}$, the rate-distortion function is given by

$$R(d) = (1 - \delta)\left[\log 2 - h\!\left(\frac{d - \frac{\delta}{2}}{1 - \delta}\right)\right], \tag{76}$$

where $h(\cdot)$ is the binary entropy function, and (76) is obtained by solving the optimization in (2), which is achieved by $P_{Z^*}$ equiprobable on $\{0, 1\}$ and

$$P_{X|Z^*}(a|b) = \begin{cases} 1 - d - \frac{\delta}{2} & a = b \\ d - \frac{\delta}{2} & a = 1 - b \\ \delta & a = ?, \end{cases} \tag{77}$$

where $a \in \{0, 1, ?\}$ and $b \in \{0, 1\}$, so

$$\tilde\jmath_{S,X}(S, X, Z^*, d) = \imath_{X;Z^*}(X; Z^*) + \lambda^* d(S, Z^*) - \lambda^* d \tag{78}$$

$$= \begin{cases} \log 2 - \log\left(1 + \exp(-\lambda^*)\right) - \lambda^* d & \text{w.p. } 1 - \delta \\ -\lambda^* d + \lambda^* & \text{w.p. } \frac{\delta}{2} \\ -\lambda^* d & \text{w.p. } \frac{\delta}{2}. \end{cases} \tag{79}$$

The rate-dispersion function is given by the variance of (79):

$$\tilde V(d) = \delta(1 - \delta)\log^2\cosh\frac{\lambda^*}{2} + \frac{\delta\,\lambda^{*2}}{4}\log^2 e, \tag{80}$$

where

$$\lambda^* = -R'(d) = \log_e\frac{1 - \frac{\delta}{2} - d}{d - \frac{\delta}{2}}. \tag{81}$$

A tight converse result can be obtained by particularizing the bound in Corollary 1 with $P_{\bar X|Z}$ chosen to be a product of $k$ independent copies of (77) and $\lambda'$ chosen to be $k$ times $\lambda^*$ in (81). As detailed in Remark 2, due to the symmetry of the erased coin flips setting, such a choice eliminates the need to perform the optimization over $z$ in the right side of (36), yielding a particularly easy-to-compute bound.
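The closed-form expressions (76), (80) and (81) are straightforward to evaluate. The following sketch computes $R(d)$, $\lambda^*$ and $\tilde V(d)$ for the parameters used in Fig. 2 ($\delta = 0.1$, $d = 0.1$, $\epsilon = 0.1$) and prints the Gaussian approximation (67) at a few blocklengths, with logarithms in bits and $\lambda^*$ in nats as in (80)-(81).

```python
# Evaluate (76), (80), (81) and the approximation (67) for the erased
# fair coin flips example.
import math

delta, d, eps = 0.1, 0.1, 0.1
h = lambda p: -p*math.log2(p) - (1-p)*math.log2(1-p)
beta = (d - delta/2) / (1 - delta)
R = (1 - delta) * (1 - h(beta))                      # R(d) of (76), bits
lam = math.log((1 - delta/2 - d) / (d - delta/2))    # lambda* of (81), nats
V = (delta*(1-delta) * math.log2(math.cosh(lam/2))**2
     + delta * lam**2 / 4 * math.log2(math.e)**2)    # (80), bits^2

Qinv = 1.2816                                        # Q^{-1}(0.1), tabulated
for k in (100, 1000, 10000):
    print(k, round(R + math.sqrt(V / k) * Qinv, 4))  # approximation (67)
```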

In [15], we showed an even tighter converse result for the erased coin flips setting.

Theorem 6 (Converse, BES [15, Th. 32]): In the erased fair coin flips setting,

$$\epsilon \ge \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\, 2^{-j}\sum_{i=0}^{j}\binom{j}{i}\left[1 - M\,2^{-(k-j)}\sum_{l=0}^{\lfloor kd\rfloor - i}\binom{k-j}{l}\right]^{+}, \tag{82}$$

with the convention

$$\sum_{l=0}^{m}\binom{k-j}{l} = 0\ \text{ if } m < 0,\qquad = 2^{k-j}\ \text{ if } m > k - j. \tag{83}$$

The following achievability bound from [15] turns out to be just a particularization of Theorem 3 to the erased coin flips setting with $P_{\bar Z}$ chosen to be equiprobable on the set of $k$-bit binary strings.

Theorem 7 (Achievability, BES [15, Th. 33]): In the erased fair coin flips setting,

$$\epsilon \le \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\, 2^{-j}\sum_{i=0}^{j}\binom{j}{i}\left(1 - 2^{-(k-j)}\sum_{l=0}^{\lfloor kd\rfloor - i}\binom{k-j}{l}\right)^{M}. \tag{84}$$
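Both finite-blocklength bounds are finite sums of binomial terms and can be evaluated directly. The sketch below computes (82) and (84) as reconstructed above, conditioning on the number of erasures $j$ and on the number $i$ of erased positions whose decoder guesses are wrong; $k$, $\delta$, $d$ and $M$ are illustrative choices.

```python
# Evaluate the erased-fair-coin-flips converse (82) and achievability (84).
from math import comb, floor

def coin_bounds(k, delta, d, M):
    lower, upper = 0.0, 0.0
    for j in range(k + 1):                        # number of erasures
        p_j = comb(k, j) * delta**j * (1 - delta)**(k - j)
        for i in range(j + 1):                    # wrong guesses on erased bits
            p_i = comb(j, i) * 2.0**(-j)
            m = floor(k * d) - i                  # Hamming budget for the rest
            cdf = 0 if m < 0 else sum(comb(k - j, l)
                                      for l in range(min(m, k - j) + 1))
            p_ball = cdf * 2.0**(-(k - j))        # prob. one codeword covers
            lower += p_j * p_i * max(0.0, 1.0 - M * p_ball)   # term of (82)
            upper += p_j * p_i * (1.0 - p_ball)**M            # term of (84)
    return lower, upper

lo, hi = coin_bounds(k=60, delta=0.1, d=0.1, M=2**40)
print(f"(82): eps >= {lo:.4f}   (84): eps <= {hi:.4f}")
```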

B. Erased Fair Coin Flips: Surrogate Rate-Distortion Problem

According to (9), the distortion measure of the surrogate rate-distortion problem is given by

$$\bar d(1, 1) = \bar d(0, 0) = 0, \tag{85}$$
$$\bar d(1, 0) = \bar d(0, 1) = 1, \tag{86}$$
$$\bar d(?, 1) = \bar d(?, 0) = \tfrac{1}{2}. \tag{87}$$

The d-tilted information is given by taking the expectation of (79) with respect to $S$:

$$\jmath_X(X, d) = \begin{cases} \log 2 - \log\left(1 + \exp(-\lambda^*)\right) - \lambda^* d & \text{w.p. } 1 - \delta \\ \frac{\lambda^*}{2} - \lambda^* d & \text{w.p. } \delta. \end{cases} \tag{88}$$

Its variance is equal to

$$V(d) = \delta(1 - \delta)\log^2\cosh\frac{\lambda^*}{2} \tag{89}$$
$$= \tilde V(d) - \frac{\delta\,\lambda^{*2}}{4}\log^2 e. \tag{90}$$

Achievability and converse bounds for the ternary source with binary representation alphabet and the distortion measure in (85)-(87) are obtained as follows.

Theorem 8 (Converse, Surrogate BES): Any $(k, M, d, \epsilon)$ code must satisfy

$$\epsilon \ge \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\left[1 - M\,2^{-(k-j)}\sum_{l=0}^{\lfloor kd - \frac{j}{2}\rfloor}\binom{k-j}{l}\right]^{+}. \tag{91}$$

Proof: Fix a $(k, M, d, \epsilon)$ code. While erased bits contribute to the total distortion regardless of the code, the probability that the nonerased bits lie within Hamming distance $l$ of their representation can be upper-bounded using [15, Th. 15]:

$$\mathbb{P}\left[\bar d(X^k, Z^k) \le \tfrac{l}{k}\ \middle|\ \text{no erasures in } X^k\right] \le M\,2^{-k}\sum_{j=0}^{l}\binom{k}{j}. \tag{92}$$

Using (92), the probability that the total distortion is within $d$ is bounded as

$$\mathbb{P}\left[\bar d(X^k, Z^k) \le d\right] \le \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\min\left\{1,\ M\,2^{-(k-j)}\sum_{l=0}^{\lfloor kd - \frac{j}{2}\rfloor}\binom{k-j}{l}\right\}. \tag{93}$$

Theorem 9 (Achievability, Surrogate BES): There exists a $(k, M, d, \epsilon)$ code such that

$$\epsilon \le \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\left(1 - 2^{-(k-j)}\sum_{l=0}^{\lfloor kd - \frac{j}{2}\rfloor}\binom{k-j}{l}\right)^{M}. \tag{94}$$

Proof: Consider the ensemble of codes with $M$ codewords drawn i.i.d. from the equiprobable distribution on $\{0, 1\}^k$. Every erased symbol contributes $\frac{1}{2}$ to the total distortion. The probability that the Hamming distance between the nonerased symbols and their representation exceeds $l$, averaged over the code ensemble, is found as in [15, Th. 16]:

$$\mathbb{P}\left[\bar d(X^k, C_{f(X^k)}) > \tfrac{l}{k}\ \middle|\ \text{no erasures in } X^k\right] = \left(1 - 2^{-k}\sum_{j=0}^{l}\binom{k}{j}\right)^{M}, \tag{95}$$

where $C_m$, $m = 1,\ldots,M$, are i.i.d. equiprobable on $\{0, 1\}^k$. Averaging over the erasure channel, we have

$$\mathbb{P}\left[\bar d(X^k, C_{f(X^k)}) > d\right] = \sum_{j=0}^{k}\binom{k}{j}\delta^{j}(1-\delta)^{k-j}\left(1 - 2^{-(k-j)}\sum_{l=0}^{\lfloor kd - \frac{j}{2}\rfloor}\binom{k-j}{l}\right)^{M}. \tag{96}$$

Since there must exist at least one code whose excess-distortion probability is no larger than the average over the ensemble, there exists a code satisfying (94).

Fig. 2. Rate-blocklength tradeoff for the fair coin source observed through an erasure channel, as well as that for the surrogate problem, with δ = 0.1, d = 0.1, ε = 0.1.

The bounds in [15, Th. 32] and [15, Th. 33] and the approximation in Theorem 5 with the remainder term set equal to 0 and to $\log k$,⁴ as well as the bounds in Theorems 8 and 9 for the surrogate rate-distortion problem together with their Gaussian approximation, are plotted in Fig. 2. Note that despite the fact that the asymptotically achievable rate in both problems is the same, there is a very noticeable gap between their nonasymptotically achievable rates in the displayed region of blocklengths. For example, at blocklength 1000, the penalty over the rate-distortion function is 9% for erased coin flips and only 4% for the surrogate source coding problem.

⁴It was shown in [15, Th. 34] that the $O(\log k)$ term for the erased coin flips lies between 0 and $\log k$. The same bound on the remainder term can be shown for the surrogate noiseless problem, via an analysis of (91) and (94).

APPENDIX A
AUXILIARY RESULTS

In this Appendix, we state auxiliary results that are instrumental in the proof of Theorem 5.

Theorem 10 (Berry-Esseen CLT, e.g. [21, Ch. XVI.5, Th. 2]): Fix a positive integer $k$. Let $W_i$, $i = 1,\ldots,k$, be independent. Then, for any real $t$,

$$\left|\mathbb{P}\left[\sum_{i=1}^{k} W_i > k\mu_k + t\sqrt{k V_k}\right] - Q(t)\right| \le \frac{B_k}{\sqrt{k}}, \tag{97}$$

where

$$\mu_k = \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}[W_i], \tag{98}$$
$$V_k = \frac{1}{k}\sum_{i=1}^{k}\mathrm{Var}[W_i], \tag{99}$$
$$T_k = \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}\left[\left|W_i - \mathbb{E}[W_i]\right|^3\right], \tag{100}$$
$$B_k = \frac{c_0\, T_k}{V_k^{3/2}}, \tag{101}$$

and $0.4097 \le c_0 \le 0.4784$ for identically distributed $W_i$ [22], [23].

The second result deals with the minimization of the cdf of a sum of independent random variables. Let $D$ be a metric space with metric $d \colon D^2 \to \mathbb{R}^+$. Define the random variable $Z$ taking values in $D$, and let $W_i$, $i = 1,\ldots,k$, be independent conditioned on $Z$. Denote

$$\mu_k(z) = \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}\left[W_i \mid Z = z\right], \tag{102}$$
$$V_k(z) = \frac{1}{k}\sum_{i=1}^{k}\mathrm{Var}\left[W_i \mid Z = z\right], \tag{103}$$
$$T_k(z) = \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}\left[\left|W_i - \mathbb{E}[W_i|Z=z]\right|^3\ \middle|\ Z = z\right]. \tag{104}$$

Let $\ell_1$, $\ell_2$, $L_1$, $F_1$, $F_2$, $V_{\min}$ and $T_{\max}$ be positive constants. We assume that there exist $z^* \in D$ and sequences $\mu_k^*$, $V_k^*$ such that, for all $z \in D$,

$$\mu_k(z) \ge \mu_k^* + \ell_1\, d^2(z, z^*) - \frac{\ell_2}{k}, \tag{105}$$
$$\mu_k(z) \le \mu_k^* + L_1\, d(z, z^*), \tag{106}$$
$$\left|V_k(z) - V_k^*\right| \le F_1\, d(z, z^*) + \frac{F_2}{k}, \tag{107}$$
$$V_k(z) \ge V_{\min}, \tag{108}$$
$$T_k(z) \le T_{\max}. \tag{109}$$

Theorem 11 ([18, Th. A.6.4]): In the setup described above, under assumptions (105)-(109), for any $A > 0$ there exists a $K \ge 0$ such that, for all sufficiently small $|\Delta|$ (in a range determined by $A$, $\ell_1$, $T_{\max}$, $V_{\min}$ and $F_1$) and all sufficiently large $k$,

$$\min_{z \in D}\ \mathbb{P}\left[\sum_{i=1}^{k} W_i \ge k\left(\mu_k^* + \Delta\right)\ \middle|\ Z = z\right] \ge Q\left(\Delta\sqrt{\frac{k}{V_k^*}}\right) - \frac{K}{\sqrt{k}}. \tag{110}$$
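As a sanity check on Theorem 10, the following sketch compares the exact binomial tail with $Q(t)$ for i.i.d. Bernoulli summands and verifies that the gap is within $B_k/\sqrt{k}$, using the i.i.d. constant $c_0 \le 0.4784$ quoted after (101); $k$, $p$ and $t$ are illustrative.

```python
# Numerical check of the Berry-Esseen bound (97) for Bernoulli(p) summands.
import math

k, p, t = 200, 0.3, 1.0
mu, V = p, p * (1 - p)
T = p * (1 - p) * ((1 - p)**2 + p**2)      # E|W - p|^3 for Bernoulli(p)
B = 0.4784 * T / V**1.5                    # Berry-Esseen ratio (101)

thresh = k * mu + t * math.sqrt(k * V)
tail = sum(math.comb(k, j) * p**j * (1 - p)**(k - j)
           for j in range(math.floor(thresh) + 1, k + 1))
Q_t = 0.5 * math.erfc(t / math.sqrt(2))
print(f"exact tail {tail:.5f}, Q(t) {Q_t:.5f}, "
      f"gap {abs(tail - Q_t):.5f} <= {B / math.sqrt(k):.5f}")
```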

The following two theorems summarize some crucial properties of d-tilted information. It is convenient to introduce the notation

$$R_X(d) = R_{X,X}(d), \tag{111}$$

which is just the usual noiseless rate-distortion function.

Theorem 12 ([18, Th. 2.1]): Fix $d > d_{\min}$. For $P_{Z^*}$-almost every $z$, it holds that

$$\jmath_X(x, d) = \imath_{X;Z^*}(x; z) + \lambda^*\,\bar d(x, z) - \lambda^* d, \tag{112}$$

where $\lambda^* = -R'_X(d)$ and $P_{XZ^*} = P_X P_{Z^*|X}$. Moreover,

$$R_X(d) = \min_{P_{Z|X}}\ \mathbb{E}\left[\imath_{X;Z}(X; Z) + \lambda^*\,\bar d(X, Z)\right] - \lambda^* d \tag{113}$$
$$= \min_{P_{Z|X}}\ \mathbb{E}\left[\imath_{X;Z^*}(X; Z) + \lambda^*\,\bar d(X, Z)\right] - \lambda^* d \tag{114}$$
$$= \mathbb{E}\left[\jmath_X(X, d)\right], \tag{115}$$

and for all $z \in \hat{\mathcal{M}}$,

$$\mathbb{E}\left[\exp\left(\lambda^* d - \lambda^*\,\bar d(X, z) + \jmath_X(X, d)\right)\right] \le 1, \tag{116}$$

with equality for $P_{Z^*}$-almost every $z$.

The next result establishes a relationship between the partial derivatives of $R_X(d)$ with respect to the coordinates of the source distribution and the d-tilted information. To properly define differentiation on the simplex, we consider $R_X(d) = R_{P_X}(d)$ as a function of the $|\mathcal{A}|$-dimensional probability vector $P_X$. We extend the definition of $R_{P_X}(d)$ to $|\mathcal{A}|$-dimensional nonnegative vectors $Q_X = (Q_X(x))_{x \in \mathcal{A}}$ as follows:

$$R_{Q_X}(d) = R_{P_X}(d),\qquad P_X(x) = \frac{Q_X(x)}{\sum_{x' \in \mathcal{A}} Q_X(x')}, \tag{117}$$

so that $P_X$ is a probability vector. For $a \in \mathcal{A}$, denote the partial derivatives

$$\dot R_X(a, d) = \left.\frac{\partial R_{Q_X}(d)}{\partial Q_X(a)}\right|_{Q_X = P_X}. \tag{118}$$

Theorem 13 ([18, Th. 2.2]): Assume that $\mathcal{A}$ is a finite set and $R_X(d) = I(X; Z^*)$. Then,

$$\dot R_X(a, d) = \jmath_X(a, d) - R_X(d), \tag{119}$$
$$\mathrm{Var}\left[\dot R_X(X, d)\right] = \mathrm{Var}\left[\jmath_X(X, d)\right]. \tag{120}$$

Remark 6: The equality in (120), with a slightly different understanding of differentiation with respect to a probability vector coordinate, was first observed in [24].

APPENDIX B
PROOF OF THEOREM 1

Denote for brevity $\lambda^* = \lambda^*_{S,X}$. Theorem 1 is obtained using the chain rule of differentiation:

$$\dot R_{S,X}(s, x, z, d) = \left.\frac{\partial}{\partial Q_{SXZ}(s, x, z)}\left\{\mathbb{E}_Q\left[\imath_{X;Z}(X; Z)\right] + \lambda^*\,\mathbb{E}_Q\left[d(S, Z)\right] - \lambda^* d\right\}\right|_{Q_{SXZ} = P_S P_{X|S} P_{Z^*|X}} \tag{121}$$
$$= \imath_{X;Z^*}(x; z) + \lambda^* d(s, z) - \lambda^* d - R_{S,X}(d) + \mathbb{E}\left[\left.\frac{\partial}{\partial Q_{SXZ}(s, x, z)}\,\imath^{Q}_{X;Z}(X; Z)\right|_{Q = P}\right] \tag{122}$$
$$= \tilde\jmath_{S,X}(s, x, z, d) - R_{S,X}(d) + \mathbb{E}\left[\left.\frac{\partial}{\partial Q_{SXZ}(s, x, z)}\left(\log Q_{Z|X}(Z|X) - \log Q_Z(Z)\right)\right|_{Q = P}\right] \tag{123}$$
$$= \tilde\jmath_{S,X}(s, x, z, d) - R_{S,X}(d), \tag{124}$$

where (122) differentiates the measure itself, yielding the integrand at $(s, x, z)$ minus its expectation, and (124) holds because the expected derivative of the log-density at the true distribution vanishes (the usual score-function identity). This establishes (24); (25) follows immediately, since the two quantities in (24) differ by a constant. $\tag{125}$

APPENDIX C
PROOF OF THE CONVERSE PART OF THEOREM 5

The following concentration result is instrumental in the proof of the converse side of the asymptotic Gaussian approximation.

Lemma 1: Let $X_1,\ldots,X_k$ be independent, taking values in $\mathcal{A}$ and distributed according to $P_X$. For all $k$ and all $\gamma > 0$, it holds that

$$\mathbb{P}\left[\left\|\mathrm{type}(X^k) - P_X\right\| > \gamma\right] \le 2|\mathcal{A}|\exp\left(-\frac{2 k\gamma^2}{|\mathcal{A}|}\right), \tag{126}$$

where $\|\cdot\|$ denotes the Euclidean norm of its $|\mathcal{A}|$-dimensional vector argument.

Proof: Observe first that

$$\left\{x^k :\ \left\|\mathrm{type}(x^k) - P_X\right\| > \gamma\right\} \subseteq \left\{x^k :\ \exists a \in \mathcal{A}:\ \left|\mathrm{type}(x^k)(a) - P_X(a)\right| > \frac{\gamma}{\sqrt{|\mathcal{A}|}}\right\}. \tag{127}$$

For all $a \in \mathcal{A}$, it holds by Hoeffding's inequality (cf. Yu and Speed [25, eq. (2.1)]) that

$$\mathbb{P}\left[\left|\mathrm{type}(X^k)(a) - P_X(a)\right| > \frac{\gamma}{\sqrt{|\mathcal{A}|}}\right] \le 2\exp\left(-\frac{2 k\gamma^2}{|\mathcal{A}|}\right), \tag{128}$$

and (126) follows by the union bound.
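Lemma 1 is easy to probe by simulation. The sketch below compares the empirical probability that the Euclidean distance between the type of $X^k$ and $P_X$ exceeds $\gamma$ against the bound (126); $P_X$, $k$ and $\gamma$ are illustrative.

```python
# Monte Carlo check of the type-concentration bound (126).
import numpy as np

rng = np.random.default_rng(3)
P_X = np.array([0.2, 0.3, 0.5])
k, gamma, trials = 400, 0.1, 200_000
A = len(P_X)

types = rng.multinomial(k, P_X, size=trials) / k       # types of X^k
dist = np.linalg.norm(types - P_X, axis=1)
emp = np.mean(dist > gamma)
bound = 2 * A * np.exp(-2 * k * gamma**2 / A)
print(f"empirical {emp:.4f} <= bound {bound:.4f}")
```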

Let $P_{Z|X} \colon \mathcal{A} \to \hat{\mathcal{S}}$ be a stochastic matrix whose entries are multiples of $\frac{1}{k}$. We say that the conditional type of $z^k$ given $x^k$ is equal to $P_{Z|X}$, $\mathrm{type}(z^k|x^k) = P_{Z|X}$, if the number of $a$'s in $x^k$ that are mapped to $b$ in $z^k$ is equal to the number of $a$'s in $x^k$ times $P_{Z|X}(b|a)$, for all $(a, b) \in \mathcal{A} \times \hat{\mathcal{S}}$. Let

$$\log M = k R(d) + \sqrt{k\tilde V(d)}\, Q^{-1}(\epsilon') - \frac{1}{2}\log k - \log c - |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1), \tag{129}$$

where $\epsilon' = \epsilon + O\!\left(\frac{\log k}{\sqrt{k}}\right)$ and the constant $c$ will be specified in the sequel; $\mathcal{Q}$ denotes the set of all conditional types $\mathcal{A} \to \hat{\mathcal{S}}$. Note that $|\mathcal{Q}| \le (k+1)^{|\mathcal{A}||\hat{\mathcal{S}}|}$. We weaken the bound in (28) by choosing

$$P_{\bar X|Z}(x^k|z^k) = \prod_{i=1}^{k} P_{\bar X|Z}(x_i|z_i), \tag{130}$$
$$\lambda' = k\,\lambda_{x^k},\qquad \lambda_{x^k} = -R'_{S,\,\mathrm{type}(x^k)}(d), \tag{131}$$
$$\gamma = \frac{1}{2}\log k. \tag{132}$$

By virtue of Corollary 1, the excess-distortion probability of all $(M, d, \epsilon)$ codes with $M$ as in (129) must satisfy

$$\epsilon \ge \mathbb{E}\left[\min_{z^k \in \hat{\mathcal{S}}^k}\ \mathbb{P}\left[\imath_{\bar X\|X|Z}(X^k; z^k) + \lambda'\left(d(S^k, z^k) - d\right) \ge \log M + \gamma\ \middle|\ X^k\right]\right] - \exp(-\gamma). \tag{133}$$

It suffices to prove that the right side of (133) is lower-bounded by $\epsilon$ for $M$ defined in (129). For a given pair $(x^k, z^k)$, abbreviate

$$\mathrm{type}(x^k) = P_{\bar X}, \tag{134}$$
$$\mathrm{type}(z^k|x^k) = P_{\bar Z|\bar X}, \tag{135}$$
$$\lambda_{x^k} = \lambda_{\bar X}. \tag{136}$$

For each $x^k$ and $z^k$, denote the independent random variables

$$W_i = I(\bar X; \bar Z) + \lambda_{\bar X}\left(d(S_i, z_i) - d\right),\qquad i = 1,\ldots,k, \tag{137}$$

where $S^k \sim P_{S^k|X^k = x^k}$. Since

$$\mathbb{E}\left[\sum_{i=1}^{k} d(S_i, z_i)\right] = k\,\mathbb{E}\left[d(\bar S, \bar Z)\right], \tag{138}$$
$$\sum_{i=1}^{k}\mathrm{Var}\left[d(S_i, z_i)\right] = k\,\mathrm{Var}\left[d(\bar S, \bar Z) \mid \bar X, \bar Z\right], \tag{139}$$

where $P_{\bar S\bar X\bar Z} = P_{\bar X} P_{S|X} P_{\bar Z|\bar X}$, in the notation of Theorem 11 ($z = P_{\bar Z|\bar X}$ is now a conditional type) we have

$$\mu(P_{\bar Z|\bar X}) = I(\bar X; \bar Z) + \lambda_{\bar X}\left(\mathbb{E}[d(\bar S, \bar Z)] - d\right), \tag{140}$$
$$V(P_{\bar Z|\bar X}) = \lambda_{\bar X}^2\,\mathrm{Var}\left[d(\bar S, \bar Z) \mid \bar X, \bar Z\right], \tag{141}$$
$$T(P_{\bar Z|\bar X}) = \lambda_{\bar X}^3\,\mathbb{E}\left[\left|d(\bar S, \bar Z) - \mathbb{E}[d(\bar S, \bar Z)|\bar X, \bar Z]\right|^3\right]. \tag{142}$$

We define $P_{\bar X|Z}$ through the types of $x^k$ and $z^k$ and lower-bound the sum over conditional types in (130) by the term containing the conditional type of the given pair, concluding that

$$\imath_{\bar X\|X|Z}(x^k; z^k) + k\lambda_{\bar X}\left(d(s^k, z^k) - d\right) \ge k\,I(\bar X; \bar Z) + k\,D(P_{\bar X}\|P_X) + \lambda_{\bar X}\sum_{i=1}^{k}\left(d(s_i, z_i) - d\right) - |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1) \tag{143}$$
$$= \sum_{i=1}^{k} W_i + k\,D(P_{\bar X}\|P_X) - |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1) \tag{144}$$
$$\ge \sum_{i=1}^{k} W_i - |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1). \tag{145}$$

Weakening (133) using (145), we write

$$\epsilon \ge \mathbb{E}\left[\min_{z^k \in \hat{\mathcal{S}}^k}\ \mathbb{P}\left[\sum_{i=1}^{k} W_i \ge \log M + \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1)\ \middle|\ X^k\right]\right] - \exp(-\gamma). \tag{146}$$

We identify the typical set of channel outputs:

$$\mathcal{T}_k = \left\{x^k \in \mathcal{A}^k :\ \left\|\mathrm{type}(x^k) - P_X\right\| \le \sqrt{\frac{|\mathcal{A}|\log k}{k}}\right\}, \tag{147}$$

where $\|\cdot\|$ is the Euclidean norm. Noting that, by Lemma 1,

$$\mathbb{P}\left[X^k \notin \mathcal{T}_k\right] \le \frac{2|\mathcal{A}|}{k^2}, \tag{148}$$

we proceed to evaluate the minimum in (146) for $x^k \in \mathcal{T}_k$. Denote the conditional distribution that achieves $R_{\bar X}(d)$ by $P_{\bar Z^*|\bar X}$. Write

$$\mu(P_{\bar Z|\bar X}) = I(\bar X; \bar Z) + \lambda_{\bar X}\left(\mathbb{E}[\bar d(\bar X, \bar Z)] - d\right) \tag{149}$$
$$= \mathbb{E}\left[\imath_{\bar X;\bar Z^*}(\bar X; \bar Z) + \lambda_{\bar X}\,\bar d(\bar X, \bar Z)\right] - \lambda_{\bar X} d + D\left(P_{\bar Z|\bar X}\,\middle\|\,P_{\bar Z^*}\,\middle|\,P_{\bar X}\right) \tag{150}$$
$$\ge R_{\bar X}(d) + D\left(P_{\bar Z|\bar X}\,\middle\|\,P_{\bar Z^*}\,\middle|\,P_{\bar X}\right) \tag{151}$$
$$\ge R_{\bar X}(d) + \frac{\log e}{2}\left\|P_{\bar X}\left(P_{\bar Z|\bar X} - P_{\bar Z^*}\right)\right\|_1^2, \tag{152}$$

where (151) is by Theorem 12, and (152) is by Pinsker's inequality. If $V(P_{\bar Z^*|\bar X}) = 0$, there is nothing to prove, as in that case the second term in (39) is almost surely zero for all $z \in \mathrm{supp}(P_{Z^*})$. In the sequel we assume $V(P_{\bar Z^*|\bar X}) > 0$. Similar to the proof of [18, (C.5)], we conclude that the conditions of Theorem 11 are satisfied by $W_i$, $i = 1,\ldots,k$, with $\mu_k^* = \mu(P_{\bar Z^*|\bar X})$ and $V_k^* = V(P_{\bar Z^*|\bar X})$. Therefore, abbreviating

$$\Delta_{\bar X} = \frac{\log M + \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1)}{k} - R_{\bar X}(d), \tag{153}$$

where $M$ and $\gamma$ were chosen in (129) and (132), we have

$$\min_{P_{\bar Z|\bar X} \in \mathcal{Q}}\ \mathbb{P}\left[\sum_{i=1}^{k} W_i \ge \log M + \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1)\ \middle|\ \mathrm{type}(X^k) = P_{\bar X}\right] \ge Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{\bar Z^*|\bar X})}}\right) - \frac{K}{\sqrt{k}}, \tag{154}$$

where $K \ge 0$ is that in Theorem 11. Noting that assumption (iv) implies that $P_{\tilde Z^*}$ is continuously differentiable as a function of $P_{\tilde X}$ in a neighborhood of $P_X$, we can apply a Taylor series expansion in the vicinity of $P_X$ to the argument of the $Q$ function in (154), to conclude that for some scalar $a$ and $K_1 > 0$,

$$Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{\bar Z^*|\bar X})}}\right) \ge Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{Z^*|X})}} + a\,\frac{\log k}{\sqrt{k}}\right) \tag{155}$$
$$\ge Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{Z^*|X})}}\right) - K_1\,\frac{\log k}{\sqrt{k}}, \tag{156}$$

where, to obtain (156), we used

$$Q(x + \xi) \ge Q(x) - \frac{\xi}{\sqrt{2\pi}}, \tag{157}$$

with $\xi = a\log k/\sqrt{k}$, since $\|P_{\bar X} - P_X\| = O\!\left(\sqrt{\log k / k}\right)$ for $x^k \in \mathcal{T}_k$. On the other hand, by assumption (iv), Taylor's theorem applies to $R_{\bar X}(d)$; thus, there exists $c > 0$ such that

$$R_{\bar X}(d) \ge R_X(d) + \sum_{a \in \mathcal{A}}\left(P_{\bar X}(a) - P_X(a)\right)\dot R_X(a, d) - c\left\|P_{\bar X} - P_X\right\|^2 \tag{158}$$
$$= R_X(d) + \frac{1}{k}\sum_{i=1}^{k}\dot R_X(x_i, d) - c\left\|P_{\bar X} - P_X\right\|^2 \tag{159}$$
$$= \frac{1}{k}\sum_{i=1}^{k}\jmath_X(x_i, d) - c\left\|P_{\bar X} - P_X\right\|^2 \tag{160}$$
$$\ge \frac{1}{k}\sum_{i=1}^{k}\jmath_X(x_i, d) - \frac{c\,|\mathcal{A}|\log k}{k}, \tag{161}$$

where (159) uses $\mathbb{E}[\dot R_X(X, d)] = 0$, (160) uses (119), and (161) is by the definition (147) of the typical set. Therefore, introducing the random variable $G \sim \mathcal{N}(0, 1)$, independent of $X^k$, we may write

$$\mathbb{E}\left[Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{Z^*|X})}}\right)\right] = \mathbb{P}\left[k R_{\bar X}(d) + \sqrt{k V(P_{Z^*|X})}\,G \ge \log M + \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1)\right] \tag{162}$$
$$\ge \mathbb{P}\left[\sum_{i=1}^{k}\jmath_X(X_i, d) + \sqrt{k V(P_{Z^*|X})}\,G \ge \log M + a_k\right], \tag{163}$$

where $a_k = O(\log k)$ is defined as

$$a_k = \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1) + c\,|\mathcal{A}|\log k. \tag{164}$$

Finally, collecting (133), (154), (156) and (163), we obtain

$$\epsilon \ge \mathbb{E}\left[\min_{z^k \in \hat{\mathcal{S}}^k}\ \mathbb{P}\left[\sum_{i=1}^{k} W_i \ge \log M + \gamma + |\mathcal{A}||\hat{\mathcal{S}}|\log(k+1)\ \middle|\ X^k\right]1\{X^k \in \mathcal{T}_k\}\right] - \mathbb{P}\left[X^k \notin \mathcal{T}_k\right] - \exp(-\gamma) \tag{165}$$
$$\ge \mathbb{E}\left[Q\left(\Delta_{\bar X}\sqrt{\frac{k}{V(P_{Z^*|X})}}\right)1\{X^k \in \mathcal{T}_k\}\right] - \frac{K + K_1\log k}{\sqrt{k}} - \mathbb{P}\left[X^k \notin \mathcal{T}_k\right] - \exp(-\gamma) \tag{166}$$
$$\ge \mathbb{P}\left[\sum_{i=1}^{k}\jmath_X(X_i, d) + \sqrt{k V(P_{Z^*|X})}\,G \ge \log M + a_k\right] - 2\,\mathbb{P}\left[X^k \notin \mathcal{T}_k\right] - \exp(-\gamma) - \frac{K + K_1\log k}{\sqrt{k}} \tag{167}$$
$$\ge \epsilon' - \frac{B}{\sqrt{k}} - \frac{4|\mathcal{A}|}{k^2} - \frac{1}{\sqrt{k}} - \frac{K + K_1\log k}{\sqrt{k}} \tag{168}$$
$$\ge \epsilon' - \frac{B + 4|\mathcal{A}| + 1 + (K + K_1)\log k}{\sqrt{k}}, \tag{169}$$

where (167) is by the union bound; (168) is by the Berry-Esseen theorem (Theorem 10), the choice of $M$ in (129), and the observation (148), with $B$ the Berry-Esseen ratio; and (169) substitutes (132) and (148). Now, the claim follows by letting, in (129),

$$\epsilon' = \epsilon + \frac{B + 4|\mathcal{A}| + 1 + (K + K_1)\log k}{\sqrt{k}}. \tag{170}$$

APPENDIX D
PROOF OF THE ACHIEVABILITY PART OF THEOREM 5

The proof consists of an analysis of the random code described in Theorem 3 with $M$ codewords drawn from the distribution $P_{\bar Z^k} = P_{Z^*} \times \cdots \times P_{Z^*}$, where $P_{Z^*}$ achieves the rate-distortion function $R_{S,X}(d)$. Theorem 4 provides a means to study the performance of that code ensemble. Namely, it implies that the excess-distortion probability $\epsilon$, averaged over the ensemble, is bounded by

$$\epsilon \le \mathbb{P}\left[g_{\bar Z^k}(X^k, U) \ge \log\gamma_k\right] + e^{-M/\gamma_k} \tag{171}$$
$$\le \mathbb{P}\left[g_{\bar Z^k}(X^k, U) \ge \log\gamma_k,\ U \in \mathcal{I},\ X^k \in \mathcal{T}_k\right] + \mathbb{P}[U \notin \mathcal{I}] + \mathbb{P}\left[X^k \notin \mathcal{T}_k\right] + e^{-M/\gamma_k}, \tag{172}$$

where

$$\mathcal{I} = \left[\frac{1}{\sqrt{k}},\ 1 - \frac{1}{\sqrt{k}}\right], \tag{173}$$

and $\mathcal{T}_k$ is the typical set of $X^k$ defined in (147), so that (148) holds; (172) is an obvious weakening.

We let

$$\log M = \log\gamma_k + \log\log_e k, \tag{174}$$

so that $e^{-M/\gamma_k} = \frac{1}{k}$, and

$$\log\gamma_k = k R(d) + \sqrt{k\tilde V(d)}\, Q^{-1}(\epsilon) + O(\log k), \tag{175}$$

for a properly chosen $O(\log k)$ term. We will show that the right side of (172) is then at most $\epsilon$.

Instead of attempting to compute the infimum in (56), we compute an upper bound to $g(x^k, t)$ by choosing $P_{Z'}$ individually for each $k$, $t$ and each type $P_{\bar X}$ of $x^k \in \mathcal{T}_k$. We let $P_{Z'}$ be equiprobable on the conditional type $P_{\bar Z|\bar X}$ that achieves $R_{\bar X;\bar Z}(d - \delta)$, formally defined in (179),⁵ where

$$\delta = \sqrt{\frac{V_{\bar X}}{k}}\, Q^{-1}\!\left(t - \frac{B_{\bar X}}{\sqrt{k}}\right), \tag{176}$$

and $V_{\bar X}$, $B_{\bar X}$ are the normalized variance and the Berry-Esseen coefficient of the sum of independent random variables $d(S_i, z_i)$, where $S_i$ follows the distribution $P_{S|X=x_i}$. If $V_{\bar X} > 0$, by virtue of the Berry-Esseen theorem,

$$\pi(x^k; z^k) = \mathbb{P}\left[\frac{1}{k}\sum_{i=1}^{k} d(S_i, z_i) > d\ \middle|\ X^k = x^k\right] \tag{177}$$
$$\le t. \tag{178}$$

If $V_{\bar X} = 0$, (178) holds trivially because then $\frac{1}{k}\sum_{i=1}^{k} d(S_i, z_i) \le d - \delta$ almost surely. Denote the following function:

$$R_{\bar X;\bar Z}(d) = \min_{P_{Z|X}:\ \mathbb{E}[\bar d(\bar X, Z)] \le d}\ D\left(P_{Z|X}\,\middle\|\,P_{\bar Z}\,\middle|\,P_{\bar X}\right), \tag{179}$$

where we used the usual notation for conditional relative entropy, $D(P_{Z|X}\|P_{\bar Z}|P_{\bar X}) = D(P_{Z|X}P_{\bar X}\|P_{\bar Z}P_{\bar X})$. By the type counting argument and by Taylor's theorem, for all $x^k \in \mathcal{T}_k$,

$$D\left(P_{Z'}\,\middle\|\,P_{Z^*}\times\cdots\times P_{Z^*}\right) \le k\,D\left(P_{\bar Z|\bar X}\,\middle\|\,P_{Z^*}\,\middle|\,P_{\bar X}\right) + O(\log k) \tag{180}$$
$$= k\,R_{\bar X;\bar Z}(d - \delta) + O(\log k) \tag{181}$$
$$= k\,R_{\bar X;\bar Z}(d) + k\,\lambda_{\bar X}\,\delta + O(\log k) \tag{182}$$
$$\le \sum_{i=1}^{k}\jmath_X(x_i, d) + k\,\lambda_{\bar X}\,\delta + O(\log k) \tag{183}$$
$$= \sum_{i=1}^{k}\jmath_X(x_i, d) + \lambda_{\bar X}\sqrt{k\,V_{\bar X}}\,Q^{-1}(t) + O(\log k), \tag{184}$$

where (180) is by type counting;

$$\lambda_{\bar X} = -R'_{\bar X;\bar Z}(d); \tag{185}$$

(182) is by Taylor's theorem, applicable because, with finite alphabets and a finite distortion measure, $R_{\bar X;\bar Z}(d)$ is differentiable with respect to $d$ up to any order [26]; (183) follows from the reasoning in [18, Lemma B.4]⁶; and (184) is by applying a Taylor expansion to $V_{\bar X}$ in the vicinity of $P_X$ and to $Q^{-1}(t)$ in the vicinity of $t$. Continuous differentiability of $V_{\bar X}$ as a function of $P_{\bar X}$ follows from assumption (iv) and

$$R_{\bar X;\bar Z}(d) = R_{\bar X}(d) + D\left(P_{\tilde Z^*}\,\middle\|\,P_{\bar Z}\right), \tag{186}$$

where $P_{\tilde Z^*}$ achieves $R_{\bar X}(d)$. Finally, for scalars $\mu$, $\gamma$ and for $v > 0$, observe the following:

$$\int_0^1 1\left\{\mu + \sqrt{v}\,Q^{-1}(t) > \gamma\right\} dt = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{\xi^2}{2}}\,1\left\{\mu + \sqrt{v}\,\xi > \gamma\right\} d\xi \tag{187, 188}$$
$$= \mathbb{P}\left[\mu + \sqrt{v}\,G > \gamma\right], \tag{189}$$

where $G \sim \mathcal{N}(0, 1)$. Juxtaposing (184) and (189), we have

$$\mathbb{P}\left[g_{\bar Z^k}(X^k, U) \ge \log\gamma_k,\ U \in \mathcal{I},\ X^k \in \mathcal{T}_k\right] \le \mathbb{P}\left[\sum_{i=1}^{k}\jmath_X(X_i, d) + \lambda_{\bar X}\sqrt{k V_{\bar X}}\,Q^{-1}(U) + O(\log k) > \log\gamma_k\right] \tag{190}$$
$$\le \mathbb{P}\left[\sum_{i=1}^{k}\jmath_X(X_i, d) + \lambda^*\sqrt{k V_X}\,G + O(\log k) > \log\gamma_k\right]. \tag{191}$$

By the choice of $\gamma_k$ in (175) and the Berry-Esseen theorem (Theorem 10), it follows that one can choose the $O(\log k)$ term in (175) so that the right side of (191) is upper-bounded by $\epsilon - (3 + 2|\mathcal{A}|)/\sqrt{k}$. Upon comparison of (191) and (172), we conclude that the ensemble-average excess-distortion probability is at most $\epsilon$, as desired.

⁵If the probability masses of $P_{\bar Z|\bar X}$ are not multiples of $\frac{1}{k}$, we take the type closest to $P_{\bar Z|\bar X}$ such that the distortion constraint is not violated.

⁶Lemma B.4 is proven for $\delta = 0$, but the reasoning still goes through for $\delta = O\!\left(\sqrt{\log k / k}\right)$.

ACKNOWLEDGMENT

The authors thank the anonymous reviewer for the remarkably thorough review, which is reflected in the final version.

REFERENCES

[1] V. Kostina and S. Verdú, "Nonasymptotic noisy lossy source coding," in Proc. IEEE Inf. Theory Workshop, Seville, Spain, Sep. 2013.
[2] R. Dobrushin and B. Tsybakov, "Information transmission with additional noise," IRE Trans. Inf. Theory, vol. 8, no. 5, Sep. 1962.
[3] T. Berger, Rate-Distortion Theory. Englewood Cliffs, NJ, USA: Prentice-Hall, 1971.

[4] H. S. Witsenhausen, "Indirect rate distortion problems," IEEE Trans. Inf. Theory, vol. 26, no. 5, Sep. 1980.
[5] D. Sakrison, "Source encoding in the presence of random disturbance," IEEE Trans. Inf. Theory, vol. 14, no. 1, Jan. 1968.
[6] J. K. Wolf and J. Ziv, "Transmission of noisy information to a noisy receiver with minimum distortion," IEEE Trans. Inf. Theory, vol. 16, no. 4, Jul. 1970.
[7] E. Ayanoğlu, "On optimal quantization of noisy sources," IEEE Trans. Inf. Theory, vol. 36, no. 6, Nov. 1990.
[8] Y. Ephraim and R. M. Gray, "A unified approach for encoding clean and noisy sources by means of waveform and autoregressive model vector quantization," IEEE Trans. Inf. Theory, vol. 34, no. 4, Jul. 1988.
[9] T. R. Fischer, J. D. Gibson, and B. Koo, "Estimation and noisy source coding," IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 1, Jan. 1990.
[10] T. Weissman and N. Merhav, "Tradeoffs between the excess-code-length exponent and the excess-distortion exponent in lossy source coding," IEEE Trans. Inf. Theory, vol. 48, no. 2, Feb. 2002.
[11] T. Weissman and N. Merhav, "On limited-delay lossy coding and filtering of individual sequences," IEEE Trans. Inf. Theory, vol. 48, no. 3, Mar. 2002.
[12] T. Weissman, "Universally attainable error exponents for rate-distortion coding of noisy sources," IEEE Trans. Inf. Theory, vol. 50, no. 6, Jun. 2004.
[13] T. A. Courtade and T. Weissman, "Multiterminal source coding under logarithmic loss," IEEE Trans. Inf. Theory, vol. 60, no. 1, Jan. 2014.
[14] N. Tishby, F. C. Pereira, and W. Bialek, "The information bottleneck method," in Proc. 37th Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, USA, Oct. 1999.
[15] V. Kostina and S. Verdú, "Fixed-length lossy compression in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 58, no. 6, Jun. 2012.
[16] A. Ingber and Y. Kochman, "The dispersion of lossy source coding," in Proc. Data Compress. Conf. (DCC), Snowbird, UT, USA, Mar. 2011.
[17] I. Csiszár, "On an extremum problem of information theory," Studia Scientiarum Mathematicarum Hungarica, vol. 9, no. 1, Jan. 1974.
[18] V. Kostina, "Lossy data compression: Nonasymptotic fundamental limits," Ph.D. dissertation, Dept. Elect. Eng., Princeton Univ., Princeton, NJ, USA, Sep. 2013.
[19] S. Verdú, "Non-asymptotic achievability bounds in multiuser information theory," in Proc. 50th Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, USA, Oct. 2012.
[20] Y. Polyanskiy, "6.441: Information theory lecture notes," Dept. Elect. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, 2012.
[21] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, 2nd ed. New York, NY, USA: Wiley, 1971.
[22] I. G. Shevtsova, "An improvement of convergence rate estimates in the Lyapunov theorem," Doklady Math., vol. 82, no. 3, 2010.
[23] I. G. Shevtsova, "On the absolute constants in the Berry-Esseen type inequalities for identically distributed summands," 2011. [Online].
[24] A. Ingber, I. Leibowitz, R. Zamir, and M. Feder, "Distortion lower bounds for finite dimensional joint source-channel coding," in Proc. IEEE Int. Symp. Inf. Theory, Toronto, ON, Canada, Jul. 2008.
[25] B. Yu and T. P. Speed, "A rate of convergence result for a universal D-semifaithful code," IEEE Trans. Inf. Theory, vol. 39, no. 3, May 1993.
[26] E.-H. Yang and Z. Zhang, "On the redundancy of lossy source coding with abstract alphabets," IEEE Trans. Inf. Theory, vol. 45, no. 4, May 1999.

Victoria Kostina joined Caltech as an Assistant Professor of Electrical Engineering in the fall of 2014.
She holds a Bachelor's degree from the Moscow Institute of Physics and Technology (2004), where she was affiliated with the Institute for Information Transmission Problems of the Russian Academy of Sciences, a Master's degree from the University of Ottawa (2006), and a PhD from Princeton University (2013). Her PhD dissertation on information-theoretic limits of lossy data compression received the Princeton Electrical Engineering Best Dissertation award. She is also a recipient of a Simons-Berkeley research fellowship (2015). Victoria Kostina's research spans information theory, coding, wireless communications and control.

Sergio Verdú received the Telecommunications Engineering degree from the Universitat Politècnica de Barcelona in 1980, and the Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1984. Since 1984 he has been a member of the faculty of Princeton University, where he is the Eugene Higgins Professor of Electrical Engineering and a member of the Program in Applied and Computational Mathematics.

Sergio Verdú is the recipient of the 2007 Claude E. Shannon Award and the 2008 IEEE Richard W. Hamming Medal. He is a member of the National Academy of Engineering and the National Academy of Sciences, and was awarded a Doctorate Honoris Causa from the Universitat Politècnica de Catalunya in 2005. He is a recipient of several paper awards from the IEEE: the 1992 Donald Fink Paper Award, the 1998 and 2011 Information Theory Paper Awards, an Information Theory Golden Jubilee Paper Award, the Leonard G. Abraham Prize, the 2006 Joint Communications/Information Theory Paper Award, and the 2009 Stephen O. Rice Prize from the IEEE Communications Society. In 1998, Cambridge University Press published his book Multiuser Detection, for which he received the Frederick E. Terman Award from the American Society for Engineering Education. Sergio Verdú served as President of the IEEE Information Theory Society in 1997, and on its Board of Governors. He has also served in various editorial capacities for the IEEE Transactions on Information Theory: Associate Editor (Shannon Theory; Book Reviews), Guest Editor of the Special Fiftieth Anniversary Commemorative Issue (published by IEEE Press as Information Theory: Fifty Years of Discovery), and member of the Executive Editorial Board. He is the founding Editor-in-Chief of Foundations and Trends in Communications and Information Theory.


An Extended Fano s Inequality for the Finite Blocklength Coding An Extended Fano s Inequality for the Finite Bloclength Coding Yunquan Dong, Pingyi Fan {dongyq8@mails,fpy@mail}.tsinghua.edu.cn Department of Electronic Engineering, Tsinghua University, Beijing, P.R.

More information

DISCRETE sources corrupted by discrete memoryless

DISCRETE sources corrupted by discrete memoryless 3476 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 8, AUGUST 2006 Universal Minimax Discrete Denoising Under Channel Uncertainty George M. Gemelos, Student Member, IEEE, Styrmir Sigurjónsson, and

More information

Lossy data compression: nonasymptotic fundamental limits

Lossy data compression: nonasymptotic fundamental limits Lossy data compression: nonasymptotic fundamental limits Victoria Kostina A Dissertation Presented to the Faculty of Princeton University in Candidacy for the Degree of Doctor of Philosophy Recommended

More information

SOURCE coding problems with side information at the decoder(s)

SOURCE coding problems with side information at the decoder(s) 1458 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 3, MARCH 2013 Heegard Berger Cascade Source Coding Problems With Common Reconstruction Constraints Behzad Ahmadi, Student Member, IEEE, Ravi Ton,

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

On Lossless Coding With Coded Side Information Daniel Marco, Member, IEEE, and Michelle Effros, Fellow, IEEE

On Lossless Coding With Coded Side Information Daniel Marco, Member, IEEE, and Michelle Effros, Fellow, IEEE 3284 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 7, JULY 2009 On Lossless Coding With Coded Side Information Daniel Marco, Member, IEEE, Michelle Effros, Fellow, IEEE Abstract This paper considers

More information

Channel Polarization and Blackwell Measures

Channel Polarization and Blackwell Measures Channel Polarization Blackwell Measures Maxim Raginsky Abstract The Blackwell measure of a binary-input channel (BIC is the distribution of the posterior probability of 0 under the uniform input distribution

More information

Soft-Output Trellis Waveform Coding

Soft-Output Trellis Waveform Coding Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca

More information

Channel Dispersion and Moderate Deviations Limits for Memoryless Channels

Channel Dispersion and Moderate Deviations Limits for Memoryless Channels Channel Dispersion and Moderate Deviations Limits for Memoryless Channels Yury Polyanskiy and Sergio Verdú Abstract Recently, Altug and Wagner ] posed a question regarding the optimal behavior of the probability

More information

ONE approach to improving the performance of a quantizer

ONE approach to improving the performance of a quantizer 640 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 Quantizers With Unim Decoders Channel-Optimized Encoders Benjamin Farber Kenneth Zeger, Fellow, IEEE Abstract Scalar quantizers

More information

YAMAMOTO [1] considered the cascade source coding

YAMAMOTO [1] considered the cascade source coding IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 6, JUNE 2012 3339 Cascade Triangular Source Coding With Side Information at the First Two Nodes Haim H Permuter, Member, IEEE, Tsachy Weissman, Senior

More information

Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko

Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko IEEE SIGNAL PROCESSING LETTERS, VOL. 17, NO. 12, DECEMBER 2010 1005 Optimal Mean-Square Noise Benefits in Quantizer-Array Linear Estimation Ashok Patel and Bart Kosko Abstract A new theorem shows that

More information

RECENT advances in technology have led to increased activity

RECENT advances in technology have led to increased activity IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 49, NO 9, SEPTEMBER 2004 1549 Stochastic Linear Control Over a Communication Channel Sekhar Tatikonda, Member, IEEE, Anant Sahai, Member, IEEE, and Sanjoy Mitter,

More information

Rényi Information Dimension: Fundamental Limits of Almost Lossless Analog Compression

Rényi Information Dimension: Fundamental Limits of Almost Lossless Analog Compression IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 8, AUGUST 2010 3721 Rényi Information Dimension: Fundamental Limits of Almost Lossless Analog Compression Yihong Wu, Student Member, IEEE, Sergio Verdú,

More information

Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Nonstationary/Unstable Linear Systems

Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Nonstationary/Unstable Linear Systems 6332 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 10, OCTOBER 2012 Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Nonstationary/Unstable Linear

More information

Graph Coloring and Conditional Graph Entropy

Graph Coloring and Conditional Graph Entropy Graph Coloring and Conditional Graph Entropy Vishal Doshi, Devavrat Shah, Muriel Médard, Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology Cambridge,

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

1590 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 6, JUNE Source Coding, Large Deviations, and Approximate Pattern Matching

1590 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 6, JUNE Source Coding, Large Deviations, and Approximate Pattern Matching 1590 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 6, JUNE 2002 Source Coding, Large Deviations, and Approximate Pattern Matching Amir Dembo and Ioannis Kontoyiannis, Member, IEEE Invited Paper

More information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Jialing Liu liujl@iastate.edu Sekhar Tatikonda sekhar.tatikonda@yale.edu Nicola Elia nelia@iastate.edu Dept. of

More information

Coding for Discrete Source

Coding for Discrete Source EGR 544 Communication Theory 3. Coding for Discrete Sources Z. Aliyazicioglu Electrical and Computer Engineering Department Cal Poly Pomona Coding for Discrete Source Coding Represent source data effectively

More information

UC Riverside UC Riverside Previously Published Works

UC Riverside UC Riverside Previously Published Works UC Riverside UC Riverside Previously Published Works Title Soft-decision decoding of Reed-Muller codes: A simplied algorithm Permalink https://escholarship.org/uc/item/5v71z6zr Journal IEEE Transactions

More information

On Common Information and the Encoding of Sources that are Not Successively Refinable

On Common Information and the Encoding of Sources that are Not Successively Refinable On Common Information and the Encoding of Sources that are Not Successively Refinable Kumar Viswanatha, Emrah Akyol, Tejaswi Nanjundaswamy and Kenneth Rose ECE Department, University of California - Santa

More information

The Duality Between Information Embedding and Source Coding With Side Information and Some Applications

The Duality Between Information Embedding and Source Coding With Side Information and Some Applications IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 5, MAY 2003 1159 The Duality Between Information Embedding and Source Coding With Side Information and Some Applications Richard J. Barron, Member,

More information

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN

AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN AN INFORMATION THEORY APPROACH TO WIRELESS SENSOR NETWORK DESIGN A Thesis Presented to The Academic Faculty by Bryan Larish In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

More information

SUCCESSIVE refinement of information, or scalable

SUCCESSIVE refinement of information, or scalable IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 1983 Additive Successive Refinement Ertem Tuncel, Student Member, IEEE, Kenneth Rose, Fellow, IEEE Abstract Rate-distortion bounds for

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

Gaussian Estimation under Attack Uncertainty

Gaussian Estimation under Attack Uncertainty Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

1 Background on Information Theory

1 Background on Information Theory Review of the book Information Theory: Coding Theorems for Discrete Memoryless Systems by Imre Csiszár and János Körner Second Edition Cambridge University Press, 2011 ISBN:978-0-521-19681-9 Review by

More information

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels

Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Optimal Natural Encoding Scheme for Discrete Multiplicative Degraded Broadcast Channels Bike ie, Student Member, IEEE and Richard D. Wesel, Senior Member, IEEE Abstract Certain degraded broadcast channels

More information

EE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet.

EE376A - Information Theory Final, Monday March 14th 2016 Solutions. Please start answering each question on a new page of the answer booklet. EE376A - Information Theory Final, Monday March 14th 216 Solutions Instructions: You have three hours, 3.3PM - 6.3PM The exam has 4 questions, totaling 12 points. Please start answering each question on

More information

Duality Between Channel Capacity and Rate Distortion With Two-Sided State Information

Duality Between Channel Capacity and Rate Distortion With Two-Sided State Information IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 48, NO. 6, JUNE 2002 1629 Duality Between Channel Capacity Rate Distortion With Two-Sided State Information Thomas M. Cover, Fellow, IEEE, Mung Chiang, Student

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

Remote Source Coding with Two-Sided Information

Remote Source Coding with Two-Sided Information Remote Source Coding with Two-Sided Information Basak Guler Ebrahim MolavianJazi Aylin Yener Wireless Communications and Networking Laboratory Department of Electrical Engineering The Pennsylvania State

More information

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 5, MAY 2009 2037 The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani Abstract The capacity

More information

Coding into a source: an inverse rate-distortion theorem

Coding into a source: an inverse rate-distortion theorem Coding into a source: an inverse rate-distortion theorem Anant Sahai joint work with: Mukul Agarwal Sanjoy K. Mitter Wireless Foundations Department of Electrical Engineering and Computer Sciences University

More information

19. Channel coding: energy-per-bit, continuous-time channels

19. Channel coding: energy-per-bit, continuous-time channels 9. Channel coding: energy-per-bit, continuous-time channels 9. Energy per bit Consider the additive Gaussian noise channel: Y i = X i + Z i, Z i N ( 0, ). (9.) In the last lecture, we analyzed the maximum

More information

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes

Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes 1 Zheng Wang, Student Member, IEEE, Jie Luo, Member, IEEE arxiv:0808.3756v1 [cs.it] 27 Aug 2008 Abstract We show that

More information

SPATIAL point processes are natural models for signals

SPATIAL point processes are natural models for signals I TRANSACTIONS ON INFORMATION THORY, VOL. 45, NO. 1, JANUARY 1999 177 Asymptotic Distributions for the erformance Analysis of Hypothesis Testing of Isolated-oint-enalization oint rocesses Majeed M. Hayat,

More information

THE capacity of the exponential server timing channel

THE capacity of the exponential server timing channel IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2697 Capacity of Queues Via Point-Process Channels Rajesh Sundaresan, Senior Member, IEEE, and Sergio Verdú, Fellow, IEEE Abstract A conceptually

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Guesswork Subject to a Total Entropy Budget

Guesswork Subject to a Total Entropy Budget Guesswork Subject to a Total Entropy Budget Arman Rezaee, Ahmad Beirami, Ali Makhdoumi, Muriel Médard, and Ken Duffy arxiv:72.09082v [cs.it] 25 Dec 207 Abstract We consider an abstraction of computational

More information

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood

Optimality of Walrand-Varaiya Type Policies and. Approximation Results for Zero-Delay Coding of. Markov Sources. Richard G. Wood Optimality of Walrand-Varaiya Type Policies and Approximation Results for Zero-Delay Coding of Markov Sources by Richard G. Wood A thesis submitted to the Department of Mathematics & Statistics in conformity

More information

Convexity/Concavity of Renyi Entropy and α-mutual Information

Convexity/Concavity of Renyi Entropy and α-mutual Information Convexity/Concavity of Renyi Entropy and -Mutual Information Siu-Wai Ho Institute for Telecommunications Research University of South Australia Adelaide, SA 5095, Australia Email: siuwai.ho@unisa.edu.au

More information

EE-597 Notes Quantization

EE-597 Notes Quantization EE-597 Notes Quantization Phil Schniter June, 4 Quantization Given a continuous-time and continuous-amplitude signal (t, processing and storage by modern digital hardware requires discretization in both

More information

THE information capacity is one of the most important

THE information capacity is one of the most important 256 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 1, JANUARY 1998 Capacity of Two-Layer Feedforward Neural Networks with Binary Weights Chuanyi Ji, Member, IEEE, Demetri Psaltis, Senior Member,

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 5177 The Distributed Karhunen Loève Transform Michael Gastpar, Member, IEEE, Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli,

More information

WE study the capacity of peak-power limited, single-antenna,

WE study the capacity of peak-power limited, single-antenna, 1158 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 3, MARCH 2010 Gaussian Fading Is the Worst Fading Tobias Koch, Member, IEEE, and Amos Lapidoth, Fellow, IEEE Abstract The capacity of peak-power

More information

FEEDBACK does not increase the capacity of a discrete

FEEDBACK does not increase the capacity of a discrete 1 Sequential Differential Optimization of Incremental Redundancy Transmission Lengths: An Example with Tail-Biting Convolutional Codes Nathan Wong, Kasra Vailinia, Haobo Wang, Sudarsan V. S. Ranganathan,

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Joint Universal Lossy Coding and Identification of Stationary Mixing Sources With General Alphabets Maxim Raginsky, Member, IEEE

Joint Universal Lossy Coding and Identification of Stationary Mixing Sources With General Alphabets Maxim Raginsky, Member, IEEE IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 5, MAY 2009 1945 Joint Universal Lossy Coding and Identification of Stationary Mixing Sources With General Alphabets Maxim Raginsky, Member, IEEE Abstract

More information

Lecture 6 Channel Coding over Continuous Channels

Lecture 6 Channel Coding over Continuous Channels Lecture 6 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 9, 015 1 / 59 I-Hsiang Wang IT Lecture 6 We have

More information