Fundamental Limits of Compressed Sensing under Optimal Quantization


Alon Kipnis, Galen Reeves, Yonina C. Eldar and Andrea J. Goldsmith
Department of Electrical Engineering, Stanford University; Department of Electrical and Computer Engineering and Department of Statistical Science, Duke University; Department of Electrical Engineering, Technion - Israel Institute of Technology

Abstract: We consider the problem of recovering a sparse vector from a quantized or a lossy compressed version of its noisy random linear projections. Under the replica symmetry postulation, we derive a single-letter expression for the minimal mean squared error (MMSE) as a function of the sampling ratio, the sparsity ratio, the noise intensity and the total number of bits in the quantized representation. This expression describes the excess distortion incurred in encoding the source vector from its noisy random linear projections in lieu of the full source information.

I. INTRODUCTION

Starting from [1] and [2], a tremendous amount of attention has been given to the compressed sensing (CS) setting, in which a sparse vector is recovered from its noisy random linear projections. The main principle in CS is that a relatively small number of random linear projections suffices to represent the source, provided it has only a few non-zero entries in some basis [3]. The fact that sparse sources possess such a low-dimensional representation justifies the "compressed" part of the term CS. Nevertheless, reducing dimension does not yet provide compression in the information-theoretic sense, since it is still required to quantize this low-dimensional representation, i.e., to map it to a finite alphabet set. Arguably, any practical digital implementation of a system based on CS is subject to this quantization constraint. This paper considers the MMSE that can be attained in CS using any form of quantization of the noisy projections, subject only to a bit per source dimension constraint.

Previous works addressing the effects of quantization in CS are usually limited to particular forms of quantization [4], [5]. Consequently, these results do not consider the fundamental tradeoffs between the system parameters and the overall number of bits in the resulting quantized representation. Other approaches, which model quantization as additive random noise [6], [7], [8], lack theoretical justification and disregard the structure present in quantization techniques [9].

In this paper we consider the MMSE in estimating a sparse n-dimensional source vector $X^n$ from a quantized, encoded, or lossy compressed version of its observation vector $Y^m$, where the relation between the two is given by

$$Y^m = \sqrt{\gamma}\, H X^n + W^m. \qquad (1)$$

Here the random entries of the sampling matrix H are taken from a zero-mean i.i.d. distribution of variance 1/n, and $W^m$ is a unit-variance white Gaussian noise vector. Therefore, the signal-to-noise ratio (SNR) in the channel (1) equals $\gamma$. We are interested in the minimal MSE distortion that can be attained using any recovery technique from any form of quantization, subject only to the bit constraint R. We focus on the limiting case where m and n go to infinity while m/n converges to a constant sampling ratio $\rho$. This limiting situation is henceforth referred to as the large system limit. It is moreover assumed that each entry of the source is taken from a Bernoulli-Gauss distribution with $P(X_i \neq 0) = p$, where p is denoted the sparsity ratio.
While most of the results in this paper can be extended to any mixture of discrete and absolutely continuous distributions, this relatively simple Bernoulli-Gauss model captures many of the interesting phenomena that arise by deviating from the fully Gaussian setting.

We analyze the MMSE in the setting above in the large system limit under the replica method [10]. One of the longstanding problems with analysis based on the replica method has been that it relies on certain key assumptions, most notably the assumption of replica symmetry, that are unproven in the context of compressed sensing. However, recent work has made significant progress in showing, often via very different methods, that many of the properties predicted using the replica technique are correct [11], [12], [13], [14], [15]. In particular, [14] characterizes the asymptotic mutual information and MMSE under very mild technical conditions. Shortly after [14] appeared, a similar result was obtained in [15] using a very different proof technique. Beyond the precise characterization of the mutual information and MMSE obtained in [14], the current paper requires two further properties, namely the asymptotic decoupling of the posterior distribution and its description by a Gaussian channel. While these properties currently rely on the assumptions of the replica method [16], a weak form of decoupling is proved in [14], and there is hope that this result can be strengthened to the form of decoupling needed in this paper.

Our main result shows that under the aforementioned assumed properties, the MMSE in CS under optimal quantization is characterized by a single-letter expression, which is a function of $\rho$, p, $\gamma$ and R. This expression is equivalent to the MMSE distortion in estimating $X^n$ from any rate-R encoded version of its observation through a scalar Gaussian channel. This distortion is known as the indirect distortion-rate function (idrf) of $X^n$ given the scalar channel output, and can be obtained by optimization over joint probability distributions subject to a mutual information constraint [17, Ch. 3.5], [18]. As the code rate R goes to infinity, the idrf converges monotonically to the expression for the MMSE in estimating $X^n$ from $Y^m$ derived by the replica method in [16], which is known to be correct for Gaussian sampling matrices [14].

We note that in the case where the SNR is high and $\rho$ is such that $X^n$ can be recovered from $Y^m$ with high probability, the optimal quantizer may first recover $X^n$ and then quantize it in an optimal way. As a result, the MMSE in this case coincides with the (direct) DRF of $X^n$ [19]. It is therefore only interesting to consider the optimal quantization problem when the SNR is low or when the sampling ratio does not permit exact recovery before quantization is taken into account. The critical sampling ratio $\rho$ that allows exact recovery in the noiseless case, or leads to a bounded noise sensitivity in the noisy case, is known to be the Rényi information dimension of the input vector [20], [21]. An interesting question that arises from our quantized CS setting is whether this critical sampling ratio changes under the constraint of quantization at rate R; namely, whether, as the SNR goes to infinity, the DRF of the source at rate R can be attained using a sampling ratio smaller than the Rényi information dimension. If proven to be right, then the new critical sampling ratio would be the CS equivalent of the DRF-attaining sub-Nyquist sampling rate derived in [22]. We will consider this question in our future work.

The rest of this paper is organized as follows. In Section II we define our source coding problem. Our main results are given in Section III. Concluding remarks are provided in Section IV.

II. PROBLEM FORMULATION

We consider the source coding problem described in Fig. 1: each entry of the source vector $X^n$ is taken from the distribution $P_X$ with density

$$P_X(x) = (1-p)\,\delta_0(x) + p\,\phi(x),$$

where $\delta_0$ is the Dirac distribution of unit mass concentrated at the origin and $\phi(x)$ is the standard normal density function. The observation vector $Y^m$ is a noisy random linear projected version of the source as in (1). We further assume that the observation vector $Y^m \in \mathbb{R}^m$ is mapped by an encoder, or quantizer, to an element U of the set $\{0,1\}^{nR}$. The decoder, or estimator, upon receiving U, provides a source reconstruction sequence $\hat{X}^n \in \mathbb{R}^n$. The distortion between the source realization $X^n$ and its reconstruction $\hat{X}^n$ is the square of the normalized Euclidean norm of their difference.

[Fig. 1: Source coding system model: recovering $X^n$ from a compressed version of its noisy random linear projections. The dashed line indicates that the sampling matrix is available both to the encoder and the decoder.]

Given a specific encoding scheme $g : \mathbb{R}^m \to \{0,1\}^{nR}$, denote by $D_g(R)$ the expected distortion in recovering $X^n$ from $g(Y^m)$ as a function of the code rate R:

$$D_g(R) \triangleq \frac{1}{n}\,\mathbb{E}\,\big\|X^n - \mathbb{E}[X^n \mid g(Y^m)]\big\|^2 = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big(X_i - \mathbb{E}[X_i \mid g(Y^m)]\big)^2. \qquad (2)$$

The problem we consider is the minimal value of $D_g(R)$ taken over all rate-R encoders g. This problem corresponds to the indirect, or remote, source coding problem of $X^n$ from $Y^m$ [17, Ch. 3.5].
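To make the measurement model concrete, here is a short Python sketch (our own illustration, not part of the paper) that draws one realization of the Bernoulli-Gauss source and of the channel (1); the function name simulate_model and all parameter values are arbitrary choices made for this example.

```python
import numpy as np

def simulate_model(n=2000, rho=0.5, p=0.3, gamma=100.0, seed=0):
    """Draw one realization of model (1): Y = sqrt(gamma) * H X + W."""
    rng = np.random.default_rng(seed)
    m = int(rho * n)                                   # sampling ratio rho = m/n
    # Bernoulli-Gauss source: each entry is nonzero with probability p, standard normal if nonzero.
    support = rng.random(n) < p
    X = np.where(support, rng.standard_normal(n), 0.0)
    # Sampling matrix with i.i.d. zero-mean entries of variance 1/n, and unit-variance noise.
    H = rng.standard_normal((m, n)) / np.sqrt(n)
    W = rng.standard_normal(m)
    Y = np.sqrt(gamma) * H @ X + W
    return X, H, Y

if __name__ == "__main__":
    X, H, Y = simulate_model()
    # The distortion (2) of the trivial estimator "reconstruct zero" is the per-entry source power, about p.
    print("empirical E[X_i^2] =", np.mean(X**2))
```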
The minimal distortion in (2) is denoted the indirect DRF, defined by

$$D_{X^n|Y^m}(R) \triangleq \inf_g D_g(R),$$

where the minimization is over all encoders g of the form $\mathbb{R}^m \to \{0,1\}^{nR}$ and decoders of the form $\{0,1\}^{nR} \to \mathbb{R}^n$.

III. OPTIMAL SOURCE CODING

In this section we characterize the idrf of $X^n$ given $Y^m$ by providing positive and negative coding statements with respect to a particular single-letter expression. These statements are based on the following two predictions of the replica method from [16]:

(A1) Single-letter posterior: The conditional pdf of the i-th coordinate of $X^n$, given the vector of observations $Y^m$, satisfies $P_{X_i \mid Y^m} \to P_{X \mid Z}$ in the large system limit. Here $P_{X \mid Z}$ is the conditional distribution of a random variable X distributed according to $P_X$ given

$$Z = \sqrt{\gamma\eta}\, X + W, \qquad (3)$$

where $W \sim \mathcal{N}(0,1)$ is independent of X. The parameter $\eta \in (0,1]$ satisfies the fixed-point equation

$$\eta = \frac{1}{1 + \frac{\gamma}{\rho}\,\mathrm{mmse}(\gamma\eta)}, \qquad (4)$$

where

$$\mathrm{mmse}(\gamma) \triangleq \mathbb{E}\big(X - \mathbb{E}[X \mid \sqrt{\gamma}\,X + W]\big)^2 \qquad (5)$$

is the minimal MSE in estimating X through a scalar AWGN channel of SNR $\gamma$. In the case of multiple solutions to (4), $\eta$ is chosen to minimize $I(P_{X,Z}) + \frac{\rho}{2}\,(\eta - 1 - \log\eta)$.
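The fixed-point equation (4) is easy to evaluate numerically. The sketch below (our own illustration, not part of the paper; function names and parameters are arbitrary) computes mmse(s) of (5) for the Bernoulli-Gauss prior by one-dimensional numerical integration, using the posterior-mean expression derived in Appendix A, and then iterates (4). Note that when (4) has several solutions, the one minimizing the potential above should be selected; this simple iteration does not perform that check.

```python
import numpy as np
from scipy import integrate

def bg_mmse(s, p):
    """mmse(s) in (5) for the Bernoulli-Gauss prior, i.e. X observed through sqrt(s)*X + W."""
    if s <= 0:
        return p                                   # no observation: MMSE equals Var(X) = p
    phi = lambda z, v: np.exp(-z**2 / (2*v)) / np.sqrt(2*np.pi*v)
    def integrand(z):
        pz = (1 - p)*phi(z, 1.0) + p*phi(z, 1.0 + s)       # marginal of Z (Appendix A)
        # E[X | Z = z], written in the numerically stable form of Appendix A
        xhat = z*np.sqrt(s)/(1.0 + s) / (1.0 + (1 - p)/p*np.sqrt(1.0 + s)*np.exp(-s*z**2/(2*(1.0 + s))))
        return xhat**2 * pz
    second_moment_of_estimate, _ = integrate.quad(integrand, -np.inf, np.inf)
    return p - second_moment_of_estimate                    # E[X^2] - E[(E[X|Z])^2]

def solve_eta(rho, gamma, p, iters=200, tol=1e-10):
    """Iterate eta <- 1 / (1 + (gamma/rho) * mmse(gamma*eta)), i.e. the fixed point (4)."""
    eta = 1.0
    for _ in range(iters):
        new = 1.0 / (1.0 + (gamma / rho) * bg_mmse(gamma * eta, p))
        if abs(new - eta) < tol:
            return new
        eta = new
    return eta

if __name__ == "__main__":
    eta = solve_eta(rho=0.5, gamma=100.0, p=0.3)
    print("eta =", eta, "  mmse(gamma*eta) =", bg_mmse(100.0*eta, 0.3))
```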

(A2) Decoupling: For an arbitrary but fixed number L of input elements $X_{n_1},\ldots,X_{n_L}$, in the large system limit we have $P_{X_{n_1},\ldots,X_{n_L} \mid Y^m} \to P_{X\mid Z}^{\otimes L}$, i.e., the posterior converges to the product of L independent copies of $P_{X \mid Z}$, where X and Z are distributed as in (A1).

A. A Single-Letter Expression

In order to characterize $D_{X^n|Y^m}(R)$, we consider the scalar Gaussian channel (3) and denote by $D_{X|Z}(R)$ the minimal value of the following problem:

$$\inf_{P_{Z,\hat X}\,:\, I(P_{Z,\hat X}) \le R} \mathbb{E}\big(X - \hat X\big)^2, \qquad (6)$$

where the minimization is over all joint probability distributions of Z and $\hat X$ whose mutual information does not exceed R and whose marginal of Z coincides with the distribution at the output of the channel (3) with input distributed as $P_X$. The function $D_{X|Z}(R)$ is denoted the (information) idrf of the process $X^n$ given $Z^n$ [17, Ch. 3.5], where the latter is obtained by n independent uses of the channel (3).

Our main result asserts that the behavior of $D_{X^n|Y^m}(R)$ in the large system limit is described by the function $D_{X|Z}(R)$. The precise statement of this result is given by the following two theorems.

Theorem 3.1 (achievability): Under (A1) and (A2), for any $\varepsilon > 0$ there exists n large enough and an encoder $g : \mathbb{R}^m \to \{0,1\}^{nR}$ such that $\frac{1}{n}\mathbb{E}\big\|X^n - \mathbb{E}[X^n \mid g(Y^m)]\big\|^2$ does not exceed $D_{X|Z}(R) + \varepsilon$.

Sketch of proof: The existence of the encoder g is shown using a random coding argument, where the codebook is generated according to the joint scalar distribution which attains (6). It follows from (A2) that in the large system limit, a random code designed with respect to $P_{X|Z}$ asymptotically leads to the same distortion even if it operates on observations generated according to $P_{X|Y^m}$. The transition from length-L blocks to the entire source realization $X^n$ is immediate, since the same code is valid for all length-L blocks. The details are given in Appendix B.

Theorem 3.2 (converse): Under (A1) and (A2), for any $L, k \in \mathbb{N}$, deterministic encoder $g : \mathbb{R}^m \to \{0,1\}^{LR}$ and $\varepsilon > 0$, there exists $n_0$ such that

$$\frac{1}{L}\,\mathbb{E}\,\big\|X_{k+1}^{k+L} - \mathbb{E}\big[X_{k+1}^{k+L} \mid g(Y^m)\big]\big\|^2 > D_{X|Z}(R) - \varepsilon$$

for all $n > n_0$. In words, the average distortion in estimating any block of length L of the source from the observation vector $Y^m$ is bounded from below by the single-letter expression $D_{X|Z}(R)$, provided n is large enough.

Sketch of proof: The main idea is to map the distortion over each length-L block to a particular distortion measure defined only in terms of length-m sequences $Y^m$. We then use Shannon's source coding converse to obtain a lower bound on this distortion, expressed in terms of joint probability distributions over m-blocks. Once this lower bound is established, we use (A2) to conclude that it converges to $D_{X|Z}(R)$ in the large system limit. The full proof can be found in Appendix B.

Before proceeding to discuss Theorems 3.1 and 3.2, we first provide a procedure for evaluating the single-letter expression $D_{X|Z}(R)$. In the fully Gaussian case of p = 1, the function $D_{X|Z}(R)$ can be obtained in closed form as [23, Eq. 3]

$$D_{X|Z}(R) = \frac{1}{1+\gamma\eta_G} + \frac{\gamma\eta_G}{1+\gamma\eta_G}\, 2^{-2R},$$

where $\eta_G = \eta_G(\rho,\gamma)$ is the unique solution to (4) and can be found in [2, Eq. 22]. Aside from this degenerate case, it is in general impossible to obtain $D_{X|Z}(R)$ in closed form, and we therefore turn to a procedure for evaluating it numerically. It is well known [24], [25] that an alternative representation of the minimization problem (6) is obtained by first introducing the distortion measure

$$d(z, \hat x) = \mathbb{E}\big[(X - \hat x)^2 \mid Z = z\big], \qquad (7)$$

and then minimizing $\mathbb{E}\, d(Z, \hat X)$ subject to the same mutual information constraint. The latter is a standard DRF with respect to an i.i.d. source distributed as Z. This DRF can be evaluated, after alphabet discretization, using the Blahut-Arimoto algorithm [26], [27].
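A minimal sketch of this numerical evaluation is given below (our own illustration, not part of the paper): the channel output Z of (3) is discretized onto a grid, the amended distortion (7) is formed from the closed-form conditional moments derived in Appendix A, and the Blahut-Arimoto iteration just mentioned is swept over a slope parameter to trace points of the curve $D_{X|Z}(R)$. The grid ranges, the value of the parameter eta, and all function names are arbitrary choices; eta would normally be obtained from the fixed-point equation (4).

```python
import numpy as np

def phi(z, v):
    return np.exp(-z**2 / (2*v)) / np.sqrt(2*np.pi*v)

def amended_distortion(p, snr, z, xhat):
    """d(z, xhat) of (7) on a grid, using the conditional moments of Appendix A."""
    pz    = (1 - p)*phi(z, 1.0) + p*phi(z, 1.0 + snr)
    post1 = p*phi(z, 1.0 + snr) / pz                               # P(X != 0 | Z = z)
    ex    = post1 * np.sqrt(snr)*z/(1.0 + snr)                     # E[X   | Z = z]
    ex2   = post1 * (1.0/(1.0 + snr) + snr*z**2/(1.0 + snr)**2)    # E[X^2 | Z = z]
    d     = ex2[:, None] - 2.0*xhat[None, :]*ex[:, None] + xhat[None, :]**2
    return pz, d

def blahut_arimoto(pz_weights, d, beta, iters=300):
    """One point of the rate-distortion curve of a discretized source with distortion matrix d."""
    pz = pz_weights / pz_weights.sum()
    q = np.full(d.shape[1], 1.0/d.shape[1])                        # output marginal q(xhat)
    for _ in range(iters):
        cond = q[None, :]*np.exp(-beta*d)
        cond /= cond.sum(axis=1, keepdims=True)                    # q(xhat | z)
        q = pz @ cond
    D = np.sum(pz[:, None]*cond*d)
    ratio = np.log2(np.maximum(cond, 1e-300) / np.maximum(q[None, :], 1e-300))
    R = np.sum(pz[:, None]*cond*ratio)
    return R, D

if __name__ == "__main__":
    p, gamma, eta = 0.3, 100.0, 0.05          # eta is a placeholder; it should solve (4)
    z    = np.linspace(-12.0, 12.0, 601)
    xhat = np.linspace(-3.0, 3.0, 241)
    pz, d = amended_distortion(p, gamma*eta, z, xhat)
    for beta in (0.5, 2.0, 8.0):              # sweep the slope parameter to trace D_{X|Z}(R)
        R, D = blahut_arimoto(pz, d, beta)
        print(f"R = {R:.3f} bits, D = {D:.4f}")
```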
B. Discussion

Theorems 3.1 and 3.2 establish the function $D_{X|Z}(R)$ as the minimal distortion that can be obtained in estimating any block of the source from a quantized version of its noisy random linear projections. The achievability theorem is relatively standard and can be anticipated based on the single-letter expression for the posterior under (A1). The converse part, however, only guarantees a lower bound on the distortion in estimating sub-blocks of a particular length of the source, and only in the large system limit as the posterior distribution decouples. In fact, Thm. 3.2 leaves open the possibility that for systems in which $P_{X|Y^m}$ is far from $P_{X|Z}$, there exists a quantization scheme that attains distortion smaller than $D_{X|Z}(R)$.

We also note that the encoder that attains the idrf, as described in the proof of Thm. 3.1, first forms the MMSE estimate of a finite block of the source from $Y^m$ and only then uses random coding to quantize this estimate. Arguably, this estimation before encoding may be impractical in applications, due to a lack of computational resources or of knowledge of the sampling matrix H at the encoder.

The minimal excess distortion incurred only due to quantization in CS can be studied by comparing $D_{X|Z}(R)$ to the MMSE in estimating $X^n$ from $Y^m$ without quantization; under (A1), the latter is given by $\mathrm{mmse}(\gamma\eta)$ [16]. By comparing $D_{X|Z}(R)$ to the DRF of $X^n$ at rate R, i.e., the minimal distortion in direct encoding of $X^n$, we observe the additional distortion due only to the noise and the random linear projections. This comparison is illustrated in Fig. 2.

[Fig. 2: Normalized idrf $D_{X|Z}(R)/p$ versus the sampling ratio $\rho$ for R = 0.25 and R = 0.75 bits per source dimension, p = 0.3, and $\gamma$ = 100. The dashed curve is the MMSE without quantization, $\mathrm{mmse}(\gamma\eta)$, and the dotted horizontal line is the direct DRF of the sparse source.]

IV. CONCLUSIONS

We considered the problem of recovering a Bernoulli-Gauss vector from a quantized or lossy compressed version of its noisy random linear projections under MSE distortion. Based on two assumptions that follow from the replica method, we provided a single-letter expression for the minimal distortion in the large system limit using any form of encoding of the noisy projections subject to a bit constraint. This single-letter expression therefore provides the minimal distortion in encoding a Bernoulli-Gauss vector to a prescribed number of bits by observing its noisy random projections. In particular, it describes the excess distortion incurred in encoding the source vector from its noisy random linear projections in lieu of the full source information.

Our results leave a few open questions which will be addressed in our future work. First, the lack of an analytic expression for the minimal distortion does not allow an analysis of the effect of quantization on the optimal phase transition in CS. In addition, the encoding strategy presented in our achievability proof estimates each entry of the source before encoding it, and may be impractical in some CS applications where estimation before quantization is impossible. Finally, our converse result leaves the possibility that encoding strategies that estimate source blocks of size increasing with the system dimensions perform better than the expression derived in this paper.

V. ACKNOWLEDGMENTS

The authors would like to thank U. Erez and Y. Kochman for helpful discussions regarding the problem formulation. This work is supported in part by...

REFERENCES

[1] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, Feb. 2006.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, 2006.
[3] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[4] R. Baraniuk, S. Foucart, D. Needell, Y. Plan, and M. Wootters, "Exponential decay of reconstruction error from binary measurements of sparse signals," arXiv preprint, 2014.
[5] P. T. Boufounos and R. G. Baraniuk, "1-bit compressive sensing," in Information Sciences and Systems (CISS), 42nd Annual Conference on. IEEE, 2008.
[6] V. K. Goyal, A. K. Fletcher, and S. Rangan, "Compressive sampling and lossy compression," IEEE Signal Process. Mag., vol. 25, no. 2, 2008.
[7] A. K. Fletcher, S. Rangan, and V. K. Goyal, "On the rate-distortion performance of compressed sensing," in Acoustics, Speech and Signal Processing (ICASSP), 2007 IEEE International Conference on, vol. 3. IEEE, 2007, pp. III-885.
[8] G. Coluccia, A. Roumy, and E. Magli, "Operational rate-distortion performance of single-source and distributed compressed sensing," IEEE Trans. Commun., vol. 62, no. 6, 2014.
[9] R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Trans. Inf. Theory, vol. 44, no. 6, 1998.
[10] M. Mézard and A. Montanari, Information, Physics, and Computation, ser. Oxford Graduate Texts. OUP Oxford, 2009.
[11] S. B. Korada and N. Macris, "Tight bounds on the capacity of binary input random CDMA systems," IEEE Trans. Inf. Theory, vol. 56, no. 11, 2010.
[12] D. L. Donoho, A. Javanmard, and A. Montanari, "Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing," in Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on. IEEE, 2012.
[13] W. Huleihel and N. Merhav, "Asymptotic MMSE analysis under sparse representation modeling," CoRR, 2013.
[14] G. Reeves and H. D. Pfister, "The replica-symmetric prediction for compressed sensing with Gaussian matrices is exact," CoRR, 2016.
[15] J. Barbier, M. Dia, N. Macris, and F. Krzakala, "The mutual information in random linear estimation," CoRR, 2016.
[16] D. Guo and S. Verdú, "Randomly spread CDMA: asymptotics via statistical physics," IEEE Trans. Inf. Theory, vol. 51, no. 6, June 2005.
[17] T. Berger, Rate-Distortion Theory: A Mathematical Basis for Data Compression. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[18] R. Dobrushin and B. Tsybakov, "Information transmission with additional noise," IRE Trans. Inform. Theory, vol. 8, no. 5, 1962.
[19] C. Weidmann and M. Vetterli, "Rate distortion behavior of sparse sources," IEEE Trans. Inf. Theory, vol. 58, no. 8, 2012.
[20] Y. Wu and S. Verdú, "Rényi information dimension: fundamental limits of almost lossless analog compression," IEEE Trans. Inf. Theory, vol. 56, no. 8, 2010.
[21] Y. Wu and S. Verdú, "Optimal phase transitions in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 10, Oct. 2012.
[22] A. Kipnis, Y. C. Eldar, and A. J. Goldsmith, "Fundamental distortion limits of analog-to-digital compression," 2016.
[23] A. Kipnis, A. J. Goldsmith, Y. C. Eldar, and T. Weissman, "Distortion rate function of sub-Nyquist sampled Gaussian sources," IEEE Trans. Inf. Theory, vol. 62, no. 1, Jan. 2016.
[24] J. Wolf and J. Ziv, "Transmission of noisy information to a noisy receiver with minimum distortion," IEEE Trans. Inf. Theory, vol. 16, no. 4, 1970.
[25] H. Witsenhausen, "Indirect rate distortion problems," IEEE Trans. Inf. Theory, vol. 26, no. 5, 1980.
[26] R. Blahut, "Computation of channel capacity and rate-distortion functions," IEEE Trans. Inf. Theory, vol. 18, no. 4, Jul. 1972.

[27] S. Arimoto, "An algorithm for computing the capacity of arbitrary discrete memoryless channels," IEEE Trans. Inf. Theory, vol. 18, no. 1, pp. 14-20, Jan. 1972.
[28] A. Lapidoth, "On the role of mismatch in rate distortion theory," IEEE Trans. Inf. Theory, vol. 43, no. 1, 1997.
[29] A. Perez, "Extensions of Shannon-McMillan's limit theorem to more general stochastic processes," in Trans. Third Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, 1964.

APPENDIX A

In this Appendix we derive various expressions that are required in order to evaluate $D_{X|Z}(R)$.

Marginal distribution of Z: The marginal p(z) of Z in (3) is given by

$$p(z) = \int p(z \mid x)\, dP_X(x) = \int \phi\big(z - \sqrt{\gamma}\,x\big)\, dP_X(x) = (1-p)\,\phi(z) + p\,\phi(z, 1+\gamma),$$

where $\phi(x, \sigma^2)$ is the centered normal density function with variance $\sigma^2$.

MMSE of a Bernoulli-Gauss variable in AWGN: The posterior $p(x \mid z)$ is given by

$$p(x \mid z) = \frac{p(z \mid x)\, p(x)}{p(z)} = \frac{\phi\big(z - \sqrt{\gamma}\,x\big)\,\big[(1-p)\,\delta_0(x) + p\,\phi(x)\big]}{p(z)}.$$

This leads to

$$\mathbb{E}[X \mid Z = z] = \int x\, dp(x \mid z) = \frac{p\, z\, \frac{\sqrt{\gamma}}{1+\gamma}\, \phi(z, 1+\gamma)}{(1-p)\,\phi(z) + p\,\phi(z, 1+\gamma)} = z\, \frac{\sqrt{\gamma}}{1+\gamma}\cdot \frac{1}{1 + \frac{1-p}{p}\,\sqrt{1+\gamma}\; e^{-\frac{\gamma}{2(1+\gamma)} z^2}},$$

and finally

$$\mathrm{mmse}(X \mid Z) = \mathbb{E}[X^2] - \mathbb{E}\big[\mathbb{E}^2[X \mid Z]\big] = p - p^2 \int \frac{z^2\, \frac{\gamma}{(1+\gamma)^2}\, \big(\phi(z, 1+\gamma)\big)^2}{(1-p)\,\phi(z) + p\,\phi(z, 1+\gamma)}\, dz.$$

Conditional second moment of X given Z: The conditional second moment of X given Z is also needed in order to evaluate the amended distortion. It is given by

$$\mathbb{E}\big[X^2 \mid Z = z\big] = \int x^2\, dp(x \mid z) = \left(\frac{1}{1+\gamma} + \frac{z^2\, \gamma}{(1+\gamma)^2}\right) \frac{p\,\phi(z, 1+\gamma)}{(1-p)\,\phi(z) + p\,\phi(z, 1+\gamma)}.$$

Amended distortion measure d: The amended distortion d can now be evaluated as

$$d(z, \hat x) = \mathbb{E}\big[X^2 \mid Z = z\big] - 2\hat x\, \mathbb{E}[X \mid Z = z] + \hat x^2 = \left(\frac{1}{1+\gamma} + \frac{z^2\gamma}{(1+\gamma)^2}\right) \frac{p\,\phi(z,1+\gamma)}{(1-p)\,\phi(z) + p\,\phi(z,1+\gamma)} - \frac{2\hat x\, z\, \frac{\sqrt{\gamma}}{1+\gamma}}{1 + \frac{1-p}{p}\,\sqrt{1+\gamma}\; e^{-\frac{\gamma}{2(1+\gamma)} z^2}} + \hat x^2.$$
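As a quick numerical sanity check of the expressions above (our own sketch, not part of the paper; the parameter values are arbitrary), the closed-form posterior mean can be validated by Monte Carlo: draws from the joint law of (X, Z) should make the estimation error orthogonal to Z, and the resulting empirical MSE should match the integral expression for mmse(X | Z).

```python
import numpy as np
from scipy import integrate

p, gamma = 0.3, 100.0
phi = lambda z, v: np.exp(-z**2 / (2*v)) / np.sqrt(2*np.pi*v)

def cond_mean(z):
    """Closed-form E[X | Z = z] from Appendix A, for the channel Z = sqrt(gamma) X + W."""
    pz = (1 - p)*phi(z, 1.0) + p*phi(z, 1.0 + gamma)
    return p*np.sqrt(gamma)*z/(1.0 + gamma)*phi(z, 1.0 + gamma) / np.maximum(pz, 1e-300)

# Monte Carlo draws from the joint distribution of (X, Z).
rng = np.random.default_rng(1)
N = 200_000
X = np.where(rng.random(N) < p, rng.standard_normal(N), 0.0)
Z = np.sqrt(gamma)*X + rng.standard_normal(N)

err = X - cond_mean(Z)
print("orthogonality E[(X - E[X|Z]) Z] ~", np.mean(err*Z))   # should be close to 0
print("Monte Carlo  mmse(X|Z)         ~", np.mean(err**2))

# Compare with the integral expression for mmse(X|Z) derived above.
integrand = lambda z: cond_mean(z)**2 * ((1 - p)*phi(z, 1.0) + p*phi(z, 1.0 + gamma))
mmse_formula = p - integrate.quad(integrand, -50, 50)[0]
print("integral formula mmse(X|Z)     ~", mmse_formula)
```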

APPENDIX B
PROOFS

In this Appendix we provide proofs for Theorems 3.1 and 3.2.

A. Proof of Thm. 3.1

Fix $\varepsilon > 0$. We show that for L and n large enough, there exists an encoder $g : \mathbb{R}^m \to \{0,1\}^{nR}$ such that for any $k = 0, 1, \ldots, n/L - 1$,

$$\frac{1}{L}\sum_{l=1}^{L} \mathbb{E}\big(X_{kL+l} - \hat X_{kL+l}\big)^2 \le D_{X|Z}(R) + \varepsilon, \qquad \text{where } \hat X_{kL+1}^{kL+L} = \mathbb{E}\big[X_{kL+1}^{kL+L} \mid g(Y^m)\big].$$

Showing the latter would imply that the distortion over the entire n-length block satisfies

$$\frac{1}{n}\,\mathbb{E}\,\big\|X^n - \hat X^n\big\|^2 = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big(X_i - \hat X_i\big)^2 = \frac{1}{n}\sum_{k=0}^{n/L-1}\sum_{l=1}^{L}\mathbb{E}\big(X_{kL+l} - \hat X_{kL+l}\big)^2 \le D_{X|Z}(R) + \varepsilon,$$

where, without loss of generality, we assume that n/L is an integer. For convenience we assume k = 0 and therefore only consider the distortion in reconstructing the block $X^L$. The generalization to an arbitrary k is straightforward by setting $X^L \leftarrow X_{kL+1}^{kL+L}$ and repeating the arguments below for each k.

Denote by $X_z$ the minimal MSE estimator of X from $Z = \sqrt{\eta\gamma}\, X + W$, namely $X_z = \mathbb{E}[X \mid Z]$. Denote by $P_{X_z, \hat X_z}$ the joint distribution of $X_z$ and $\hat X_z$ that attains the information DRF of the i.i.d. source $X_z$ with respect to the MSE distortion; namely, $P_{X_z, \hat X_z}$ attains the minimal value of

$$D_{X_z}(R) \triangleq \inf_{P_{X_z, \hat X_z}} \mathbb{E}\big(X_z - \hat X_z\big)^2 \quad \text{subject to } I\big(P_{X_z, \hat X_z}\big) \le R.$$

Define the typical set $A_\varepsilon^L$ as the set of length-L sequences $(x_z^L, \hat x_z^L) \in \mathbb{R}^L \times \mathbb{R}^L$ that satisfy

$$\frac{1}{L}\log \frac{P_{X_z^L, \hat X_z^L}\big(x_z^L, \hat x_z^L\big)}{P_{X_z^L}\big(x_z^L\big)\, P_{\hat X_z^L}\big(\hat x_z^L\big)} \le R - \varepsilon.$$

We now generate a random codebook adjusted to the source coding problem with respect to $X_z^L$ by drawing $2^{LR}$ times from the distribution $P_{\hat X_z^L} = \prod_{l=1}^{L} P_{\hat X_z}$, which is the marginal of $\hat X_z$ with respect to $P_{X_z, \hat X_z}$. To each realization $\hat x_z^L$ we assign an index $u_i$, $i = 1, \ldots, 2^{LR}$. We now describe the encoding and decoding procedures.

Encoding: upon receiving $Y^m$, the encoder forms the best estimate of $X^L$ based on $Y^m$ and the sampling matrix H. We denote this estimate by $X_y^L$, namely $X_y^L = \mathbb{E}\big[X^L \mid Y^m, H\big]$. The encoder then looks for a codebook sequence $\hat X_z^L$ such that $(X_y^L, \hat X_z^L) \in A_\varepsilon^L$. If there is no such sequence, the encoder sends the index $u_1$. If there is more than one such sequence, the encoder picks the one with the smallest index u. The encoder then transmits the index associated with the selected $\hat X_z^L$.

Decoding: the decoder declares $\hat X_z^L(u)$ as its estimate for $X^L$.

We now analyze the expected distortion of this encoding and decoding scheme, taken over all realizations of the random source, the sampling matrix and the codebook generation. The properties of the conditional expectation imply

$$\frac{1}{L}\,\mathbb{E}\,\big\|X^L - \hat X_z^L(U)\big\|^2 = \frac{1}{L}\,\mathbb{E}\,\big\|X^L - X_y^L\big\|^2 + \frac{1}{L}\,\mathbb{E}\,\big\|X_y^L - \hat X_z^L(U)\big\|^2 = \mathrm{mmse}\big(X^L \mid Y^m\big) + \frac{1}{L}\,\mathbb{E}\,\big\|X_y^L - \hat X_z^L(U)\big\|^2. \qquad (8)$$

We now analyze the second term on the RHS of (8). Denote by E the event $(X_y^L, \hat X_z^L) \in A_\varepsilon^L$ and by $E^c$ its complement. Consider

$$\frac{1}{L}\,\mathbb{E}\,\big\|X_y^L - \hat X_z^L(U)\big\|^2 = \frac{1}{L}\,\mathbb{E}\big[\|X_y^L - \hat X_z^L(U)\|^2 \mid E\big] P(E) + \frac{1}{L}\,\mathbb{E}\big[\|X_y^L - \hat X_z^L(U)\|^2 \mid E^c\big] P(E^c)$$
$$\le \frac{1}{L}\,\mathbb{E}\big[\|X_y^L - \hat X_z^L(U)\|^2 \mid E\big] + \frac{1}{L}\,\mathbb{E}\big[\|X_y^L - \hat X_z^L(U)\|^2 \mid E^c\big] P(E^c)$$
$$\overset{(a)}{\le} \mathbb{E}_{P_{X_z, \hat X_z}}\big(X_z - \hat X_z\big)^2 + \varepsilon + \frac{1}{L}\,\mathbb{E}\big[\|X_y^L - \hat X_z^L(U)\|^2 \mid E^c\big] P(E^c)$$
$$\overset{(b)}{\le} D_{X_z}(R) + \varepsilon + 4\,\mathbb{E}[X^2]\, P(E^c),$$

where (a) follows from the standard source coding proof [17], and (b) is because $\mathbb{E}\hat X_z^2 \le \mathbb{E}X_y^2 \le \mathbb{E}X^2$. So far we have shown that

$$\frac{1}{L}\,\mathbb{E}\,\big\|X^L - \hat X_z^L(U)\big\|^2 \le \mathrm{mmse}(X \mid Z) + D_{X_z}(R) + \varepsilon + 4\,\mathbb{E}[X^2]\, P(E^c),$$

where the expectation is with respect to all realizations of the source sequence, the sampling matrix, the noise and the codebook generation. Since $D_{X|Z}(R) = \mathrm{mmse}(X \mid Z) + D_{X_z}(R)$, in order to complete the proof it is enough to show that $P(E^c)$ goes to zero in the large system limit. Showing this would imply that there exists at least one codebook constructed in the form above which attains $D_{X|Z}(R)$. Consequently, we prove the following lemma.

Lemma B.1: In the large system limit we have $P(E^c) \to 0$.

Proof (of Lemma B.1): Let $\delta > 0$. We first choose L to be large enough such that sequences generated according to $P_{X_z^L}$ and the random codebook above satisfy $P\big((X_z^L, \hat X_z^L) \notin A_\varepsilon^L\big) < \delta(\varepsilon)$. The existence of such an L follows from the asymptotic equipartition theorem, where a version of this theorem for uncountable alphabets can be found in [29]. We next use (A2) to choose n large enough such that the probability of the event $(X_y^L, \hat X_z^L) \notin A_\varepsilon^L$ under $P_{X^L|Y^m} P_{\hat X_z^L}$ is $\delta$-close to its probability under $P_{X^L|Z^L} P_{\hat X_z^L}$. We also note that since the distribution of any L-block is identical, the convergence in Lemma B.1 is uniform in k. This property implies that a single choice of n large enough is good for all disjoint blocks of the form $X_{kL+1}^{kL+L}$.

B. Proof of Thm. 3.2

In order to simplify notation, we assume that k = 0, so that we only consider the block $X^L$. The generalization of the proof to any length-L block $X_{k+1}^{k+L}$ is immediate by symmetry and is therefore omitted.

Define the following distortion measure on $\mathbb{R}^m \times \mathbb{R}^L$:

$$d_y\big(y^m, \hat x^L\big) = \frac{1}{L}\,\mathbb{E}\big[\|X^L - \hat x^L\|^2 \,\big|\, Y^m = y^m\big]. \qquad (9)$$

Consider now the (standard) source coding problem whose information source is obtained as q draws from the distribution $P_{Y^m}$, with respect to the distortion measure $d_y$. Denote by $D_y(r)$ the information DRF with respect to the i.i.d. m-block source $\{(Y^m)_q,\ q = 1, 2, \ldots\}$ and distortion $d_y$ at rate r (bits per m-block source). This function is the minimum of $\mathbb{E}\, d_y\big(Y^m, \hat X^L\big)$, taken over all joint probability distributions $P_{Y^m, \hat X^L}$ subject to the mutual information constraint $I\big(P_{Y^m, \hat X^L}\big) \le r$. The converse to Shannon's source coding theorem implies that any estimator $\varphi : \{0,1\}^{r} \to \mathbb{R}^L$ cannot attain distortion with respect to $d_y$ smaller than $D_y(r)$. We thus have

$$\frac{1}{L}\,\mathbb{E}\,\big\|X^L - \varphi(g(Y^m))\big\|^2 = \mathbb{E}\, d_y\big(Y^m, \varphi(g(Y^m))\big) \ge D_y(LR).$$

The proof follows by showing that for any $\varepsilon > 0$, there exists $n_0$ such that for all $n > n_0$, $D_y(LR) \ge D_{X|Z}(R) - \varepsilon$.

Fix any joint probability distribution $Q_{Y^m, \hat X^L}$ with mutual information $I\big(Q_{Y^m, \hat X^L}\big) \le LR$ whose marginal with respect to $Y^m$ is $Q_{Y^m} = P_{Y^m}$ and, without loss of generality, whose marginal with respect to $\hat X^L$ has a finite second moment. Note that we have the Markov chain $Z^L - X^L - Y^m - \hat X^L$ defined by the distributions $P_{Z^L|X^L}\, P_{X^L|Y^m}\, Q_{Y^m, \hat X^L}$. The expected distortion $d_y$ with respect to the distribution $Q_{Y^m, \hat X^L}$ is bounded from below as follows:

$$\mathbb{E}_Q\, d_y\big(Y^m, \hat X^L\big) = \int_{\mathbb{R}^{L+m+L}} \tfrac{1}{L}\big\|x^L - \hat x^L\big\|^2\, P_{X^L|Y^m}\big(dx^L, y^m\big)\, Q_{Y^m, \hat X^L}\big(dy^m, d\hat x^L\big)$$
$$= \int_{\mathbb{R}^{L+m+L+L}} \tfrac{1}{L}\big\|x^L - \hat x^L\big\|^2\, P_{X^L|Y^m}\big(dx^L, y^m\big)\, Q_{Y^m, \hat X^L}\big(dy^m, d\hat x^L\big)\, P_{Z^L}\big(dz^L\big)$$
$$= \int_{\mathbb{R}^{L+m+L+L}} \tfrac{1}{L}\sum_{l=1}^{L}\big(x_l - \hat x_l\big)^2\, P_{X^L|Z^L}\big(dx^L, z^L\big)\, P_{Z^L}\big(dz^L\big)\, Q_{Y^m, \hat X^L}\big(dy^m, d\hat x^L\big) \qquad (10)$$
$$\quad + \int_{\mathbb{R}^{L+m+L+L}} \tfrac{1}{L}\big\|x^L - \hat x^L\big\|^2\, \Big\{P_{X^L|Y^m}\big(dx^L, y^m\big) - P_{X^L|Z^L}\big(dx^L, z^L\big)\Big\}\, P_{Z^L}\big(dz^L\big)\, Q_{Y^m, \hat X^L}\big(dy^m, d\hat x^L\big). \qquad (11)$$

By eliminating $y^m$ from (10) we get

$$\int_{\mathbb{R}^{L+L+L}} \tfrac{1}{L}\sum_{l=1}^{L}\big(x_l - \hat x_l\big)^2\, P_{X^L, Z^L}\big(dx^L, dz^L\big)\, Q_{\hat X^L}\big(d\hat x^L\big) = \mathbb{E}_{P_{X^L,Z^L}\, Q_{\hat X^L}}\, \frac{1}{L}\big\|X^L - \hat X^L\big\|^2$$
$$\overset{(a)}{\ge} \frac{1}{L}\sum_{l=1}^{L} D_{X|Z}\big(I(P_{Z_l, \hat X_l})\big) \overset{(b)}{\ge} D_{X|Z}\Big(\frac{1}{L}\sum_{l=1}^{L} I\big(P_{Z_l, \hat X_l}\big)\Big) \overset{(c)}{\ge} D_{X|Z}\Big(\frac{1}{L}\, I\big(P_{Z^L, \hat X^L}\big)\Big) \overset{(d)}{\ge} D_{X|Z}(R),$$

where (a) follows from the definition of the (information) idrf of X given Z, (b) follows by convexity of the idrf [17], (c) follows from the chain rule of mutual information and since conditioning reduces entropy, and (d) follows from the data processing inequality, since $I\big(P_{Z^L, \hat X^L}\big) \le I\big(Q_{Y^m, \hat X^L}\big) \le LR$ and since $D_{X|Z}(R)$ is non-increasing.

We now consider the term (11): we expand the square $\|x^L - \hat x^L\|^2$, take the absolute value, and marginalize over all variables not appearing in each of the terms. This procedure leads to

$$\left|\int_{\mathbb{R}^L}\int_{\mathbb{R}^m} \tfrac{1}{L}\big\|x^L\big\|^2\, P_{X^L|Y^m}\big(dx^L, y^m\big)\, P_{Y^m}\big(dy^m\big) - \int_{\mathbb{R}^{2L}} \tfrac{1}{L}\big\|x^L\big\|^2\, P_{X^L|Z^L}\big(dx^L, z^L\big)\, P_{Z^L}\big(dz^L\big)\right| + 0, \qquad (12)$$

where the summation over $\hat x^L$ led to 0. Note that the last expression equals $\left|\,\mathbb{E}\,\mathbb{E}\big[\tfrac{1}{L}\|X^L\|^2 \mid Y^m\big] - \mathbb{E}\,\mathbb{E}\big[\tfrac{1}{L}\|X^L\|^2 \mid Z^L\big]\,\right|$. Convergence of the posteriors in distribution under (A2) also implies convergence of the conditional second moments, hence the last expression goes to zero as n goes to infinity.


ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose ON SCALABLE CODING OF HIDDEN MARKOV SOURCES Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose Department of Electrical and Computer Engineering University of California, Santa Barbara, CA, 93106

More information

Block 2: Introduction to Information Theory

Block 2: Introduction to Information Theory Block 2: Introduction to Information Theory Francisco J. Escribano April 26, 2015 Francisco J. Escribano Block 2: Introduction to Information Theory April 26, 2015 1 / 51 Table of contents 1 Motivation

More information

Information-theoretically Optimal Sparse PCA

Information-theoretically Optimal Sparse PCA Information-theoretically Optimal Sparse PCA Yash Deshpande Department of Electrical Engineering Stanford, CA. Andrea Montanari Departments of Electrical Engineering and Statistics Stanford, CA. Abstract

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

Compressed Sensing with Shannon-Kotel nikov Mapping in the Presence of Noise

Compressed Sensing with Shannon-Kotel nikov Mapping in the Presence of Noise 19th European Signal Processing Conference (EUSIPCO 011) Barcelona, Spain, August 9 - September, 011 Compressed Sensing with Shannon-Kotel nikov Mapping in the Presence of Noise Ahmad Abou Saleh, Wai-Yip

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Soft Covering with High Probability

Soft Covering with High Probability Soft Covering with High Probability Paul Cuff Princeton University arxiv:605.06396v [cs.it] 20 May 206 Abstract Wyner s soft-covering lemma is the central analysis step for achievability proofs of information

More information

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels

Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels Nan Liu and Andrea Goldsmith Department of Electrical Engineering Stanford University, Stanford CA 94305 Email:

More information

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

Generalized Approximate Message Passing for Unlimited Sampling of Sparse Signals

Generalized Approximate Message Passing for Unlimited Sampling of Sparse Signals Generalized Approximate Message Passing for Unlimited Sampling of Sparse Signals Osman Musa, Peter Jung and Norbert Goertz Communications and Information Theory, Technische Universität Berlin Institute

More information