Compressed Sensing under Optimal Quantization
Alon Kipnis and Andrea J. Goldsmith (Department of Electrical Engineering, Stanford University), Galen Reeves (Department of Electrical and Computer Engineering and Department of Statistical Science, Duke University), and Yonina C. Eldar (Department of Electrical Engineering, Technion - Israel Institute of Technology)

Abstract: We consider the problem of recovering a sparse vector from a quantized or lossy compressed version of its noisy random linear projections. We characterize the minimal distortion in this recovery as a function of the sampling ratio, the sparsity rate, the noise intensity and the total number of bits in the quantized representation. We first derive a single-letter expression that can be seen as the indirect distortion-rate function of the sparse source observed through a Gaussian channel whose signal-to-noise ratio is derived from these parameters. Under the replica symmetry postulation, we prove that there exists a quantization scheme that attains this expression in the asymptotic regime of large system dimensions. In addition, we prove a converse demonstrating that, as the system dimensions go to infinity, the MMSE in estimating any fixed sub-block of the source from the quantized measurements at a fixed number of bits is bounded from below by this expression. Thus, under these conditions, the expression we derive describes the excess distortion incurred in encoding the source vector from its noisy random linear projections in lieu of the full source information.

I. INTRODUCTION

The pioneering works [1] and [2] initiated much work in compressed sensing (CS), where a sparse vector is recovered from its noisy random linear projections. The main principle in CS is that a relatively small number of random linear projections is enough to represent the source, provided it has only a few non-zero entries in some basis [3]. The fact that sparse sources possess such a low-dimensional representation justifies the "compressed" part in the term CS. Nevertheless, reducing dimension does not yet provide compression in the information-theoretic sense, since it is still required to quantize this low-dimensional representation, i.e., to map it to a finite alphabet set. Arguably, any practical digital implementation of a system based on CS is subject to this quantization constraint. This paper considers the MMSE that can be attained in CS using quantization of the noisy projections, subject only to a bits-per-source-dimension constraint.

Previous works addressing the effects of quantization in CS have focused on particular forms of quantization [4], [5]. Consequently, these results do not consider the fundamental tradeoffs between the system parameters and the overall number of bits in the resulting quantized representation. Other approaches that model quantization as an additive random noise [6], [7], [8] lack theoretical justification and disregard the structure existing in quantization techniques [9].

In this paper we consider the MMSE in estimating a sparse $n$-dimensional source vector $X^n$ from a quantized or lossy compressed version of its observation vector $Y^m$, where the relation between the two is given by

$$Y^m = \sqrt{\gamma}\, H X^n + W^m. \qquad (1)$$

Here the random entries of the sampling matrix $H$ are taken from a zero-mean i.i.d. distribution of variance $1/n$, and $W^m$ is a unit-variance white Gaussian noise vector. Therefore, the signal-to-noise ratio (SNR) in the channel equals $\gamma$.
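As a concrete reference for the setup above, the following minimal numpy sketch generates one instance of the model (1). The specific dimensions, seed, and parameter values are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): source dimension n,
# sampling ratio rho = m/n, sparsity rate p, and SNR gamma.
n, rho, p, gamma = 2000, 0.5, 0.3, 100.0
m = int(rho * n)

# Bernoulli-Gauss source: each entry is 0 w.p. 1-p, standard normal w.p. p.
x = np.where(rng.random(n) < p, rng.standard_normal(n), 0.0)

# Sampling matrix with zero-mean i.i.d. entries of variance 1/n; unit-variance noise.
H = rng.standard_normal((m, n)) / np.sqrt(n)
w = rng.standard_normal(m)

# Noisy random linear projections, eq. (1).
y = np.sqrt(gamma) * (H @ x) + w
```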
We are interested in the minimal MSE distortion that can be attained using any recovery technique from any form of quantization, subject only to the bit constraint $R$. We focus on the limiting case where $m$ and $n$ go to infinity while $m/n$ converges to a constant sampling ratio $\rho$. This limiting situation is henceforth referred to as the large system limit. It is moreover assumed that each entry of the source is taken from a Bernoulli-Gauss distribution with $P(X_i = 0) = 1 - p$, where $p$ is denoted as the sparsity rate.

We analyze the MMSE in the setting above in the large system limit using the replica method [10]. One of the longstanding problems with analysis based on the replica method has been the fact that it relies on certain key assumptions, most notably the assumption of replica symmetry. Unfortunately, these key assumptions are currently unproven in the context of CS. However, recent work has made significant progress in showing, often via various different techniques, that many of the properties predicted using the replica method are correct [11], [12], [13], [14], [15]. In particular, [14] characterizes the asymptotic mutual information and MMSE under very mild technical conditions. Shortly after [14] appeared, a similar result was obtained in [15] using a very different proof technique. Beyond the properties used to obtain a precise characterization of the mutual information and MMSE in [14], the current paper requires two further properties, namely the asymptotic decoupling of the posterior distribution and its description by a Gaussian channel. While these properties currently rely on the assumptions of the replica method [16], a weak form of decoupling is proved in [14] and there is hope that this result can be strengthened to the form of decoupling needed in this paper.

We characterize the MMSE in CS under quantization with a single-letter expression, which is a function of $\rho$, $p$, $\gamma$ and $R$. This expression is equivalent to the MMSE distortion in estimating $X^n$ from any rate-$R$ encoded version of its observation through a scalar Gaussian channel. This distortion is known as the indirect distortion-rate function (iDRF) of $X^n$ given the scalar channel output, and can be obtained by optimization over joint probability distributions subject
to a mutual information constraint [17, Ch. 3.5], [18]. Since this iDRF is associated with the posterior distribution of $X^n$ predicted by the replica method [16], we denote it as the replica posterior iDRF.

The main results of this paper are achievability and converse theorems with respect to the replica posterior iDRF. Specifically, we show that under the asymptotic decoupling and posterior distribution assumptions, there exists a quantization scheme that attains an MMSE as close as desired to the replica posterior iDRF. Our converse result says that under the same assumptions and in the large system limit, the MMSE in estimating any fixed sub-block of the source from the quantized measurements at a fixed number of bits is bounded from below by the replica posterior iDRF. This weak form of the converse leaves the possibility that, without the restriction to a finite block and a fixed number of bits, there exists a quantization scheme that attains MSE distortion lower than the replica posterior iDRF. Nevertheless, as the code rate $R$ goes to infinity, the replica posterior iDRF converges monotonically to the expression for the MMSE in estimating $X^n$ from $Y^m$ derived by the replica method in [16], which is known to be correct for Gaussian sampling matrices [14]. Therefore, our converse result is tight in the limit of a large code rate.

We note that in the case where the SNR is high and $\rho$ is such that $X^n$ can be recovered from $Y^m$ with high probability, the optimal quantizer may first recover $X^n$ and quantize it in an optimal way. As a result, the MMSE in this case coincides with the direct DRF of $X^n$ [19]. Therefore, we restrict our attention to settings in which the source cannot be recovered exactly from the non-quantized CS measurements. The critical sampling ratio $\rho$ that allows exact recovery in the noiseless case, or leads to a bounded noise sensitivity in the noisy case, is known to be the Rényi information dimension of the input vector [20], [21]. For a finite SNR, the conditions for attaining a prescribed support recovery error level were studied in [22], although many of the ideas there can be extended to the MMSE metric.

The rest of this paper is organized as follows. In Section II we define our source coding problem. Our main results are given in Section III. Concluding remarks are provided in Section IV.

II. PROBLEM FORMULATION

We consider the source coding problem described in Figure 1: each entry of the source vector $X^n$ is taken from the distribution $P_X$ defined as

$$P_X(x) = (1-p)\,\delta_0(x) + p\,\phi(x),$$

where $\delta_0$ is the Dirac distribution of unit mass concentrated at the origin, and $\phi(x)$ is the standard normal density function. The observation vector $Y^m$ is a noisy random linear projected version of the source as in (1). We further assume that the observation vector $Y^m \in \mathbb{R}^m$ is mapped by an encoder, or a quantizer, to an element $U$ in the set $\{0,1\}^{nR}$.

Fig. 1: Source coding system model: recovering $X^n$ from a compressed version of its noisy random linear projections. The dashed line indicates that the sampling matrix is available both to the encoder and the decoder.

The decoder, or the estimator, upon receiving $U$, provides a source reconstruction sequence $\hat{X}^n \in \mathbb{R}^n$. We further assume that the reconstruction $\hat{X}^n$ is obtained by MMSE estimation of $X^n$ given the output of the encoder. Specifically, given an encoding scheme $g : \mathbb{R}^m \to \{0,1\}^{nR}$, the expected distortion in recovering $X^n$ as a function of the code rate $R$ is defined by

$$D_g(R) \triangleq \frac{1}{n} E\left\| X^n - E[X^n \mid g(Y^m)] \right\|^2 = \frac{1}{n} \sum_{i=1}^{n} E\left( X_i - E[X_i \mid g(Y^m)] \right)^2. \qquad (2)$$
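To make definition (2) concrete, here is a toy Monte Carlo evaluation of $D_g(R)$ for one particular, deliberately simple encoder in the scalar case $n = m = 1$ with $H = 1$: a one-bit sign quantizer. This is an illustrative, assumption-laden sketch of the definition, not the optimal scheme studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p, gamma, N = 0.3, 100.0, 10**6

# Scalar toy instance of (2): n = m = 1 and H = 1, so Y = sqrt(gamma) X + W,
# with the (suboptimal) one-bit encoder g(y) = 1{y > 0}, i.e. R = 1.
x = np.where(rng.random(N) < p, rng.standard_normal(N), 0.0)
y = np.sqrt(gamma) * x + rng.standard_normal(N)
u = y > 0

# The decoder in (2) is the conditional mean E[X | g(Y)], estimated here
# by the empirical mean of x within each of the two encoder bins.
xhat = np.where(u, x[u].mean(), x[~u].mean())
D_g = np.mean((x - xhat) ** 2)
print(f"D_g at R = 1 bit: {D_g:.4f} (compare E X^2 = p = {p})")
```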
The problem we consider is the minimal value of $D_g(R)$ taken over all rate-$R$ encoders $g$. This problem corresponds to the indirect (or remote) source coding problem of $X^n$ from $Y^m$ [17, Ch. 3.5]. The minimal distortion (2) is referred to as the indirect DRF, defined by

$$D_{X^n|Y^m}(R) \triangleq \inf_g D_g(R), \qquad (3)$$

where the minimization is over all encoders $g$ of the form $\mathbb{R}^m \to \{0,1\}^{nR}$ and decoders $\{0,1\}^{nR} \to \mathbb{R}^n$. In what follows, we characterize the function $D_{X^n|Y^m}(R)$ in terms of a particular single-letter expression.

III. OPTIMAL SOURCE CODING

Our characterization of (3) is based on the following two predictions of the replica method from [16]:

(A1) Single-letter posterior: The conditional distribution of the $i$th coordinate of $X^n$, given the vector of observations $Y^m$, in the large system limit satisfies $P_{X_i|Y^m} \to P_{X|Z}$, weakly in probability. Here $P_{X|Z}$ is the conditional distribution of a random variable $X$ distributed according to $P_X$ given

$$Z = \sqrt{\gamma\eta}\, X + W, \qquad (4)$$

where $W \sim \mathcal{N}(0,1)$ is independent of $X$. The parameter $\eta \in (0,1]$ satisfies the fixed-point equation

$$\eta = \frac{\rho}{\rho + \gamma\, \mathrm{mmse}(\gamma\eta)}, \qquad (5)$$

where

$$\mathrm{mmse}(\gamma) \triangleq E\left( X - E\left[X \mid \sqrt{\gamma}\, X + W\right] \right)^2 \qquad (6)$$

is the minimal MSE in estimating $X$ under a scalar AWGN channel of SNR $\gamma$.
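The quantities in (4)-(6) are straightforward to evaluate numerically for the Bernoulli-Gauss prior. The following sketch computes the conditional mean $E[X \mid Z]$, the scalar $\mathrm{mmse}(\cdot)$ of (6), and a damped fixed-point iteration for (5) as reconstructed above; the iteration count, damping factor, and single initialization are implementation assumptions (when (5) has multiple solutions, (7) below selects among them).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def cond_mean(z, s, p):
    """E[X | Z=z] for Z = sqrt(s) X + W with X ~ (1-p) delta_0 + p N(0,1), W ~ N(0,1)."""
    f1 = norm.pdf(z, scale=np.sqrt(1.0 + s))   # density of Z given X != 0
    f0 = norm.pdf(z)                           # density of Z given X = 0
    pi = p * f1 / (p * f1 + (1.0 - p) * f0)    # posterior P(X != 0 | Z = z)
    return pi * np.sqrt(s) * z / (1.0 + s)     # (X, Z) jointly Gaussian on {X != 0}

def mmse_bg(s, p):
    """Eq. (6): mmse(s) = E X^2 - E (E[X|Z])^2, with E X^2 = p."""
    fz = lambda z: p * norm.pdf(z, scale=np.sqrt(1.0 + s)) + (1.0 - p) * norm.pdf(z)
    est2, _ = quad(lambda z: cond_mean(z, s, p) ** 2 * fz(z), -np.inf, np.inf)
    return p - est2

def solve_eta(rho, gamma, p, iters=100, damp=0.5):
    """Damped fixed-point iteration for eq. (5), started from eta = 1."""
    eta = 1.0
    for _ in range(iters):
        eta = (1 - damp) * eta + damp * rho / (rho + gamma * mmse_bg(gamma * eta, p))
    return eta

print(solve_eta(rho=0.5, gamma=100.0, p=0.3))
```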
In the case of multiple solutions to (5), $\eta$ is chosen to minimize the free energy

$$I\left(P_{X,Z}\right) + \frac{\rho}{2}\left( \eta - 1 - \log\eta \right), \qquad (7)$$

and we further assume that the global minimizer of (7) is unique.¹ The conditional distribution $P_{X|Z}$ is referred to as the replica posterior.

(A2) Decoupling: For an arbitrary but fixed number $L$ of input elements $X_{n_1}, \ldots, X_{n_L}$, in the large system limit we have $P_{X_{n_1},\ldots,X_{n_L}|Y^m} \to P_{X|Z} \times \cdots \times P_{X|Z}$, weakly in probability, where $X$ and $Z$ are distributed as in (A1).

A. A Single-Letter Expression

In order to characterize $D_{X^n|Y^m}(R)$, we consider the scalar Gaussian channel (4) and denote by $D_{X|Z}(R)$ the minimal value of the following problem:

$$D_{X|Z}(R) = \inf_{I(P_{Z,\hat{X}}) \le R} E\left( X - \hat{X} \right)^2, \qquad (8)$$

where the minimization is over all joint probability distributions of $Z$ and $\hat{X}$ whose mutual information does not exceed $R$, and the marginal of $Z$ coincides with the distribution at the output of the channel (4) with input distributed as $P_X$. The function $D_{X|Z}(R)$ is known as the information iDRF of the process $X^n$ given $Z^n$ [17, Ch. 3.5], where the latter is obtained by $n$ independent uses of the channel (4). In this paper we refer to $D_{X|Z}(R)$ as the replica posterior iDRF, since it can be seen as the iDRF associated with the Gaussian channel defined by the replica posterior $P_{X|Z}$.

Our main result is the characterization of $D_{X^n|Y^m}(R)$ in the large system limit in terms of the replica posterior iDRF. The precise statement of this result is given by the following two theorems:

Theorem 3.1 (achievability): Under (A1) and (A2), for any $\varepsilon > 0$ there exists $n$ large enough and an encoder $g : \mathbb{R}^m \to \{0,1\}^{nR}$ such that $\frac{1}{n} E\left\| X^n - E[X^n \mid g(Y^m)] \right\|^2$ does not exceed $D_{X|Z}(R) + \varepsilon$.

Sketch of proof: The existence of the encoder $g$ is shown using a random coding argument, where the codebook is generated according to the joint scalar distribution which attains (8). It follows from (A2) that in the large system limit, a random code designed with respect to $P_{X|Z}$ asymptotically leads to the same distortion even if it operates on observations generated according to $P_{X|Y^m}$. The transition from length-$L$ blocks to the entire source realization $X^n$ is trivial since the same code is valid for all length-$L$ blocks. The details are given in the Appendix.

¹When the free energy functional (7) has more than one global minimum, the limiting distribution is not well-defined and the system is said to be in a phase transition.

Theorem 3.2 (converse): Under (A1) and (A2), for any $L, k \in \mathbb{N}$, deterministic encoder $g : \mathbb{R}^m \to \{0,1\}^{LR}$ and $\varepsilon > 0$, there exists $n_0$ such that

$$\frac{1}{L} E\left\| X_{k+1}^{k+L} - E\left[ X_{k+1}^{k+L} \mid g(Y^m) \right] \right\|^2 > D_{X|Z}(R) - \varepsilon$$

for all $n > n_0$. In words, the average distortion in estimating any block of length $L$ of the source from the observation vector $Y^m$ is bounded from below by the single-letter expression $D_{X|Z}(R)$, provided $n$ is large enough.

Sketch of proof: The main idea of the proof is to map the distortion over each length-$L$ block to a particular distortion measure defined only in terms of length-$m$ sequences $Y^m$. We then use Shannon's source coding converse to obtain a lower bound for this distortion expressed in terms of joint probability distributions over $m$-blocks. Once this lower bound is established, we use (A2) to conclude that the aforementioned lower bound converges to $D_{X|Z}(R)$ in the large system limit. The full proof can be found in the Appendix.

Before proceeding to discuss Theorems 3.1 and 3.2, we first provide a procedure for evaluating the replica posterior iDRF $D_{X|Z}(R)$.
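Before the general procedure, a short sanity derivation for the Gaussian special case treated next. Assume the estimate-and-compress decomposition of the indirect DRF under MSE [25] together with the Gaussian DRF $\sigma^2 2^{-2R}$, and write $\eta_G$ for the solution of (5) when $p = 1$. Since $X \sim \mathcal{N}(0,1)$ and $Z = \sqrt{\gamma\eta_G}\, X + W$,

$$\hat{X} = E[X \mid Z] = \frac{\sqrt{\gamma\eta_G}}{1+\gamma\eta_G}\, Z \sim \mathcal{N}\!\left(0,\, \frac{\gamma\eta_G}{1+\gamma\eta_G}\right), \qquad \mathrm{mmse}(X \mid Z) = \frac{1}{1+\gamma\eta_G},$$

so that

$$D_{X|Z}(R) = \mathrm{mmse}(X \mid Z) + D_{\hat{X}}(R) = \frac{1}{1+\gamma\eta_G} + \frac{\gamma\eta_G}{1+\gamma\eta_G}\, 2^{-2R},$$

in agreement with the closed form quoted below.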
In the fully Gaussian case of $p = 1$, the function $D_{X|Z}(R)$ can be obtained in closed form as [24, Eq. 3]

$$D_{X|Z}(R) = \frac{1}{1+\gamma\eta_G} + \frac{\gamma\eta_G}{1+\gamma\eta_G}\, 2^{-2R},$$

where $\eta_G = \eta_G(\rho, \gamma)$ is the unique solution to (5), and can be found in² [21, Eq. 22]. Aside from this degenerate case, a closed-form expression for $D_{X|Z}(R)$ is unknown in general. We therefore turn to derive a procedure for evaluating it numerically.

²In the notation of [21]: $\sigma^2 = 1/\gamma$, $R = \rho$, and $\eta(\sigma^2, R) = \eta_G$.

It is well known [25], [26] that an alternative representation of the minimization problem (8) can be obtained by first introducing the distortion measure

$$d(z, \hat{x}) = E\left[ (X - \hat{x})^2 \mid Z = z \right], \qquad (9)$$

and then considering the minimization of $E\, d(Z, \hat{X})$ subject to the same mutual information constraint. The latter is a standard DRF with respect to an i.i.d. source distributed as $Z$. This DRF can be evaluated, after alphabet discretization, using the Blahut-Arimoto algorithm [27]. Summarizing all the steps above, we evaluate $D_{X|Z}(R)$ as follows:

(i) Compute the SNR attenuation factor $\eta$ by solving (5).
(ii) Obtain an expression for the distortion $d$ of (9).
(iii) Evaluate the DRF of $Z$ under the distortion measure $d$ using the Blahut-Arimoto algorithm.
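Steps (ii) and (iii) can be sketched numerically as follows. For the Bernoulli-Gauss prior, the distortion measure (9) evaluates to $d(z, \hat{x}) = \mathrm{Var}(X \mid Z = z) + (E[X \mid Z = z] - \hat{x})^2$, and a fixed-slope Blahut-Arimoto iteration on a discretized grid traces one point of the rate-distortion curve. The grid sizes, slope parametrization, and iteration count below are implementation assumptions.

```python
import numpy as np
from scipy.stats import norm

def bg_posterior_moments(z, s, p):
    # E[X | Z=z] and Var(X | Z=z) for the Bernoulli-Gauss prior on channel (4).
    f1 = norm.pdf(z, scale=np.sqrt(1 + s))
    f0 = norm.pdf(z)
    pi = p * f1 / (p * f1 + (1 - p) * f0)
    m1 = pi * np.sqrt(s) * z / (1 + s)
    m2 = pi * (1.0 / (1 + s) + s * z**2 / (1 + s) ** 2)
    return m1, m2 - m1**2

def drf_point(s, p, slope, K=400, iters=300):
    """One (R, D) point of D_{X|Z} via fixed-slope Blahut-Arimoto on a
    discretized Z grid, using the modified distortion (9); slope < 0."""
    z = np.linspace(-8, 8, K) * np.sqrt(1 + s)
    pz = p * norm.pdf(z, scale=np.sqrt(1 + s)) + (1 - p) * norm.pdf(z)
    pz /= pz.sum()                                        # discretized source pmf
    m1, v = bg_posterior_moments(z, s, p)
    xhat = np.linspace(m1.min(), m1.max(), K)             # reproduction grid
    d = v[:, None] + (m1[:, None] - xhat[None, :]) ** 2   # d(z, xhat) of (9)
    q = np.full(K, 1.0 / K)                               # reproduction marginal
    for _ in range(iters):
        A = q[None, :] * np.exp(slope * d)
        P = A / A.sum(axis=1, keepdims=True)              # test channel p(xhat | z)
        q = pz @ P
    D = float(np.sum(pz[:, None] * P * d))
    R = float(np.sum(pz[:, None] * P *
                     np.log2(np.maximum(P, 1e-300) / np.maximum(q[None, :], 1e-300))))
    return R, D

print(drf_point(s=30.0, p=0.3, slope=-2.0))
```

Sweeping `slope` over negative values traces $(R, D)$ pairs, and step (i) supplies the channel SNR via `s = gamma * eta` from the earlier fixed-point sketch.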
Fig. 2: Normalized replica posterior iDRF $D_{X|Z}(R)/p$ versus the sampling ratio $\rho$ for $R = 0.25$ and $R = 0.75$ bits per source dimension, $p = 0.3$, and $\gamma = 100$. The dashed curve is the MMSE without quantization and the dotted horizontal line is the direct DRF of the sparse source.

Fig. 3: Normalized replica posterior iDRF $D_{X|Z}(R)/p$ versus the sparsity rate $p$ for $\rho = 0.5$, $\gamma = 100$, and two values of the number of bits per non-zero source entry $R/p$. The dashed and dotted curves correspond to the MMSE without quantization and the direct DRF of the sparse source at the same rate, respectively.

B. Discussion

The achievability result in Theorem 3.1 is relatively standard and can be anticipated based on the single-letter expression for the posterior under (A1). The converse part, however, only guarantees a lower bound on the distortion in estimating sub-blocks of a particular length $L$ of the source, with the number of quantization bits adjusted to this length, and only in the large system limit as the posterior distribution decouples. In fact, Theorem 3.2 leaves open the possibility that quantization schemes whose number of bits increases with the system dimension attain distortion smaller than $D_{X|Z}(R)$.

We also note that the encoder that attains $D_{X|Z}(R)$, as described in the proof of Theorem 3.1, first forms the MMSE estimate of a finite block of the source from $Y^m$ and only then uses random coding to quantize this estimate. Arguably, this estimation before encoding may be impractical in applications, due to a lack of computational resources or of knowledge of the sampling matrix $H$ at the encoder.

The excess distortion incurred only due to quantization in CS can be studied by comparing $D_{X|Z}(R)$ to the MMSE in estimating $X^n$ from $Y^m$ without quantization. Under (A1), the latter is given by $\mathrm{mmse}(\gamma\eta)$ [16]. By comparing $D_{X|Z}(R)$ to the DRF of $X^n$ at rate $R$, i.e., the minimal distortion in direct encoding of $X^n$, we observe the additional distortion only due to the noise and the random linear projections. This comparison is illustrated in Figure 2. Figure 3 illustrates the dependency of $D_{X|Z}(R)$ on the source sparsity rate $p$, under a fixed number of bits per non-zero source entry, sampling ratio and SNR. Also shown is the direct DRF of the source $X^n$ at the same coding rate. It follows from Figure 3 that the limiting factor in encoding sparse sources is the bit constraint, rather than the reduced dimension and the noise. The latter two factors become dominant as the source becomes dense.

IV. CONCLUSIONS

We considered the problem of recovering a Bernoulli-Gauss vector from a quantized version of its noisy random linear projections under MSE distortion. In order to characterize the minimal MSE in this recovery, we derived a single-letter expression given in terms of the sampling ratio, noise intensity and the source's sparsity. Based on two assumptions that follow from the replica method, we showed that this single-letter expression is achievable. In addition, we showed that by restricting the quantizer to a fixed number of bits while the system dimension goes to infinity, the MSE in recovering any sub-block of the source is bounded from below by the aforementioned expression.

Our results leave a few open questions which will be addressed in our future work. First, the encoding strategy presented in our achievability proof estimates each entry of the source before encoding it, and may be impractical in some CS applications where estimation before quantization is impossible. In addition, our converse result leaves the possibility that some encoding strategies that estimate source blocks of size increasing with the system dimensions perform
better than the expression derived in this paper. Another point worth investigating is the sampling rate allowing optimal reconstruction under rate-limited quantization. Indeed, it was shown in [21] that the Rényi information dimension is the critical sampling ratio leading to a bounded noise sensitivity under any sampler, i.e., not necessarily under a random linear sampler as considered in this work. Therefore, an interesting question that arises is whether the critical sampling ratio of [21] changes under the constraint of quantization at rate $R$: namely, whether there exists a sampler of sampling ratio smaller than the Rényi information dimension of $P_X$, such that the MMSE converges to the DRF of $X^n$ as the SNR goes to infinity. The minimal sampling ratio for which such a sampler exists is the CS equivalent of the DRF-attaining sub-Nyquist sampling rate derived in [28].
V. PROOFS

In this Appendix we provide proofs for Theorems 3.1 and 3.2.

A. Proof of Thm. 3.1

Fix $\varepsilon > 0$. We show that for $L$ and $n$ large enough, there exists an encoder $g : \mathbb{R}^m \to \{0,1\}^{nR}$ such that for any $k \in \{0, L, 2L, \ldots, n - L\}$,

$$\frac{1}{L} E\left\| X_{k+1}^{k+L} - \hat{X}_{k+1}^{k+L} \right\|^2 \le D_{X|Z}(R) + \varepsilon, \quad \text{where} \quad \hat{X}_{k+1}^{k+L} = E\left[ X_{k+1}^{k+L} \mid g(Y^m) \right].$$

Showing the latter would imply that the distortion over the entire $n$-length block satisfies

$$\frac{1}{n} E\left\| X^n - \hat{X}^n \right\|^2 = \frac{1}{n} \sum_{i=1}^{n} E\left( X_i - \hat{X}_i \right)^2 = \frac{1}{n} \sum_{k = 0, L, 2L, \ldots} E\left\| X_{k+1}^{k+L} - \hat{X}_{k+1}^{k+L} \right\|^2 \le D_{X|Z}(R) + \varepsilon,$$

where without loss of generality we assumed that $n/L$ is an integer.

For convenience we assume $k = 0$ and therefore only consider the distortion in reconstructing the block $X^L$. The generalization from $k = 0$ to an arbitrary $k$ is straightforward by setting $X^L \leftarrow X_{k+1}^{k+L}$ and repeating the arguments below for each $k$.

Denote by $\hat{X}_z^L$ the minimal MSE estimate of $X^L$ from $Z^L = \sqrt{\eta\gamma}\, X^L + W^L$, namely $\hat{X}_z^L = E[X^L \mid Z^L]$. Denote by $P^*_{\hat{X}_z, \tilde{X}_z}$ the joint distribution of $\hat{X}_z$ and $\tilde{X}_z$ that attains the information DRF of the i.i.d. source $\hat{X}_z$ with respect to the MSE distortion. Namely, $P^*_{\hat{X}_z, \tilde{X}_z}$ attains the minimal value of

$$D_{\hat{X}_z}(R) \triangleq \inf_{P_{\hat{X}_z, \tilde{X}_z}} E\left( \hat{X}_z - \tilde{X}_z \right)^2, \quad \text{subject to} \quad I\left( P_{\hat{X}_z, \tilde{X}_z} \right) \le R.$$

Define the typical set $A_\varepsilon^L$ as the set of length-$L$ sequence pairs $(\hat{x}_z^L, \tilde{x}_z^L) \in \mathbb{R}^L \times \mathbb{R}^L$ that satisfy

$$\left| \frac{1}{L} \log \frac{P^*_{\hat{X}_z^L, \tilde{X}_z^L}(\hat{x}_z^L, \tilde{x}_z^L)}{P_{\hat{X}_z^L}(\hat{x}_z^L)\, P^*_{\tilde{X}_z^L}(\tilde{x}_z^L)} - R \right| \le \varepsilon \quad \text{and} \quad \left| \frac{1}{L} \left\| \hat{x}_z^L - \tilde{x}_z^L \right\|^2 - D_{\hat{X}_z}(R) \right| < \varepsilon.$$

We now generate a random codebook adjusted to the source coding problem with respect to $\hat{X}_z^L$ by drawing $2^{LR}$ times from the distribution $P^*_{\tilde{X}_z^L}$, the marginal of $\tilde{X}_z^L$ with respect to $P^*_{\hat{X}_z, \tilde{X}_z}$. To each realization $\tilde{x}_z^L$ we assign an index $u_i$, $i = 1, \ldots, 2^{LR}$. We now describe the encoding and decoding procedures.

Encoding: upon receiving $Y^m$, the encoder forms the best estimate of $X^L$ based on $Y^m$ and the sampling matrix $H$. We denote this estimate by $\hat{X}_y^L$, namely $\hat{X}_y^L = E\left[ X^L \mid Y^m, H \right]$. The encoder then looks for a sequence $\tilde{X}_z^L$ in the codebook such that $(\hat{X}_y^L, \tilde{X}_z^L) \in A_\varepsilon^L$. If there is no such sequence, the encoder sends the index $u_1$. If there is more than one such sequence, the encoder picks the one with the smallest index $u$. The encoder then transmits the index associated with the selected $\tilde{X}_z^L$.

Decoding: the decoder declares $\tilde{X}_z^L(u)$ as its estimate for $X^L$.
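A toy simulation of this construction on the decoupled scalar channel (4) is sketched below, with two simplifications named explicitly: minimum-distance encoding stands in for the joint-typicality encoder, and codewords are drawn from the marginal of the MMSE estimate itself rather than from the optimizing reproduction distribution $P^*_{\tilde{X}_z}$ (which is not available in closed form). All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
p, s, L, R = 0.3, 30.0, 8, 1.0   # sparsity rate, SNR gamma*eta, block length, bits/symbol

def mmse_est(z):
    # E[X | Z=z] for the Bernoulli-Gauss prior on the channel (4).
    f1 = norm.pdf(z, scale=np.sqrt(1 + s))
    f0 = norm.pdf(z)
    return (p * f1 / (p * f1 + (1 - p) * f0)) * np.sqrt(s) * z / (1 + s)

def draw_blocks(num):
    # Length-L blocks of the source and of its scalar-channel MMSE estimate.
    x = np.where(rng.random((num, L)) < p, rng.standard_normal((num, L)), 0.0)
    z = np.sqrt(s) * x + rng.standard_normal((num, L))
    return x, mmse_est(z)

# Random codebook of 2^{LR} words drawn i.i.d. (simplification: from the
# marginal of the estimate, not the optimizing reproduction distribution).
_, codebook = draw_blocks(2 ** int(L * R))

# Encode fresh blocks by minimum distance on the estimates, decode by lookup.
x, xhat = draw_blocks(2000)
idx = np.argmin(((xhat[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1), axis=1)
D = np.mean((x - codebook[idx]) ** 2)
print(f"per-symbol distortion of the toy scheme: {D:.4f}")
```

Consistent with (10) below, the measured distortion splits into the scalar MMSE plus the quantization error on the estimates.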
We now analyze the expected distortion in this encoding and decoding scheme, taken over all realizations of the random source, sampling matrix and codebook generation. Note that properties of the conditional expectation imply

$$E\left\| X^L - \tilde{X}_z^L(U) \right\|^2 = E\left\| X^L - \hat{X}_y^L \right\|^2 + E\left\| \hat{X}_y^L - \tilde{X}_z^L(U) \right\|^2 = L\,\mathrm{mmse}(X \mid Y^m) + E\left\| \hat{X}_y^L - \tilde{X}_z^L(U) \right\|^2. \qquad (10)$$

We focus on the second term in the RHS of (10). Fix $\varepsilon_1 > 0$ and denote by $E$ the event that $\hat{X}_y^L$ is such that there is no $\tilde{x}_z^L$ in the codebook for which $(\hat{X}_y^L, \tilde{x}_z^L) \in A_\varepsilon^L$. Denote by $E^c$ the complement of $E$. Consider

$$E\left\| \hat{X}_y^L - \tilde{X}_z^L(U) \right\|^2 = E\left[ \left\| \hat{X}_y^L - \tilde{X}_z^L(U) \right\|^2 \,\middle|\, E^c \right] P(E^c) + E\left[ \left\| \hat{X}_y^L - \tilde{X}_z^L(U) \right\|^2 \,\middle|\, E \right] P(E)$$
$$\overset{(a)}{\le} L\left( D_{\hat{X}_z}(R) + \varepsilon \right) + P(E) \sum_{l=1}^{L} E\left( \hat{X}_{y,l} - \tilde{X}_{z,l}(u_1) \right)^2 \overset{(b)}{\le} L\left( D_{\hat{X}_z}(R) + \varepsilon \right) + 4 L\, E[X^2]\, P(E),$$

where (a) follows from the definition of the typical set and (b) is because, for all $l = 1, \ldots, L$, $E\tilde{X}_{z,l}^2 \le E X^2$ and $E\hat{X}_{y,l}^2 \le E X^2$. So far we have shown that

$$\frac{1}{L} E\left\| X^L - \tilde{X}_z^L(U) \right\|^2 \le \mathrm{mmse}(X \mid Z) + D_{\hat{X}_z}(R) + \varepsilon + 4 E[X^2]\, P(E),$$

where the expectation is with respect to all realizations of the source sequence, the sampling matrix, the noise, and the codebook generation. Since $D_{X|Z}(R) = \mathrm{mmse}(X \mid Z) + D_{\hat{X}_z}(R)$, in order to complete the proof it is enough to show that $P(E)$ goes to zero in the large system limit. Showing this would imply that there exists at least one codebook, constructed according to the procedure above, that attains distortion $D_{X|Z}(R)$. The result follows from the following lemma:

Lemma 5.1: Fix $\varepsilon_1 > 0$. Then there exists $n_0$ large enough such that $P(E) < \varepsilon_1$ for all $n \ge n_0$.

Proof of Lemma 5.1 (sketch): We first choose $L$ to be large enough such that sequences generated according to $P^*_{\hat{X}_z^L, \tilde{X}_z^L}$ satisfy $P\left( (\hat{X}_z^L, \tilde{X}_z^L) \notin A_\varepsilon^L \right) < \varepsilon_1 / 2$. The existence of such $L$ follows from the definition of the typical set and the law of large numbers [17]. Next, we use (A2) to choose $n$ large enough such that the probability that $\hat{X}_y^L$ has no matching jointly typical $\tilde{X}_z^L$ in $A_\varepsilon^L$ under $P_{X^L|Y^m} P^*_{\tilde{X}_z^L}$ is $\varepsilon_1/2$-close to its probability under $P_{X|Z}^{\otimes L} P^*_{\tilde{X}_z^L}$. Since typicality properties are defined only with respect to the distribution, the rest of the proof follows from the standard source coding achievability arguments [17].

We also note that the convergence in Lemma 5.1 is uniform in $k$ since the distribution of any block is identical. Therefore, a single choice of $L$ and $n$ large enough in Lemma 5.1 would be good for all disjoint blocks of the form $X_{k+1}^{k+L}$.

B. Proof of Thm. 3.2

In order to simplify notation, we assume that $k = 0$, so that we only consider the block $X^L$. The generalization of the proof to any length-$L$ block $X_{k+1}^{k+L}$ is trivial by symmetry and is therefore omitted from the proof. Define the following distortion measure on $\mathbb{R}^m \times \mathbb{R}^L$:

$$d_y\left( y^m, \tilde{x}^L \right) = E\left[ \left\| X^L - \tilde{x}^L \right\|^2 \,\middle|\, Y^m = y^m \right].$$

Consider now the standard source coding problem with information source obtained as i.i.d. draws from the distribution $P_{Y^m}$, with respect to the distortion measure $d_y$. Denote by $D_y(r)$ the information DRF with respect to the i.i.d. $m$-block source $\{Y^m(q)\}$, $q = 1, 2, \ldots$, and distortion $d_y$ at rate $r$ bits per $m$-block source. This function is the minimum of $E\, d_y(Y^m, \tilde{X}^L)$, taken over all joint probability distributions $P_{Y^m, \tilde{X}^L}$ subject to the mutual information constraint $I(P_{Y^m, \tilde{X}^L}) \le r$. The converse to Shannon's source coding theorem implies that any estimator $\varphi : \{0,1\}^r \to \mathbb{R}^L$ cannot attain distortion with respect to $d_y$ smaller than $D_y(r)$. We thus have

$$E\left\| X^L - \varphi(g(Y^m)) \right\|^2 = E\, d_y\left( Y^m, \varphi(g(Y^m)) \right) \ge D_y(LR).$$

The proof would follow by showing that for any $\varepsilon > 0$, there exists $n_0$ such that for all $n > n_0$,

$$\frac{1}{L}\, D_y(LR) \ge D_{X|Z}(R) - \varepsilon.$$

Fix any joint probability distribution $Q_{Y^m, \tilde{X}^L}$ with mutual information $I(Q_{Y^m, \tilde{X}^L}) \le LR$ whose marginal with respect to $Y^m$ is $Q_{Y^m} = P_{Y^m}$, and, without loss of generality, whose marginal with respect to $\tilde{X}^L$ has a finite second moment.
Note that we have the following Markov chain

$$Z^L \to X^L \to Y^m \to \tilde{X}^L,$$

defined by the distributions $P_{Z^L|X^L}\, P_{X^L|Y^m}\, Q_{Y^m, \tilde{X}^L}$. The expected distortion $d_y$ with respect to the distribution $Q_{Y^m, \tilde{X}^L}$ is bounded from below as follows:

$$E_Q\, d_y\left( Y^m, \tilde{X}^L \right) = \int \left\| x^L - \tilde{x}^L \right\|^2 P_{X^L|Y^m}(dx^L, y^m)\, Q_{Y^m, \tilde{X}^L}(dy^m, d\tilde{x}^L)$$
$$= \int \left\| x^L - \tilde{x}^L \right\|^2 P_{X^L|Z^L}(dx^L, z^L)\, P_{Z^L|Y^m}(dz^L, y^m)\, Q_{Y^m, \tilde{X}^L}(dy^m, d\tilde{x}^L) \qquad (12)$$
$$+ \int \left\| x^L - \tilde{x}^L \right\|^2 \left\{ P_{X^L|Y^m}(dx^L, y^m) - P_{X^L|Z^L}(dx^L, z^L)\, P_{Z^L|Y^m}(dz^L, y^m) \right\} Q_{Y^m, \tilde{X}^L}(dy^m, d\tilde{x}^L). \qquad (13)$$

By eliminating $y^m$ from (12) we get

$$\int \left\| x^L - \tilde{x}^L \right\|^2 P_{X^L|Z^L}(dx^L, z^L)\, P_{Z^L, \tilde{X}^L}(dz^L, d\tilde{x}^L) \overset{(a)}{\ge} \sum_{l=1}^{L} D_{X|Z}\left( I\left( P_{Z_l, \tilde{X}_l} \right) \right)$$
$$\overset{(b)}{\ge} L\, D_{X|Z}\left( \frac{1}{L} \sum_{l=1}^{L} I\left( P_{Z_l, \tilde{X}_l} \right) \right) \overset{(c)}{\ge} L\, D_{X|Z}\left( \frac{1}{L} I\left( P_{Z^L, \tilde{X}^L} \right) \right) \overset{(d)}{\ge} L\, D_{X|Z}(R),$$

where (a) follows from the definition of the information iDRF of $X$ given $Z$, (b) follows by convexity of the iDRF [17], (c) follows from the chain rule of mutual information and since conditioning reduces entropy, and (d) follows from the data processing inequality, since $I(P_{Z^L, \tilde{X}^L}) \le I(Q_{Y^m, \tilde{X}^L}) \le LR$ and since $D_{X|Z}(R)$ is non-increasing.

We now consider the term (13): we expand the square $\|x^L - \tilde{x}^L\|^2$, take the absolute value, and marginalize over all variables not appearing in any of the terms. This procedure leads to

$$2 \left| \int \left\| x^L \right\|^2 P_{X^L|Y^m}(dx^L, y^m)\, P_{Y^m}(dy^m) - \int \left\| x^L \right\|^2 P_{X^L|Z^L}(dx^L, z^L)\, P_{Z^L}(dz^L) \right| + 0, \qquad (14)$$

where the summation over $\tilde{x}^L$ led to $0$. Note that the last expression is controlled by $E\left| E\left[ \|X^L\|^2 \mid Y^m \right] - E\left[ \|X^L\|^2 \mid Z^L \right] \right|$. Since the posterior $P_{X^L|Y^m}$ is uniformly bounded, convergence of the posterior in distribution in (A2) also implies convergence in second moment, hence the last expression goes to zero as $n$ goes to infinity.

VI. ACKNOWLEDGMENTS

The authors would like to thank U. Erez and Y. Kochman for helpful discussions regarding the problem formulation. This work is supported in part by the National Science Foundation (NSF) and the United States-Israel Binational Science Foundation (BSF).

REFERENCES

[1] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[3] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[4] R. Baraniuk, S. Foucart, D. Needell, Y. Plan, and M. Wootters, "Exponential decay of reconstruction error from binary measurements of sparse signals," arXiv preprint, 2014.
[5] P. T. Boufounos and R. G. Baraniuk, "1-bit compressive sensing," in Information Sciences and Systems (CISS), 42nd Annual Conference on. IEEE, 2008, pp. 16-21.
[6] V. K. Goyal, A. K. Fletcher, and S. Rangan, "Compressive sampling and lossy compression," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 48-56, 2008.
[7] A. K. Fletcher, S. Rangan, and V. K. Goyal, "On the rate-distortion performance of compressed sensing," in Acoustics, Speech and Signal Processing (ICASSP), 2007 IEEE International Conference on, vol. 3. IEEE, 2007, pp. III-885.
[8] G. Coluccia, A. Roumy, and E. Magli, "Operational rate-distortion performance of single-source and distributed compressed sensing," IEEE Trans. Commun., vol. 62, no. 6, 2014.
[9] R. Gray and D. Neuhoff, "Quantization," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2325-2383, Oct. 1998.
[10] M. Mézard and A. Montanari, Information, Physics, and Computation, ser. Oxford Graduate Texts. OUP Oxford, 2009.
[11] S. B. Korada and N. Macris, "Tight bounds on the capacity of binary input random CDMA systems," IEEE Trans. Inf. Theory, vol. 56, no. 11, 2010.
[12] D. L. Donoho, A. Javanmard, and A. Montanari, "Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing," in Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on. IEEE, 2012.
[13] W. Huleihel, N. Merhav, and S. Shamai (Shitz), "On compressive sensing in coding problems: a rigorous approach," IEEE Trans. Inf. Theory, vol. 61, no. 10, 2015.
[14] G. Reeves and H. D. Pfister, "The replica-symmetric prediction for compressed sensing with Gaussian matrices is exact," CoRR, 2016.
[15] J. Barbier, M. Dia, N. Macris, and F. Krzakala, "The mutual information in random linear estimation," CoRR, 2016.
[16] D. Guo and S. Verdú, "Randomly spread CDMA: asymptotics via statistical physics," IEEE Trans. Inf. Theory, vol. 51, no. 6, pp. 1983-2010, June 2005.
[17] T. Berger, Rate-Distortion Theory: A Mathematical Basis for Data Compression. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[18] R. Dobrushin and B. Tsybakov, "Information transmission with additional noise," IRE Trans. Inform. Theory, vol. 8, no. 5, 1962.
[19] C. Weidmann and M. Vetterli, "Rate distortion behavior of sparse sources," IEEE Trans. Inf. Theory, vol. 58, no. 8, 2012.
[20] Y. Wu and S. Verdú, "Rényi information dimension: Fundamental limits of almost lossless analog compression," IEEE Trans. Inf. Theory, vol. 56, no. 8, pp. 3721-3748, 2010.
[21] Y. Wu and S. Verdú, "Optimal phase transitions in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 10, pp. 6241-6263, Oct. 2012.
[22] G. Reeves and M. Gastpar, "The sampling rate-distortion tradeoff for sparsity pattern recovery in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 5, 2012.
[23] A. Kipnis, G. Reeves, Y. C. Eldar, and A. J. Goldsmith, "Fundamental limits of compressed sensing under optimal quantization," in Information Theory (ISIT), 2017 IEEE International Symposium on, June 2017.
[24] A. Kipnis, A. J. Goldsmith, Y. C. Eldar, and T. Weissman, "Distortion rate function of sub-Nyquist sampled Gaussian sources," IEEE Trans. Inf. Theory, vol. 62, no. 1, Jan. 2016.
[25] J. Wolf and J. Ziv, "Transmission of noisy information to a noisy receiver with minimum distortion," IEEE Trans. Inf. Theory, vol. 16, no. 4, pp. 406-411, 1970.
[26] H. Witsenhausen, "Indirect rate distortion problems," IEEE Trans. Inf. Theory, vol. 26, no. 5, pp. 518-521, 1980.
[27] S. Arimoto, "An algorithm for computing the capacity of arbitrary discrete memoryless channels," IEEE Trans. Inf. Theory, vol. 18, no. 1, pp. 14-20, Jan. 1972.
[28] A. Kipnis, Y. C. Eldar, and A. J. Goldsmith, "Fundamental distortion limits of analog-to-digital compression," 2016.