
Lattice Quantization with Side Information

Sergio D. Servetto
Laboratoire de Communications Audiovisuelles
Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland.

Abstract

We consider the design of lattice vector quantizers for the problem of coding Gaussian sources with uncoded side information available only at the decoder. The design of such quantizers can be reduced to the problem of finding an appropriate sublattice of a given lattice codebook. We study the performance of the resulting quantizers in the limit as the encoding rate becomes high, and we evaluate these asymptotics for three lattices of interest: the hexagonal lattice A_2, the Gosset lattice E_8, and the Leech lattice Λ_24. We also verify these asymptotics numerically, via computer simulations based on the lattice A_2. Surprisingly, the lattice E_8 achieves the best performance of all cases considered.

1 Introduction

1.1 Rate Distortion with Side Information

Let {(X_n, Y_n)}_{n=1}^∞ be a sequence of independent drawings of a pair of dependent random variables X and Y, and let D(x, x̂) denote a single-letter distortion measure. The problem of rate distortion with side information at the decoder asks how many bits are required to encode the sequence {X_n} under the constraint that E D(X, X̂) ≤ d, assuming the side information {Y_n} is available to the decoder but not to the encoder [4, Ch. 14.9]. This problem, first considered by Wyner and Ziv in [13], is a special case of the general problem of coding correlated information sources considered by Slepian and Wolf [11], in that one of the sources ({Y_n}) is available uncoded at the decoder. But it also generalizes the setup of [11], in that coding is with respect to a fidelity criterion rather than noiseless. In [12, 13], Wyner and Ziv derive the rate/distortion function R*(d) for this problem, for general sources and general (single-letter) distortion metrics.
Under similar assumptions, a more general function R(d_1, d_2) was considered by Heegard and Berger in [7], for the case when there is uncertainty about whether side information is available at the decoder or not. In this work, however, we restrict our attention to Gaussian sources (in which Y_n = X_n + Z_n, where Z is also Gaussian and independent of X) and mean squared error (MSE) distortion. This case is of special interest because, under these conditions, it happens that R*(d) = R_{X|Y}(d), the conditional rate/distortion

function assuming Y is available at the encoder [13]. We are intrigued by the fact that there exist coding methods which can perform as well as if they had access to the side information at the encoder, even though they do not. Our main goal in this paper, then, is to construct a family of quantizers which realizes these promised gains.

The design of quantizers for the problem of rate distortion with side information (also referred to in this work as Wyner-Ziv encoding) was first considered recently by Zamir, Verdú and Shamai, who present design criteria for two different cases: Bernoulli sources with Hamming metric, and jointly Gaussian sources with mean squared error metric [10, 15]. The key contribution of that work is a constructive mechanism for, given a codebook, using the side information at the decoder to reduce the amount of information that needs to be encoded to identify codewords, while at the same time achieving essentially the distortion of the given codebook. However, the authors do not present any concrete examples of the application of their technique to a particular codebook; the first such example is worked out by Pradhan and Ramchandran in [8], where they design and thoroughly analyze the performance of trellis codes based on the codebook partitioning ideas of [10, 15].

1.2 Lattice Quantizers for Wyner-Ziv Encoding

High-rate quantization theory provides much of the motivation to consider lattices [6]. Under an assumption of fine quantization, the performance of an n-dimensional quantizer Λ whose Voronoi cells are all congruent to a polytope P is given by

  d = C(P) e^{−2(H(Λ; p_X) − h(p_X))},   (1)

where p_X is the joint source distribution in n dimensions, H is the discrete entropy induced on the codebook Λ by quantization of the source p_X, h is the differential entropy, and

  C(P) = (1/n) ∫_P ||x − x̂||² dx / (∫_P dx)^{1+2/n}

is the normalized second moment of P (using MSE as a distortion measure) [5, 14].
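As a quick numerical sanity check on the definition of C(P) (our own illustration, not part of the paper): for the unit cube, the Voronoi cell of the cubic lattice Z^n, the normalized second moment equals 1/12 ≈ 0.0833 in every dimension. The sketch below estimates it by Monte Carlo integration.

```python
import random

def second_moment_cube(n, samples=200_000, seed=1):
    """Monte Carlo estimate of C(P) for the unit cube [-1/2, 1/2]^n.

    Here Vol(P) = 1 and the nearest codeword is the cell center 0, so
    C(P) = (1/n) * E||x||^2, which should come out near 1/12.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += sum((rng.random() - 0.5) ** 2 for _ in range(n))
    return total / (samples * n)

for n in (1, 2, 8):
    print(n, second_moment_cube(n))  # each estimate is close to 1/12 ~ 0.0833
```

Good lattices improve on the cube: C(A_2) = 5/(36√3) ≈ 0.0802, already below 1/12.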
In the problem of rate distortion with side information, for Gaussian sources and MSE distortion, the goal is to attain a distortion value d using R_{X|Y}(d) < R_X(d) nats/sample. In (1) this means that, at a fixed bit rate R_0, we want to design quantizers that achieve distortion

  d_0 ≈ c_n e^{−2(R_0 − h(p_{X|Y}))}

when coding X, where c_n ≤ C(P) is the coefficient of quantization in n dimensions [5]. But since we do not have access to Y (we only know p_{X|Y}), using classical quantizers we can only attain a distortion value d ≈ c_n e^{−2(R_0 − h(p_X))} > d_0 (because h(X|Y) < h(X)); equivalently, we need to use some extra rate ρ ≈ R_X(d_0) − R_{X|Y}(d_0) such that

  d_0 ≈ c_n e^{−2(R_0 + ρ − h(p_X))}.
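To put a number on this rate penalty (our own worked example, using the source parameters that appear later in the paper): for the jointly Gaussian model Y = X + Z, the high-rate gap is ρ ≈ h(p_X) − h(p_{X|Y}) = ½ ln((σ_X² + σ_Z²)/σ_Z²), which is just the mutual information I(X; Y).

```python
import math

def extra_rate_nats(var_x, var_z):
    """High-rate gap rho = h(X) - h(X|Y) for Y = X + Z, all Gaussian.

    X given Y is Gaussian with variance var_x*var_z/(var_x + var_z), so the
    difference of differential entropies is (1/2) ln((var_x+var_z)/var_z),
    i.e. the mutual information I(X;Y).
    """
    return 0.5 * math.log((var_x + var_z) / var_z)

rho = extra_rate_nats(1.0, 0.01)
print(rho, rho / math.log(2))  # about 2.31 nats/sample, i.e. about 3.3 bits/sample
```

This is the rate a classical quantizer would waste at σ_X² = 1, σ_Z² = 0.01: a sizeable fraction of the total budget at the rates considered later.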

What makes this problem interesting is that we are only allowed to use R_0 nats/sample, not R_0 + ρ. One way to do that has been proposed by Shamai, Verdú and Zamir in [10, 15], and consists of: (a) taking a codebook with roughly 2^{n(R_0+ρ)} codewords and distortion d_0; (b) partitioning this codebook into 2^{nR_0} sets of size 2^{nρ} each; (c) encoding only enough information to identify each one of the 2^{nR_0} sets; and (d) using the side information Y to discriminate among the 2^{nρ} codewords collapsed into each set. In that construction, however, two problems need to be dealt with. First, the codebook partitions need to be designed in a way such that the ability of the decoder to discriminate among codewords within each set is maximized. Second, partitions which maximize "discriminability" result in a roughly uniform distribution for the symbols to be encoded (we will see later why), thus sacrificing whatever gains may be possible due to entropy coding.

Lattice structures provide an intuitively appealing framework in which to design quantizers for the Wyner-Ziv problem, for a number of reasons:

- Certain interesting lattices have an algebraic structure which is useful in the design of the sought codebook partitions, as well as fast encoding algorithms which make them attractive from an implementation viewpoint [3, Ch. 20].

- The Asymptotic Equipartition Property (AEP) for stationary ergodic sources implies the existence of typical sets [4, Ch. 3]. These are sets of sequences of source symbols which contain almost all the probability mass, and in which all sequences are roughly equally likely. This is particularly relevant for us, since it provides a way to cope with our inability to take advantage of whatever entropy coding gains may be possible for this particular source: since lattices are vector quantizers, we are effectively coding points in the typical set, for which no entropy coding is needed anyway due to their uniform distribution.
- Since we are only changing the encoding procedure to take advantage of the side information at the decoder, but not the shape of the quantization cells, we still need quantizers with cells having good second moment properties. This is a condition met by many known lattices [3, Ch. 2 & 21].

- As pointed out in [15], good codes for Wyner-Ziv encoding must be simultaneously good quantizers as well as good channel codes. And in many cases, the best known lattice quantizer is also the densest known sphere packing, which under certain conditions is equivalent to the channel coding problem [3, Ch. 3].

We should also mention, among the reasons to consider lattices, our wish to answer a challenge recently posed by Zamir and Shamai in [15]. They present an encoding procedure very closely related to the one we propose here, they argue the existence of good lattices to use with that procedure, and they study their distortion performance, but they do not present any examples of concrete constructions: their paper concludes by saying that "beyond the question of existence, it would be nice to find specific constructions of good nested codes" (sic). Finding those specific constructions is one of the original contributions of this work.

1.3 Main Contributions and Organization of the Paper

In this paper we construct lattice quantizers for the problem of rate distortion with side information, for jointly Gaussian sources with MSE distortion. Our main contribution is the presentation of specific examples of pairs of nested lattices, the computation of their resulting distortion at high rates, and numerical simulations to verify the computed asymptotics in one simple case. Since our goal is to present a systematic and constructive approach, we have chosen to restrict our attention to lattices with a rich algebraic structure, like those studied in [3]. To design these quantizers we search for an appropriate sublattice of a given lattice, and in this search the algebraic structure of the lattice is helpful.

The rest of this paper is organized as follows. In Section 2 we define and give some intuition on the structure of the proposed quantizers. In Section 3 we study the asymptotic performance of these quantizers at high rates, and we present some experimental results consistent with these asymptotics. Finally, we present conclusions in Section 4.

2 Structure of Wyner-Ziv Lattice Quantizers

2.1 Definitions

A Wyner-Ziv Lattice Vector Quantizer (WZ-LVQ) is a triplet Q = (Λ, κ, s), where:

- Λ is a lattice.

- κ : R^n → R^n is a linear operator such that κu · κv = c u · v, and such that Λ′ = κ(Λ) ⊆ Λ. Essentially, κ defines a similar sublattice of Λ.¹

- s ∈ (0, ∞) is a scale factor that expands (or shrinks) Λ and Λ′.

Intuitively, the lattice Λ is the fine codebook, the one whose codewords are to be partitioned into equivalence classes. We choose to implement this partition by considering a sublattice Λ′ ⊆ Λ, and then considering the resulting quotient group Λ/Λ′. Since the fine lattice is partitioned into |Λ/Λ′| equivalence classes, the rate of the resulting quantizer is (1/n) ln(|Λ/Λ′|) nats/sample (n is the dimension of the lattice), irrespective of the scale factor s.
And s is a constant that multiplies the generator matrices of the lattices considered, to be adjusted as a function of the correlation between the source X and the side information Y: small s leads to fine quantization but poor ability to discriminate among codewords within each partition of Λ; large s leads to good discrimination ability but coarse quantization.

The question of the existence of similar sublattices arose recently in connection with another vector quantization problem [9], and also in the study of symmetries of

¹ Two lattices Λ_1 and Λ_2 (with generator matrices M_1 and M_2) are said to be similar when there exist a constant c ≠ 0, a matrix U with integer entries and |det(U)| = 1, and a real matrix B with BBᵀ = I, such that M_2 = cUM_1B [3]. Intuitively, similar lattices "look the same", up to a rotation, a reflection, and a change of scale.
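To make the notion of a similar sublattice concrete, here is a small illustration of our own (not taken from the paper): viewing A_2 as the set of integer combinations of 1 and ω = e^{iπ/3} in the complex plane, multiplication by any κ = a + bω maps the lattice into itself, scales all inner products by c = |κ|² = a² + ab + b², and produces a similar sublattice of index c^{n/2} = a² + ab + b² (since n = 2). For κ = 2 + ω this gives an index-7 sublattice.

```python
import cmath

# A_2 modeled as {p + q*w : p, q integers}, with w = exp(i*pi/3).
w = cmath.exp(1j * cmath.pi / 3)

def index_of_similarity(a, b):
    """Index |L/L'| of the sublattice k * A_2, with k = a + b*w.

    k*1 = a + b*w and k*w = -b + (a+b)*w (using w^2 = w - 1), so in the
    basis (1, w) the similarity has matrix [[a, b], [-b, a+b]], whose
    determinant a^2 + a*b + b^2 equals |k|^2, the norm of the similarity.
    """
    return a * a + a * b + b * b

k = 2 + 1 * w
c = abs(k) ** 2  # squared-length scaling applied by the similarity
print(index_of_similarity(2, 1), round(c))  # index 7, and c = 7 as well
```

Sublattices of this kind (index 7, 57, ...) are exactly the ones used in the hexagonal-lattice simulations of Section 3.3.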

quasicrystals [1]. The subject is thoroughly covered in [2], where necessary (and in some cases sufficient) conditions are given for their existence.

2.2 Encoding/Decoding

For a lattice Λ of dimension n, let Q_Λ : R^n → Λ be its nearest neighbor map (i.e., Q_Λ(x) = arg min_{λ∈Λ} ||x − λ||²), and let Λ/Λ′ denote the corresponding quotient group. Let X denote a block of n source samples, and Y a block of n side information samples. The encoder and decoder are maps f : R^n → Λ/Λ′ and g : Λ/Λ′ → Λ, defined by

  X̃ = f(X) = Q_Λ(X) − Q_{Λ′}(X),    X̂ = g(X̃) = Q_{Λ′+X̃}(Y),

whose operation is illustrated in Fig. 1, with an example based on the lattice A_2.

Figure 1: To illustrate the mechanics of the proposed quantizers (left: encoding, right: decoding). A sublattice similar to the base lattice is chosen (circled points), matched to how far apart X and Y are expected to be: in this example, with high probability X and Y are in neighboring Voronoi cells of the fine lattice. Then X is quantized twice (using the fine and the coarse lattice), and the difference between these two is sent to the decoder, as a representative of the set of all codewords collapsed into the same equivalence class. At the decoder, the entire class is recreated (all the points with a thick arrow in the right picture), and among these, the point closest to the side information Y is declared to be the original quantized value for X. Note that there is always a chance that a particular realization of the noise process may take Y too far away from X, in which case a decoding error occurs.

Now it should become clear why earlier we said that the proposed encoding procedure results in uniformly distributed symbols at the output of the encoder.
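Before turning to that point, the maps f and g can be mimicked in one dimension with the nested pair Z ⊃ mZ — a toy stand-in of our own for the A_2 example of the figure, with Q_Λ and Q_{Λ′} implemented by rounding:

```python
def encode(x, m):
    """f(X) = Q_L(X) - Q_L'(X) for L = Z and L' = m*Z: the coset label."""
    fine = round(x)             # Q_L(x): nearest integer
    coarse = m * round(x / m)   # Q_L'(x): nearest multiple of m
    return fine - coarse

def decode(t, y, m):
    """g(t) = Q_{L'+t}(Y): the point of the coset m*Z + t nearest to y."""
    return m * round((y - t) / m) + t

m = 8                      # index |L/L'| = 8: the label costs ln(8) nats
x, y = 3.2, 3.05           # side information y is close to x
t = encode(x, m)
print(t, decode(t, y, m))  # label 3; reconstruction 3 = Q_L(x), recovered from y
```

The decoder never sees x, only the label t and the side information y; as in the figure, decoding fails only if y lands closer to another member of the coset mZ + t than to Q_Λ(x).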
Under a fine quantization assumption, the Voronoi cells of the sublattice will be small, and therefore the source density will be roughly constant within these cells (assuming a continuous source density, true in the Gaussian case considered here). But our partition of the fine lattice into equivalence classes splits each one of these large cells into |Λ/Λ′| smaller cells of equal area, which are therefore equally likely.
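This near-uniformity is easy to observe empirically. The sketch below (our illustration, again with the toy nested pair Z ⊃ mZ rather than a lattice from the paper) quantizes Gaussian samples whose spread is much larger than the coarse cell and tabulates the coset labels:

```python
import random
from collections import Counter

rng = random.Random(0)
m = 8
n = 200_000
counts = Counter()
for _ in range(n):
    x = rng.gauss(0.0, 20.0)          # source std much larger than the coarse cell
    fine = round(x)                   # Q_L(x) for L = Z
    coarse = m * round(x / m)         # Q_L'(x) for L' = m*Z
    counts[(fine - coarse) % m] += 1  # coset label, reduced to {0, ..., m-1}

freqs = [counts[k] / n for k in range(m)]
print(min(freqs), max(freqs))  # both close to 1/8 = 0.125
```

All eight labels occur with nearly identical frequency, confirming that entropy coding the encoder output would buy essentially nothing.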

3 Asymptotics of Wyner-Ziv Lattice Quantizers

In this section we study the distortion performance of the proposed quantizers, asymptotically as their encoding rate becomes high.

3.1 Derivation of the Distortion Equation

When applied to a WZ quantizer (Λ, Λ′, s), equation (24) of [5] takes the form:

  D_s = [(1 − p_e(Λ, Λ′, s, σ_Z)) C(Λ) + p_e(Λ, Λ′, s, σ_Z) σ_X²] e^{−2[H(sΛ; p_X) − h(p_X)]},   (2)

where:

- X^n ∼ N(0, σ_X²) is the source sequence, Z^n ∼ N(0, σ_Z²) is a noise sequence (independent of X^n), and Y^n = X^n + Z^n is the side information at the decoder.

- p_e is the probability of a decoding error:

  p_e(Λ, Λ′, 1, σ_Z) = 1 − (σ_Z √(2π))^{−n} ∫_{V(Λ′)} e^{−x·x/(2σ_Z²)} dx = (τ/2) erfc(ρ e^R / (σ_Z √2)),   (3)

  where τ is the kissing number of Λ, and ρ is half the length of a shortest vector in Λ [3, Ch. 3.1].²·³ By a simple change of variables we also get, for general s, p_e(Λ, Λ′, s, σ_Z) = p_e(Λ, Λ′, 1, σ_Z/s).

- C(Λ) is the normalized (and hence scale-independent) second moment of the Voronoi cells of the lattice Λ.

- h(p_X) is the differential entropy of the source to encode.

- H(sΛ; p_X) is the discrete entropy induced on the points in sΛ when this lattice is used as a vector quantizer for the source X. Under a fine quantization assumption, we have that H(sΛ) ≈ h(p_X) − (1/n) ln(Vol(sΛ)).

Equation (2) has a simple interpretation: if no decoding error is made, the MSE is that of the fine codebook; else, the error is bounded by the source variance. And using the above simplifications, it can be rewritten as:

  D_s = σ_Z² e^{−2R} { [1 − (τ/2) erfc(ρ s e^R / (σ_Z √2))] C(Λ) + [(τ/2) erfc(ρ s e^R / (σ_Z √2))] σ_X² } (s e^R / σ_Z)² det(Λ)^{1/n}.   (4)

² The index |Λ/Λ′| is given by c^{n/2}, where c is the norm of the similarity κ [2]. Hence, half the length of a shortest vector in Λ′ is ρ√c = ρ|Λ/Λ′|^{1/n} = ρe^R, where R = (1/n) ln(|Λ/Λ′|) is the rate of the quantizer.
³ The last equality holds only under the assumption that σ_Z ≪ ρe^R.

Equation (4) is a high-SNR approximation of equation (2), obtained by replacing the integral in (3) by its closed-form expression under the assumption of small noise variance. Written in this form, the issues that make the problem of rate distortion with side information interesting (and different from the classical rate distortion problem) become clear:

- Whereas in the classical lattice quantization problem the specification of the coding rate uniquely determines the volume of the Voronoi cells of the lattice quantizer, in this problem it does not: it only specifies the index |Λ/Λ′|. This means that the scale parameter s remains to be determined.

- In determining the best value of s, two contradicting goals are pursued. On one hand, we want to make s as large as possible, because in this way p_e will be small and the distortion will be mostly due to quantization noise: that is, the construction must be a good channel code. On the other hand, we want to make s as small as possible, to achieve low quantization noise: that is, the construction must be a good source code.

3.2 Scale Selection

To complete our design procedure, we still need to specify criteria for the selection of the scale factor s, for which a natural choice is the minimization of D_s as a function of s. At this point, we could proceed simply by setting dD_s/ds = 0 and checking the appropriate sign-inversion condition around the solution found. But since we have not been able to do this analytically (only numerically), this solution is not particularly useful in terms of gaining understanding of this problem. To get some intuition then, we first plot D_s as defined by eqn. (4), for three lattices of interest: A_2, E_8 and Λ_24.
The resulting plot is shown in Fig. 2.

Figure 2: Distortion D_s (MSE) of the proposed WZ quantizers, for a fixed rate of 1 nat/sample and for a source with σ_X² = 1 and σ_Z² = 0.01, as a function of the scale factor s (left: full view; right: zoom; curves for A_2, E_8 and Λ_24). The curve denoted "Λ_24 (wrong τ)" in the right plot refers to the performance that would be attained by the WZ quantizer based on the Leech lattice if the kissing number of this lattice were 240 (as for E_8) instead of 196560. All lattices are normalized to have determinant 1.

We observe some interesting properties of the MSE function:

- In all cases, there is a local minimum of the error. This minimum corresponds to a point s* where, if s < s*, the MSE is dominated by a high probability of error, whereas if s > s* it is dominated by quantization noise.

- In this problem, (1 − p_e)C(Λ) + p_e σ_X² plays a role analogous to that of the coefficient of quantization c_n in the classical problem. A remarkable property of c_n is that it depends only on the geometry of the quantizer, but is independent of the encoding rate and of the shape of the source [5, 14]. However, that property does not hold in the presence of side information, since p_e depends essentially on the noise variance σ_Z².

- e^{2R} σ_Z^{−2} D_s(R) depends on s, R and σ_Z only through se^R/σ_Z. Therefore, knowledge of the location and value of the local minimum s* for one fixed value of R uniquely determines its location and value at all rates and noise levels.⁴

Based on the above considerations, we define the distortion of a WZ lattice quantizer as D(R) = min_s D_s(R), and the optimal s* as that value of the scale factor s which attains the minimum in the definition of D. D(R) is plotted in Fig. 3, where it is compared to other applicable performance bounds.

Figure 3: Predicted performance of the proposed lattice quantizers: (a) D_X: Shannon's distortion/rate function for X; (b) evaluation of D for the lattices A_2 and Λ_24 (negligible difference between these two); (c) evaluation of D for E_8; (d) D_{X|Y}: Wyner-Ziv's distortion/rate function. This plot is for σ_X² = 1 and σ_Z² = 0.01. All lattices are normalized to have determinant 1 (which in the case of A_2 involves scaling it so that its shortest vectors have norm 2/√3).
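The scale-invariance property in the last bullet is easy to verify numerically. The sketch below (our own check, using the standard A_2 constants from [3]) evaluates eqn. (4) at two different (s, R, σ_Z) triples chosen so that se^R/σ_Z is the same, and confirms that e^{2R}σ_Z^{−2}D_s agrees:

```python
import math

# A_2 constants, normalized to determinant 1.
C = 5.0 / (36.0 * math.sqrt(3.0))
rho = math.sqrt(2.0 / math.sqrt(3.0)) / 2.0
tau = 6

def normalized_D(u, var_x=1.0):
    """e^{2R} sigma_Z^{-2} D_s, written as a function of u = s e^R / sigma_Z only."""
    pe = min((tau / 2.0) * math.erfc(rho * u / math.sqrt(2.0)), 1.0)
    return ((1.0 - pe) * C + pe * var_x) * u * u

def D_s(s, R, sigma_z):
    """Eqn. (4): the un-normalized distortion."""
    return (sigma_z ** 2) * math.exp(-2.0 * R) * normalized_D(s * math.exp(R) / sigma_z)

# Two triples with identical u = s e^R / sigma_z (u = 2e in both cases):
d1 = math.exp(2.0 * 1.0) / 0.01 * D_s(0.2, 1.0, 0.1)
d2 = math.exp(2.0 * 3.0) / 0.04 * D_s(0.2 * math.exp(-2.0) * 2.0, 3.0, 0.2)
print(d1, d2)  # the two normalized distortions coincide
```

This is the fact exploited in the numerical minimization: one scan at one rate and noise level determines s* everywhere.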
A surprising result is that the quantizer based on the lattice E_8 comes much closer to the Wyner-Ziv bound than the one based on the 24-dimensional Leech lattice does, whose performance is only a negligible improvement over that of the two-dimensional hexagonal lattice. This result can be explained, however, in terms of the kissing numbers of these lattices. The equivalence between the channel coding problem and the sphere

⁴ This is particularly helpful in the numerical minimization of D_s, since it means the minimization needs to be done only once, for a fixed rate and noise variance.

packing problem for lattices depends heavily on a high-SNR assumption: when the variance of the noise is not negligible compared to the minimum separation between lattice codewords, the probability of error is dominated by the kissing number of the lattice. But in our minimization of D_s, we essentially search for the smallest scale of the shortest lattice vectors which does not result in too high a probability of error, i.e., we reduce the lattice SNR as much as we can. Under these circumstances the Leech lattice, having many more neighbors at close distance than the Gosset lattice does, necessarily performs worse. This is further corroborated by the fact that, if the kissing number of Λ_24 were identical to that of E_8, then it would indeed attain the best performance, as follows from the fact that the curve denoted "Λ_24 (wrong τ)" in Fig. 2 has the lowest local minimum; at higher SNRs though, the Leech lattice attains the least MSE, as was to be expected.

3.3 Numerical Simulations based on the Hexagonal Lattice

To conclude this section, we present preliminary results obtained with a computer implementation of the proposed WZ quantizer, based on the hexagonal lattice. Although we are currently working on implementing these quantizers using other lattices, we feel it is important to back our asymptotic analysis with at least one case of experimental validation, and the results shown in Fig. 4 are relatively simple to obtain using the hexagonal lattice.

Figure 4: Computation of the distortion performance by Monte Carlo methods: 10^8 samples are quantized and dequantized, at bit rates in the range of about 1-6 bits/sample, using sublattices of increasing index N (N = 7, N = 57, and larger); the plot compares D_s (Equation 2) against the computer implementation. Observe how the experimental curve approaches the theoretical curve as the bit rate increases.
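The full simulation uses the hexagonal lattice in C; as a self-contained stand-in, the following Python sketch (our own illustration, not the paper's code, with the one-dimensional nested pair sZ ⊃ sNZ in place of A_2 and its sublattices) runs the same Monte Carlo experiment end to end:

```python
import random

def wz_mse(N, s, var_x=1.0, var_z=0.01, samples=100_000, seed=7):
    """Monte Carlo MSE of WZ coding with fine lattice s*Z and coarse lattice s*N*Z."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(samples):
        x = rng.gauss(0.0, var_x ** 0.5)
        y = x + rng.gauss(0.0, var_z ** 0.5)              # side information
        t = s * round(x / s) - s * N * round(x / (s * N)) # encoder: coset label
        xhat = s * N * round((y - t) / (s * N)) + t       # decoder: coset point nearest y
        err += (x - xhat) ** 2
    return err / samples

# Index N = 8 at a scale large enough that decoding errors are rare:
print(wz_mse(8, 0.3))  # close to the fine-quantization MSE s**2/12 = 0.0075
```

Shrinking s below the safe range makes decoding errors dominate and the empirical MSE blow up toward σ_X², mirroring the error-dominated branch of the D_s curves in Fig. 2.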
C code implementing the WZ hexagonal quantizer, figures for a few large-index sublattices of the hexagonal lattice, and the latest results are available from our webpage.

4 Conclusions

In this paper we presented an overview of our work on the design of lattice quantizers for the problem of rate distortion with side information. We showed how the nested codes studied by Zamir and Shamai in [15] can be implemented as a standard lattice

and a similar sublattice. We studied the asymptotic behavior of the distortion that results from applying our construction to three lattices, and we presented numerical simulations whose results are consistent with the predicted distortion at high rates.

Perhaps the most important conclusion that follows from our results is related to the properties which make certain lattices better than others for this problem. Specifically, we find that dense lattice packings with high kissing numbers (such as the Leech lattice) are not well suited to this problem. Instead, good quantizers with small kissing numbers are to be preferred.

An interesting question that still requires further work is that of finding a concise way of presenting the performance of our quantizers at high rates. For classical quantizers, one such representation is given by e^{2R} D = C(Λ) e^{2h}, since premultiplication by e^{2R} makes the distortion a rate-independent quantity. But this trick does not work for WZ quantizers, since we saw in eqn. (4) that the optimal scale factor also depends on the encoding rate. We hope to be able to find one such short description by taking advantage of the fact that D_s(R) depends on s, R and σ_Z only through se^R/σ_Z (i.e., rate changes result only in a change of scale for the error function).

Acknowledgements. I would like to thank V. A. Vaishampayan and N. J. A. Sloane, for a number of (very educational) conversations on lattices and vector quantization; S. Pradhan and K. Ramchandran, for an accessible explanation of their work, which served as my introduction to this problem [8]; and an anonymous reviewer, who referred me to the related information-theoretic work of Heegard and Berger [7].

References

[1] M. Baake and R. V. Moody. Similarity Submodules and Semigroups. Preprint.
[2] J. H. Conway, E. M. Rains, and N. J. A. Sloane. On the Existence of Similar Sublattices. Canad. J. Math., to appear.
[3] J. H. Conway and N. J. A.
Sloane. Sphere Packings, Lattices and Groups. Springer-Verlag, 3rd edition.
[4] T. Cover and J. Thomas. Elements of Information Theory. John Wiley and Sons, Inc.
[5] A. Gersho. Asymptotically Optimal Block Quantization. IEEE Trans. Inform. Theory, IT-25(4).
[6] R. M. Gray and D. L. Neuhoff. Quantization. IEEE Trans. Inform. Theory, 44(6).
[7] C. Heegard and T. Berger. Rate Distortion when Side Information May Be Absent. IEEE Trans. Inform. Theory, IT-31(6).
[8] S. Pradhan and K. Ramchandran. Distributed Source Coding Using Syndromes (DISCUS): Design and Construction. In Proc. IEEE Data Compression Conf. (DCC), Snowbird, UT.
[9] S. D. Servetto, V. A. Vaishampayan, and N. J. A. Sloane. Multiple Description Lattice Vector Quantization. In Proc. IEEE Data Compression Conf. (DCC), Snowbird, UT.
[10] S. Shamai, S. Verdú, and R. Zamir. Systematic Lossy Source/Channel Coding. IEEE Trans. Inform. Theory, 44(2).
[11] D. Slepian and J. K. Wolf. Noiseless Coding of Correlated Information Sources. IEEE Trans. Inform. Theory, IT-19(4).
[12] A. D. Wyner. The Rate-Distortion Function for Source Coding with Side Information at the Decoder-II: General Sources. Inform. Contr., 38:60-80.
[13] A. D. Wyner and J. Ziv. The Rate-Distortion Function for Source Coding with Side Information at the Decoder. IEEE Trans. Inform. Theory, IT-22(1):1-10.
[14] P. Zador. Asymptotic Quantization Error of Continuous Signals and the Quantization Dimension. IEEE Trans. Inform. Theory, IT-28.
[15] R. Zamir and S. Shamai. Nested Linear/Lattice Codes for Wyner-Ziv Encoding. In Proc. IEEE Inform. Theory Workshop, Killarney, Ireland.


More information

18.2 Continuous Alphabet (discrete-time, memoryless) Channel

18.2 Continuous Alphabet (discrete-time, memoryless) Channel 0-704: Information Processing and Learning Spring 0 Lecture 8: Gaussian channel, Parallel channels and Rate-distortion theory Lecturer: Aarti Singh Scribe: Danai Koutra Disclaimer: These notes have not

More information

Soft-Output Trellis Waveform Coding

Soft-Output Trellis Waveform Coding Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca

More information

MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING

MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING Yichuan Hu (), Javier Garcia-Frias () () Dept. of Elec. and Comp. Engineering University of Delaware Newark, DE

More information

Proceedings of the DATA COMPRESSION CONFERENCE (DCC 02) /02 $ IEEE

Proceedings of the DATA COMPRESSION CONFERENCE (DCC 02) /02 $ IEEE Compression with Side Information Using Turbo Codes Anne Aaron and Bernd Girod Information Systems Laboratory Stanford Universit y,stanford, CA 94305 amaaron@stanford.e du, bgir o d@stanford.e du Abstract

More information

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University On Compression Encrypted Data part 2 Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University 1 Brief Summary of Information-theoretic Prescription At a functional

More information

Phase Precoded Compute-and-Forward with Partial Feedback

Phase Precoded Compute-and-Forward with Partial Feedback Phase Precoded Compute-and-Forward with Partial Feedback Amin Sakzad, Emanuele Viterbo Dept. Elec. & Comp. Sys. Monash University, Australia amin.sakzad,emanuele.viterbo@monash.edu Joseph Boutros, Dept.

More information

EE368B Image and Video Compression

EE368B Image and Video Compression EE368B Image and Video Compression Homework Set #2 due Friday, October 20, 2000, 9 a.m. Introduction The Lloyd-Max quantizer is a scalar quantizer which can be seen as a special case of a vector quantizer

More information

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy

Information Theory. Coding and Information Theory. Information Theory Textbooks. Entropy Coding and Information Theory Chris Williams, School of Informatics, University of Edinburgh Overview What is information theory? Entropy Coding Information Theory Shannon (1948): Information theory is

More information

On Side-Informed Coding of Noisy Sensor Observations

On Side-Informed Coding of Noisy Sensor Observations On Side-Informed Coding of Noisy Sensor Observations Chao Yu and Gaurav Sharma C Dept, University of Rochester, Rochester NY 14627 ABSTRACT In this paper, we consider the problem of side-informed coding

More information

Performance of Polar Codes for Channel and Source Coding

Performance of Polar Codes for Channel and Source Coding Performance of Polar Codes for Channel and Source Coding Nadine Hussami AUB, Lebanon, Email: njh03@aub.edu.lb Satish Babu Korada and üdiger Urbanke EPFL, Switzerland, Email: {satish.korada,ruediger.urbanke}@epfl.ch

More information

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality

A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality 0 IEEE Information Theory Workshop A New Achievable Region for Gaussian Multiple Descriptions Based on Subset Typicality Kumar Viswanatha, Emrah Akyol and Kenneth Rose ECE Department, University of California

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Leech Constellations of Construction-A Lattices

Leech Constellations of Construction-A Lattices Leech Constellations of Construction-A Lattices Joseph J. Boutros Talk at Nokia Bell Labs, Stuttgart Texas A&M University at Qatar In collaboration with Nicola di Pietro. March 7, 2017 Thanks Many Thanks

More information

SUCCESSIVE refinement of information, or scalable

SUCCESSIVE refinement of information, or scalable IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 1983 Additive Successive Refinement Ertem Tuncel, Student Member, IEEE, Kenneth Rose, Fellow, IEEE Abstract Rate-distortion bounds for

More information

Lecture 20: Quantization and Rate-Distortion

Lecture 20: Quantization and Rate-Distortion Lecture 20: Quantization and Rate-Distortion Quantization Introduction to rate-distortion theorem Dr. Yao Xie, ECE587, Information Theory, Duke University Approimating continuous signals... Dr. Yao Xie,

More information

Binary Rate Distortion With Side Information: The Asymmetric Correlation Channel Case

Binary Rate Distortion With Side Information: The Asymmetric Correlation Channel Case Binary Rate Distortion With Side Information: The Asymmetric Correlation Channel Case Andrei Sechelea*, Samuel Cheng, Adrian Munteanu*, and Nikos Deligiannis* *Department of Electronics and Informatics,

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

Quantization Index Modulation using the E 8 lattice

Quantization Index Modulation using the E 8 lattice 1 Quantization Index Modulation using the E 8 lattice Qian Zhang and Nigel Boston Dept of Electrical and Computer Engineering University of Wisconsin Madison 1415 Engineering Drive, Madison, WI 53706 Email:

More information

A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers

A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers A Practical and Optimal Symmetric Slepian-Wolf Compression Strategy Using Syndrome Formers and Inverse Syndrome Formers Peiyu Tan and Jing Li (Tiffany) Electrical and Computer Engineering Dept, Lehigh

More information

On Optimum Conventional Quantization for Source Coding with Side Information at the Decoder

On Optimum Conventional Quantization for Source Coding with Side Information at the Decoder On Optimum Conventional Quantization for Source Coding with Side Information at the Decoder by Lin Zheng A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the

More information

Side-information Scalable Source Coding

Side-information Scalable Source Coding Side-information Scalable Source Coding Chao Tian, Member, IEEE, Suhas N. Diggavi, Member, IEEE Abstract The problem of side-information scalable (SI-scalable) source coding is considered in this work,

More information

Minimum Expected Distortion in Gaussian Source Coding with Uncertain Side Information

Minimum Expected Distortion in Gaussian Source Coding with Uncertain Side Information Minimum Expected Distortion in Gaussian Source Coding with Uncertain Side Information Chris T. K. Ng, Chao Tian, Andrea J. Goldsmith and Shlomo Shamai (Shitz) Dept. of Electrical Engineering, Stanford

More information

Pattern Matching and Lossy Data Compression on Random Fields

Pattern Matching and Lossy Data Compression on Random Fields Pattern Matching and Lossy Data Compression on Random Fields I. Kontoyiannis December 16, 2002 Abstract We consider the problem of lossy data compression for data arranged on twodimensional arrays (such

More information

New Trellis Codes Based on Lattices and Cosets *

New Trellis Codes Based on Lattices and Cosets * New Trellis Codes Based on Lattices and Cosets * A. R. Calderbank and N. J. A. Sloane Mathematical Sciences Research Center AT&T Bell Laboratories Murray Hill, NJ 07974 ABSTRACT A new technique is proposed

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 4, APRIL

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 4, APRIL IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 4, APRIL 2010 1737 A Universal Scheme for Wyner Ziv Coding of Discrete Sources Shirin Jalali, Student Member, IEEE, Sergio Verdú, Fellow, IEEE, and

More information

Graph Coloring and Conditional Graph Entropy

Graph Coloring and Conditional Graph Entropy Graph Coloring and Conditional Graph Entropy Vishal Doshi, Devavrat Shah, Muriel Médard, Sidharth Jaggi Laboratory for Information and Decision Systems Massachusetts Institute of Technology Cambridge,

More information

EE5585 Data Compression May 2, Lecture 27

EE5585 Data Compression May 2, Lecture 27 EE5585 Data Compression May 2, 2013 Lecture 27 Instructor: Arya Mazumdar Scribe: Fangying Zhang Distributed Data Compression/Source Coding In the previous class we used a H-W table as a simple example,

More information

Optimal matching in wireless sensor networks

Optimal matching in wireless sensor networks Optimal matching in wireless sensor networks A. Roumy, D. Gesbert INRIA-IRISA, Rennes, France. Institute Eurecom, Sophia Antipolis, France. Abstract We investigate the design of a wireless sensor network

More information

Structured interference-mitigation in two-hop networks

Structured interference-mitigation in two-hop networks tructured interference-mitigation in two-hop networks Yiwei ong Department of Electrical and Computer Eng University of Illinois at Chicago Chicago, IL, UA Email: ysong34@uicedu Natasha Devroye Department

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

ONE approach to improving the performance of a quantizer

ONE approach to improving the performance of a quantizer 640 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 Quantizers With Unim Decoders Channel-Optimized Encoders Benjamin Farber Kenneth Zeger, Fellow, IEEE Abstract Scalar quantizers

More information

Afundamental component in the design and analysis of

Afundamental component in the design and analysis of IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 533 High-Resolution Source Coding for Non-Difference Distortion Measures: The Rate-Distortion Function Tamás Linder, Member, IEEE, Ram

More information

( 1 k "information" I(X;Y) given by Y about X)

( 1 k information I(X;Y) given by Y about X) SUMMARY OF SHANNON DISTORTION-RATE THEORY Consider a stationary source X with f (x) as its th-order pdf. Recall the following OPTA function definitions: δ(,r) = least dist'n of -dim'l fixed-rate VQ's w.

More information

Distributed Source Coding Using LDPC Codes

Distributed Source Coding Using LDPC Codes Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding

More information

The Leech Lattice. Balázs Elek. November 8, Cornell University, Department of Mathematics

The Leech Lattice. Balázs Elek. November 8, Cornell University, Department of Mathematics The Leech Lattice Balázs Elek Cornell University, Department of Mathematics November 8, 2016 Consider the equation 0 2 + 1 2 + 2 2 +... + n 2 = m 2. How many solutions does it have? Okay, 0 2 + 1 2 = 1

More information

Sphere Packings, Coverings and Lattices

Sphere Packings, Coverings and Lattices Sphere Packings, Coverings and Lattices Anja Stein Supervised by: Prof Christopher Smyth September, 06 Abstract This article is the result of six weeks of research for a Summer Project undertaken at the

More information

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

Rate-adaptive turbo-syndrome scheme for Slepian-Wolf Coding

Rate-adaptive turbo-syndrome scheme for Slepian-Wolf Coding Rate-adaptive turbo-syndrome scheme for Slepian-Wolf Coding Aline Roumy, Khaled Lajnef, Christine Guillemot 1 INRIA- Rennes, Campus de Beaulieu, 35042 Rennes Cedex, France Email: aroumy@irisa.fr Abstract

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 5177 The Distributed Karhunen Loève Transform Michael Gastpar, Member, IEEE, Pier Luigi Dragotti, Member, IEEE, and Martin Vetterli,

More information

Fractal Dimension and Vector Quantization

Fractal Dimension and Vector Quantization Fractal Dimension and Vector Quantization [Extended Abstract] Krishna Kumaraswamy Center for Automated Learning and Discovery, Carnegie Mellon University skkumar@cs.cmu.edu Vasileios Megalooikonomou Department

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

QUANTIZATION FOR DISTRIBUTED ESTIMATION IN LARGE SCALE SENSOR NETWORKS

QUANTIZATION FOR DISTRIBUTED ESTIMATION IN LARGE SCALE SENSOR NETWORKS QUANTIZATION FOR DISTRIBUTED ESTIMATION IN LARGE SCALE SENSOR NETWORKS Parvathinathan Venkitasubramaniam, Gökhan Mergen, Lang Tong and Ananthram Swami ABSTRACT We study the problem of quantization for

More information

Lecture 11: Continuous-valued signals and differential entropy

Lecture 11: Continuous-valued signals and differential entropy Lecture 11: Continuous-valued signals and differential entropy Biology 429 Carl Bergstrom September 20, 2008 Sources: Parts of today s lecture follow Chapter 8 from Cover and Thomas (2007). Some components

More information

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H.

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H. Problem sheet Ex. Verify that the function H(p,..., p n ) = k p k log p k satisfies all 8 axioms on H. Ex. (Not to be handed in). looking at the notes). List as many of the 8 axioms as you can, (without

More information

Quantization for Distributed Estimation

Quantization for Distributed Estimation 0 IEEE International Conference on Internet of Things ithings 0), Green Computing and Communications GreenCom 0), and Cyber-Physical-Social Computing CPSCom 0) Quantization for Distributed Estimation uan-yu

More information

Vivek K Goyal, 1 Jelena Kovacevic, 1 and Martin Vetterli 2 3. Abstract. Quantized frame expansions are proposed as a method for generalized multiple

Vivek K Goyal, 1 Jelena Kovacevic, 1 and Martin Vetterli 2 3. Abstract. Quantized frame expansions are proposed as a method for generalized multiple Quantized Frame Expansions as Source{Channel Codes for Erasure Channels Vivek K Goyal, 1 Jelena Kovacevic, 1 and artin Vetterli 2 3 1 Bell Laboratories, urray Hill, New Jersey 2 Ecole Polytechnique Federale

More information

Rate-Distortion Performance of Sequential Massive Random Access to Gaussian Sources with Memory

Rate-Distortion Performance of Sequential Massive Random Access to Gaussian Sources with Memory Rate-Distortion Performance of Sequential Massive Random Access to Gaussian Sources with Memory Elsa Dupraz, Thomas Maugey, Aline Roumy, Michel Kieffer To cite this version: Elsa Dupraz, Thomas Maugey,

More information

Information-Theoretic Limits of Group Testing: Phase Transitions, Noisy Tests, and Partial Recovery

Information-Theoretic Limits of Group Testing: Phase Transitions, Noisy Tests, and Partial Recovery Information-Theoretic Limits of Group Testing: Phase Transitions, Noisy Tests, and Partial Recovery Jonathan Scarlett jonathan.scarlett@epfl.ch Laboratory for Information and Inference Systems (LIONS)

More information

Lattice Coding I: From Theory To Application

Lattice Coding I: From Theory To Application Dept of Electrical and Computer Systems Engineering Monash University amin.sakzad@monash.edu Oct. 2013 1 Motivation 2 Preliminaries s Three examples 3 Problems Sphere Packing Problem Covering Problem Quantization

More information

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for MARKOV CHAINS A finite state Markov chain is a sequence S 0,S 1,... of discrete cv s from a finite alphabet S where q 0 (s) is a pmf on S 0 and for n 1, Q(s s ) = Pr(S n =s S n 1 =s ) = Pr(S n =s S n 1

More information

X 1 Q 3 Q 2 Y 1. quantizers. watermark decoder N S N

X 1 Q 3 Q 2 Y 1. quantizers. watermark decoder N S N Quantizer characteristics important for Quantization Index Modulation Hugh Brunk Digimarc, 19801 SW 72nd Ave., Tualatin, OR 97062 ABSTRACT Quantization Index Modulation (QIM) has been shown to be a promising

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006

5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 5218 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 12, DECEMBER 2006 Source Coding With Limited-Look-Ahead Side Information at the Decoder Tsachy Weissman, Member, IEEE, Abbas El Gamal, Fellow,

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels

Representation of Correlated Sources into Graphs for Transmission over Broadcast Channels Representation of Correlated s into Graphs for Transmission over Broadcast s Suhan Choi Department of Electrical Eng. and Computer Science University of Michigan, Ann Arbor, MI 80, USA Email: suhanc@eecs.umich.edu

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Lesson 7 Delta Modulation and DPCM Instructional Objectives At the end of this lesson, the students should be able to: 1. Describe a lossy predictive coding scheme.

More information

Vector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression

Vector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression Institut Mines-Telecom Vector Quantization Marco Cagnazzo, cagnazzo@telecom-paristech.fr MN910 Advanced Compression 2/66 19.01.18 Institut Mines-Telecom Vector Quantization Outline Gain-shape VQ 3/66 19.01.18

More information

Multiterminal Source Coding with an Entropy-Based Distortion Measure

Multiterminal Source Coding with an Entropy-Based Distortion Measure Multiterminal Source Coding with an Entropy-Based Distortion Measure Thomas Courtade and Rick Wesel Department of Electrical Engineering University of California, Los Angeles 4 August, 2011 IEEE International

More information

Lossy Compression Coding Theorems for Arbitrary Sources

Lossy Compression Coding Theorems for Arbitrary Sources Lossy Compression Coding Theorems for Arbitrary Sources Ioannis Kontoyiannis U of Cambridge joint work with M. Madiman, M. Harrison, J. Zhang Beyond IID Workshop, Cambridge, UK July 23, 2018 Outline Motivation

More information

Information Theory in Intelligent Decision Making

Information Theory in Intelligent Decision Making Information Theory in Intelligent Decision Making Adaptive Systems and Algorithms Research Groups School of Computer Science University of Hertfordshire, United Kingdom June 7, 2015 Information Theory

More information

ECE Information theory Final

ECE Information theory Final ECE 776 - Information theory Final Q1 (1 point) We would like to compress a Gaussian source with zero mean and variance 1 We consider two strategies In the first, we quantize with a step size so that the

More information

The E8 Lattice and Error Correction in Multi-Level Flash Memory

The E8 Lattice and Error Correction in Multi-Level Flash Memory The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M. Kurkoski kurkoski@ice.uec.ac.jp University of Electro-Communications Tokyo, Japan ICC 2011 IEEE International Conference on Communications

More information

Basic Principles of Video Coding

Basic Principles of Video Coding Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion

More information

arxiv: v1 [cs.it] 5 Sep 2008

arxiv: v1 [cs.it] 5 Sep 2008 1 arxiv:0809.1043v1 [cs.it] 5 Sep 2008 On Unique Decodability Marco Dalai, Riccardo Leonardi Abstract In this paper we propose a revisitation of the topic of unique decodability and of some fundamental

More information

Source Coding. Master Universitario en Ingeniería de Telecomunicación. I. Santamaría Universidad de Cantabria

Source Coding. Master Universitario en Ingeniería de Telecomunicación. I. Santamaría Universidad de Cantabria Source Coding Master Universitario en Ingeniería de Telecomunicación I. Santamaría Universidad de Cantabria Contents Introduction Asymptotic Equipartition Property Optimal Codes (Huffman Coding) Universal

More information

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016

Midterm Exam Information Theory Fall Midterm Exam. Time: 09:10 12:10 11/23, 2016 Midterm Exam Time: 09:10 12:10 11/23, 2016 Name: Student ID: Policy: (Read before You Start to Work) The exam is closed book. However, you are allowed to bring TWO A4-size cheat sheet (single-sheet, two-sided).

More information

arxiv: v1 [eess.sp] 22 Oct 2017

arxiv: v1 [eess.sp] 22 Oct 2017 A Binary Wyner-Ziv Code Design Based on Compound LDGM-LDPC Structures Mahdi Nangir 1, Mahmoud Ahmadian-Attari 1, and Reza Asvadi 2,* arxiv:1710.07985v1 [eess.sp] 22 Oct 2017 1 Faculty of Electrical Engineering,

More information

Subset Universal Lossy Compression

Subset Universal Lossy Compression Subset Universal Lossy Compression Or Ordentlich Tel Aviv University ordent@eng.tau.ac.il Ofer Shayevitz Tel Aviv University ofersha@eng.tau.ac.il Abstract A lossy source code C with rate R for a discrete

More information

Lossy Distributed Source Coding

Lossy Distributed Source Coding Lossy Distributed Source Coding John MacLaren Walsh, Ph.D. Multiterminal Information Theory, Spring Quarter, 202 Lossy Distributed Source Coding Problem X X 2 S {,...,2 R } S 2 {,...,2 R2 } Ẑ Ẑ 2 E d(z,n,

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

Performance Bounds for Joint Source-Channel Coding of Uniform. Departements *Communications et **Signal

Performance Bounds for Joint Source-Channel Coding of Uniform. Departements *Communications et **Signal Performance Bounds for Joint Source-Channel Coding of Uniform Memoryless Sources Using a Binary ecomposition Seyed Bahram ZAHIR AZAMI*, Olivier RIOUL* and Pierre UHAMEL** epartements *Communications et

More information

An Alternative Proof of the Rate-Distortion Function of Poisson Processes with a Queueing Distortion Measure

An Alternative Proof of the Rate-Distortion Function of Poisson Processes with a Queueing Distortion Measure An Alternative Proof of the Rate-Distortion Function of Poisson Processes with a Queueing Distortion Measure Todd P. Coleman, Negar Kiyavash, and Vijay G. Subramanian ECE Dept & Coordinated Science Laboratory,

More information

CONSIDER a joint stationary and memoryless process

CONSIDER a joint stationary and memoryless process 4006 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 55, NO 9, SEPTEMBER 2009 On the Duality Between Slepian Wolf Coding and Channel Coding Under Mismatched Decoding Jun Chen, Member, IEEE, Da-ke He, and

More information

at Some sort of quantization is necessary to represent continuous signals in digital form

at Some sort of quantization is necessary to represent continuous signals in digital form Quantization at Some sort of quantization is necessary to represent continuous signals in digital form x(n 1,n ) x(t 1,tt ) D Sampler Quantizer x q (n 1,nn ) Digitizer (A/D) Quantization is also used for

More information

ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS

ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS ON BEAMFORMING WITH FINITE RATE FEEDBACK IN MULTIPLE ANTENNA SYSTEMS KRISHNA KIRAN MUKKAVILLI ASHUTOSH SABHARWAL ELZA ERKIP BEHNAAM AAZHANG Abstract In this paper, we study a multiple antenna system where

More information