Rate-Distortion in Near-Linear Time

ISIT 2008, Toronto, Canada, July 6–11, 2008

Ankit Gupta, Sergio Verdú, and Tsachy Weissman

Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA, {ankitg,verdu}@princeton.edu
Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA, tsachy@stanford.edu
Department of Electrical Engineering, Technion, Haifa 32000, Israel

Abstract—We present two results related to the computational complexity of lossy compression. The first result shows that for a memoryless source $P$ with rate-distortion function $R(\cdot)$, the rate-distortion pair $(R(D)+\gamma, D+\epsilon)$ can be achieved with constant decoding time per symbol and encoding time per symbol proportional to $C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$. The second result establishes that for any given $R$, there exists a universal lossy compression scheme with $O(n\,g(n))$ encoding complexity and $O(n)$ decoding complexity that achieves the point $(R, D(R))$ asymptotically for any ergodic source with distortion-rate function $D(\cdot)$, where $g(n)$ is an arbitrary non-decreasing unbounded function. A computationally feasible implementation of the first scheme outperforms many of the best previously proposed schemes for binary sources with blocklengths of the order of thousands.

I. INTRODUCTION

Rate-distortion theory assures us that for an ergodic source with rate-distortion function $R(\cdot)$, there exists a lossy compression scheme that achieves the rate-distortion pair $(R(D), D)$ asymptotically. However, all achievability proofs in the literature rely on schemes with encoding complexity growing at least exponentially in the blocklength $n$. An interesting question is whether there exists a compression/decompression scheme with low complexity that can achieve the optimal performance asymptotically.

Asymptotically optimal block codes based on sparse matrices have been proposed for the nonredundant source with the Hamming distortion criterion [1], [2], [3]. Unfortunately, these schemes rely on the exponential-time ML encoder for asymptotic optimality. Other schemes, based on sparse-graph codes and low-complexity message-passing encoders, are not theoretically optimal but perform well in practice [4], [5]. Recently, sparse-graph codes have been constructed which combine asymptotic optimality for a general discrete memoryless source and an arbitrary separable distortion criterion under ML encoding with suboptimal quadratic encoders with good empirical performance [6]. A universal, asymptotically optimal lossy compression scheme for memoryless sources is proposed in [7]; however, that scheme performs far from $R(\cdot)$ at moderate blocklengths even for the simple case of compressing the Bernoulli source with the Hamming distortion criterion. An asymptotically optimal, low (polynomial) complexity compressor with near-optimal empirical performance has not yet been found in the literature.

In this paper we construct lossy compressors by partitioning a source string of length $n$ into words of length $l$ each, and using a short block code (with blocklength equal to $l$) to encode each of these words (Figure 1). The short code has the same rate as the long code (with blocklength equal to $n$). If the average distortion (over the source realization) achieved by the short code is less than $D + \epsilon$, then the probability that the long code distortion exceeds $D + \epsilon$ vanishes asymptotically. We show that there exists a short code with rate $R + \gamma$ that achieves average distortion less than $D + \epsilon$ and has blocklength $l = g_1(\gamma)\log(1/\epsilon) + g_2(\gamma)$.
By concatenating this code (as shown in Figure 1) we obtain a lossy compressor with constant decoding time per symbol and compression time per symbol proportional to $C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$. Using the code construction in Figure 1, we also show the existence of a universal lossy compressor with encoding complexity $O(n\,g(n))$ (for an arbitrary non-decreasing unbounded $g(n)$) and decoding complexity $O(n)$ that asymptotically achieves the rate-distortion pair $(R, D(R))$ for any ergodic source with distortion-rate function $D(\cdot)$.

The rest of the paper is organized as follows. In Section II we present the code design. In Section III we use the code design of Section II to obtain a scheme that has rate $R + \gamma$ and achieves distortion less than $D + \epsilon$ asymptotically, with encoding time per symbol proportional to $C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$. In Section IV we show the existence of a universal lossy compression scheme with near-linear encoding complexity and linear decoding complexity that asymptotically achieves the optimal rate-distortion tradeoff for any ergodic source. In Section IV we also give a near-linear-complexity extension of the lossy compression scheme of Section III that asymptotically achieves the optimal rate-distortion tradeoff. In contrast to the universal scheme, which is only of theoretical interest, this scheme is also amenable to practical implementation, as demonstrated by the empirical results in Section V. In Section V we present empirical results obtained when the scheme of Section III is used to encode binary sources with various distortion criteria, and compare its performance to other schemes in the literature.

II. CODE CONSTRUCTION

The code construction in this section yields a long lossy compressor by bootstrapping a short lossy compression code $C_1$ (consisting of $2^{lR}$ codewords) that encodes a vector of length $l$ into a binary vector of length $lR$, with average distortion equal to

$$E\,d_l(S_1, S_2, \ldots, S_l; C_1) = D_1, \qquad (1)$$

where $d_l(s_1, s_2, \ldots, s_l; C_1)$ is the distance from $(s_1, s_2, \ldots, s_l)$ to its nearest neighbor in $C_1$.
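To make the construction concrete, the following minimal Python sketch (our own illustration: the function names and the toy parameters $l$, $R$, $k$ are not from the paper, and the codebook here is drawn from fair bits rather than from the optimizing reproduction distribution used in Lemma 1) implements the concatenated encoder and decoder of Figure 1. Nearest-codeword search costs $l\,2^{lR}$ operations per word, while decoding is a constant-time table lookup per symbol.

```python
import numpy as np

def random_codebook(l, R, rng):
    """Draw 2^ceil(l*R) random binary codewords of length l (illustrative: i.i.d. fair bits)."""
    num_codewords = 2 ** int(np.ceil(l * R))
    return rng.integers(0, 2, size=(num_codewords, l), dtype=np.uint8)

def encode(source, codebook):
    """Partition `source` into words of length l; map each word to the index of its
    nearest codeword in Hamming distance (exhaustive search: l * 2^(lR) ops per word)."""
    l = codebook.shape[1]
    assert len(source) % l == 0
    indices = []
    for start in range(0, len(source), l):
        word = source[start:start + l]
        dists = np.count_nonzero(codebook != word, axis=1)
        indices.append(int(np.argmin(dists)))
    return indices

def decode(indices, codebook):
    """Constant time per symbol: look up each codeword by its index."""
    return np.concatenate([codebook[i] for i in indices])

rng = np.random.default_rng(0)
l, R, k = 12, 0.5, 50
cb = random_codebook(l, R, rng)
s = rng.integers(0, 2, size=k * l, dtype=np.uint8)
s_hat = decode(encode(s, cb), cb)
print("normalized Hamming distortion:", np.count_nonzero(s != s_hat) / len(s))
```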

For an integer $k$, we now construct a lossy compressor for a vector of length $n = kl$ as follows.

[Fig. 1. Encoding procedure: a source block of length $n$ is partitioned into $k$ words of length $l$, each encoded by the short code $C_1$ into $lR$ bits, for a total of $nR$ bits.]

A. Encoding

Given a source realization $s$, we partition it into $k$ words of length $l$ each. Each word is encoded by the index of the nearest codeword in $C_1$ (Figure 1); since the code contains $2^{lR}$ codewords, this can be done in $l\,2^{lR}$ operations. Thus the encoding time per symbol equals $K_1\,2^{lR}$, where $K_1$ depends only on the underlying computational model. The index of the corresponding codeword is represented by $lR$ bits, so the output of the encoder is the sequence of $k$ blocks of $lR$ bits each, i.e., $nR$ bits.

B. Decoding

Using the indices corresponding to each block, the decoder outputs the sequence of codewords corresponding to each of the $k$ words, spending $l$ symbols per word. The total number of operations equals $kl = n$; thus the decoding time per symbol is a constant $K_2$ which depends only on the underlying computational model.

III. ASYMPTOTIC OPTIMALITY AND COMPUTATIONAL COST PER SYMBOL

Theorem 1. Given a memoryless source, a separable bounded distortion criterion, and a point $(R, D)$ on the rate-distortion function, for arbitrary $\epsilon, \gamma > 0$ there exists a compression/decompression scheme with blocklength $n$ and rate $R + \gamma$ such that the probability that the normalized distortion between the source realization and its reproduction exceeds $D + \epsilon$ vanishes with $n$, and
1) the encoding time per symbol is proportional to $C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$, where $C_1(\gamma)$ and $C_2(\gamma)$ do not depend on $\epsilon$;
2) the decoding time per symbol is independent of $\epsilon$ and $\gamma$.

We will prove this theorem using the following result.

Lemma 1. Given a memoryless source with rate-distortion function $R(\cdot)$, and $\gamma > 0$, $\epsilon > 0$, $D > 0$, there exists a code $C_1$ with rate $R(D) + \gamma$ and blocklength $l$ such that

$$E\,d_l(S^l, C_1) < D + \epsilon, \qquad (2)$$

$$l = g_1(\gamma)\log\frac{1}{\epsilon} + g_2(\gamma). \qquad (3)$$

Proof. Denoting the distortion-rate function by $D(\cdot)$, let $\tilde{D} = D(R + \gamma/2) = D - \tilde{g}(\gamma)$, and solve the optimization problem

$$R(\tilde{D}) = \min_{P_{\hat{S}|S}:\; E d(S,\hat{S}) \le \tilde{D}} I(S; \hat{S}) \qquad (4)$$

to obtain $P_{\hat{S}|S}(\tilde{D})$ and $P_{\hat{S}}(\tilde{D})$. For now, choose an arbitrary integer $l$. Let $C_l$ denote a random codebook consisting of $2^{l(R+\gamma)}$ codewords of length $l$, whose symbols are independent and identically distributed with distribution $P_{\hat{S}}(\tilde{D})$. Define the following subsets of $A^l \times \hat{A}^l$:

$$\mathcal{D} = \{(x, y) : d_l(x, y) \le D\}, \qquad (5)$$

$$\mathcal{T} = \left\{(x, y) : \frac{1}{l}\log\frac{P_{\hat{S}|S}(\tilde{D})^l(y \mid x)}{P_{\hat{S}}(\tilde{D})^l(y)} \le R + \frac{3\gamma}{4}\right\}. \qquad (6)$$

The sets $\mathcal{D}^c$ and $\mathcal{T}^c$ consist of the pairs $(x, y)$ for which the empirical averages $\frac{1}{l}\sum_{i=1}^{l} d(x_i, y_i)$ and $\frac{1}{l}\sum_{i=1}^{l} \log\big[P_{\hat{S}|S}(\tilde{D})(y_i \mid x_i)/P_{\hat{S}}(\tilde{D})(y_i)\big]$ differ from their expected values by a gap which is a function of $\gamma$. Therefore, using the Chernoff bound,

$$P_{S^l \hat{S}^l}(\mathcal{D}^c) \le \exp(-l f_1(\gamma)), \qquad (7)$$
$$P_{S^l \hat{S}^l}(\mathcal{T}^c) \le \exp(-l f_2(\gamma)), \qquad (8)$$

where $f_1(\gamma)$ and $f_2(\gamma)$ are large-deviations exponents. Define

$$\mathcal{D}(x) = \{y \in \hat{A}^l : (x, y) \in \mathcal{D}\}, \qquad (9)$$
$$\mathcal{T}(x) = \{y \in \hat{A}^l : (x, y) \in \mathcal{T}\}. \qquad (10)$$

Averaging over source and codebook we have

$$P[d_l(S^l, C_l) > D] \qquad (11)$$
$$= E\Big[\big(1 - P_{\hat{S}}(\tilde{D})^l(\mathcal{D}(S^l))\big)^{2^{l(R+\gamma)}}\Big] \qquad (12)$$
$$\le E\Big[\big(1 - P_{\hat{S}}(\tilde{D})^l(\mathcal{D}(S^l) \cap \mathcal{T}(S^l))\big)^{2^{l(R+\gamma)}}\Big] \qquad (13)$$
$$\le E\Big[\big(1 - P_{\hat{S}|S}(\tilde{D})^l(\mathcal{D}(S^l) \cap \mathcal{T}(S^l) \mid S^l)\, 2^{-l(R+3\gamma/4)}\big)^{2^{l(R+\gamma)}}\Big] \qquad (14)$$
$$\le E\big[1 - P_{\hat{S}|S}(\tilde{D})^l(\mathcal{D}(S^l) \cap \mathcal{T}(S^l) \mid S^l)\big] + \exp(-2^{l\gamma/4}) \qquad (15)$$
$$= 1 - P_{S^l \hat{S}^l}(\mathcal{D} \cap \mathcal{T}) + \exp(-2^{l\gamma/4}) \qquad (16)$$
$$= P_{S^l \hat{S}^l}(\mathcal{D}^c \cup \mathcal{T}^c) + \exp(-2^{l\gamma/4}) \qquad (17)$$
$$\le P_{S^l \hat{S}^l}(\mathcal{D}^c) + P_{S^l \hat{S}^l}(\mathcal{T}^c) + \exp(-2^{l\gamma/4}) \qquad (18)$$
$$\le 3 \cdot 2^{-l f(\gamma)}, \qquad (19)$$

where (14) follows from the definition in (6), (15) follows from

$$(1 - p\beta)^M \le 1 - p + e^{-M\beta} \qquad (20)$$

for $0 \le p, \beta \le 1$, and

$$f(\gamma) = \min\{f_1(\gamma), f_2(\gamma), \gamma/4\}. \qquad (21)$$

Using (19) and averaging over code and source,

$$E\,d_l(S^l, C_l) \le P[d_l(S^l, C_l) \le D]\,D + P[d_l(S^l, C_l) > D]\,D_{\max} \qquad (22)$$
$$\le 3(D_{\max} - D)\, 2^{-l f(\gamma)} + D. \qquad (23)$$

We finally choose the blocklength

$$l = \left\lceil \frac{1}{f(\gamma)} \log_2 \frac{3 D_{\max}}{\epsilon} \right\rceil. \qquad (24)$$

From (24) and (23) it follows that $E\,d_l(S^l, C_l) < D + \epsilon$, which implies that there exists at least one codebook $C_1$ for which (2) is satisfied with the blocklength given in (24).

Lemma 1 may be compared to Theorem 1 in [8], where it is shown that there exists a sequence of codes with rate $R$ and blocklength $n$ that achieve average distortion $D_n$ such that

$$D_n - D(R) = -D'(R)\,\frac{\log n}{2n} + o\!\left(\frac{\log n}{n}\right), \qquad (25)$$

where $D'(R)$ is the derivative of the distortion-rate function at $R$. In other words, if we allow no slack in the rate, then to achieve performance within $\epsilon$ of the optimal distortion the scheme in [8] requires blocklength growing at least as $(1/\epsilon)\log(1/\epsilon)$, whereas Lemma 1 shows that if we allow a slack of $\gamma$ in the rate, then blocklength proportional to $\log(1/\epsilon)$ is enough to achieve distortion $D + \epsilon$.

Note that (19) is reminiscent of Marton's reliability function result [9]: there exists a sequence of block codes $C(l)$ with blocklength $l$ and rate $R + \gamma$ such that $d_l(S^l, C(l))$ satisfies

$$\lim_{l\to\infty} -\frac{1}{l}\log P\big[d_l(S^l, C(l)) > D\big] = e(\gamma), \qquad (26)$$

where

$$e(\gamma) = \min_{P':\, R(P', D) \ge R + \gamma} D(P' \,\|\, P), \qquad (27)$$

and $R(P', D)$ denotes the rate-distortion function of the source $P'$. However, Marton's result is not sufficient to prove Lemma 1, for which we need a nonasymptotic bound on $P[d_l(S^l, C(l)) > D]$.

Proof of Theorem 1. Choose $l$ from (3), and construct a code for compressing a source with blocklength $n = kl$ using $C_1$ as described in Section II. Let $D^{(n)}$ be the random variable denoting the normalized distortion between the source realization and its reproduction under this scheme, and let $D_i$ be the normalized distortion achieved in the $i$-th word, $i = 1, 2, \ldots, k$. Then

$$\lim_{n\to\infty} P\big[D^{(n)} > D + \epsilon\big] = \lim_{k\to\infty} P\left[\frac{1}{k}\sum_{i=1}^{k} D_i > D + \epsilon\right] = 0, \qquad (28)$$

where (28) follows from the law of large numbers, because the $D_i$ are i.i.d. random variables with $E(D_i) < D + \epsilon$. From Section II-A, the encoding time per symbol is proportional to $2^{l(R+\gamma)} = C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$, where

$$C_1(\gamma) = 2^{g_2(\gamma)(R+\gamma)}, \qquad (29)$$
$$C_2(\gamma) = g_1(\gamma)(R+\gamma). \qquad (30)$$

The decoding complexity per symbol is a constant, as demonstrated in Section II-B.

Theorem 1 establishes that for a given source $P$, rate $R = R(P, D) + \gamma$ and arbitrary $\epsilon > 0$, there exists a lossy compression scheme with rate $R$ that achieves distortion $D + \epsilon$ asymptotically almost surely, with encoding time per symbol $C_1(\gamma)\,\epsilon^{-C_2(\gamma)}$. However, if

$$R > \max_{P} R(P, D), \qquad (31)$$

then there exists a lossy compression scheme with rate $R$ that achieves guaranteed distortion less than $D$ for any source on that alphabet. This scheme is described below. Marton [9] shows that if $R$ satisfies (31), then there exists $l_0$ such that for every $l \ge l_0$ there exists a block code $C(l)$ with rate $R$ and blocklength $l$ such that $d_l(s^l, C(l)) < D$ for every sequence $s^l \in A^l$. Thus for any $n = kl$, $l \ge l_0$, we can obtain a lossy compressor by concatenating $k$ copies of $C(l)$ as in Section II. This compressor has guaranteed distortion less than $D$ for any source realization and any blocklength $n = kl$, with encoding time per symbol proportional to $2^{lR}$. In particular, if $P^*$ achieves the maximum in (31), then for any rate close to (but slightly larger than) $R(P^*, D)$, there exists a linear-time lossy compression scheme that guarantees distortion less than $D$.
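As a numerical sanity check on the exponential decay in (19), the following self-contained sketch (our own illustration, not an experiment from the paper) estimates $P[d_l(S^l, C_l) > D]$ by Monte Carlo for a fair-coin source under Hamming distortion, with random fair-bit codebooks at a rate exceeding $R(D) = 1 - h(D)$; all parameter values are example choices.

```python
import numpy as np

rng = np.random.default_rng(1)
D, rate = 0.2, 0.5      # rate 0.5 exceeds R(0.2) = 1 - h(0.2) ~ 0.28 for a fair coin
trials = 500

for l in (8, 12, 16, 20):
    cb_size = 2 ** int(np.ceil(rate * l))
    excess = 0
    for _ in range(trials):
        cb = rng.integers(0, 2, size=(cb_size, l))    # fresh random codebook
        s = rng.integers(0, 2, size=l)                # fair-coin source word
        d_min = np.count_nonzero(cb != s, axis=1).min() / l
        excess += d_min > D                           # excess-distortion event
    print(f"l={l:2d}  estimated P[d_l > D] = {excess / trials:.3f}")
```

The estimated excess-distortion probability drops sharply as $l$ grows, consistent with the $2^{-l f(\gamma)}$ bound.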
IV. ALMOST LINEAR-COMPLEXITY ASYMPTOTICALLY OPTIMAL UNIVERSAL LOSSY COMPRESSOR

We present a sequence of universal encoding/decoding schemes indexed by the blocklength $n$. These schemes have linear decoding complexity and $O(n\,g(n))$ encoding complexity (for an arbitrary non-decreasing unbounded function $g(n)$), with asymptotic rate equal to $R$. Further, when used to compress an ergodic source with distortion-rate function $D(\cdot)$, the probability that the distortion between the source and its reproduction exceeds $D(R)$ vanishes with $n$. A constructive proof of the existence of such schemes is given below.

Theorem 2. For a given $R$ and an arbitrary non-decreasing unbounded function $g(n)$, there exists a sequence of universal encoding/decoding schemes indexed by $n$, with asymptotic rate equal to $R$ and encoding and decoding complexities $O(n\,g(n))$ and $O(n)$ respectively, such that

$$\lim_{n\to\infty} P\big[d_n(S^n, \hat{S}^n) > D(R) + \epsilon\big] = 0 \qquad (32)$$

for any $\epsilon > 0$, where $D(\cdot)$ is the distortion-rate function of the ergodic source $S^n$.

Proof. We show this result assuming that $g(\cdot)$ is subexponential; it then follows a fortiori for all other $g(\cdot)$.

Encoder: Let

$$l(n) = \left\lfloor \frac{\log_2 \log_2 g(n)}{R + 1} \right\rfloor, \qquad (33)$$

and let $\mathcal{C}_{l(n)}(R)$ denote the set of all codebooks with rate $R$ and blocklength $l(n)$, with symbols chosen from the reproduction alphabet $\hat{A}$. Given a source realization $s^n$, choose a codebook from $\mathcal{C}_{l(n)}(R)$ that achieves the minimum distortion when used to encode the sequence $s^n$ as shown in Figure 1. Describe the selected codebook and the encoding with $nR + \log_2|\mathcal{C}_{l(n)}(R)|$ bits. The number of codebooks is upper bounded by

$$|\mathcal{C}_{l(n)}(R)| \le |\hat{A}|^{l(n)\,2^{l(n)R}}. \qquad (34)$$

Using (34), (33), and the fact that $g(n)$ is sub-exponential, the asymptotic rate is

$$\lim_{n\to\infty}\left( R + \frac{l(n)\,2^{l(n)R}}{n}\log_2|\hat{A}| \right) = R. \qquad (35)$$

Encoding takes $n\,2^{l(n)R}\,|\mathcal{C}_{l(n)}(R)|$ operations. From (34) it can be shown that $n\,2^{l(n)R}\,|\mathcal{C}_{l(n)}(R)| < n\,2^{2^{l(n)(R+1)}}$ for all sufficiently large $n$; thus, from (33), the encoding complexity is $O(n\,g(n))$.

Decoder: Identify the codebook using $\log_2|\mathcal{C}_{l(n)}(R)|$ bits and obtain the reproduction as described in Section II-B. Decoding takes $n$ operations, so the decoding complexity is $O(n)$.

Optimality: See [10].

A. An extension to Theorem 1

We now modify the lossy compressor of Section III so that it achieves the rate-distortion point $(R, D)$ asymptotically with $O(n)$ decoding and $O(n\,g(n))$ encoding complexity. In contrast to the universal scheme, which is only of theoretical interest, this compressor is also amenable to practical implementation, as suggested by the results in Section V.

Code Construction: Choose an arbitrary $c > 0$. Let

$$l(n) = \left\lfloor \frac{\log_2 g(n)}{R + c} \right\rfloor, \qquad (36)$$

$$\gamma(n) = f^{-1}\!\left( \frac{\log l(n)}{l(n)} \right), \qquad (37)$$

where $f(\cdot)$ is defined in (21) and

$$f^{-1}(x) = \inf\{y : f(y) \ge x\}. \qquad (38)$$

Choose a codebook $C_{l(n)}$ with rate $R + \gamma(n)$ and blocklength $l(n)$ that satisfies

$$P\big[d_l(S^{l(n)}, C_{l(n)}) > D\big] \le 3 \cdot 2^{-l(n) f(\gamma(n))}; \qquad (39)$$

such a codebook is guaranteed to exist by (19). The lossy compressor is obtained by concatenating $k = n/l(n)$ copies of $C_{l(n)}$ as shown in Figure 1. It can be shown that $\lim_{n\to\infty}\gamma(n) = 0$ (see [10]); thus the asymptotic rate equals $R$. From Section II-A the encoding complexity is $O(n\,2^{(R+\gamma(n))\,l(n)})$; choosing $n$ large enough that $\gamma(n) < c$, the encoding complexity is upper bounded by $O(n\,g(n))$. The decoding complexity depends neither on $l(n)$ nor on $\gamma(n)$ and equals $O(n)$.

Optimality: Let $D_i$ denote the distortion in the $i$-th block. It can be shown (see [10]) that

$$\lim_{n\to\infty} P[D_i > D] \le \lim_{n\to\infty} 3 \cdot 2^{-l(n) f(\gamma(n))} = 0, \qquad (40)$$

since, by (37), $l(n) f(\gamma(n)) \ge \log l(n) \to \infty$. Therefore, for any $\delta > 0$ there exists $n(\delta)$ such that $P[D_i > D] < \delta$ for $n > n(\delta)$. Defining the binary random variables $B_i = \mathbf{1}\{D_i > D\}$ and $\epsilon' = \epsilon/D_{\max}$, we have

$$P\big[D^{(n)} > D + \epsilon\big] \le P\left[\frac{1}{k}\sum_{i=1}^{k} B_i > \epsilon'\right]. \qquad (41)$$

Let $0 < \delta < \epsilon'$. For $n > n(\delta)$, from (41) and the Chernoff bound we have

$$P\big[D^{(n)} > D + \epsilon\big] \le P\left[\frac{1}{k}\sum_{i=1}^{k} B_i > \epsilon'\right] \le 2^{-\frac{n}{l(n)} d(\epsilon' \| \delta)}. \qquad (42)$$

Therefore

$$\lim_{n\to\infty} P\big[D^{(n)} > D + \epsilon\big] = 0. \qquad (43)$$

V. EXPERIMENTS

The scheme of Section III, with fixed blocklength $l$, bootstraps a short code with rate $R + \gamma$ and average distortion $D + \epsilon$ into a long code such that, when a source realization is compressed with it, the probability that the resulting distortion exceeds $D + \epsilon$ vanishes asymptotically. Thus it can achieve the rate-distortion function in the strong excess-distortion sense. To further highlight this notion of guaranteed distortion, we plot the percentile distortion as a function of rate, where the $p$-percentile distortion is the value below which the distortion falls in $p$ percent of the empirical runs. Empirically, for blocklengths of the order of thousands, this scheme outperforms existing schemes under the percentile distortion criterion.
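The percentile criterion used in the plots can be computed directly from a batch of per-trial distortions; here is a minimal sketch (the trial data is synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-trial normalized distortions from repeated encodings at one rate.
trial_distortions = rng.normal(loc=0.11, scale=0.01, size=1000).clip(min=0.0)

# The 95-percentile distortion: 95% of runs achieved distortion below this value.
print("95-percentile distortion:", np.percentile(trial_distortions, 95))
```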
In each of these experiments we randomly generate and fix a short random code with blocklength $l = \lceil 22/(R+\gamma) \rceil$; thus, the encoding scheme has linear complexity in the blocklength $n$. Figure 2 shows the distortion-rate performance of our scheme compared with the message-passing encoder applied to a regular sparse-graph code described in [4] (with $O(n^2)$ complexity), when used to compress a fair coin source at a blocklength of the order of thousands. Each point is obtained by running repeated trials of encoding randomly generated source vectors at each rate, and choosing the bit error rate level such that 95% of the trials result in a smaller bit error rate. In Figure 3 we compare the performance of our scheme when used for compressing the biased coin source with the bit error rate distortion criterion. It is noted in [11] that, for this problem, even to beat the straight line connecting the points $(0, h(p))$ and $(p, 0)$ on the $(D, R)$ curve is a challenge for any encoder, and no constructive schemes are known that perform near the optimal distortion-rate function.
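For reference, that straight-line benchmark is the time-sharing chord of the Bernoulli($p$) rate-distortion function $R(D) = h(p) - h(D)$, $0 \le D \le p$. The following small sketch tabulates both curves (the bias $p = 0.25$ is our own example value, not the one used in the experiments):

```python
import numpy as np

def h(x):
    """Binary entropy in bits; h(0) = h(1) = 0 handled via clipping."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

p = 0.25                          # example bias (our own choice)
D = np.linspace(0.0, p, 6)
rd_curve = h(p) - h(D)            # R(D) = h(p) - h(D) for 0 <= D <= p
chord = h(p) * (1 - D / p)        # straight line through (0, h(p)) and (p, 0)
for d, r, c in zip(D, rd_curve, chord):
    print(f"D={d:.3f}  R(D)={r:.3f}  time-sharing line={c:.3f}")
```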

[Fig. 2. Bit error rate (95-percentile criterion) for the fair coin source; comparison with the algorithm in [4] and the scheme in [6].]

[Fig. 3. Bit error rate (95-percentile criterion) for the biased coin source; the average bit error rate is shown for the scheme in [11].]

[Fig. 4. Asymmetric distortion (44) (95-percentile criterion) for the fair coin source.]

The compressor proposed in [11] (with $O(n)$ complexity) is slightly better than the straight line, and is one of the best implementable schemes for this problem in the literature. The asymptotic ($n \to \infty$) average distortion obtained with the scheme in [11] is compared to our scheme in Figure 3. It should be noted that while the curve for [11] represents the average bit error rate, the curve for our scheme shows the stricter 95-percentile distortion criterion.

In Figure 4 we show results for compressing fair coin flips with the asymmetric distortion criterion

$$d(0,1) = 2\,d(1,0) = 2. \qquad (44)$$

This problem was also addressed in [6] using sparse-graph codes. The new scheme also outperforms the scheme given in [6] (which has a time complexity of $O(n^2 \log^3 n)$).

In Figures 3 and 4 (biased coin and asymmetric distortion, respectively) we use a short random code with blocklength $l = \lceil 22/(R+\gamma) \rceil$, obtained in the following fashion. Choose a desired operating point $(D, R)$ and approximate the optimal binary reproduction distribution $P_{\hat{S}}(\tilde{D})$ with a $q$-type $\tilde{P}_{\hat{S}}(\tilde{D})$ over $\{0, 1\}$. First construct a random code over the ring $\{0, 1, \ldots, q-1\}$ using an $lR \times l$ matrix $G$ whose elements are chosen uniformly at random from $\{0, 1, \ldots, q-1\}$; the codewords are obtained as $c_i = G^T u_i$, where $u_i$ is a binary index vector and arithmetic is modulo $q$. Then apply a mapping $Q : \{0, 1, \ldots, q-1\} \to \{0, 1\}$ to each $c_i$, such that $Q$ maps the uniform probability distribution over $\{0, 1, \ldots, q-1\}$ to $\tilde{P}_{\hat{S}}(\tilde{D})$ over $\{0, 1\}$.

Increasing the overall blocklength while maintaining the same short code results in a further reduction of the probability of excess distortion; narrowing the gap to the distortion-rate curve requires increasing the blocklength of the short code. These experimental results establish that the code construction of Section III is not only theoretically interesting, but with small (computationally feasible) wordlength outperforms existing constructive schemes for blocklengths of the order of thousands.
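Here is a minimal sketch of this biased-codeword construction as we read it from the description above (the dimensions of $G$, the choice $q = 8$, and the threshold form of the symbol map $Q$ are illustrative assumptions):

```python
import numpy as np
from itertools import product

def biased_linear_codebook(l, R, q, p_hat, rng):
    """Random q-ary linear code mapped to biased binary codewords.

    Each codeword is c_i = G^T u_i (mod q) for a binary index vector u_i,
    followed by a symbol map Q sending the uniform distribution on
    {0, ..., q-1} to a Bernoulli(m/q) distribution approximating p_hat.
    """
    b = int(np.ceil(l * R))                     # index bits per word
    G = rng.integers(0, q, size=(b, l))         # b x l, entries uniform on Z_q
    m = round(p_hat * q)                        # q-type approximation of p_hat
    codebook = np.empty((2 ** b, l), dtype=np.uint8)
    for i, u in enumerate(product((0, 1), repeat=b)):
        c = np.asarray(u) @ G % q               # q-ary codeword G^T u_i (mod q)
        codebook[i] = (c < m).astype(np.uint8)  # Q: m of the q symbols map to 1
    return codebook

rng = np.random.default_rng(3)
cb = biased_linear_codebook(l=16, R=0.5, q=8, p_hat=0.3, rng=rng)
print("codebook shape:", cb.shape, " empirical P(1):", cb.mean())
```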
REFERENCES

[1] Y. Matsunaga and H. Yamamoto, "A coding theorem for lossy data compression by LDPC codes," IEEE Transactions on Information Theory, vol. 49, Sep. 2003.
[2] S. Miyake, "Lossy data compression over Z_q by LDPC code," IEEE International Symposium on Information Theory, Seattle, USA, July 2006.
[3] E. Martinian and M. J. Wainwright, "Low-density codes achieve the rate-distortion bound," Data Compression Conference, Utah, 2006.
[4] E. Maneva and M. J. Wainwright, "Lossy source encoding via message-passing and decimation over generalized codewords of LDGM codes," IEEE International Symposium on Information Theory, Adelaide, Australia, September 2005.
[5] S. Ciliberti, M. Mézard, and R. Zecchina, "Message passing algorithms for non-linear nodes and data compression," arXiv preprint.
[6] A. Gupta and S. Verdú, "Nonlinear sparse-graph codes for lossy compression," submitted to IEEE Transactions on Information Theory.
[7] I. Kontoyiannis, "An implementable lossy version of the Lempel-Ziv algorithm — Part I: Optimality for memoryless sources," IEEE Transactions on Information Theory, vol. 45, Nov. 1999.
[8] Z. Zhang, E. Yang, and V. Wei, "The redundancy of source coding with a fidelity criterion — Part I: Known statistics," IEEE Transactions on Information Theory, vol. 43, pp. 71–91, 1997.
[9] K. Marton, "Error exponent for source coding with a fidelity criterion," IEEE Transactions on Information Theory, vol. 20, no. 2, March 1974.
[10] A. Gupta, S. Verdú, and T. Weissman, "Almost-linear lossy compression for memoryless sources," draft, 2008.
[11] J. Rissanen and I. Tabus, "Rate-distortion without random codebooks," Workshop on Information Theory and Applications (ITA), UCSD.
