Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes


Zheng Wang, Student Member, IEEE, and Jie Luo, Member, IEEE

arXiv: v1 [cs.IT] 27 Aug 2008

Abstract—We show that the Blokh-Zyablov error exponent can be arbitrarily approached by concatenated block codes over general discrete-time memoryless channels, with encoding/decoding complexity linear in the block length. The key result is an extension of Justesen's generalized minimum distance decoding algorithm, which enables a low-complexity integration of Guruswami-Indyk's outer code into Forney's and Blokh-Zyablov's concatenated coding schemes.

Index Terms—Blokh-Zyablov error exponent, concatenated code, linear coding complexity

I. Introduction

Consider communication over a discrete-time memoryless channel using block channel codes. The channel is modeled by a conditional point mass function (PMF) or probability density function (PDF) $p_{Y|X}(y|x)$, where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ are the input and output symbols, and $\mathcal{X}$ and $\mathcal{Y}$ are the input and output alphabets, respectively. Let the Shannon capacity of the channel be $C$. Fano showed in [1] that the minimum error probability $P_e$ of codes with block length $N$ and rate $R$ satisfies the bound

$$\lim_{N \to \infty} -\frac{\log P_e}{N} \ge E(R), \qquad (1)$$

where $E(R)$, known as the error exponent, is a positive function of the channel transition probabilities. Without coding complexity constraints, if the input and output alphabets are finite, the maximum achievable $E(R)$ is given by Gallager in [2]:

$$E(R) = \max_{p_X} E_L(R, p_X), \qquad E_L(R, p_X) = \begin{cases} \max_{\rho \ge 1} \{-\rho R + E_x(\rho, p_X)\} & 0 \le R \le R_x \\ E_0(1, p_X) - R & R_x \le R \le R_{crit} \\ \max_{0 \le \rho \le 1} \{-\rho R + E_0(\rho, p_X)\} & R_{crit} \le R \le C, \end{cases} \qquad (2)$$

The authors are with the Electrical and Computer Engineering Department, Colorado State University, Fort Collins, CO. {zhwang, rockey}@engr.colostate.edu.

where $p_X$ is the input distribution; the definitions of the other variables can be found in [3]. If the channel input and/or output alphabet is the set of real numbers, i.e., the channel is continuous [2], the maximum achievable error exponent is still given by (2) if we replace the PMF by a PDF, the summations by integrals, and the max operators by sup.

Forney proposed in [3] a one-level concatenated coding scheme that can achieve a positive error exponent for any rate $R < C$ with a coding complexity of $O(N^4)$. The maximum error exponent achieved by the one-level concatenated code, known as Forney's error exponent, is given in [3] by

$$E_c(R) = \max_{r_o \in [R/C,\, 1]} (1 - r_o)\, E\!\left(\frac{R}{r_o}\right), \qquad (3)$$

where $r_o$ is the outer code rate and $R$ is the overall rate. Achieving $E_c(R)$ requires the decoder to exploit reliability information from the inner code and to decode the outer code using Forney's generalized minimum distance (GMD) decoder [3]. Forney's GMD decoding algorithm essentially carries out outer code decoding (under various conditions) $O(N)$ times.

Forney's concatenated codes were generalized to multilevel concatenated codes, also known as generalized concatenated codes, by Blokh and Zyablov in [4]. As the order of concatenation goes to infinity, the error exponent approaches the Blokh-Zyablov bound (or Blokh-Zyablov error exponent) [4][5]

$$E^{(\infty)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} (r_o C - R) \left[\int_0^{r_o C} \frac{dx}{E_L(x, p_X)}\right]^{-1}. \qquad (4)$$

Guruswami and Indyk proposed a family of linear-time encodable and decodable error-correction codes in [6]. By concatenating these near maximum distance separable codes (as outer codes) with good binary inner codes, together with Justesen's GMD decoding (proposed in [7]), optimal error-correction performance can be arbitrarily approached with linear encoding/decoding complexity. With a fixed inner codeword length, Justesen's GMD decoder carries out the outer code decoding a constant number of times.
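To make (3) concrete, here is a small numerical sketch (ours, not the paper's) that evaluates Forney's one-level exponent for a binary symmetric channel. As a stand-in for the full exponent $E(R)$ of (2), it uses only Gallager's random-coding part $\max_{0 \le \rho \le 1}\{-\rho R + E_0(\rho, p_X)\}$, so the values are a lower bound; the function names and grid-search parameters are our own choices.

```python
import math

def bsc_e0(rho, p):
    # Gallager's E_0(rho) for a BSC with crossover p and uniform input, in nats
    s = 1.0 / (1.0 + rho)
    return rho * math.log(2) - (1.0 + rho) * math.log(p ** s + (1.0 - p) ** s)

def bsc_random_coding_exponent(R, p, grid=400):
    # max_{0 <= rho <= 1} { -rho R + E_0(rho) }: a lower bound on E(R) in (2)
    return max(bsc_e0(k / grid, p) - (k / grid) * R for k in range(grid + 1))

def bsc_capacity(p):
    # C = ln 2 - H(p), in nats
    return math.log(2) + p * math.log(p) + (1.0 - p) * math.log(1.0 - p)

def forney_exponent(R, p, grid=200):
    # Eq. (3): sweep the outer rate r_o over (R/C, 1] and keep the best
    # value of (1 - r_o) * E(R / r_o); the inner rate R / r_o never exceeds C.
    C = bsc_capacity(p)
    best = 0.0
    for k in range(1, grid + 1):
        r_o = R / C + (1.0 - R / C) * k / grid
        best = max(best, (1.0 - r_o) * bsc_random_coding_exponent(R / r_o, p))
    return best
```

Since $(1 - r_o) \le 1$ and $E(R/r_o) \le E(R)$, the sketch also illustrates that one-level concatenation pays an exponent penalty relative to the unconstrained exponent of (2).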
This is a required property for GMD decoding to ensure overall linear decoding complexity. For binary symmetric channels (BSCs), since minimum Hamming distance error correction is equivalent to (or can be transformed into an equivalent form of) maximum likelihood decoding, Forney's error exponent can be arbitrarily approached by Guruswami-Indyk's coding scheme [6]. Along another line of error-correction coding research, Barg and Zémor proposed in [8] a parallel concatenated coding scheme that can arbitrarily approach Forney's error exponent with linear decoding and quadratic encoding complexity.¹

¹ In the rest of the paper, "concatenated coding" refers to Forney's and Blokh-Zyablov's serial concatenated coding schemes [8].

For a general memoryless channel, without GMD decoding, half of Forney's error exponent (and half of the Blokh-Zyablov error exponent) can be approached with linear encoding/decoding complexity using the concatenated coding scheme with Guruswami-Indyk's outer code. With Forney's GMD decoding algorithm, the half-exponent penalty can be lifted at the cost of a quadratic decoding complexity. Assume a concatenated coding scheme with Guruswami-Indyk's outer code and constant-sized inner codes. In this paper, we show that, with an arbitrarily small error exponent reduction, Forney's GMD decoding can be revised² to carry out outer code decoding a constant number of times. With the revised GMD decoder, concatenated codes can arbitrarily approach Forney's error exponent (with one-level concatenation) and the Blokh-Zyablov error exponent (with multilevel concatenation) with linear encoding/decoding complexity.

II. Revised GMD Decoding with One-level Concatenated Codes

Consider Forney's one-level concatenated coding scheme over a discrete-time memoryless channel. Assume that for an arbitrarily small $\varepsilon_1 > 0$ we can construct a linear-time encodable/decodable outer code, with rate $r_o$ and length $N_o$, which can correct $t$ errors and $s$ erasures so long as $2t + s < N_o(1 - r_o - \varepsilon_1)$. This is possible for large $N_o$, as shown by Guruswami and Indyk in [6]. To simplify notation, we assume $N_o(1 - r_o - \varepsilon_1)$ is an integer. The outer code is concatenated with suitable inner codes of rate $R_i$ and fixed length $N_i$. The rate and length of the concatenated code are $R = r_o R_i$ and $N = N_o N_i$, respectively. In Forney's GMD decoding, the inner codes forward not only the estimates $\hat{x}_m = [\hat{x}_1, \ldots, \hat{x}_i, \ldots, \hat{x}_{N_o}]$ but also reliability information, in the form of a weight vector $\alpha = [\alpha_1, \ldots, \alpha_i, \ldots, \alpha_{N_o}]$, to the outer code, where $\hat{x}_i \in GF(q)$, $0 \le \alpha_i \le 1$, and $1 \le i \le N_o$. Let

$$s(\hat{x}, x) = \begin{cases} +1 & x = \hat{x} \\ -1 & x \ne \hat{x}. \end{cases} \qquad (5)$$

For any outer codeword $x_m = [x_{m1}, x_{m2}, \ldots, x_{mN_o}]$, define the dot product $\alpha \cdot x_m$ as

$$\alpha \cdot x_m = \sum_{i=1}^{N_o} \alpha_i\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} \alpha_i s_i. \qquad (6)$$

Theorem 1: There is at most one codeword $x_m$ that satisfies

$$\alpha \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (7)$$

Theorem 1 is implied by Theorem 3.1 in [3]. Rearrange the weights according to their values, and let $i_1, \ldots, i_j, \ldots, i_{N_o}$ be the indices such that

$$\alpha_{i_1} \le \cdots \le \alpha_{i_j} \le \cdots \le \alpha_{i_{N_o}}. \qquad (8)$$

² The revision can also be regarded as an extension of Justesen's GMD decoding in [7].
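As a toy illustration of (5)-(7), the following sketch computes the weighted dot product an outer GMD decoder would receive; the helper names and the numbers are ours, not the paper's.

```python
def s(x_hat, x):
    # Eq. (5): +1 when the inner estimate agrees with the codeword symbol, -1 otherwise
    return 1 if x_hat == x else -1

def dot(alpha, x_hat, x_m):
    # Eq. (6): alpha . x_m = sum_i alpha_i * s(x_hat_i, x_mi)
    return sum(a * s(h, x) for a, h, x in zip(alpha, x_hat, x_m))

# Four outer symbols; one inner decoding error at position 2, flagged by a low weight.
alpha = [1.0, 0.75, 0.25, 1.0]
x_hat = [3, 1, 7, 2]   # inner estimates (GF(q) symbols, here just ints)
x_m   = [3, 1, 4, 2]   # the transmitted outer codeword
print(dot(alpha, x_hat, x_m))   # prints 2.5 (= 1.0 + 0.75 - 0.25 + 1.0)
```

With, say, $r_o = 1/2$ and $\varepsilon_1 = 0.05$, the threshold in (7) is $N_o(r_o + \varepsilon_1) = 2.2$, so this codeword would be the unique one passing Theorem 1's test.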

Define $q_k = [q_k(\alpha_1), \ldots, q_k(\alpha_j), \ldots, q_k(\alpha_{N_o})]$ for $0 \le k < 1/\varepsilon_2$, where $\varepsilon_2 > 0$ is a positive constant with $1/\varepsilon_2$ an integer, with

$$q_k(\alpha_{i_j}) = \begin{cases} 0 & \alpha_{i_j} \le k\varepsilon_2 \text{ and } j \le N_o(1 - r_o - \varepsilon_1) \\ 1 & \text{otherwise,} \end{cases} \qquad (9)$$

and

$$q_k \cdot x_m = \sum_{i=1}^{N_o} q_k(\alpha_i)\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} q_k(\alpha_i) s_i. \qquad (10)$$

Then we have the following theorem.

Theorem 2: If $\alpha \cdot x_m > N_o\left(\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right)$, then, for some $k$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$.

The proof of Theorem 2 is given in Appendix A. Theorems 1 and 2 indicate that if $x_m$ is transmitted and $\alpha \cdot x_m > N_o(\varepsilon_2/2 + (r_o + \varepsilon_1)(1 - \varepsilon_2/2))$, then for some $k$ the errors-and-erasures decoding specified by $q_k$ (where symbols with $q_k(\alpha_i) = 0$ are erased) will output $x_m$. Since the total number of $q_k$ vectors is a constant, the outer code carries out errors-and-erasures decoding only a constant number of times. Consequently, a GMD decoder that carries out errors-and-erasures decoding for all $q_k$'s and compares their decoding outputs can recover $x_m$ with a complexity of $O(N_o)$. Since the inner code length $N_i$ is fixed, the overall complexity is $O(N)$. The following theorem gives an error probability bound for the one-level concatenated codes.

Theorem 3: The error probability of the one-level concatenated codes is upper bounded by

$$P_e \le P\left(\alpha \cdot x_m \le N_o\left(\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right)\right) \le \exp\left[-N\left(E_c(R) - \varepsilon\right)\right],$$

where $E_c(R)$ is Forney's error exponent given by (3), and $\varepsilon$ is a function of $\varepsilon_1$ and $\varepsilon_2$ with $\varepsilon \to 0$ as $\varepsilon_1, \varepsilon_2 \to 0$.

The proof of Theorem 3 can be obtained by first replacing Theorem 3.2 in [3] with Theorem 2, and then combining the results of [3, Section 4.2] and [6, Theorem 8]. The difference between Forney's and the revised GMD decoding schemes lies in the definition of the errors-and-erasures decodable vectors $q_k$, the number of which determines the decoding complexity. Forney's GMD decoding needs to carry out errors-and-erasures decoding a number of times linear in $N_o$, whereas ours does so a constant number of times.
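The constant-size family of erasure patterns in (9) can be sketched as follows. `make_qk_patterns` is a hypothetical helper of ours; a real decoder would feed each pattern to the linear-time errors-and-erasures outer decoder of [6] and compare the outputs, rather than merely build the patterns.

```python
def make_qk_patterns(alpha, r_o, eps1, eps2):
    """Sketch of eq. (9): build the 1/eps2 erasure patterns q_0, ..., q_{1/eps2 - 1}.

    Position i_j is erased (q_k = 0) iff its weight is at most k * eps2 AND it is
    among the N_o(1 - r_o - eps1) least reliable positions, so no pattern asks the
    outer code for more erasures than 2t + s < N_o(1 - r_o - eps1) allows.
    """
    n_o = len(alpha)
    order = sorted(range(n_o), key=lambda i: alpha[i])        # eq. (8), ascending
    erasable = set(order[: int(n_o * (1.0 - r_o - eps1))])
    k_max = round(1.0 / eps2)                                  # a constant, not O(N_o)
    return [
        [0 if (i in erasable and alpha[i] <= k * eps2) else 1 for i in range(n_o)]
        for k in range(k_max)
    ]
```

The revised GMD decoder then runs the outer decoder once per pattern ($1/\varepsilon_2$ runs in total, independent of $N_o$) and keeps any output codeword whose dot product (10) exceeds $N_o(r_o + \varepsilon_1)$.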
The idea of the revised GMD decoding dates back to Justesen's work in [7], although [7] focused on error-correction codes in which the inner codes forward Hamming distance information (in the form of an $\alpha$ vector) to the outer code.

III. Multilevel Concatenated Codes

To approach Forney's error exponent with one-level concatenated codes, the only requirement on the inner codes is that they achieve Gallager's error exponent given in (2). In order to approach a better error exponent with $m$-level concatenated codes ($m > 1$), the inner code must possess certain

special properties. Take two-level concatenated codes as an example; the required property and the existence of an optimal inner code are stated in the following lemma.

Lemma 1: Consider a discrete-time memoryless channel; let $q > 0$ be an integer and $p_X$ a source distribution. There exists a code of length $N_i$ and rate $R_i$ with $q^{N_i R_i}$ codewords, which are partitioned into $q^{N_i R_i/2}$ groups, each having $q^{N_i R_i/2}$ codewords. Define the error probability of the code by $P_e^{(2)}(R_i, p_X)$, and the maximum error probability of the codes each characterized by the codewords in a particular group of the partition by $P_{e1}^{(2)}(R_i/2, p_X)$. The error probabilities satisfy the following inequalities:

$$\lim_{N_i \to \infty} -\frac{\log P_e^{(2)}(R_i, p_X)}{N_i} \ge E_L(R_i, p_X), \qquad \lim_{N_i \to \infty} -\frac{\log P_{e1}^{(2)}(R_i/2, p_X)}{N_i} \ge E_L(R_i/2, p_X). \qquad (11)$$

The proof of Lemma 1 is given in Appendix B. We skip the explanation that Lemma 1 is extendable to $m$-level concatenated coding schemes with $m > 2$. With Lemma 1 and its extensions, we are now ready to generalize the results of Section II to multilevel concatenated coding schemes.

Theorem 4: For a discrete-time memoryless channel, for any $\varepsilon > 0$ and integer $m > 0$, one can construct a sequence of $m$-level concatenated codes whose encoding/decoding complexity is linear in $N$, and whose error probability is bounded by

$$\lim_{N \to \infty} -\frac{\log P_e}{N} \ge E^{(m)}(R) - \varepsilon, \qquad E^{(m)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} \left(1 - \frac{R}{r_o C}\right) \left[\frac{1}{m} \sum_{i=1}^{m} E_L\!\left(\frac{i}{m} r_o C,\, p_X\right)^{-1}\right]^{-1}. \qquad (12)$$

The proof of Theorem 4 can be obtained by combining Theorem 3, Lemma 1, and the derivation of $E^{(m)}(R)$ in [4][5]. Note that $\lim_{m \to \infty} E^{(m)}(R) = E^{(\infty)}(R)$, where $E^{(\infty)}(R)$ is the Blokh-Zyablov error exponent given in (4). Theorem 4 implies that, for discrete-time memoryless channels, the Blokh-Zyablov error exponent can be arbitrarily approached with linear encoding/decoding complexity.
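The bracket in (12) is a Riemann sum for the integral in (4), so the exponent should improve monotonically with the concatenation order $m$. Here is a numerical sketch of (12) for a BSC, with the random-coding exponent of Lemma 2 standing in for $E_L$ (so this actually sketches the linear-code variant of the bound, and lower-bounds the full one); the function names and grids are our own assumptions.

```python
import math

def bsc_e0(rho, p):
    # Gallager's E_0(rho) for a BSC with crossover p and uniform input, in nats
    s = 1.0 / (1.0 + rho)
    return rho * math.log(2) - (1.0 + rho) * math.log(p ** s + (1.0 - p) ** s)

def exponent(R, p, grid=400):
    # random-coding exponent max_{0 <= rho <= 1} { -rho R + E_0(rho) }, a stand-in for E_L
    return max(bsc_e0(k / grid, p) - (k / grid) * R for k in range(grid + 1))

def multilevel_exponent(R, p, m, grid=200):
    # Eq. (12): max over r_o of (1 - R/(r_o C)) / [ (1/m) sum_i 1/E_L(i r_o C / m) ]
    C = math.log(2) + p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
    best = 0.0
    for k in range(1, grid):                       # sweep r_o over the open interval (R/C, 1)
        r_o = R / C + (1.0 - R / C) * k / grid
        exps = [exponent(i * r_o * C / m, p) for i in range(1, m + 1)]
        if min(exps) <= 0.0:
            continue                               # top inner rate too close to capacity
        harmonic = sum(1.0 / e for e in exps) / m  # the bracket in (12)
        best = max(best, (1.0 - R / (r_o * C)) / harmonic)
    return best
```

For each fixed $r_o$, refining $m \to 2m$ replaces terms of the sum by smaller ones (since $1/E_L$ is increasing in rate), so the computed exponent is non-decreasing in $m$, consistent with the convergence to (4).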
The optimal inner code required by Theorem 4 is often not linear since, except for specific channels such as BSCs [11], it is not clear whether linear codes can achieve the expurgated exponent. However, since random linear codes can achieve the random coding exponent over a general discrete-time memoryless channel [10], we have the following lemma.

Lemma 2: Theorem 4 holds for linear codes over BSCs. It also holds for linear codes over a general discrete-time memoryless channel if one replaces $E_L\!\left(\frac{i}{m} r_o C, p_X\right)$ in (12) by

$$E_r\!\left(\frac{i}{m} r_o C,\, p_X\right) = \max_{0 \le \rho \le 1} \left\{-\rho \frac{i}{m} r_o C + E_0(\rho, p_X)\right\}. \qquad (13)$$

Lemma 2 can be shown by adopting random linear codes [10] as the inner codes. Lemma 2 implies that the following error exponent can be arbitrarily approached by linear codes with linear complexity:

$$E_r^{(\infty)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} (r_o C - R) \left[\int_0^{r_o C} \frac{dx}{E_r(x, p_X)}\right]^{-1}. \qquad (14)$$

Appendix

A. Proof of Theorem 2

Proof: Define an integer $p = \lceil \alpha_{i_{N_o(1-r_o-\varepsilon_1)}}/\varepsilon_2 \rceil$ (note $1 \le p \le 1/\varepsilon_2$) and a set of values $c_j = (j - 1/2)\varepsilon_2$ for $1 \le j \le p$. Let

$$\lambda_0 = c_1, \qquad \lambda_k = c_{k+1} - c_k, \quad 1 \le k \le p - 1, \qquad \lambda_p = \alpha_{i_{N_o(1-r_o-\varepsilon_1)+1}} - c_p,$$
$$\lambda_h = \alpha_{i_{h-p+N_o(1-r_o-\varepsilon_1)+1}} - \alpha_{i_{h-p+N_o(1-r_o-\varepsilon_1)}}, \quad p < h < p + N_o(r_o + \varepsilon_1),$$
$$\lambda_{p+N_o(r_o+\varepsilon_1)} = 1 - \alpha_{i_{N_o}}. \qquad (15)$$

We have

$$\sum_{k=0}^{j-1} \lambda_k = c_j, \quad 1 \le j \le p, \qquad (16)$$

$$\sum_{k=0}^{j-1} \lambda_k = \alpha_{i_{j-p+N_o(1-r_o-\varepsilon_1)}}, \quad p < j \le p + N_o(r_o + \varepsilon_1), \qquad (17)$$

and

$$\sum_{k=0}^{p+N_o(r_o+\varepsilon_1)} \lambda_k = 1. \qquad (18)$$

Define a new weight vector $\tilde{\alpha} = [\tilde{\alpha}_1, \ldots, \tilde{\alpha}_i, \ldots, \tilde{\alpha}_{N_o}]$ with

$$\tilde{\alpha}_i = \begin{cases} \arg\min_{c_j,\, 1 \le j \le p} |c_j - \alpha_i| & \alpha_i \le \alpha_{i_{N_o(1-r_o-\varepsilon_1)}} \\ \alpha_i & \alpha_i > \alpha_{i_{N_o(1-r_o-\varepsilon_1)}}. \end{cases} \qquad (19)$$

Define $p_k = [p_k(\alpha_1), \ldots, p_k(\alpha_i), \ldots, p_k(\alpha_{N_o})]$ with $0 \le k \le p + N_o(r_o + \varepsilon_1)$ such that, for $0 \le k < p$,

$$p_k = q_k, \qquad (20)$$

and, for $p \le k \le p + N_o(r_o + \varepsilon_1)$,

$$p_k(\alpha_i) = \begin{cases} 0 & \alpha_i \le \alpha_{i_{k-p+N_o(1-r_o-\varepsilon_1)}} \\ 1 & \alpha_i > \alpha_{i_{k-p+N_o(1-r_o-\varepsilon_1)}}. \end{cases} \qquad (21)$$

Thus we have

$$\tilde{\alpha} = \sum_{k=0}^{p+N_o(r_o+\varepsilon_1)} \lambda_k\, p_k. \qquad (22)$$

Define a set of indices

$$U = \{i_1, i_2, \ldots, i_{N_o(1-r_o-\varepsilon_1)}\}. \qquad (23)$$

According to the definition of $\tilde{\alpha}_i$, for $i \notin U$ we have $\tilde{\alpha}_i = \alpha_i$. Hence

$$\tilde{\alpha} \cdot x_m = \alpha \cdot x_m + \sum_{i \in U} (\tilde{\alpha}_i - \alpha_i)\, s_i. \qquad (24)$$

Since $|\tilde{\alpha}_i - \alpha_i| \le \varepsilon_2/2$ and $s_i = \pm 1$, we have

$$\left|\sum_{i \in U} (\tilde{\alpha}_i - \alpha_i)\, s_i\right| \le N_o(1 - r_o - \varepsilon_1)\, \frac{\varepsilon_2}{2}. \qquad (25)$$

Consequently, $\alpha \cdot x_m > N_o\left(\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right)$ implies

$$\tilde{\alpha} \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (26)$$

If $p_k \cdot x_m \le N_o(r_o + \varepsilon_1)$ for all $p_k$'s, then

$$\tilde{\alpha} \cdot x_m = \sum_{k=0}^{p+N_o(r_o+\varepsilon_1)} \lambda_k\, p_k \cdot x_m \le N_o(r_o + \varepsilon_1) \sum_{k=0}^{p+N_o(r_o+\varepsilon_1)} \lambda_k = N_o(r_o + \varepsilon_1), \qquad (27)$$

which contradicts (26). Therefore, there must be some $p_k$ that satisfies

$$p_k \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (28)$$

For $k \ge p$, $p_k$ has no more than $N_o(r_o + \varepsilon_1)$ ones, which implies $p_k \cdot x_m \le N_o(r_o + \varepsilon_1)$. Therefore, a vector satisfying (28) must exist among the $p_k$'s with $0 \le k < p$. In other words, for some $k$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$.

B. Proof of Lemma 1

Proof: We first prove the lemma for $R_i \le R_x$, where $R_x$ is defined in (2) and in [3]. For a random block code with length $N_i$ and $M > q^{N_i R_i}$ codewords, partition the codewords into $q^{N_i R_i/2}$ groups with at least $M/q^{N_i R_i/2}$ codewords in each group. Consider a particular codeword $x_m$; the following two expurgation operations [2] are performed. In the first operation, we consider only the codeword $x_m$ and the codewords that are not in the same group as $x_m$. In other words, we temporarily strike out the codewords in the same group as $x_m$. Define $P_{em}$ as the probability of decoding error if $x_m$ is transmitted. Let $B_1 > 0$ be a threshold such that $\Pr(P_{em} \ge B_1) \le 1/2$. We expurgate $x_m$ if $P_{em} \ge B_1$. Assume $x_m$ survives the first expurgation. In the second operation, consider the codewords within the group of $x_m$. Define by $P_{em1}$ the probability of decoding error if codeword $x_m$ is transmitted. Let $B_2 > 0$ be a threshold such that $\Pr(P_{em1} \ge B_2) \le 1/2$. We expurgate $x_m$ if $P_{em1} \ge B_2$.

Since

$$\Pr(P_{em} < B_1,\, P_{em1} < B_2) = \Pr(P_{em} < B_1)\, \Pr(P_{em1} < B_2 \mid P_{em} < B_1) = \Pr(P_{em} < B_1)\, \Pr(P_{em1} < B_2) \ge \frac{1}{4}, \qquad (29)$$

the probability that $x_m$ survives both expurgation operations is at least $1/4$. With (29), for $R_i \le R_x$, the conclusion of Lemma 1 follows naturally from Gallager's analysis of expurgated codes in [2, Section V]. When $R_i > R_x$, only one or no expurgation operation is needed, and it is easily seen that the lemma still holds.

Acknowledgment

The authors would like to thank Professor Alexander Barg for his help on multilevel concatenated codes and the performance of linear codes.

References

[1] R. Fano, Transmission of Information, The M.I.T. Press and John Wiley & Sons, Inc., New York, NY, 1961.
[2] R. Gallager, "A Simple Derivation of the Coding Theorem and Some Applications," IEEE Trans. Inform. Theory, vol. 11, pp. 3-18, Jan. 1965.
[3] G. Forney, Concatenated Codes, The MIT Press, 1966.
[4] E. Blokh and V. Zyablov, Linear Concatenated Codes, Nauka, Moscow, 1982 (in Russian).
[5] A. Barg and G. Zémor, "Multilevel Expander Codes," IEEE ISIT, Adelaide, Australia, Sep. 2005.
[6] V. Guruswami and P. Indyk, "Linear-Time Encodable/Decodable Codes With Near-Optimal Rate," IEEE Trans. Inform. Theory, vol. 51, no. 10, Oct. 2005.
[7] J. Justesen, "A Class of Constructive Asymptotically Good Algebraic Codes," IEEE Trans. Inform. Theory, vol. IT-18, no. 5, Sep. 1972.
[8] A. Barg and G. Zémor, "Concatenated Codes: Serial and Parallel," IEEE Trans. Inform. Theory, vol. 51, no. 5, May 2005.
[9] V. Guruswami, List Decoding of Error-Correcting Codes, Ph.D. dissertation, MIT, Cambridge, MA, 2001.
[10] R. Gallager, Information Theory and Reliable Communication, John Wiley & Sons, 1968.
[11] A. Barg and G. Forney, "Random Codes: Minimum Distances and Error Exponents," IEEE Trans. Inform. Theory, vol. 48, no. 9, Sep. 2002.


More information

Arbitrary Alphabet Size

Arbitrary Alphabet Size Optimal Coding for the Erasure Channel with Arbitrary Alphabet Size Shervan Fashandi, Shahab Oveis Gharan and Amir K. Khandani ECE Dept., University of Waterloo, Waterloo, O, Canada, 2L3G email: {sfashand,shahab,khandani}@cst.uwaterloo.ca

More information

Chapter 7 Reed Solomon Codes and Binary Transmission

Chapter 7 Reed Solomon Codes and Binary Transmission Chapter 7 Reed Solomon Codes and Binary Transmission 7.1 Introduction Reed Solomon codes named after Reed and Solomon [9] following their publication in 1960 have been used together with hard decision

More information

Katalin Marton. Abbas El Gamal. Stanford University. Withits A. El Gamal (Stanford University) Katalin Marton Withits / 9

Katalin Marton. Abbas El Gamal. Stanford University. Withits A. El Gamal (Stanford University) Katalin Marton Withits / 9 Katalin Marton Abbas El Gamal Stanford University Withits 2010 A. El Gamal (Stanford University) Katalin Marton Withits 2010 1 / 9 Brief Bio Born in 1941, Budapest Hungary PhD from Eötvös Loránd University

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

The Method of Types and Its Application to Information Hiding

The Method of Types and Its Application to Information Hiding The Method of Types and Its Application to Information Hiding Pierre Moulin University of Illinois at Urbana-Champaign www.ifp.uiuc.edu/ moulin/talks/eusipco05-slides.pdf EUSIPCO Antalya, September 7,

More information

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths Kasra Vakilinia, Dariush Divsalar*, and Richard D. Wesel Department of Electrical Engineering, University

More information

Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes and Applications

Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes and Applications on the ML Decoding Error Probability of Binary Linear Block Codes and Moshe Twitto Department of Electrical Engineering Technion-Israel Institute of Technology Haifa 32000, Israel Joint work with Igal

More information

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback 2038 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback Vincent

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

Linear time list recovery via expander codes

Linear time list recovery via expander codes Linear time list recovery via expander codes Brett Hemenway and Mary Wootters June 7 26 Outline Introduction List recovery Expander codes List recovery of expander codes Conclusion Our Results One slide

More information

CLASSICAL error control codes have been designed

CLASSICAL error control codes have been designed IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 56, NO 3, MARCH 2010 979 Optimal, Systematic, q-ary Codes Correcting All Asymmetric and Symmetric Errors of Limited Magnitude Noha Elarief and Bella Bose, Fellow,

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

Tackling Intracell Variability in TLC Flash Through Tensor Product Codes

Tackling Intracell Variability in TLC Flash Through Tensor Product Codes Tackling Intracell Variability in TLC Flash Through Tensor Product Codes Ryan Gabrys, Eitan Yaakobi, Laura Grupp, Steven Swanson, Lara Dolecek University of California, Los Angeles University of California,

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

2012 IEEE International Symposium on Information Theory Proceedings

2012 IEEE International Symposium on Information Theory Proceedings Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical

More information

Graph-based codes for flash memory

Graph-based codes for flash memory 1/28 Graph-based codes for flash memory Discrete Mathematics Seminar September 3, 2013 Katie Haymaker Joint work with Professor Christine Kelley University of Nebraska-Lincoln 2/28 Outline 1 Background

More information

List Decoding of Reed Solomon Codes

List Decoding of Reed Solomon Codes List Decoding of Reed Solomon Codes p. 1/30 List Decoding of Reed Solomon Codes Madhu Sudan MIT CSAIL Background: Reliable Transmission of Information List Decoding of Reed Solomon Codes p. 2/30 List Decoding

More information

Variable-to-Fixed Length Homophonic Coding with a Modified Shannon-Fano-Elias Code

Variable-to-Fixed Length Homophonic Coding with a Modified Shannon-Fano-Elias Code Variable-to-Fixed Length Homophonic Coding with a Modified Shannon-Fano-Elias Code Junya Honda Hirosuke Yamamoto Department of Complexity Science and Engineering The University of Tokyo, Kashiwa-shi Chiba

More information

Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity

Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity Henry D. Pfister, Member, Igal Sason, Member, and Rüdiger Urbanke Abstract We present two sequences of ensembles of non-systematic

More information

Reliable Computation over Multiple-Access Channels

Reliable Computation over Multiple-Access Channels Reliable Computation over Multiple-Access Channels Bobak Nazer and Michael Gastpar Dept. of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA, 94720-1770 {bobak,

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

The PPM Poisson Channel: Finite-Length Bounds and Code Design

The PPM Poisson Channel: Finite-Length Bounds and Code Design August 21, 2014 The PPM Poisson Channel: Finite-Length Bounds and Code Design Flavio Zabini DEI - University of Bologna and Institute for Communications and Navigation German Aerospace Center (DLR) Balazs

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

A Simple Converse of Burnashev s Reliability Function

A Simple Converse of Burnashev s Reliability Function A Simple Converse of Burnashev s Reliability Function 1 arxiv:cs/0610145v3 [cs.it] 23 Sep 2008 Peter Berlin, Barış Nakiboğlu, Bixio Rimoldi, Emre Telatar School of Computer and Communication Sciences Ecole

More information

Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V.

Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V. Active distances for convolutional codes Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V Published in: IEEE Transactions on Information Theory DOI: 101109/18749009 Published: 1999-01-01

More information

A One-to-One Code and Its Anti-Redundancy

A One-to-One Code and Its Anti-Redundancy A One-to-One Code and Its Anti-Redundancy W. Szpankowski Department of Computer Science, Purdue University July 4, 2005 This research is supported by NSF, NSA and NIH. Outline of the Talk. Prefix Codes

More information

Construction of Polar Codes with Sublinear Complexity

Construction of Polar Codes with Sublinear Complexity 1 Construction of Polar Codes with Sublinear Complexity Marco Mondelli, S. Hamed Hassani, and Rüdiger Urbanke arxiv:1612.05295v4 [cs.it] 13 Jul 2017 Abstract Consider the problem of constructing a polar

More information

Berlekamp-Massey decoding of RS code

Berlekamp-Massey decoding of RS code IERG60 Coding for Distributed Storage Systems Lecture - 05//06 Berlekamp-Massey decoding of RS code Lecturer: Kenneth Shum Scribe: Bowen Zhang Berlekamp-Massey algorithm We recall some notations from lecture

More information

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class Error Correcting Codes: Combinatorics, Algorithms and Applications Spring 2009 Homework Due Monday March 23, 2009 in class You can collaborate in groups of up to 3. However, the write-ups must be done

More information

Binary Transmissions over Additive Gaussian Noise: A Closed-Form Expression for the Channel Capacity 1

Binary Transmissions over Additive Gaussian Noise: A Closed-Form Expression for the Channel Capacity 1 5 Conference on Information Sciences and Systems, The Johns Hopkins University, March 6 8, 5 inary Transmissions over Additive Gaussian Noise: A Closed-Form Expression for the Channel Capacity Ahmed O.

More information

Upper Bounds to Error Probability with Feedback

Upper Bounds to Error Probability with Feedback Upper Bounds to Error robability with Feedbac Barış Naiboğlu Lizhong Zheng Research Laboratory of Electronics at MIT Cambridge, MA, 0239 Email: {naib, lizhong }@mit.edu Abstract A new technique is proposed

More information

On the Block Error Probability of LP Decoding of LDPC Codes

On the Block Error Probability of LP Decoding of LDPC Codes On the Block Error Probability of LP Decoding of LDPC Codes Ralf Koetter CSL and Dept. of ECE University of Illinois at Urbana-Champaign Urbana, IL 680, USA koetter@uiuc.edu Pascal O. Vontobel Dept. of

More information

Decoding Concatenated Codes using Soft Information

Decoding Concatenated Codes using Soft Information Decoding Concatenated Codes using Soft Information Venkatesan Guruswami University of California at Berkeley Computer Science Division Berkeley, CA 94720. venkat@lcs.mit.edu Madhu Sudan MIT Laboratory

More information

On Generalized EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels

On Generalized EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels 2012 IEEE International Symposium on Information Theory Proceedings On Generalied EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels H Mamani 1, H Saeedi 1, A Eslami

More information

Secure RAID Schemes from EVENODD and STAR Codes

Secure RAID Schemes from EVENODD and STAR Codes Secure RAID Schemes from EVENODD and STAR Codes Wentao Huang and Jehoshua Bruck California Institute of Technology, Pasadena, USA {whuang,bruck}@caltechedu Abstract We study secure RAID, ie, low-complexity

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

EE5139R: Problem Set 4 Assigned: 31/08/16, Due: 07/09/16

EE5139R: Problem Set 4 Assigned: 31/08/16, Due: 07/09/16 EE539R: Problem Set 4 Assigned: 3/08/6, Due: 07/09/6. Cover and Thomas: Problem 3.5 Sets defined by probabilities: Define the set C n (t = {x n : P X n(x n 2 nt } (a We have = P X n(x n P X n(x n 2 nt

More information

An Extended Fano s Inequality for the Finite Blocklength Coding

An Extended Fano s Inequality for the Finite Blocklength Coding An Extended Fano s Inequality for the Finite Bloclength Coding Yunquan Dong, Pingyi Fan {dongyq8@mails,fpy@mail}.tsinghua.edu.cn Department of Electronic Engineering, Tsinghua University, Beijing, P.R.

More information

Analytical Bounds on Maximum-Likelihood Decoded Linear Codes: An Overview

Analytical Bounds on Maximum-Likelihood Decoded Linear Codes: An Overview Analytical Bounds on Maximum-Likelihood Decoded Linear Codes: An Overview Igal Sason Department of Electrical Engineering, Technion Haifa 32000, Israel Sason@ee.technion.ac.il December 21, 2004 Background

More information

The Poisson Channel with Side Information

The Poisson Channel with Side Information The Poisson Channel with Side Information Shraga Bross School of Enginerring Bar-Ilan University, Israel brosss@macs.biu.ac.il Amos Lapidoth Ligong Wang Signal and Information Processing Laboratory ETH

More information

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College

More information

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University)

Performance-based Security for Encoding of Information Signals. FA ( ) Paul Cuff (Princeton University) Performance-based Security for Encoding of Information Signals FA9550-15-1-0180 (2015-2018) Paul Cuff (Princeton University) Contributors Two students finished PhD Tiance Wang (Goldman Sachs) Eva Song

More information

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel

Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive Interference Channel Stefano Rini, Ernest Kurniawan and Andrea Goldsmith Technische Universität München, Munich, Germany, Stanford University,

More information

arxiv: v1 [cs.it] 5 Sep 2008

arxiv: v1 [cs.it] 5 Sep 2008 1 arxiv:0809.1043v1 [cs.it] 5 Sep 2008 On Unique Decodability Marco Dalai, Riccardo Leonardi Abstract In this paper we propose a revisitation of the topic of unique decodability and of some fundamental

More information

arxiv:cs/ v1 [cs.it] 16 Aug 2005

arxiv:cs/ v1 [cs.it] 16 Aug 2005 On Achievable Rates and Complexity of LDPC Codes for Parallel Channels with Application to Puncturing Igal Sason Gil Wiechman arxiv:cs/587v [cs.it] 6 Aug 5 Technion Israel Institute of Technology Haifa

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels

Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels Error Exponent Regions for Gaussian Broadcast and Multiple Access Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Submitted: December, 5 Abstract In modern communication systems,

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information