Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes


Zheng Wang, Student Member, IEEE, Jie Luo, Member, IEEE

arXiv:0808.3756v1 [cs.IT] 27 Aug 2008

Abstract — We show that the Blokh-Zyablov error exponent can be arbitrarily approached by concatenated block codes over general discrete-time memoryless channels, with encoding/decoding complexity linear in the block length. The key result is an extension of Justesen's generalized minimum distance decoding algorithm, which enables a low-complexity integration of Guruswami-Indyk's outer codes into Forney's and Blokh-Zyablov's concatenated coding schemes.

Index Terms — Blokh-Zyablov error exponent, concatenated code, linear coding complexity

I. Introduction

Consider communication over a discrete-time memoryless channel using block channel codes. The channel is modeled by a conditional point mass function (PMF) or probability density function (PDF) $p_{Y|X}(y|x)$, where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ are the input and output symbols, and $\mathcal{X}$ and $\mathcal{Y}$ are the input and output alphabets, respectively. Let the Shannon capacity of the channel be $C$. Fano showed in [1] that the minimum error probability $P_e$ for codes of block length $N$ and rate $R$ satisfies the bound

$$\lim_{N \to \infty} -\frac{\log P_e}{N} \geq E(R), \quad (1)$$

where $E(R)$, known as the error exponent, is a positive function of the channel transition probabilities. Without coding complexity constraints, if the input and output alphabets are finite, the maximum achievable $E(R)$ is given by Gallager in [2]:

$$E(R) = \max_{p_X} E_L(R, p_X), \qquad
E_L(R, p_X) =
\begin{cases}
\max_{\rho \geq 1} \{ -\rho R + E_x(\rho, p_X) \} & 0 \leq R \leq R_x \\
-R + E_0(1, p_X) & R_x \leq R \leq R_{crit} \\
\max_{0 \leq \rho \leq 1} \{ -\rho R + E_0(\rho, p_X) \} & R_{crit} \leq R \leq C,
\end{cases} \quad (2)$$

where $p_X$ is the input distribution; the definitions of the other variables can be found in [3].

The authors are with the Electrical and Computer Engineering Department, Colorado State University, Fort Collins, CO 80523. E-mail: {zhwang, rockey}@engr.colostate.edu.
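To make the exponents in (2) concrete, here is a minimal numerical sketch (an editorial illustration, not part of the paper) that evaluates Gallager's $E_0$ function and the high-rate random-coding branch $\max_{0 \leq \rho \leq 1}\{-\rho R + E_0(\rho, p_X)\}$ for a binary symmetric channel with uniform input; the crossover probability is an arbitrary choice, and the expurgated branch ($E_x$, relevant below $R_x$) is omitted.

```python
import numpy as np

def E0_bsc(rho, p):
    # Gallager's E_0(rho, p_X) for a BSC(p) with uniform input, in nats [2]
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho * np.log(2.0) - (1.0 + rho) * np.log(s)

def random_coding_exponent(R, p, grid=2001):
    # High-rate branch of (2): max over 0 <= rho <= 1 of -rho*R + E_0(rho)
    rhos = np.linspace(0.0, 1.0, grid)
    return float(np.max(-rhos * R + E0_bsc(rhos, p)))

p = 0.05                                                    # illustrative BSC(0.05)
C = np.log(2.0) + p * np.log(p) + (1 - p) * np.log(1 - p)   # capacity in nats
for frac in (0.25, 0.5, 0.75):
    R = frac * C
    print(f"R = {R:.4f} nats/symbol, exponent = {random_coding_exponent(R, p):.4f}")
```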

If the channel input and/or output alphabets are the set of real numbers, i.e., the channel is continuous [2], the maximum achievable error exponent is still given by (2) if we replace the PMF by the PDF, the summations by integrals, and the max operators by sup.

Forney proposed in [3] a one-level concatenated coding scheme that achieves a positive error exponent for any rate $R < C$ with a coding complexity of $O(N^4)$. The maximum error exponent achieved by one-level concatenated codes, known as Forney's error exponent, is given in [3] by

$$E_c(R) = \max_{r_o \in [R/C,\, 1]} (1 - r_o)\, E\!\left(\frac{R}{r_o}\right), \quad (3)$$

where $r_o$ is the outer code rate and $R$ is the overall rate. Achieving $E_c(R)$ requires the decoder to exploit reliability information from the inner codes and to decode the outer code using Forney's generalized minimum distance (GMD) decoder [3]. Forney's GMD decoding algorithm essentially carries out outer code decoding (under various conditions) $O(N)$ times.

Forney's concatenated codes were generalized to multilevel concatenated codes, also known as generalized concatenated codes, by Blokh and Zyablov in [4]. As the order of concatenation goes to infinity, the error exponent approaches the Blokh-Zyablov bound (or Blokh-Zyablov error exponent) [4][5]

$$E^{(\infty)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} \left( r_o - \frac{R}{C} \right) \left[ \int_0^{r_o C} \frac{dx}{C\, E_L(x, p_X)} \right]^{-1}. \quad (4)$$

Guruswami and Indyk proposed a family of linear-time encodable and decodable error-correction codes in [6]. By concatenating these near maximum distance separable codes (as outer codes) with good binary inner codes, together with Justesen's GMD decoding (proposed in [7]), optimal error-correction performance can be arbitrarily approached with linear encoding/decoding complexity. With a fixed inner codeword length, Justesen's GMD decoder carries out outer code decoding a constant number of times. This property is required for GMD decoding to ensure overall linear decoding complexity. For binary symmetric channels (BSCs), since Hamming error correction is equivalent to (or can be transformed into an equivalent form of) maximum likelihood decoding, Forney's error exponent can be arbitrarily approached by Guruswami-Indyk's coding scheme [6]. Along another line of error-correction coding research, Barg and Zémor proposed in [8] a parallel concatenated coding scheme that arbitrarily approaches Forney's error exponent with linear decoding and quadratic encoding complexity.¹

¹ In the rest of the paper, when we say concatenated coding we mean Forney's and Blokh-Zyablov's serial concatenated coding schemes [8].
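As an illustration of (3), the following toy sketch (continuing the BSC example above, under the same illustrative assumptions) maximizes $(1 - r_o)E(R/r_o)$ over the outer rate by grid search, using the random-coding curve from the previous sketch in place of the full $E(\cdot)$.

```python
import numpy as np

def forney_exponent(R, E, C, grid=1000):
    # Forney's exponent (3): search outer rates r_o in [R/C, 1];
    # the inner code then runs at rate R_i = R / r_o < C
    r_os = np.linspace(R / C, 1.0, grid + 2)[1:-1]
    return max((1.0 - r_o) * E(R / r_o) for r_o in r_os)

# Toy usage with the BSC sketch above as the inner exponent curve
E = lambda rate: random_coding_exponent(rate, p=0.05)
print(forney_exponent(R=0.3 * C, E=E, C=C))
```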

For a general memoryless channel, without GMD decoding, half of Forney's error exponent (and half of the Blokh-Zyablov error exponent) can be approached with linear encoding/decoding complexity using the concatenated coding scheme with Guruswami-Indyk's outer code. With Forney's GMD decoding algorithm, the half-exponent penalty can be lifted at the cost of a quadratic decoding complexity. Consider a concatenated coding scheme with Guruswami-Indyk's outer code and constant-sized inner codes. In this paper, we show that, at the cost of an arbitrarily small error exponent reduction, Forney's GMD decoding can be revised² to carry out outer code decoding only a constant number of times. With the revised GMD decoder, concatenated codes can arbitrarily approach Forney's error exponent (with one-level concatenation) and the Blokh-Zyablov error exponent (with multilevel concatenation) with linear encoding/decoding complexity.

II. Revised GMD Decoding with One-level Concatenated Codes

Consider Forney's one-level concatenated coding scheme over a discrete-time memoryless channel. Assume that, for an arbitrarily small $\varepsilon_1 > 0$, we can construct a linear-time encodable/decodable outer code, with rate $r_o$ and length $N_o$, which can correct $t$ errors and $s$ erasures so long as $2t + s < N_o(1 - r_o - \varepsilon_1)$. This is possible for large $N_o$, as shown by Guruswami and Indyk in [6]. To simplify the notation, we assume $N_o(1 - r_o - \varepsilon_1)$ is an integer. The outer code is concatenated with suitable inner codes of rate $R_i$ and fixed length $N_i$. The rate and length of the concatenated code are $R = r_o R_i$ and $N = N_o N_i$, respectively.

In Forney's GMD decoding, the inner codes forward not only the estimates $\hat{x}_m = [\hat{x}_1, \ldots, \hat{x}_i, \ldots, \hat{x}_{N_o}]$ but also reliability information, in the form of a weight vector $\alpha = [\alpha_1, \ldots, \alpha_i, \ldots, \alpha_{N_o}]$, to the outer code, where $\hat{x}_i \in GF(q)$, $0 \leq \alpha_i \leq 1$, and $1 \leq i \leq N_o$. Let

$$s(\hat{x}, x) = \begin{cases} +1 & x = \hat{x} \\ -1 & x \neq \hat{x}. \end{cases} \quad (5)$$

For any outer codeword $x_m = [x_{m1}, x_{m2}, \ldots, x_{mN_o}]$, define a dot product $\alpha \cdot x_m$ as follows:

$$\alpha \cdot x_m = \sum_{i=1}^{N_o} \alpha_i\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} \alpha_i s_i. \quad (6)$$

Theorem 1: There is at most one codeword $x_m$ that satisfies

$$\alpha \cdot x_m > N_o(r_o + \varepsilon_1). \quad (7)$$

Theorem 1 is implied by Theorem 3.1 in [3].

Rearrange the weights according to their values and let $i_1, \ldots, i_j, \ldots, i_{N_o}$ be the indices such that

$$\alpha_{i_1} \leq \cdots \leq \alpha_{i_j} \leq \cdots \leq \alpha_{i_{N_o}}. \quad (8)$$

² The revision can also be regarded as an extension of Justesen's GMD decoding in [7].

Define $q_k = [q_k(\alpha_1), \ldots, q_k(\alpha_j), \ldots, q_k(\alpha_{N_o})]$, for $0 \leq k < 1/\varepsilon_2$, where $\varepsilon_2 > 0$ is a positive constant with $1/\varepsilon_2$ an integer, with

$$q_k(\alpha_{i_j}) = \begin{cases} 0 & \alpha_{i_j} \leq k\varepsilon_2 \text{ and } j \leq N_o(1 - r_o - \varepsilon_1) \\ 1 & \text{otherwise}, \end{cases} \quad (9)$$

and

$$q_k \cdot x_m = \sum_{i=1}^{N_o} q_k(\alpha_i)\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} q_k(\alpha_i) s_i. \quad (10)$$

Then we have the following theorem.

Theorem 2: If $\alpha \cdot x_m > N_o(1 + (r_o + \varepsilon_1 - 1)(1 - \varepsilon_2/2))$, then, for some $k$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$.

The proof of Theorem 2 is given in Appendix A.

Theorems 1 and 2 indicate that, if $x_m$ is transmitted and $\alpha \cdot x_m > N_o(1 + (r_o + \varepsilon_1 - 1)(1 - \varepsilon_2/2))$, then for some $k$ the errors-and-erasures decoding specified by $q_k$ (where symbols with $q_k(\alpha_i) = 0$ are erased) will output $x_m$. Since the total number of $q_k$ vectors is a constant, the outer code carries out errors-and-erasures decoding only a constant number of times. Consequently, a GMD decoder that carries out errors-and-erasures decoding for all the $q_k$'s and compares their decoding outputs can recover $x_m$ with a complexity of $O(N_o)$. Since the inner code length $N_i$ is fixed, the overall complexity is $O(N)$.

The following theorem gives an error probability bound for the one-level concatenated codes.

Theorem 3: The error probability of the one-level concatenated codes is upper bounded by

$$P_e \leq P\!\left(\alpha \cdot x_m \leq N_o(1 + (r_o + \varepsilon_1 - 1)(1 - \varepsilon_2/2))\right) \leq \exp\left[-N(E_c(R) - \varepsilon)\right],$$

where $E_c(R)$ is Forney's error exponent given by (3) and $\varepsilon$ is a function of $\varepsilon_1$ and $\varepsilon_2$ with $\varepsilon \to 0$ as $\varepsilon_1, \varepsilon_2 \to 0$.

The proof of Theorem 3 can be obtained by first replacing Theorem 3.2 in [3] with Theorem 2, and then combining the results of [3, Section 4.2] and [6, Theorem 8].

The difference between Forney's GMD decoding scheme and the revised one lies in the definition of the errors-and-erasures decodable vectors $q_k$, the number of which determines the decoding complexity. Forney's GMD decoding needs to carry out errors-and-erasures decoding a number of times linear in $N_o$, whereas ours does so a constant number of times. The idea of the revised GMD decoding dates back to Justesen's work in [7], although [7] focused on error-correction codes where the inner codes forward Hamming distance information (in the form of an $\alpha$ vector) to the outer code.
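The constant-trial structure of the revised decoder can be summarized in code. The sketch below is schematic and under stated assumptions: `ee_decode` stands in for a hypothetical linear-time errors-and-erasures outer decoder, and `x_hat`/`alpha` are the inner-decoder outputs; nothing here is the authors' implementation. The loop follows (8) and (9) with the acceptance test of Theorem 1, and runs at most $1/\varepsilon_2$ times regardless of $N_o$.

```python
def revised_gmd_decode(x_hat, alpha, ee_decode, r_o, eps1, eps2):
    """Schematic revised GMD outer decoding (constant number of trials).

    x_hat: symbol estimates forwarded by the inner decoders (length N_o)
    alpha: reliability weights in [0, 1], aligned with x_hat
    ee_decode: hypothetical errors-and-erasures decoder; receives the
        symbol list with erased positions set to None, returns a
        codeword or None on decoding failure
    """
    N_o = len(x_hat)
    # (8): indices i_1, ..., i_{N_o} sorting the weights in ascending order
    order = sorted(range(N_o), key=lambda i: alpha[i])
    erasable = set(order[: int(N_o * (1 - r_o - eps1))])

    best = None
    for k in range(int(round(1.0 / eps2))):      # constant trial count
        # (9): erase position i if it is among the N_o(1 - r_o - eps1)
        # least reliable positions and alpha_i <= k * eps2
        trial = [None if (i in erasable and alpha[i] <= k * eps2) else x_hat[i]
                 for i in range(N_o)]
        cand = ee_decode(trial)
        if cand is None:
            continue
        # (6): weighted correlation alpha . x_m with s_i = +/-1
        corr = sum(a if c == xh else -a
                   for a, c, xh in zip(alpha, cand, x_hat))
        if corr > N_o * (r_o + eps1):            # unique by Theorem 1
            best = cand
    return best
```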

III. Multilevel Concatenated Codes

To approach Forney's error exponent with one-level concatenated codes, the only requirement on the inner codes is that they achieve Gallager's error exponent given in (2). To approach a better error exponent with m-level concatenated codes ($m > 1$), the inner code must possess certain special properties. Taking two-level concatenated codes as an example, the required property and the existence of an optimal inner code are stated in the following lemma.

Lemma 1: Consider a discrete-time memoryless channel; let $q > 0$ be an integer and $p_X$ a source distribution. There exists a code of length $N_i$ and rate $R_i$ with $q^{N_i R_i}$ codewords, which are partitioned into $q^{N_i R_i/2}$ groups each having $q^{N_i R_i/2}$ codewords. Define the error probability of the code by $P_e^{(2)}(R_i, p_X)$, and the maximum error probability of the codes each characterized by the codewords in a particular group of the partition by $P_{e1}^{(2)}(R_i/2, p_X)$. The error probabilities satisfy the following inequalities:

$$\lim_{N_i \to \infty} -\frac{\log P_e^{(2)}(R_i, p_X)}{N_i} \geq E_L(R_i, p_X), \qquad \lim_{N_i \to \infty} -\frac{\log P_{e1}^{(2)}(R_i/2, p_X)}{N_i} \geq E_L(R_i/2, p_X). \quad (11)$$

The proof of Lemma 1 is given in Appendix B. We omit the straightforward extension of Lemma 1 to m-level concatenated coding schemes with $m > 2$. With Lemma 1 and its extensions, we are now ready to generalize the results of Section II to multilevel concatenated coding schemes.

Theorem 4: For a discrete-time memoryless channel, for any $\varepsilon > 0$ and integer $m > 0$, one can construct a sequence of m-level concatenated codes whose encoding/decoding complexity is linear in $N$ and whose error probability is bounded by

$$\lim_{N \to \infty} -\frac{\log P_e}{N} \geq E^{(m)}(R) - \varepsilon, \qquad E^{(m)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} \left( r_o - \frac{R}{C} \right) \left[ \sum_{i=1}^m \frac{r_o}{m\, E_L\!\left(\frac{i}{m} r_o C,\, p_X\right)} \right]^{-1}. \quad (12)$$

The proof of Theorem 4 can be obtained by combining Theorem 3, Lemma 1, and the derivation of $E^{(m)}(R)$ in [4][5]. Note that $\lim_{m \to \infty} E^{(m)}(R) = E^{(\infty)}(R)$, where $E^{(\infty)}(R)$ is the Blokh-Zyablov error exponent given in (4). Theorem 4 implies that, for discrete-time memoryless channels, the Blokh-Zyablov error exponent can be arbitrarily approached with linear encoding/decoding complexity.

The optimal inner code required by Theorem 4 is often not linear since, except for specific channels such as BSCs [11], it is not clear whether linear codes can achieve the expurgated exponent. However, since random linear codes can achieve the random-coding exponent over a general discrete-time memoryless channel [10], we have the following lemma.

Lemma 2: Theorem 4 holds for linear codes over BSCs. It also holds for linear codes over a general discrete-time memoryless channel if one replaces $E_L\!\left(\frac{i}{m} r_o C, p_X\right)$ in (12) by

$$E_r\!\left(\frac{i}{m} r_o C,\, p_X\right) = \max_{0 \leq \rho \leq 1} \left\{ -\rho \frac{i}{m} r_o C + E_0(\rho, p_X) \right\}. \quad (13)$$
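The finite sum in (12) is a Riemann sum for the integral in (4), so $E^{(m)}$ can be evaluated numerically for any exponent curve. The sketch below is a toy under the same illustrative BSC assumptions as the earlier snippets, using the random-coding curve of Lemma 2 rather than the full $E_L$; it fixes the input distribution and grid-searches the outer rate. For $m = 1$ it reduces to the one-level exponent (3) under the substitution $R_i = r_o C$.

```python
import numpy as np

def multilevel_exponent(R, E_curve, C, m, grid=400):
    # E^(m)(R) of (12) with the input distribution fixed: maximize over
    # r_o the ratio (r_o - R/C) / sum_i [ r_o / (m * E(i/m * r_o * C)) ]
    best = 0.0
    for r_o in np.linspace(R / C, 1.0, grid + 2)[1:-1]:
        levels = [E_curve(i / m * r_o * C) for i in range(1, m + 1)]
        if min(levels) <= 0.0:
            continue                      # exponent vanishes near capacity
        denom = sum(r_o / (m * e) for e in levels)
        best = max(best, (r_o - R / C) / denom)
    return best

# Toy usage, continuing the BSC sketches: E^(m) grows with the
# concatenation order m toward the Blokh-Zyablov value (4)
E_curve = lambda rate: random_coding_exponent(rate, p=0.05)
for m in (1, 2, 4, 16):
    print(m, multilevel_exponent(R=0.3 * C, E_curve=E_curve, C=C, m=m))
```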

Lemma 2 can be shown by adopting random linear codes [10] as the inner codes. Lemma 2 implies that the following error exponent can be arbitrarily approached by linear codes with linear complexity:

$$E_r^{(\infty)}(R) = \max_{p_X,\, r_o \in [R/C,\, 1]} \left( r_o - \frac{R}{C} \right) \left[ \int_0^{r_o C} \frac{dx}{C\, E_r(x, p_X)} \right]^{-1}. \quad (14)$$

Appendix

A. Proof of Theorem 2

Proof: Define an integer $p = \lceil \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} / \varepsilon_2 \rceil$, with $1 \leq p \leq 1/\varepsilon_2$, and a set of values $c_j = (j - 1/2)\varepsilon_2$ for $1 \leq j \leq p$. Let

$$\lambda_0 = c_1, \qquad \lambda_k = c_{k+1} - c_k, \quad 1 \leq k \leq p - 1, \qquad \lambda_p = \alpha_{i_{N_o(1 - r_o - \varepsilon_1)+1}} - c_p,$$
$$\lambda_h = \alpha_{i_{h - p + N_o(1 - r_o - \varepsilon_1)+1}} - \alpha_{i_{h - p + N_o(1 - r_o - \varepsilon_1)}}, \quad p < h < p + N_o(r_o + \varepsilon_1), \qquad \lambda_{p + N_o(r_o + \varepsilon_1)} = 1 - \alpha_{i_{N_o}}. \quad (15)$$

We have

$$\sum_{k=0}^{j-1} \lambda_k = \begin{cases} c_j & 1 \leq j \leq p \\ \alpha_{i_{j - p + N_o(1 - r_o - \varepsilon_1)}} & p < j \leq p + N_o(r_o + \varepsilon_1), \end{cases} \quad (16), (17)$$

and

$$\sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k = 1. \quad (18)$$

Define a new weight vector $\tilde{\alpha} = [\tilde{\alpha}_1, \ldots, \tilde{\alpha}_i, \ldots, \tilde{\alpha}_{N_o}]$ with

$$\tilde{\alpha}_i = \begin{cases} \operatorname{argmin}_{c_j,\, 1 \leq j \leq p} |c_j - \alpha_i| & \alpha_i \leq \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} \\ \alpha_i & \alpha_i > \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}}. \end{cases} \quad (19)$$

Define $p_k = [p_k(\tilde{\alpha}_1), \ldots, p_k(\tilde{\alpha}_i), \ldots, p_k(\tilde{\alpha}_{N_o})]$ with $0 \leq k \leq p + N_o(r_o + \varepsilon_1)$ such that, for $0 \leq k < p$,

$$p_k = q_k, \quad (20)$$

and, for $p \leq k \leq p + N_o(r_o + \varepsilon_1)$,

$$p_k(\tilde{\alpha}_i) = \begin{cases} 0 & \tilde{\alpha}_i \leq \tilde{\alpha}_{i_{k - p + N_o(1 - r_o - \varepsilon_1)}} \\ 1 & \tilde{\alpha}_i > \tilde{\alpha}_{i_{k - p + N_o(1 - r_o - \varepsilon_1)}}. \end{cases} \quad (21)$$

Thus we have

$$\tilde{\alpha} = \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k\, p_k. \quad (22)$$

Define the set of indices

$$U = \{i_1, i_2, \ldots, i_{N_o(1 - r_o - \varepsilon_1)}\}. \quad (23)$$

According to the definition of $\tilde{\alpha}_i$, for $i \notin U$ we have $\tilde{\alpha}_i = \alpha_i$. Hence

$$\tilde{\alpha} \cdot x_m = \alpha \cdot x_m + \sum_{i \in U} (\tilde{\alpha}_i - \alpha_i)\, s_i. \quad (24)$$

Since $|\tilde{\alpha}_i - \alpha_i| \leq \varepsilon_2/2$ and $s_i = \pm 1$, we have

$$\sum_{i \in U} (\tilde{\alpha}_i - \alpha_i)\, s_i \geq -N_o(1 - r_o - \varepsilon_1)\frac{\varepsilon_2}{2}. \quad (25)$$

Consequently, $\alpha \cdot x_m > N_o(1 + (r_o + \varepsilon_1 - 1)(1 - \varepsilon_2/2))$ implies

$$\tilde{\alpha} \cdot x_m > N_o(r_o + \varepsilon_1). \quad (26)$$

If $p_k \cdot x_m \leq N_o(r_o + \varepsilon_1)$ for all $p_k$'s, then

$$\tilde{\alpha} \cdot x_m = \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k\, p_k \cdot x_m \leq N_o(r_o + \varepsilon_1) \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k = N_o(r_o + \varepsilon_1), \quad (27)$$

which contradicts (26). Therefore, there must be some $p_k$ that satisfies

$$p_k \cdot x_m > N_o(r_o + \varepsilon_1). \quad (28)$$

For $k \geq p$, $p_k$ has no more than $N_o(r_o + \varepsilon_1)$ ones, which implies $p_k \cdot x_m \leq N_o(r_o + \varepsilon_1)$. Therefore, the vectors that satisfy (28) must exist among the $p_k$ with $0 \leq k < p$. In other words, for some $k$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$.

B. Proof of Lemma 1

Proof: We first prove the Lemma for $R_i \leq R_x$, where $R_x$ is defined by (2) and in [3]. For a random block code of length $N_i$ with $M > q^{N_i R_i}$ codewords, partition the codewords into $q^{N_i R_i/2}$ groups with at least $M / q^{N_i R_i/2}$ codewords in each group. Consider a particular codeword $x_m$; the following two expurgation operations [2] are performed. In the first operation, we consider only the codeword $x_m$ and the codewords that are not in the same group as $x_m$; in other words, we temporarily strike out the codewords in the same group as $x_m$. Define $P_{em}$ as the probability of decoding error if $x_m$ is transmitted. Let $B_1 > 0$ be a threshold such that $\Pr(P_{em} \geq B_1) \leq 1/2$. We expurgate $x_m$ if $P_{em} \geq B_1$. Now assume $x_m$ survives the first expurgation. In the second operation, consider the codewords within the group of $x_m$. Define by $P_{em1}$ the probability of decoding error if the codeword $x_m$ is transmitted. Let $B_2 > 0$ be a threshold such that $\Pr(P_{em1} \geq B_2) \leq 1/2$. We expurgate $x_m$ if $P_{em1} \geq B_2$.

Since

$$\Pr(P_{em} < B_1,\, P_{em1} < B_2) = \Pr(P_{em} < B_1)\Pr(P_{em1} < B_2 \mid P_{em} < B_1) = \Pr(P_{em} < B_1)\Pr(P_{em1} < B_2) \geq \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}, \quad (29)$$

the probability that $x_m$ survives both expurgation operations is at least 1/4. With (29), for $R_i \leq R_x$, the conclusion of Lemma 1 follows naturally from Gallager's analysis of expurgated codes in [2]. When $R_i > R_x$, only one expurgation operation, or none, is needed, and it is easily seen that the Lemma still holds.

Acknowledgment

The authors would like to thank Professor Alexander Barg for his help on multilevel concatenated codes and the performance of linear codes.

References

[1] R. Fano, Transmission of Information, The M.I.T. Press and John Wiley & Sons, New York, NY, 1961.
[2] R. Gallager, "A Simple Derivation of the Coding Theorem and Some Applications," IEEE Trans. Inform. Theory, Vol. 11, pp. 3-18, Jan. 1965.
[3] G. Forney, Concatenated Codes, The MIT Press, 1966.
[4] E. Blokh and V. Zyablov, Linear Concatenated Codes, Nauka, Moscow, 1982 (in Russian).
[5] A. Barg and G. Zémor, "Multilevel Expander Codes," IEEE ISIT, Adelaide, Australia, Sep. 2005.
[6] V. Guruswami and P. Indyk, "Linear-Time Encodable/Decodable Codes With Near-Optimal Rate," IEEE Trans. Inform. Theory, Vol. 51, No. 10, pp. 3393-3400, Oct. 2005.
[7] J. Justesen, "A Class of Constructive Asymptotically Good Algebraic Codes," IEEE Trans. Inform. Theory, Vol. IT-18, No. 5, pp. 652-656, Sep. 1972.
[8] A. Barg and G. Zémor, "Concatenated Codes: Serial and Parallel," IEEE Trans. Inform. Theory, Vol. 51, No. 5, pp. 1625-1634, May 2005.
[9] V. Guruswami, List Decoding of Error-Correcting Codes, Ph.D. dissertation, MIT, Cambridge, MA, 2001.
[10] R. Gallager, Information Theory and Reliable Communication, John Wiley & Sons, 1968.
[11] A. Barg and G. Forney, "Random Codes: Minimum Distances and Error Exponents," IEEE Trans. Inform. Theory, Vol. 48, No. 9, pp. 2568-2573, Sep. 2002.