Iterative Encoding of Low-Density Parity-Check Codes


David Haley, Alex Grant and John Buetefuer
Institute for Telecommunications Research, University of South Australia, Mawson Lakes Blvd, Mawson Lakes SA 5095, Australia. E-mail: dhaley@spri.levels.unisa.edu.au

Abstract - Motivated by the potential to reuse the decoder architecture, and thus reduce circuit space, we explore the use of iterative encoding techniques which are based upon the graphical representation of the code. We design codes by identifying the associated encoder convergence constraints and by eliminating some well-known properties that are undesirable for sum-product decoding, such as 4-cycles. In particular, we show how the Jacobi method for iterative matrix inversion can be viewed as message passing and employed as the core of an iterative encoder. Example constructions of both regular and irregular LDPC codes that are encodable using this method are investigated.

I. Introduction

Since the rediscovery of low-density parity-check (LDPC) codes [1], some effort has been directed towards finding computationally efficient encoders. This has been motivated by the fact that, in general, matrix-vector multiplication has complexity O(n^2) for block length n. Following on from [2], [3], a class of codes built from irregular cascaded graphs was introduced in [4], together with a message passing algorithm for erasure decoding these codes. The cascaded graph structure allows these codes to be encoded using an algorithm similar to the erasure decoder. In [5] it was proposed that the parity check matrix for irregular LDPC codes be constructed so that it is in approximately lower triangular form. In that case an appropriate encoder architecture can exploit the fact that most of the parity bits are computable using sparse operations, leading to approximately linear time encoding complexity. In fact, it is known that the parity check matrix of most LDPC codes can be manipulated into approximately triangular form such that the coefficient of the quadratic term in the encoding complexity is made quite small; furthermore, performance-optimized LDPC codes actually admit linear time encoding [6].

Consider a binary systematic (n, k) code with codewords arranged as row vectors x = [x_p x_u], where x_u are the information bits and x_p are the parity bits. Partition the parity check matrix likewise, H = [H_p H_u]. Thus x is a codeword iff [H_p H_u][x_p x_u]^T = 0, or equivalently H_p x_p^T = H_u x_u^T. Defining b = H_u x_u^T, encoding becomes equivalent to solving

    H_p x_p^T = b.                                  (1)

For m x m non-singular H_p we have x_p^T = H_p^{-1} b. In this paper we investigate iterative solution methods for (1), the corresponding convergence criteria, and the constraints they impose on H_p. Our goal is to develop encoding techniques which converge quickly and which re-use the sum-product message passing decoder architecture described in [7]. The idea of using the code constraints to perform encoding on the graph is not new and was originally suggested by Tanner [8]. The work presented here forms a link between this concept and classical iterative matrix inversion techniques, allowing the design of good codes that encode quickly. By reusing the decoder architecture for encoding, both operations can be performed by the same circuit on a time-switched basis. Hence, by eliminating the need for a separate dedicated encoder circuit, we aim to reduce the overall size of the communication system.

(This work was supported by Southern-Poro Communications and the Australian Government under ARC SPIRT C00002232.)
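For concreteness, the following illustrative Python/numpy sketch (not part of the original paper; function names such as gf2_solve and encode_systematic are hypothetical) shows the generic encoding step of (1): form b = H_u x_u^T and solve for the parity bits over GF(2). It uses dense matrices and plain Gaussian elimination, whereas the point of the paper is to replace this step with iterative, decoder-friendly methods.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination (A square, assumed non-singular)."""
    m = A.shape[0]
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1)
    for col in range(m):
        pivot = col + np.argmax(M[col:, col])        # first row with a 1 in this column
        if M[pivot, col] == 0:
            raise ValueError("matrix is singular over GF(2)")
        M[[col, pivot]] = M[[pivot, col]]
        rows = np.nonzero(M[:, col])[0]
        M[rows[rows != col]] ^= M[col]               # clear the column everywhere else
    return M[:, -1]

def encode_systematic(H, u, m):
    """Encode information bits u for H = [H_p | H_u], H_p being the first m columns."""
    Hp, Hu = H[:, :m], H[:, m:]
    b = Hu @ u % 2                                   # b = H_u x_u^T over GF(2)
    xp = gf2_solve(Hp, b)                            # x_p^T = H_p^{-1} b
    return np.concatenate([xp, u])                   # codeword x = [x_p x_u]
```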
Encoding and decoding operations are represented in the usual way as message passing on bipartite graphs. Variable nodes are labelled v_i and check nodes c_j (the nodes can also have values associated with them, and we re-use the symbols v_i and c_j for this purpose). The matrix H specifies the edges of the graph. We define A(v_i) as the set of all check nodes adjacent to variable node v_i, specified by column i of H; similarly, A(c_j) is specified by row j of H. Variable-to-check messages are denoted λ_{v_i → c_j} and check-to-variable messages λ_{c_j → v_i}.

II. The Sum-Product Encoder

It is well known that if H_p is upper triangular then encoding (solution of (1)) may be performed in m steps by back substitution, which solves for each of the parity bits in a particular order. Hence upper triangular matrices are of interest. For any upper triangular A with elements from F_2 (the binary Galois field),

    A non-singular  <=>  diag(A) = I,               (2)

since the diagonal elements are the eigenvalues, none of which may be 0.
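As a point of reference, back substitution over GF(2) for an upper triangular H_p takes only a few lines. This is an illustrative sketch (mine, not the paper's), assuming a dense 0/1 numpy array with unit diagonal:

```python
import numpy as np

def back_substitute_gf2(Hp, b):
    """Solve Hp x = b over GF(2) for upper triangular Hp with unit diagonal."""
    m = Hp.shape[0]
    x = np.zeros(m, dtype=int)
    for i in range(m - 1, -1, -1):                 # last parity bit first
        # row i: x[i] + sum_{j > i} Hp[i, j] * x[j] = b[i]  (mod 2)
        x[i] = (b[i] + Hp[i, i + 1:] @ x[i + 1:]) % 2
    return x
```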

Let T denote the set of all non-singular m x m matrices that may be made upper triangular using only row and column permutations. Consider a binary erasure channel with the following mapping M of the output: M(0) = +1, M(1) = -1, M(erasure) = 0. The message passing erasure decoder (cf. [4]) operates as follows (all arithmetic is real).

Algorithm 1
1. Initialization: Set v_i in {0, -1, +1} to be the received symbol corresponding to variable v_i. Initialize all messages to 0.
2. Variable -> Check: From each variable v_i to each c in A(v_i) send λ_{v_i → c} = v_i + Σ_{c_j in A(v_i)\c} λ_{c_j → v_i}.
3. Check -> Variable: From each check c_j to each variable v in A(c_j) send λ_{c_j → v} = Π_{v_i in A(c_j)\v} λ_{v_i → c_j}.
4. Stop/Continue: If at least one λ_{c_j → v_i} is non-zero for every v_i then exit, otherwise return to Step 2.

The decoder of Algorithm 1 can be used for encoding certain types of LDPC codes, as we now show.

Theorem 1. Let A in T and b in {-1, +1}^m be given. Algorithm 1 solves Ax = b in at most m iterations, without regard to the actual order of node updates.

Proof. Without loss of generality assume A upper triangular. Let x = A^{-1} b. Construct the bipartite graph with variable nodes v_i connected to checks c_j according to A. Also connected to each check c_j is the additional variable node v'_j. Initialize the nodes (Step 1) with v'_j = M(b_j) in {-1, +1} and v_i = 0. Call v_i active if at least one λ_{c_j → v_i} is non-zero. An active v_i is correct if λ_{c_j → v_i} in {sgn M(x_i), 0} for all c_j in A(v_i). For any correct v_i, sgn λ_{v_i → c_j} in {sgn M(x_i), 0}. In the case that every node is either correct or not active, nodes can only be made correct, left correct, or left inactive at the next Step 3, since each new λ_{c_j → v_i} lies in {M(x_i), 0}. After the first Step 3, v_1 will be correct (since the only non-zero incoming message will be M(b_1)); similarly, any other nodes activated will be correct. Assume there is a set C of k >= 1 correct nodes and that every node v not in C is inactive. It remains only to show that at least one correct node is created at the next Step 3. This is true since there will exist an integer j >= 1 such that v_1, ..., v_j are correct. At the next Step 3, v_{j+1} not in C will be correct since, by A triangular and (2), c_i in A(v_i), and c_j in A(v_i) only for j < i; likewise, v_j in A(c_j), and v_i in A(c_j) only for i > j. The induction requires at most m steps before every node is correct.

Hence, if H_p in T, we may perform encoding by applying Algorithm 1 to (1), initializing the variables representing x_u with +/-1 and those representing x_p with 0. The idea of Theorem 1 is certainly not new, but we have not seen it made explicit. The number of iterations required for convergence may be greatly reduced below the upper bound of m, since LDPC codes are represented by sparse matrices. It is possible to design H_p in T using a tiered approach, similar to that described in [5]. In this construction the parity bits for one or more tiers are evaluated at each iteration, and therefore the total number of iterations may be set by the designer. The selection of H_u is always arbitrary with respect to the sum-product encodability of H.

III. Encoding via Iterative Matrix Inversion

Having reduced the encoding problem to one of matrix inversion, it is natural to ask whether classical iterative matrix inversion techniques, such as those described in [9], can be applied. Suppose we wish to solve Ax = b. Split A according to A = S - T. We can then write Sx = Tx + b, and try the iteration

    S x_{k+1} = T x_k + b                           (3)

for some initial guess x_0. In order to compute x_{k+1} easily, S should be easily invertible. The Gauss-Seidel method chooses S triangular, so for A in T we see that the method of the previous section actually implements Gauss-Seidel (in this case simply back-substitution).
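A compact Python sketch of Algorithm 1 used as an encoder follows (illustrative only, not the authors' implementation; the function name and data structures are mine). The known nodes v'_j are folded into check j as a constant factor M(b_j), the parity variables start erased, and each bit is finally read off the sign of a non-zero check-to-variable message. It assumes H_p is in T, dense matrices, and dictionary-based messages for readability rather than speed.

```python
import numpy as np

def algorithm1_encode(Hp, b_bits, max_iter=None):
    """Solve Hp x = b over GF(2) with the erasure-style message passing of Algorithm 1.
    Assumes Hp can be permuted to triangular form (Hp in T).
    Mapping: bit 0 -> +1, bit 1 -> -1, erasure -> 0."""
    m = Hp.shape[0]
    vb = 1.0 - 2.0 * np.asarray(b_bits)            # extra nodes v'_j carry M(b_j)
    rows = [[i for i in range(m) if Hp[j, i]] for j in range(m)]
    cols = [[j for j in range(m) if Hp[j, i]] for i in range(m)]
    edges = [(j, i) for j in range(m) for i in rows[j]]
    lam_vc = {e: 0.0 for e in edges}               # variable -> check messages
    lam_cv = {e: 0.0 for e in edges}               # check -> variable messages
    for _ in range(max_iter or m):
        # Step 2: variable -> check (parity variables are erased, so their value is 0)
        for i in range(m):
            for j in cols[i]:
                lam_vc[(j, i)] = sum(lam_cv[(jp, i)] for jp in cols[i] if jp != j)
        # Step 3: check -> variable, with the leaf v'_j as a fixed factor
        for j in range(m):
            for i in rows[j]:
                prod = vb[j]
                for ip in rows[j]:
                    if ip != i:
                        prod *= lam_vc[(j, ip)]
                lam_cv[(j, i)] = prod
        # Step 4: stop once every variable sees at least one non-zero message
        if all(any(lam_cv[(j, i)] != 0 for j in cols[i]) for i in range(m)):
            break
    # Read each parity bit off the sign of a non-zero incoming message
    x = np.zeros(m, dtype=int)
    for i in range(m):
        s = next(lam_cv[(j, i)] for j in cols[i] if lam_cv[(j, i)] != 0)
        x[i] = 0 if s > 0 else 1
    return x
```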
The classical Jacobi method chooses S = diag(A) and converges for any initial guess provided the spectral radius of the real or complex matrix S^{-1} T is less than 1. We will consider the use of this method for F_2 matrices, which necessitates different convergence criteria. Over F_2, S invertible implies S = I and diag(A) = I. Hence (3) becomes

    x_{k+1} = (A - I) x_k + b.                      (4)

Theorem 2. For arbitrary x_0, the iteration (4) yields x_k = A^{-1} b for all k >= k' iff (A - I)^{k'} = 0.

Proof. Let the error term at iteration k be e_k = x - x_k. Subtracting x_{k+1} = T x_k + b from x = T x + b gives e_{k+1} = T e_k, so e_k = T^k e_0, where e_0 is the error of the initialization x_0. Hence the error term vanishes for all iterations k >= k' if T^{k'} = 0. Conversely, if T^k is non-zero for all k, the algorithm will fail to converge universally, since the error is zero only if e_0 lies in the null space of T^k, which cannot be guaranteed independently of the initial guess.
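A small illustration of the F_2 Jacobi iteration (4) and of the nilpotency condition of Theorem 2, again as a numpy sketch with hypothetical helper names:

```python
import numpy as np

def jacobi_gf2(A, b, num_iter):
    """Iterate x_{k+1} = (A - I) x_k + b over GF(2), starting from x_0 = 0."""
    m = A.shape[0]
    T = (A + np.eye(m, dtype=int)) % 2             # A - I = A + I over GF(2)
    x = np.zeros(m, dtype=int)                     # any initial guess works if T is nilpotent
    for _ in range(num_iter):
        x = (T @ x + b) % 2
    return x

def jacobi_iterations_needed(A, k_max=64):
    """Smallest k with (A - I)^k = 0, i.e. guaranteed convergence of (4) in k iterations,
    or None if A - I is not nilpotent up to k_max."""
    m = A.shape[0]
    T = (A + np.eye(m, dtype=int)) % 2
    P = np.eye(m, dtype=int)
    for k in range(1, k_max + 1):
        P = P @ T % 2
        if not P.any():
            return k
    return None
```

For the constructions of Sections IV-VI below, jacobi_iterations_needed(Hp) returns 2 or 4 by design.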

Based on Theorem 2, we can in principle construct reversible LDPC codes that are iteratively encodable in k iterations using (4) by selecting H_p such that (H_p - I)^k = 0. We call such codes Jacobi encodable. It is interesting to note that the codes with H_p in T mentioned in the previous section are also Jacobi encodable.

Theorem 3. Any code with upper triangular H_p is Jacobi encodable over F_2.

Proof. Let T = H_p - I, so diag(T) = 0. Each successive power of T is therefore upper triangular, with the first non-zero entry of each row occurring at least one place later. Thus T^k = 0 for some k.

We may view the Jacobi iteration as message passing on a bipartite graph formed as follows. Let variable node v_i correspond to x_i and let node v'_j correspond to b_j. The v_i are connected to checks c_j according to A, and each v'_j is connected to c_j. This is the same connection structure as required for sum-product decoding. The Jacobi message passing schedule, for the binary mapping M(0) = +1, M(1) = -1, is defined as follows.

Algorithm 2
1. Initialization: Set all v_i = +1 and v'_j = M(b_j).
2. Variable -> Check: Send µ_{v_i → c} = v_i to all c in A(v_i)\c_i.
3. Check -> Variable: Each check c_j sends µ_{c_j → v_j} = Π_{v_i in A(c_j)\v_j} µ_{v_i → c_j} to v_j only. Let v_i = µ_{c_i → v_i}. Return to Step 2.

Fig. 1. Jacobi algorithm as message passing.

An example of how this algorithm operates on the graph is shown in Fig. 1. During each iteration variables may be updated in parallel; for clarity, Fig. 1 shows only those messages used to update v_2. We note that Algorithm 2 bears a strong resemblance to the sum-product decoder. In fact, the update of µ_{c_j → v_j} in the Jacobi method is identical to that of λ_{c_j → v_j} in the sum-product case, so the decoder architecture may be re-used. It is also worth noting that only one operation per node needs to be performed in each step of the Jacobi method, compared to one per connected edge for each node in the sum-product case. The Jacobi encoder implementation therefore offers the potential for reduced power consumption.

IV. Reversible LDPC Codes

In this section we demonstrate the use of the F_2 Jacobi convergence rule to design codes that are iteratively encodable in two iterations of the Jacobi method. We therefore seek a matrix H_p with (H_p - I)^2 = 0, i.e. H_p^2 = I. There are many rules that can be applied to build a matrix H_p of this form; here we build an example code using some of the simplest. We may begin with any matrix A for which A^2 = I, for example A = I, and grow it to the desired size of H_p by recursively applying either of the following two rules (a code sketch appears at the end of this section):

    B = [ A  I ]        or        B = [ A  0 ]
        [ 0  A ]                      [ 0  A ]

In both cases, if A^2 = I then B^2 = I. The first method provides some flexibility in growing the column and row density distribution of A, whereas the second allows us to expand A without altering the distribution. Neither method introduces new cycles of length 4 into the graph of H. We complete H for a rate-1/2 code, building H_u by adding randomly generated columns of weight 4 to the right-hand side of H_p and rejecting columns that would introduce a 4-cycle. We have constructed an n = 512 code using this method and observed that its BER performance compares well to that of a regular code. However, this simple code contains some single-weight columns, which are undesirable for sum-product decoding: for such a column, only a single edge connects the variable node to the remainder of the graph, so if the variable becomes corrupted it will always pass the corrupted message value along this edge; the connected checks may then not be satisfied, preventing their use as a stopping criterion.
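To make the construction concrete, here is a small numpy sketch of the growth rules and of the rate-1/2 completion by rejection sampling of weight-4 columns. It is illustrative only: the helper names (grow_keep_involution, has_4_cycle) are hypothetical, and dense matrices are used for clarity.

```python
import numpy as np

def grow_keep_involution(A, densify=True):
    """Grow a GF(2) involution (A @ A = I mod 2) to twice its size.
    densify=True uses B = [[A, I], [0, A]]; densify=False uses B = [[A, 0], [0, A]]."""
    m = A.shape[0]
    corner = np.eye(m, dtype=int) if densify else np.zeros((m, m), dtype=int)
    return np.block([[A, corner],
                     [np.zeros((m, m), dtype=int), A]])

def has_4_cycle(H):
    """A Tanner graph has a 4-cycle iff two columns of H share two or more rows."""
    overlap = H.T @ H                               # pairwise column inner products
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

# Example: start from A = I, grow H_p, then append random weight-4 columns for H_u,
# rejecting any column that would create a 4-cycle (rate-1/2 layout).
rng = np.random.default_rng(0)
Hp = np.eye(2, dtype=int)
for _ in range(3):
    Hp = grow_keep_involution(Hp, densify=True)
m = Hp.shape[0]
assert not (Hp @ Hp % 2 - np.eye(m, dtype=int)).any()   # H_p^2 = I: two Jacobi iterations
H, attempts = Hp.copy(), 0
while H.shape[1] < 2 * m and attempts < 10000:
    attempts += 1
    col = np.zeros((m, 1), dtype=int)
    col[rng.choice(m, size=4, replace=False)] = 1       # candidate weight-4 column
    trial = np.concatenate([H, col], axis=1)
    if not has_4_cycle(trial):
        H = trial
```

As noted above, this simplest choice of seed leaves some single-weight columns in H_p, which is why the constructions of the following sections start from denser seeds.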
The codes constructed in the following sections do not contain any single-weight columns.

V. Regular Construction

In this section we allow four encoder iterations and build (3,6)-regular codes, constructing H_p as an m x m circulant matrix.

The first row of a circulant matrix is specified by the polynomial c(x), where the coefficient of x^{j-1} gives the j-th column entry. The i-th row of the matrix is then specified by the polynomial p(x) = x^{i-1} c(x) mod (x^m + 1).

Theorem 4. If C is a binary m x m circulant matrix, where m = 2^q for q > 2, built from cyclic rotations of the first-row polynomial c(x) = 1 + x + x^{2^{q-2}+1}, then C is an invertible (3,3)-regular matrix satisfying (C - I)^4 = 0.

Proof. Given that the weight of c(x) is 3 and that the transpose of a circulant matrix is also circulant, C is (3,3)-regular. We note that the statement (C - I)^4 = 0 is equivalent to C^4 = I, and that if this holds then C must be invertible. The algebra of C over F_2 is isomorphic to the algebra of polynomials modulo x^m + 1 with coefficients from F_2 [10]. It therefore remains only to show that c^4(x) = 1 modulo x^m + 1:

    c^4(x) = x^{m+4} + x^4 + 1 = 1 mod (x^m + 1).

An example circulant matrix for m = 8 satisfying Theorem 4, with first-row polynomial c(x) = 1 + x + x^3, is

        1 1 0 1 0 0 0 0
        0 1 1 0 1 0 0 0
        0 0 1 1 0 1 0 0
    C = 0 0 0 1 1 0 1 0
        0 0 0 0 1 1 0 1
        1 0 0 0 0 1 1 0
        0 1 0 0 0 0 1 1
        1 0 1 0 0 0 0 1

Matrices built using the above method are also 4-cycle free; the proof is omitted here to save space. We complete H by randomly building a (3,3)-distributed H_u, whilst blocking the introduction of 4-cycles. In Fig. 2 the performance of two reversible codes constructed using the above technique is shown to compare well with that of randomly generated (3,6)-regular codes (from [11]). These experiments were performed over an AWGN channel, using a sum-product decoder with a maximum of 1000 iterations.

Fig. 2. Random and reversible regular LDPC codes (BER versus E_b/N_0 in dB; uncoded BPSK, n = 1008 random, n = 1024 reversible, n = 4000 random, n = 4096 reversible).

VI. Irregular Construction

In this section we investigate the construction of irregular reversible LDPC codes, again allowing four encoder iterations. To build H_p we start with a 4-cycle-free seed matrix A which has the property (A - I)^4 = 0:

    A = 1 1 1 0
        0 1 0 1
        0 0 1 1
        1 0 0 1

We then grow it to the desired size of H_p by recursively applying either of the following two rules, where kron denotes the matrix Kronecker product:

    B = kron(A, I)        or        B = [    I     (A - I) ]
                                        [ (A - I)     I    ]

In both cases (B - I)^4 = 0, and the column and row density distribution of B is equal to that of A. Neither method introduces new cycles of length 4.

Richardson et al. [12], [13] have shown how density evolution can be used to compute the capacity of a given ensemble of randomly constructed LDPC codes. They define the threshold as the maximum level of noise such that the probability of error tends to zero as both the block length and the number of decoder iterations tend to infinity. Chung et al. [14] have since presented a less complex Gaussian approximation algorithm for determining the threshold over AWGN channels under sum-product decoding. Using these algorithms, the authors provide optimized irregular distribution sequences for good irregular codes. We note that both algorithms are based upon random LDPC constructions and depend upon the local tree assumption, namely that the girth of the graph is large enough to sustain cycle-free local subgraphs during decoding [14]. Here H_p is structured, and we are interested in observing the effect that this has on the decoder performance.
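A quick numpy check of Theorem 4 (illustrative code, not from the paper; helper names are hypothetical): build the circulant from its first-row exponents and confirm the nilpotency condition that guarantees four Jacobi iterations.

```python
import numpy as np

def circulant_from_exponents(m, exponents):
    """Binary m x m circulant whose first row has ones at the given powers of x."""
    first = np.zeros(m, dtype=int)
    first[list(exponents)] = 1
    return np.stack([np.roll(first, i) for i in range(m)])   # row i is x^{i} c(x) mod (x^m + 1)

def regular_Hp(q):
    """H_p of Theorem 4: m = 2^q, c(x) = 1 + x + x^(2^(q-2)+1)."""
    m = 2 ** q
    return circulant_from_exponents(m, [0, 1, 2 ** (q - 2) + 1])

C = regular_Hp(3)                                  # the m = 8 example with c(x) = 1 + x + x^3
T = (C + np.eye(8, dtype=int)) % 2                 # C - I over GF(2)
assert not (np.linalg.matrix_power(T, 4) % 2).any()   # (C - I)^4 = 0: four Jacobi iterations
```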

Fig. 3. Random and reversible irregular LDPC codes (BER versus E_b/N_0 in dB; uncoded BPSK, n = 1000 random, n = 1008 reversible).

The matrix H_p created above has equal column and row density distributions λ(x) = ρ(x) = 0.6667x + 0.3333x^2, using the edge-perspective notation from [13]. We build H_u randomly with maximum column weight λ_max = 9, so that the overall distribution of H is close to that of the density-evolution-optimized code (5) from [13], which has noise threshold σ = 0.9540 (E_b/N_0 = 0.4090 dB):

    λ(x) = 0.27684x + 0.28342x^2 + 0.43974x^8
    ρ(x) = 0.01568x^5 + 0.85244x^6 + 0.13188x^7     (5)

The noise threshold for the designed distribution (6) of H, evaluated using Chung's Gaussian approximation calculator [15], is σ = 0.9412 (E_b/N_0 = 0.5260 dB):

    λ(x) = 0.27664x + 0.28331x^2 + 0.44005x^8
    ρ(x) = 0.88599x^6 + 0.11401x^7                  (6)

The performance of the optimized n = 1008 reversible code, using a sum-product decoder with 1000 iterations over an AWGN channel, is compared to that of the optimized n = 1000 random code from [13] in Fig. 3. We see that it compares well until around E_b/N_0 = 1.8 dB. A possible explanation for the divergence beyond this point is that the seed matrix, although 4-cycle free, contains several cycles of length 6, and the methods used to grow the seed also grow the number of 6-cycles. As a result, this particular structure of H_p violates the local tree assumption in many instances.
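For reference, the edge-perspective degree distributions quoted above can be read off a parity-check matrix with a few lines of numpy (an illustrative helper of mine, not part of the paper):

```python
import numpy as np

def edge_degree_distributions(H):
    """Edge-perspective distributions lambda(x), rho(x) of a parity-check matrix:
    the coefficient of x^(d-1) is the fraction of edges attached to weight-d columns (rows).
    Returned as dicts mapping exponent d-1 to coefficient, following [13]."""
    col_w = H.sum(axis=0).astype(int)
    row_w = H.sum(axis=1).astype(int)
    E = H.sum()                                    # total number of edges
    lam = {d - 1: col_w[col_w == d].sum() / E for d in np.unique(col_w) if d > 0}
    rho = {d - 1: row_w[row_w == d].sum() / E for d in np.unique(row_w) if d > 0}
    return lam, rho
```

For the 4 x 4 seed matrix of Section VI (and for any H_p grown from it) this yields λ = ρ = 0.6667x + 0.3333x^2, matching the distribution quoted above.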
VII. Conclusions

We have presented practical algorithms for iterative encoding of LDPC codes which make use of the architecture already in place for a sum-product decoder. In each case we have illustrated how codes may be designed to encode within a guaranteed number of iterations. We have drawn a link between iterative encoding/decoding and classical iterative matrix inversion techniques, and the Jacobi method was proposed as an iterative encoding algorithm with very low complexity. Examples of both regular and irregular reversible codes were constructed and their performance analyzed. The example regular reversible LDPC codes compare well to those constructed randomly, while it appears that there is still work to be done in building optimized irregular reversible structures. The efficient re-use of circuit space and the potential for reduced power consumption offered by the low-complexity Jacobi encoder are of obvious practical relevance.

References
[1] R. G. Gallager, Low-Density Parity-Check Codes. MIT Press, 1963.
[2] M. Sipser and D. A. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1710-1722, Nov. 1996.
[3] D. A. Spielman, "Linear-time encodable and decodable error-correcting codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1723-1731, Nov. 1996.
[4] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical loss-resilient codes," in Proc. 29th Symp. Theory of Computing, pp. 150-159, 1997.
[5] D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of constructions of irregular Gallager codes," IEEE Trans. Commun., vol. 47, pp. 1449-1454, Oct. 1999.
[6] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 638-656, Feb. 2001.
[7] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498-519, Feb. 2001.
[8] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, Sep. 1981.
[9] G. Strang, Linear Algebra and Its Applications, 3rd ed. Saunders College Publishing, 1988.
[10] M. Karlin, "New binary coding results by circulants," IEEE Trans. Inform. Theory, vol. 15, pp. 81-92, 1969.
[11] D. J. C. MacKay, "Encyclopedia of sparse graph codes," http://wol.ra.phy.cam.ac.uk/mackay/codes/
[12] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
[13] T. J. Richardson and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, Feb. 2001.
[14] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, pp. 657-670, Feb. 2001.
[15] S.-Y. Chung, "Threshold calculation using a Gaussian approximation," http://truth.mit.edu/~sychung/gath.html