Message-Passing Decoding for Low-Density Parity-Check Codes
Harish Jethanandani and R. Aravind, IIT Madras

e-mail: hari_jethanandani@yahoo.com

Abstract. Low-density parity-check (LDPC) codes are discussed as practical, capacity-approaching error-correcting codes, owing to their low-complexity and readily parallelizable decoding. The LDPC decoder is represented as a normal graph, and decoding with the message-passing algorithm (MPA) for BPSK and M-ary QAM is studied. The decoders are discussed, and an approximate function is evaluated that makes the decoding more practical at the cost of slightly degraded performance.

1. Introduction

Low-density parity-check codes were introduced by Gallager [1] in 1963 and were recently rediscovered with the advent of turbo codes, which employ the soft iterative decoding paradigm. LDPC codes are linear block codes with very sparse parity-check matrices. Codes whose matrices have a small, fixed number of 1s in each row and column are called regular LDPC codes, while codes whose matrices have a varying, small number of 1s in the rows and columns are called irregular LDPC codes. Irregular LDPC codes of large blocklength can achieve better performance than turbo codes, as shown in [3].

A regular (n, j, m) binary LDPC code has blocklength n, and its parity-check matrix has fixed column weight j and fixed row weight m. The number of rows is nj/m, so the designed rate of the code is 1 - j/m. An equivalent systematic parity-check matrix H_sys is constructed from H by Gaussian elimination. The number of rows of H_sys may be smaller than the designed nj/m, because some rows of the sparse H matrix may be linearly dependent and are removed during Gaussian elimination; the rate of the code then increases. A systematic generator matrix G_sys obtained from H_sys is used to encode data bits into code bits. For decoding, the original sparse H matrix is used.

2. LDPC codes modeled as normal graphs

Any parity-check matrix can be represented by a normal graph. The parity-check matrix of the (7,4) Hamming code,

$$H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix},$$

and its associated normal graph are shown in Fig. 1.

Fig. 1 - Normal graph for H of the (7,4) Hamming code

Such graphs can also be drawn for LDPC codes. There are two main types of nodes in Fig. 1: the bit nodes (or equality nodes), denoted B_i, represent code bits, and the parity-check nodes, denoted C_j, represent parity checks (the rows of the H matrix). The edges from a bit node B_i to the check nodes show the parity checks in which that bit participates, and the edges from a check node C_j to the bit nodes show the bits checked by that parity check. There are also edges from conveniently introduced decoder-input nodes N_i to the bit nodes; these feed the intrinsic probabilities of the bits from the demodulator into the bit nodes.

The intrinsic probabilities $P_{int}(x_i = b)$, with $b \in \{0,1\}$ and $x_i$ the code bits, are the probabilities of each bit being 1 or 0 as seen at the decoder input. For a received sample $y_i = x_i + n_i$, where the $x_i$ are BPSK-modulated bits ($\pm 1$) and $n_i$ is AWGN of variance $\sigma^2$, they are given by

$$P_{int}(x_i = 1) = P(x_i = 1 \mid y_i) = \frac{P(y_i \mid x_i = 1)\,P(x_i = 1)}{P(y_i)} = \frac{e^{2y_i/\sigma^2}}{1 + e^{2y_i/\sigma^2}},$$

$$P_{int}(x_i = 0) = 1 - P_{int}(x_i = 1) = \frac{1}{1 + e^{2y_i/\sigma^2}}.$$
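As a concrete illustration, the following minimal Python sketch (the helper name and usage are ours, not the paper's) computes these intrinsic probabilities for a vector of received BPSK samples:

```python
import numpy as np

def bpsk_intrinsic(y, sigma2):
    """Intrinsic probabilities P_int(x_i = 0) and P_int(x_i = 1) for BPSK
    over AWGN, assuming the mapping bit 0 -> -1, bit 1 -> +1 implied by
    the formulas above.

    y      : array of received soft values y_i = x_i + n_i
    sigma2 : noise variance sigma^2
    """
    # P_int(x_i = 1) = exp(2 y_i / s^2) / (1 + exp(2 y_i / s^2)),
    # written as a logistic function for numerical stability.
    p1 = 1.0 / (1.0 + np.exp(-2.0 * y / sigma2))
    return 1.0 - p1, p1

# Example: three received samples with noise variance 0.5.
p0, p1 = bpsk_intrinsic(np.array([0.9, -1.1, 0.2]), 0.5)
print(p1)  # probability that each transmitted bit was a 1
```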

The intrinsic probabilities depend only on the soft input to the decoder. The bit nodes and parity-check nodes represent constraints on the possible values of the code vector x, whose individual bits are represented here as edge-variables x_i on the edges. A bit node constrains the edge-variables connected to it to be equal, so the edge-variables e_ij connected to B_i all represent bit x_i of x. A parity-check node constrains the edge-variables connected to it to satisfy even parity. The edges between N_i and B_i are called external edges, while those between B_i and C_j are called internal edges.

3. Message-passing

Consider a node as shown in Fig. 2.

Fig. 2 - A single node

Let x_0, x_1, and x_2 be the edge-variables (along the three edges), taking values in the alphabet {0,1}, and let N be an equality node, i.e., a constraint requiring the edge-variables to be equal. The intrinsic probabilities $P_{int}(x_i = b)$, $b \in \{0,1\}$, for i = 0, 1, 2, are the probabilities without the constraint imposed by node N. Given the intrinsic probabilities, one can find the posterior probabilities (the probabilities of the edge-variables being 1 or 0 conditional on the constraint imposed by N) as

$$P_{post}(x_i = b) = P(x_i = b \mid N) = \frac{P(N \mid x_i = b)\,P_{int}(x_i = b)}{P(N)}, \qquad P(N) = \sum_{b} P(N \mid x_i = b)\,P_{int}(x_i = b).$$

In Fig. 2, $P(N \mid x_0 = 0) = P_{int}(x_1 = 0)\,P_{int}(x_2 = 0)$, so

$$P_{post}(x_0 = 0) = c\,P_{int}(x_0 = 0)\,P_{int}(x_1 = 0)\,P_{int}(x_2 = 0),$$

where $c = 1/P(N) = 1/\{P_{int}(x_0{=}0)P_{int}(x_1{=}0)P_{int}(x_2{=}0) + P_{int}(x_0{=}1)P_{int}(x_1{=}1)P_{int}(x_2{=}1)\}$.

In general, let the constraint set of edge-variable combinations allowed by a node N be $S_N \subseteq S$, where S is the set of all possible combinations of the edge-variables $x_1, x_2, \dots, x_n$. Then

$$P_{post}(x_i = b) = c\,P_{int}(x_i = b) \sum_{\{x_1,\dots,x_n\} \in S_N : \, x_i = b} \;\prod_{\substack{j=1 \\ j \ne i}}^{n} P_{int}(x_j) \qquad (1)$$

where $1/c = P(N) = \sum_{\{x_1,\dots,x_n\} \in S_N} \prod_{j=1}^{n} P_{int}(x_j)$, and the summation in (1) is over all values of $x_1, \dots, x_n$ (with $x_i$ held at b) that lie in the constraint set of N. Define the extrinsic probabilities as

$$P_{ext}(x_i = b) = c \sum_{\{x_1,\dots,x_n\} \in S_N : \, x_i = b} \;\prod_{\substack{j=1 \\ j \ne i}}^{n} P_{int}(x_j) \qquad (2)$$

where c is chosen so that $P_{ext}(x_i = 0) + P_{ext}(x_i = 1) = 1$.

Hence, when N in Fig. 2 is an equality node, the extrinsic probabilities are

$$P_{ext}(x_i = b) = c \prod_{\substack{j=0 \\ j \ne i}}^{2} P_{int}(x_j = b), \qquad (3)$$

with c again such that $P_{ext}(x_i = 0) + P_{ext}(x_i = 1) = 1$. If N in Fig. 2 is instead a parity-check node, i.e., N allows $x_0, x_1, x_2$ such that $x_0 + x_1 + x_2 = 0 \bmod 2$, it is shown in [2] that

$$P_{ext}(x_i = 0) = \frac{1}{2}\Bigl(1 + \prod_{\substack{j=0 \\ j \ne i}}^{2} (1 - 2p_j)\Bigr), \qquad P_{ext}(x_i = 1) = 1 - P_{ext}(x_i = 0), \qquad (4)$$

where $p_j = P_{int}(x_j = 1)$.
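A short Python sketch of these two node computations (the function names are ours; the normalization follows (3) and (4)):

```python
import numpy as np

def equality_node_extrinsic(p1_in, i):
    """Extrinsic probability that edge i of an equality node is 1, per (3):
    the normalized product of the other edges' intrinsic probabilities."""
    others = [p for k, p in enumerate(p1_in) if k != i]
    prod1 = np.prod(others)                     # product of P_int(x_j = 1)
    prod0 = np.prod([1.0 - p for p in others])  # product of P_int(x_j = 0)
    return prod1 / (prod0 + prod1)              # c normalizes the pair

def check_node_extrinsic(p1_in, i):
    """Extrinsic probability that edge i of a parity-check node is 1, per (4):
    P_ext(x_i = 0) = (1 + prod_{j != i} (1 - 2 p_j)) / 2."""
    prod = np.prod([1.0 - 2.0 * p for k, p in enumerate(p1_in) if k != i])
    return 1.0 - 0.5 * (1.0 + prod)

# Three edges as in Fig. 2, with intrinsic probabilities of being 1:
p1 = [0.9, 0.8, 0.3]
print(equality_node_extrinsic(p1, 0))  # uses edges 1 and 2 only
print(check_node_extrinsic(p1, 0))     # P(edge 0 = 1) required for even parity
```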

It is shown in [2] that, for a graph with two or more nodes, the posterior probabilities of a bit being 1 or 0 can be calculated as a distributed computation by passing extrinsic probabilities (messages μ) along the edges of the graph. This procedure, also known as the message-passing algorithm (MPA) or the sum-product algorithm, gives the exact posterior probabilities if the graph is cycle-free. By a cycle we mean a path along the edges of the graph that starts and ends at the same node; the length of a cycle is the number of nodes the path traverses before completing the cycle.

It has been found in practice that in many situations with cycles, such as the graphs of LDPC parity-check matrices, the message-passing algorithm, though only approximate, gives very good estimates of the posterior probabilities of the code bits being 1 or 0, at much lower complexity than exact decoding, i.e., than estimating the codeword as

$$\hat{x} = \arg\max_{x_j} P(x_j \mid y),$$

where the $x_j$ range over all possible codewords and y is the received soft vector. The MPA is known to perform well if the normal graph of the parity-check matrix does not contain cycles of small length. A properly constructed sparse parity-check matrix of an LDPC code can be represented as a normal graph with long cycles and a small number of edges at any single node, so the message-passing algorithm decodes LDPC codes efficiently and accurately. For a larger blocklength with the same j and m, the decoder works better because of the increased sparseness of the H matrix.

4. Message-passing algorithm for the LDPC decoder

The decoder works iteratively by passing messages μ, which are the extrinsic probabilities defined in (2), along the edges of the graph. For example, $\mu_{X \to Y}(x = b)$ is the extrinsic probability that the bit represented by x equals b, calculated at node X and used by node Y for its own probability calculations; it is thus a message passed from X to Y. The extrinsic probabilities (messages) outgoing from X to Y act as intrinsic probabilities at Y, as shown in [2]. The message-passing algorithm is summarized as:

1. Initialize. The algorithm starts with

$$\mu_{N_i \to B_i}(x_i = b) = P_{int}(x_i = b)$$

and, since nothing is known about the check messages a priori, the uniform distribution

$$\mu_{C_j \to B_i}(e_{ij} = 0) = \mu_{C_j \to B_i}(e_{ij} = 1) = \tfrac{1}{2}.$$

2. Message-passing and update rule. Let M(i) denote the set of parity checks in which bit $x_i$ participates, and L(j) the set of bits checked by parity check j. The updated messages from bit node $B_i$ to parity-check node $C_j$ are given by (a simple extension of (2)):

$$\mu_{B_i \to C_j}(e_{ij} = 0) = c\,\mu_{N_i \to B_i}(x_i = 0) \prod_{j' \in M(i)\setminus\{j\}} \mu_{C_{j'} \to B_i}(e_{ij'} = 0),$$

$$\mu_{B_i \to C_j}(e_{ij} = 1) = c\,\mu_{N_i \to B_i}(x_i = 1) \prod_{j' \in M(i)\setminus\{j\}} \mu_{C_{j'} \to B_i}(e_{ij'} = 1),$$

where $M(i)\setminus\{j\}$ is the set M(i) with element j omitted and c is chosen so that $\mu_{B_i \to C_j}(e_{ij} = 0) + \mu_{B_i \to C_j}(e_{ij} = 1) = 1$. The messages from check node $C_j$ to the bit nodes $B_i$ are given by (an extension of (4)):

$$\mu_{C_j \to B_i}(e_{ij} = 0) = \tfrac{1}{2}\Bigl(1 + \prod_{i' \in L(j)\setminus\{i\}} \bigl(1 - 2\mu_{B_{i'} \to C_j}(e_{i'j} = 1)\bigr)\Bigr) \qquad (5)$$

$$\mu_{C_j \to B_i}(e_{ij} = 1) = \tfrac{1}{2}\Bigl(1 - \prod_{i' \in L(j)\setminus\{i\}} \bigl(1 - 2\mu_{B_{i'} \to C_j}(e_{i'j} = 1)\bigr)\Bigr) \qquad (6)$$

3. Calculate output. The posterior probabilities used to decode the individual bits of x after each iteration are

$$q_i^0 = c\,P_{int}(x_i = 0) \prod_{j \in M(i)} \mu_{C_j \to B_i}(e_{ij} = 0), \qquad q_i^1 = c\,P_{int}(x_i = 1) \prod_{j \in M(i)} \mu_{C_j \to B_i}(e_{ij} = 1).$$

Bit $x_i$ is decoded as 0 if $q_i^0 > q_i^1$, and as 1 otherwise.

Steps 1-3 comprise a single iteration of the message-passing algorithm. The iterations are repeated with updated message values until a valid codeword is found (i.e., $\hat{x} H^T = 0$) or a prefixed maximum number of iterations is reached. A decoding failure may be reported if no codeword is found within the allowed number of iterations.
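Putting steps 1-3 together, a compact Python sketch of the probability-domain decoder might look like the following (the dense-matrix representation and function name are illustrative choices without underflow protection, not the paper's implementation):

```python
import numpy as np

def mpa_decode(H, p1, max_iter=50):
    """Probability-domain message passing on the normal graph of H.

    H  : (r, n) binary parity-check matrix (numpy array)
    p1 : length-n array of intrinsic probabilities P_int(x_i = 1)
    Returns (x_hat, success)."""
    r, n = H.shape
    mu_cb = np.full((r, n), 0.5)   # step 1: mu_{Cj->Bi}(e_ij = 1), uniform
    for _ in range(max_iter):
        # Step 2a: bit-to-check messages (extension of (2)).
        mu_bc = np.zeros((r, n))
        for j in range(r):
            for i in np.nonzero(H[j])[0]:
                m1, m0 = p1[i], 1.0 - p1[i]
                for jp in np.nonzero(H[:, i])[0]:
                    if jp != j:    # product over M(i) \ {j}
                        m1 *= mu_cb[jp, i]
                        m0 *= 1.0 - mu_cb[jp, i]
                mu_bc[j, i] = m1 / (m0 + m1)
        # Step 2b: check-to-bit messages, per (5) and (6).
        for j in range(r):
            idx = np.nonzero(H[j])[0]
            for i in idx:
                prod = np.prod([1.0 - 2.0 * mu_bc[j, ip] for ip in idx if ip != i])
                mu_cb[j, i] = 0.5 * (1.0 - prod)   # eq. (6)
        # Step 3: posterior probabilities and hard decisions.
        q1, q0 = p1.copy(), 1.0 - p1
        for i in range(n):
            for j in np.nonzero(H[:, i])[0]:
                q1[i] *= mu_cb[j, i]
                q0[i] *= 1.0 - mu_cb[j, i]
        x_hat = (q1 > q0).astype(int)
        if not np.any(H @ x_hat % 2):   # stop once x_hat H^T = 0
            return x_hat, True
    return x_hat, False                 # decoding failure
```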
5. Decoding with Log-Likelihood Ratios

The log-likelihood ratio (LLR) of a probability is

$$\mathrm{LLR}(P(x)) = \ln\frac{P(x=0)}{P(x=1)},$$

where P may be the intrinsic, extrinsic, or posterior probability of a binary variable x. Also, let

$$p_i = \mu_{N_i \to B_i}, \qquad q_{ij} = \mu_{B_i \to C_j}, \qquad r_{ij} = \mu_{C_j \to B_i}.$$

The message-passing algorithm is now modified as follows.

Initialize. Start with $\mathrm{LLR}(r_{ij}) = 0$ and

$$\mathrm{LLR}(p_i) = \ln\frac{1}{e^{2y_i/\sigma^2}} = -\frac{2y_i}{\sigma^2}.$$

Message-passing and update. At each iteration, calculate

$$\mathrm{LLR}(q_{ij}) = \sum_{j' \in M(i)\setminus\{j\}} \mathrm{LLR}(r_{ij'}) + \mathrm{LLR}(p_i)$$

$$\mathrm{LLR}(r_{ij}) = 2\tanh^{-1}\Bigl(\prod_{i' \in L(j)\setminus\{i\}} \tanh\bigl(\tfrac{1}{2}\mathrm{LLR}(q_{i'j})\bigr)\Bigr) \qquad (7)$$

where we note that, for any probability of a binary variable x and for $y \in (-1, 1)$,

$$1 - 2P(x=1) = \tanh\bigl(\tfrac{1}{2}\mathrm{LLR}(P(x))\bigr) \qquad (8)$$

$$2\tanh^{-1}(y) = \ln\frac{1+y}{1-y} \qquad (9)$$

Applying (8) and (9) to (5) and (6) gives (7).

Calculate output. The posterior LLR is obtained by summing the intrinsic LLR and the messages from all the checks that contain the i-th bit:

$$\mathrm{LLR}(q_i) = \sum_{j \in M(i)} \mathrm{LLR}(r_{ij}) + \mathrm{LLR}(p_i).$$

The bits are then decoded as: if $\mathrm{LLR}(q_i) > 0$, decode $x_i = 0$; else $x_i = 1$.

It can be noted that any product of real numbers can be written as

$$\prod_i a_i = \Bigl(\prod_i \mathrm{sgn}(a_i)\Bigr)\,\exp\Bigl(\sum_i \ln|a_i|\Bigr).$$

Let

$$\Psi(x) = -\ln\tanh(x/2) = \ln\frac{1 + e^{-x}}{1 - e^{-x}},$$

which is defined for x > 0 and is plotted in Fig. 3. Noting that Ψ(x) is its own inverse, we can write

$$\mathrm{LLR}(r_{ij}) = s_{ij}\,\Psi\Bigl(\sum_{i' \in L(j)\setminus\{i\}} \Psi\bigl(|\mathrm{LLR}(q_{i'j})|\bigr)\Bigr), \qquad \text{where } s_{ij} = \prod_{i' \in L(j)\setminus\{i\}} \mathrm{sgn}\bigl(\mathrm{LLR}(q_{i'j})\bigr).$$

Fig. 3 - Ψ(x) (= Si(x)) vs x
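The equivalence of the two forms of the check-node update can be seen in a few lines of Python (the function names are ours; the min variant anticipates the approximation discussed in Section 6):

```python
import numpy as np

def psi(x):
    """Psi(x) = -ln(tanh(x/2)), defined for x > 0; it is its own inverse."""
    return -np.log(np.tanh(x / 2.0))

def check_update_tanh(llr_q):
    """LLR(r_ij) via the tanh rule (7); llr_q holds LLR(q_{i'j})
    for all i' in L(j) other than i."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(llr_q) / 2.0)))

def check_update_psi(llr_q):
    """The same update in sign-magnitude form using Psi."""
    llr_q = np.asarray(llr_q, dtype=float)
    sign = np.prod(np.sign(llr_q))
    return sign * psi(np.sum(psi(np.abs(llr_q))))

def check_update_min(llr_q):
    """Min approximation (Section 6): keep only the dominant term."""
    llr_q = np.asarray(llr_q, dtype=float)
    return np.prod(np.sign(llr_q)) * np.min(np.abs(llr_q))

llrs = [1.8, -0.7, 2.5]          # incoming LLR(q_{i'j}) values
print(check_update_tanh(llrs))   # exact: about -0.415
print(check_update_psi(llrs))    # identical up to rounding
print(check_update_min(llrs))    # -0.7: same sign, overestimated magnitude
```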

Decoding for multilevel modulation. The calculation of intrinsic probabilities extends to multilevel modulation. For example, for the 16-QAM constellation with Gray coding (Fig. 4), where each symbol represents 4 bits [b1 b2 b3 b4], the intrinsic LLR of bit 1 is

$$\mathrm{LLR}(P_{int}(\mathrm{bit1})) = \ln\frac{P(\mathrm{bit1} = 0 \mid x, y)}{P(\mathrm{bit1} = 1 \mid x, y)} = \ln\frac{P(\mathrm{symbol} \in \{0011, 0010, \dots, 0101\} \mid x, y)}{P(\mathrm{symbol} \in \{1001, 1000, \dots, 1111\} \mid x, y)}.$$

Here the numerator is

$$e^{-((y-3)^2 + (x+3)^2)/2\sigma^2} + e^{-((y-3)^2 + (x+1)^2)/2\sigma^2} + \dots + e^{-((y-1)^2 + (x-3)^2)/2\sigma^2},$$

the denominator is

$$e^{-((y+1)^2 + (x+3)^2)/2\sigma^2} + e^{-((y+1)^2 + (x+1)^2)/2\sigma^2} + \dots + e^{-((y+3)^2 + (x-3)^2)/2\sigma^2},$$

x and y are respectively the received in-phase and quadrature components of the signal from the demodulator, and σ² is the noise variance. The intrinsic probabilities for bit 2, bit 3, and bit 4 are calculated similarly. For each received 16-QAM symbol, intrinsic probabilities for four bits are calculated; the rest of the decoding proceeds as before with these intrinsic probabilities as inputs.

Fig. 4 - 16-QAM constellation with Gray coding

6. Practicality of decoding

The concept of using sparse matrices and decoding with the MPA leads to practical decoding in the following ways:

a. When decoding with LLRs, the multiplications required to compute $q_{ij}$ become additions, which suits hardware implementation. Also, the calculation of $r_{ij}$ requires a single mapping Ψ(x) and a few bit operations.

b. It can be observed [2] in Fig. 3 that the dominant term in the calculation of $\mathrm{LLR}(r_{ij})$ is the minimum of the terms $|\mathrm{LLR}(q_{i'j})|$ for $i' \in L(j)\setminus\{i\}$. Thus $\mathrm{LLR}(r_{ij})$ can be approximated as

$$\mathrm{LLR}(r_{ij}) \approx s_{ij} \min_{i' \in L(j)\setminus\{i\}} |\mathrm{LLR}(q_{i'j})|,$$

which requires only a few comparisons and bit operations. Implementing the decoder with Ψ(x) requires a large number of calculations to evaluate ln tanh(x/2), whereas the min approximation requires only comparisons.

c. The decoding time increases only linearly with the blocklength.

d. The biggest advantage of the LDPC decoder is that parallel processing is possible, i.e., the calculations for all nodes can be done in parallel.

7. Simulation Results

Fig. 5 gives the results of our BPSK simulations. As can be seen in Fig. 1, cycles of length 4 are formed when any two columns of H have 1s in more than one common row position. Cycles of length 4 were avoided completely in our construction of the H matrix by taking care that no pair of columns of H has 1s in more than one common row position. The results are for a rate-1/2 irregular LDPC code of blocklength n = 1000, and the simulations were averaged over 5000 runs for each value of Eb/No. The H matrix was constructed with variable row weight (mean weight 6) and a fixed column weight of j = 3. The maximum number of iterations allowed was 50; when the decoder failed to converge to a codeword after 50 iterations, the x vector, though not a codeword, was taken as the decoded output.

Uncoded BPSK has a BER of 10^-5 at 9.6 dB. The coding gain at a BER of 10^-5 for BPSK is about 6.9 dB when decoding with the Si(x) (i.e., Ψ(x)) function and about 6.6 dB with the min approximation. The probability of undetected errors is very low because the blocklength is large; there were no undetected errors in our simulations. This is one more advantage of large blocklengths, apart from improved performance.

Fig. 5 - Eb/No vs BER for BPSK

Simulation results for QPSK and 16-QAM for the same rate-1/2 irregular LDPC code (decoded with the Si(x) function) are plotted in Fig. 6. The coding gain for QPSK at a BER of 3 × 10^-5 is about 6.6 dB; for 16-QAM, it is about 7 dB at a BER of 3.5 × 10^-5.

Fig. 6 - Eb/No vs BER for QPSK and 16-QAM

References

1. R. G. Gallager, Low-Density Parity-Check Codes, MIT Press, 1963.
2. J. L. Fan, Constrained Coding and Soft Iterative Decoding, Kluwer Academic Publishers, 2001.
3. T. Richardson et al., "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Trans. Inform. Theory, Feb. 2001.