LDPC Codes
Alexios Balatsoukas-Stimming and Athanasios P. Liavas
Technical University of Crete, Dept. of Electronic and Computer Engineering, Telecommunications Laboratory
December 16, 2011, Intracom Telecom, Peania
Outline
1. Introduction to LDPC Codes
2. Iterative Decoding of LDPC Codes: Belief Propagation, Density Evolution, Gaussian Approximation, EXIT Charts
3. Efficient Encoding of LDPC and QC-LDPC Codes
4. Construction of QC-LDPC Codes: Deterministic, Random
5. RA and Structured RA Codes
LDPC Codes
When? Discovered in 1962 by Gallager. Rediscovered by MacKay and others in the late 1990s.
Why? Capacity-approaching on many channels. Linear decoding complexity using Belief Propagation. Linear encoding complexity, under certain conditions.
LDPC Codes
We can define a linear block code C as the nullspace of its m × n parity-check matrix H:
C = { c ∈ {0, 1}^n : Hc = 0 }.
We can also define a linear block code C through its k × n generator matrix G as follows:
C = { c = G^T u : u ∈ {0, 1}^k }.
LDPC codes are linear block codes with a sparse parity-check matrix.
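As a quick illustration (not from the slides), a few lines of Python can check membership in the nullspace definition above, using the (7, 4) Hamming parity-check matrix that appears on the next slides:

```python
# Illustrative sketch: c ∈ C  ⇔  H c = 0 (mod 2)
import numpy as np

H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

def is_codeword(H, c):
    """True iff the binary vector c lies in the nullspace of H over GF(2)."""
    return not np.any(H.dot(c) % 2)

print(is_codeword(H, np.zeros(7, dtype=int)))              # True: all-zero word
print(is_codeword(H, np.array([1, 1, 0, 1, 0, 0, 0])))     # True: a nonzero codeword
print(is_codeword(H, np.array([1, 0, 0, 0, 0, 0, 0])))     # False: violates two checks
```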
Factor Graph Representation
An LDPC code can be associated with a bipartite graph, also called a factor graph or Tanner graph. Variable nodes correspond to codeword bits and check nodes correspond to the parity-check equations enforced by the code. Variable node j is connected with check node i if and only if H_{ij} = 1.
Factor Graph Representation and Degree Distributions
Example ((7, 4) Hamming Code):
H =
[1 0 0 1 0 1 1]
[0 1 0 1 1 1 0]
[0 0 1 0 1 1 1]
[Figure: Tanner graph with seven variable nodes connected to three check nodes.]
Degree Distributions
The variable node degree distribution from an edge perspective is denoted as λ(x) = Σ_i λ_i x^{i-1}.
The check node degree distribution from an edge perspective is denoted as ρ(x) = Σ_i ρ_i x^{i-1}.
The λ_i's and ρ_i's determine the fraction of edges that are connected to variable and check nodes of degree i, respectively. They also determine the code's design rate
R = 1 - (Σ_i ρ_i/i) / (Σ_i λ_i/i),
as well as its average performance, as we will see.
Factor Graph Representation and Degree Distributions
Example ((7, 4) Hamming Code):
H =
[1 0 0 1 0 1 1]
[0 1 0 1 1 1 0]
[0 0 1 0 1 1 1]
λ(x) = 3/12 + (6/12)x + (3/12)x²,  ρ(x) = x³.
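A hedged sketch of the design-rate formula applied to this example (the dictionary names are ours); it recovers the Hamming code rate 4/7:

```python
# R = 1 - (Σ_i ρ_i/i) / (Σ_i λ_i/i) from edge-perspective degree distributions
lam = {1: 3/12, 2: 6/12, 3: 3/12}   # λ(x) = 3/12 + 6/12 x + 3/12 x^2
rho = {4: 1.0}                       # ρ(x) = x^3 (all check nodes have degree 4)

R = 1 - sum(r / i for i, r in rho.items()) / sum(l / i for i, l in lam.items())
print(R)   # 0.5714... = 4/7, the rate of the (7, 4) Hamming code
```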
Sum-Product Marginalization
Consider a function which can be factorized as follows:
f(x_1, ..., x_6) = f_1(x_1, x_2, x_3) f_2(x_1, x_4, x_6) f_3(x_4) f_4(x_4, x_5).
Brute-force calculation of the marginal f(x_1) = Σ_{∼x_1} f(x_1, ..., x_6), where Σ_{∼x_1} denotes summation over all variables except x_1, requires O(|X|^6) operations.
Using the factorization and the distributive law, we can write
f(x_1) = [ Σ_{∼x_1} f_1(x_1, x_2, x_3) ] · [ Σ_{∼x_1} f_2(x_1, x_4, x_6) f_3(x_4) f_4(x_4, x_5) ],
where the second bracket (the kernel) can be further expanded, reducing the complexity to O(|X|^4).
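The factorization can be checked numerically; the following hedged sketch compares the brute-force marginal with the factored computation for a binary alphabet and random non-negative factors (all array names are ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
f1 = rng.random((2, 2, 2))   # f1(x1, x2, x3)
f2 = rng.random((2, 2, 2))   # f2(x1, x4, x6)
f3 = rng.random(2)           # f3(x4)
f4 = rng.random((2, 2))      # f4(x4, x5)

# Brute force: sum over all variables except x1.
brute = np.zeros(2)
for x1, x2, x3, x4, x5, x6 in itertools.product((0, 1), repeat=6):
    brute[x1] += f1[x1, x2, x3] * f2[x1, x4, x6] * f3[x4] * f4[x4, x5]

# Factored: push the sums inside the products (distributive law).
term1 = f1.sum(axis=(1, 2))                    # Σ_{x2,x3} f1(x1, x2, x3)
kernel = f3 * f4.sum(axis=1)                   # Σ_{x5} f3(x4) f4(x4, x5)
term2 = np.einsum('abc,b->a', f2, kernel)      # Σ_{x4,x6} f2(x1, x4, x6) · kernel(x4)
factored = term1 * term2

print(np.allclose(brute, factored))            # True
```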
Sum-Product Marginalization
By further expanding f(x_1) we get the following factor graph:
Sum-Product Marginalization Rules
(a) Variable node rules. (b) Function node rules.
Sum-Product Marginalization - Step 1
Sum-Product Marginalization - Step 2
Sum-Product Marginalization - Step 3
Sum-Product Marginalization - Step 4
Sum-Product Marginalization - Final Step
Sum-Product Decoding
Transmission of a codeword x = [x_1 ... x_n] ∈ C over an AWGN channel:
y_i = x_i + w_i,  w_i ~ N(0, σ_w²),  i = 1, 2, ..., n.
The MAP decoding rule for x_i reads
x̂_i = argmax_{x_i ∈ {±1}} p(x_i | y)
    = argmax_{x_i ∈ {±1}} Σ_{∼x_i} p(x | y)    (Bayes rule, conditional independence)
    = argmax_{x_i ∈ {±1}} Σ_{∼x_i} Π_{j=1}^{n} p(y_j | x_j) · 1[x ∈ C],
which has the sum-product form.
Belief Propagation
Log-domain decoding simplifies the sum-product rules and guarantees numerical stability of the calculations. LLRs are used as messages, i.e., each variable node aims to calculate
L_i = log( p(x_i = +1 | y) / p(x_i = -1 | y) ).
If L_i > 0, then x̂_i = +1; if L_i < 0, then x̂_i = -1.
Belief Propagation
(a) Variable node rules. (b) Check node rules.
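The update rules themselves are shown in the figures; as a point of reference, here is a minimal sketch of the standard log-domain rules they correspond to (function names and the dense-array style are our choices, not the slides'):

```python
import numpy as np

def variable_node_update(L_ch, incoming):
    """Outgoing LLR per edge: channel LLR plus all *other* incoming check messages."""
    total = L_ch + np.sum(incoming)
    return total - incoming                     # leave-one-out sum, one output per edge

def check_node_update(incoming):
    """Outgoing LLR per edge: 2 atanh of the leave-one-out product of tanh(L/2)."""
    t = np.tanh(incoming / 2.0)
    out = np.empty_like(t)
    for k in range(len(t)):
        prod = np.prod(np.delete(t, k))         # product over the other edges
        out[k] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out
```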
Belief Propagation Decoding - Initialization
p(x_i) and p(y_i) are known. So, given p(y_i | x_i), we can calculate p(x_i | y_i), i.e., the a posteriori probability of x_i given the channel observation y_i.
Belief Propagation
If the graph is cycle-free, after running BP we get the marginal p(x_1 | y). The same holds for all p(x_i | y), i = 1, 2, ...
Belief Propagation Decoding
If the Tanner graph is cycle-free, we can calculate all p(x_i | y), i = 1, 2, ..., simultaneously.
Bad news about cycle-free codes: they necessarily contain many low-weight codewords and, hence, have a high probability of error. However, BP still performs very well in practice, even when cycles are present.
On graphs with cycles, decoding halts when Hx̂ = 0 or a maximum number of iterations is reached.
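Putting the two rules together, a compact flooding-schedule BP decoder might look as follows; this is a hedged sketch (dense matrices, BPSK mapping 0 → +1, 1 → −1), not an optimized implementation:

```python
import numpy as np

def bp_decode(H, llr_ch, max_iter=50):
    m, n = H.shape
    M = np.zeros((m, n))                     # check -> variable messages
    mask = H.astype(bool)
    for _ in range(max_iter):
        # variable-to-check: channel LLR plus all other incoming check messages
        total = llr_ch + M.sum(axis=0)
        V = np.where(mask, total - M, 0.0)
        # check-to-variable: 2 atanh of the leave-one-out product of tanh(V/2)
        T = np.where(mask, np.tanh(V / 2.0), 1.0)
        for i in range(m):
            idx = np.flatnonzero(mask[i])
            for j in idx:
                prod = np.prod(T[i, idx[idx != j]])
                M[i, j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # tentative hard decision and syndrome check
        L = llr_ch + M.sum(axis=0)
        x_hat = (L < 0).astype(int)          # LLR < 0  ->  bit 1
        if not np.any(H.dot(x_hat) % 2):
            break
    return x_hat, L
```

With the (7, 4) Hamming H from the earlier example and llr_ch = 2y/σ², this loop reproduces the stopping rule Hx̂ = 0 described above.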
Other Iterative Decoding Algorithms
Wide variety of performance-complexity trade-offs. In order of increasing complexity and performance:
1. Bit-flipping
2. Majority Logic Decoding
3. Min-Sum Decoding
4. Belief Propagation Decoding
Density Evolution
In the limit of infinite blocklength, the performance of the (λ, ρ)-ensemble of LDPC codes under Belief Propagation decoding can be predicted by Density Evolution.
Due to the concentration theorem, the performance of individual codes in the ensemble is close to the ensemble average performance (exponential convergence w.r.t. the codeword length).
Density Evolution tracks the evolution of the message probability density functions throughout the decoding procedure, under the assumption that the all-zero codeword is transmitted.
For the BI-AWGNC, full probability density functions must be tracked since the received LLR messages are continuous random variables (computationally very demanding).
Density Evolution - BI-AWGN Channel
The initial LLR, based only on the channel output, is
LLR(y) = 2y/σ² ~ N(2/σ², 4/σ²).
[Figure: Initial LLR distribution; horizontal axis: LLR magnitude.]
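A quick empirical sanity check of these statistics (assuming, as in density evolution, that the all-(+1) codeword is transmitted; the noise value matches the figures that follow):

```python
import numpy as np

sigma2 = 0.7692
rng = np.random.default_rng(1)
y = 1.0 + np.sqrt(sigma2) * rng.standard_normal(100_000)   # y = x + w, x = +1
llr = 2.0 * y / sigma2

print(llr.mean(), 2 / sigma2)    # both ≈ 2.6
print(llr.var(),  4 / sigma2)    # both ≈ 5.2
```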
Density Evolution - BI-AWGN Channel
[Figures: evolution of the LLR densities over successive decoding iterations for a (3, 6)-regular code, σ² = 0.7692; horizontal axis: LLR magnitude.]
Density Evolution - Gaussian Approximation
Gaussian Approximation (GA): we assume the messages are symmetric Gaussian random variables, i.e., with variance σ² and mean σ²/2 (Central Limit Theorem). We must track only the mean.
Variable node of degree i at iteration l:
m_{v,i}^{(l)} = m_ch + Σ_{k=1}^{i-1} m_{u,k}^{(l-1)} = m_ch + (i-1) m_u^{(l-1)}.
Average over λ(x): m_v^{(l)} = Σ_i λ_i m_{v,i}^{(l)}.
Density Evolution - Gaussian Approximation
Check node of degree i at iteration l:
m_{u,i}^{(l)} = φ^{-1}( 1 - [ 1 - Σ_j λ_j φ(m_{v,j}^{(l)}) ]^{i-1} ).
Average over ρ(x): m_u^{(l)} = Σ_i ρ_i m_{u,i}^{(l)}.
Gaussian Approximation - Convergence
Under the assumption that the message densities are symmetric Gaussian and that the all-zero codeword was transmitted, the average probability of bit error is
P_e^{(l)} = Q( √( m_v^{(l)} / 2 ) ),  m_v^{(l)} ≥ 0.
If m_v^{(l)} > m_v^{(l-1)} for l = 1, 2, ..., then the BP decoder converges to a vanishingly small probability of error.
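A minimal numerical sketch of the GA recursion, using the φ(x) from the appendix (evaluated by Gauss-Hermite quadrature in the cancellation-free form 1 − tanh(u/2) = 2/(1+e^u)); the (3, 6)-regular ensemble and the noise value σ² = 0.5, assumed to be below threshold, are our choices:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit
from scipy.stats import norm

_t, _w = np.polynomial.hermite.hermgauss(80)       # Gauss-Hermite nodes and weights

def phi(x):
    """phi(x) = E[1 - tanh(U/2)], U ~ N(x, 2x); 1 - tanh(u/2) = 2/(1 + e^u)."""
    if x < 1e-12:
        return 1.0
    u = x + 2.0 * np.sqrt(x) * _t                  # change of variables for N(x, 2x)
    return float(np.sum(_w * 2.0 * expit(-u)) / np.sqrt(np.pi))

def phi_inv(y):
    """Numerical inverse of phi on a bracketed interval (illustrative, not optimized)."""
    return brentq(lambda x: phi(x) - y, 1e-9, 400.0)

def ga_density_evolution(lam, rho, sigma2, max_iter=100, tol=1e-6):
    """Track the mean LLRs under the Gaussian approximation; return (iterations, P_e)."""
    m_ch = 2.0 / sigma2                            # channel LLR ~ N(2/sigma^2, 4/sigma^2)
    m_u = 0.0
    for l in range(1, max_iter + 1):
        m_v = {j: m_ch + (j - 1) * m_u for j in lam}          # variable-node means
        s = np.clip(sum(lam[j] * phi(m_v[j]) for j in lam), 1e-12, 1 - 1e-12)
        m_u = sum(rho[i] * phi_inv(1.0 - (1.0 - s) ** (i - 1)) for i in rho)
        m_v_avg = sum(lam[j] * m_v[j] for j in lam)
        p_e = norm.sf(np.sqrt(m_v_avg / 2.0))                 # Q(sqrt(m_v / 2))
        if p_e < tol:
            return l, p_e                                     # converged
    return max_iter, p_e

# (3,6)-regular ensemble at a noise level we assume to be below threshold
print(ga_density_evolution({3: 1.0}, {6: 1.0}, sigma2=0.5))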
EXIT Charts
Message: extrinsic LLR of code bit i: L_i = log( p(x_i = +1 | y_i) / p(x_i = -1 | y_i) ).
Message density: density of the extrinsic LLR, assuming that the all-zero codeword was transmitted.
BP at a variable node acts as a decoder of a repetition code. BP at a check node acts as a decoder of a parity-check code.
EXIT Charts - Repetition Code
Consider an [n, 1, n] repetition code. Transmit X = ±1 over the AWGNC: y_i = x + w_i, w_i ~ N(0, σ_w²), i = 1, ..., n.
p(x | y) = p(x, y)/p(y) = p(y | x) p(x)/p(y) ∝ p(y | x) = Π_{i=1}^{n} p(y_i | x)    (variable node rule).
Before decoding: p(x | y_i) = a and I(X; Y_i) = 1 - H(a).
After decoding: I(X; Y) = 1 - H(a^{⊛n}), where a^{⊛n} denotes the n-fold convolution of a.
EXIT Charts - Parity-Check Code
Consider an [n, n-1, 2] parity-check code. Transmit a codeword x = [x_1 x_2 ... x_n], x_i = ±1, over the AWGNC: y_i = x_i + w_i, w_i ~ N(0, σ_w²), i = 1, ..., n.
p(x_i | y) = p(x_i, y)/p(y) ∝ Σ_{∼x_i} p(x) p(y | x) = Σ_{∼x_i} 1[x_1 x_2 ··· x_n = 1] Π_{i=1}^{n} p(y_i | x_i)    (check node rule).
Before decoding: p(x_i | y_i) = b and I(X_i; Y_i) = 1 - H(b).
After decoding: I(X_i; Y) = 1 - H(b^{⊛n}), where b^{⊛n} denotes the n-fold convolution in the G-domain.
EXIT Charts
Now, if we consider the repetition and parity-check codes as being part of a large system that performs message-passing decoding, we have:
For the variable nodes: if I(X; L_i(X)) = 1 - H(a), then I(X; L(X)) = 1 - H(a^{⊛n}).
For the check nodes: if I(X_i; L_i(X_i)) = 1 - H(b), then I(X; L(X)) = 1 - H(b^{⊛n}).
EXIT Charts
We define
J(σ) = 1 - ∫_{-∞}^{+∞} (1/√(2πσ²)) e^{-(x - σ²/2)² / (2σ²)} log₂(1 + e^{-x}) dx.
J(σ) is the mutual information between a codeword bit and the corresponding message, assuming that the message is a symmetric Gaussian random variable with standard deviation σ. J^{-1}(I) is the standard deviation of the symmetric Gaussian distributed message which has mutual information I with a codeword bit.
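A hedged numerical sketch of J(σ) and its inverse (Gauss-Hermite quadrature plus bisection; the function names are ours):

```python
import numpy as np
from scipy.optimize import brentq

_t, _w = np.polynomial.hermite.hermgauss(80)    # Gauss-Hermite nodes and weights

def J(sigma):
    """MI between a bit and a symmetric Gaussian LLR message X ~ N(sigma^2/2, sigma^2)."""
    x = sigma ** 2 / 2.0 + np.sqrt(2.0) * sigma * _t
    return 1.0 - float(np.sum(_w * np.logaddexp(0.0, -x)) / (np.sqrt(np.pi) * np.log(2.0)))

def J_inv(I):
    """Standard deviation sigma with J(sigma) = I (bisection on a bracket; illustrative)."""
    return brentq(lambda s: J(s) - I, 1e-6, 50.0)

print(J(J_inv(0.5)))   # ≈ 0.5
```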
EXIT Charts - Variable Nodes
I_EV (resp. I_AV) is the average mutual information between the outgoing (resp. incoming) messages of variable nodes and the codeword bits.
The EXIT chart I_EV describing a variable node of degree i:
I_EV^i(I_AV, σ_ch²) = J( √( (i-1) [J^{-1}(I_AV)]² + σ_ch² ) ).
Average over λ(x): I_EV(I_AV, σ_ch²) = Σ_i λ_i I_EV^i(I_AV, σ_ch²), with σ_ch² = 4/σ_w², where σ_w² is the noise variance.
EXIT Charts - Check Nodes
I_EC (resp. I_AC) is the average mutual information between the outgoing (resp. incoming) messages of check nodes and the codeword bits.
The EXIT chart I_EC describing a check node of degree i:
I_EC^i(I_AC) ≈ 1 - J( √(i-1) · J^{-1}(1 - I_AC) ).
Average over ρ(x): I_EC(I_AC) ≈ Σ_i ρ_i I_EC^i(I_AC).
EXIT Charts - Code Optimization
For every possible incoming average mutual information at the variable nodes, we want the outgoing average mutual information to be larger:
I_EC(I_EV(I_AV)) > I_AV.
If the EXIT chart of the variable nodes lies above the inverse of the EXIT chart of the check nodes, i.e.,
I_EV(I_AV) > I_EC^{-1}(I_AV),
then the decoding converges to a vanishingly small probability of error.
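Continuing the previous sketch (reusing J and J_inv), the condition can be checked on a grid; the (3, 6)-regular ensemble and σ_w² = 0.76 are taken from the example plots below, everything else is our choice:

```python
import numpy as np

def I_EV(I_AV, lam, sigma2_ch):
    """Variable-node EXIT function averaged over the edge distribution lam."""
    return sum(l * J(np.sqrt((i - 1) * J_inv(I_AV) ** 2 + sigma2_ch))
               for i, l in lam.items())

def I_EC(I_AC, rho):
    """Check-node EXIT function averaged over rho (with a clip for numerical safety)."""
    y = min(max(1.0 - I_AC, 1e-8), 1.0 - 1e-6)
    return sum(r * (1.0 - J(np.sqrt(i - 1) * J_inv(y))) for i, r in rho.items())

def tunnel_open(lam, rho, sigma2_w, grid=100):
    """Check I_EC(I_EV(I_AV)) > I_AV on a grid of I_AV values."""
    sigma2_ch = 4.0 / sigma2_w
    return all(I_EC(I_EV(a, lam, sigma2_ch), rho) > a
               for a in np.linspace(1e-3, 1 - 1e-3, grid))

print(tunnel_open({3: 1.0}, {6: 1.0}, sigma2_w=0.76))
```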
EXIT Charts - Code Rate Optimization
For given λ(x) and ρ(x), the code design rate is R = 1 - (Σ_i ρ_i/i) / (Σ_i λ_i/i). A common approach is to maximize R over λ(x) for fixed ρ(x). This gives rise to a continuous linear program:
maximize    Σ_{i=2}^{v_max} λ_i / i
subject to  I_EC^{-1}(I_AV) < I_EV(I_AV) = Σ_{i=2}^{v_max} λ_i I_EV^i(I_AV),  for all I_AV ∈ (0, 1),
            Σ_{i=2}^{v_max} λ_i = 1,  λ_i ≥ 0,  i = 2, 3, ..., v_max.
We solve it by discretizing I_AV ∈ (0, 1).
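A hedged sketch of this linear program with scipy.optimize.linprog, reusing J, J_inv and I_EC from the sketches above. The constraint is imposed at the points I_AV = I_EC(b_k), where it reads I_EV(I_EC(b_k)) ≥ b_k + δ and is linear in the λ_i; the check distribution ρ(x) = x⁵, v_max and the noise level are our illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

def optimize_lambda(rho, sigma2_w, v_max=10, grid=50, delta=1e-4):
    """Maximize Σ_i λ_i/i subject to the discretized EXIT constraint."""
    sigma2_ch = 4.0 / sigma2_w
    degrees = list(range(2, v_max + 1))
    b = np.linspace(0.01, 0.99, grid)
    a = np.clip([I_EC(x, rho) for x in b], 1e-6, 1 - 1e-6)   # constraint points a = I_EC(b)
    # I_EV(a_k) >= b_k + delta  <=>  -Σ_i λ_i I_EV^i(a_k) <= -(b_k + delta)
    A_ub = [[-J(np.sqrt((i - 1) * J_inv(x) ** 2 + sigma2_ch)) for i in degrees] for x in a]
    res = linprog(c=[-1.0 / i for i in degrees],              # maximize Σ_i λ_i / i
                  A_ub=A_ub, b_ub=-(b + delta),
                  A_eq=[[1.0] * len(degrees)], b_eq=[1.0],
                  bounds=(0, None))
    if not res.success:
        return None, None                                     # infeasible at this noise level
    lam = dict(zip(degrees, res.x))
    R = 1 - sum(r / i for i, r in rho.items()) / sum(l / i for i, l in lam.items())
    return lam, R

print(optimize_lambda({6: 1.0}, sigma2_w=0.76))
```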
EXIT Charts - Code Optimization
[Figures: EXIT charts (I_EV vs. I_AV and the inverse check-node curve) at noise variance σ² = 0.76. (a) (3, 6)-regular ensemble, r = 0.5. (b) Optimized ensemble, r = 0.58.]
Efficient Encoding of LDPC Codes
In the general case, encoding has complexity O(n²). If the parity-check matrix has an approximate upper triangular form, it can be shown that encoding can be achieved with complexity O(n + g²), where g is the gap of the approximate triangulation.
QC-LDPC Codes - Motivation
In general, storage of the parity-check matrix requires O(n²) memory. In Quasi-Cyclic (QC) LDPC codes, the parity-check matrix consists of q × q sparse circulant submatrices (usually, permutation matrices). Each circulant is fully characterized by its first row or column. The required memory becomes O(n²/q), i.e., a reduction by a factor of q.
Efficient Encoding of QC-LDPC Codes
From the cq × tq parity-check matrix, the generator matrix G can be obtained in systematic form G = [ I  G_P ]. Encoding is then done as c = aG = [ a  p ], where G_P is a (t-c) × c array of q × q circulants:
G_P =
[G_{1,1}    G_{1,2}    ...  G_{1,c}  ]
[G_{2,1}    G_{2,2}    ...  G_{2,c}  ]
[  ...        ...      ...    ...    ]
[G_{t-c,1}  G_{t-c,2}  ...  G_{t-c,c}]
The G_{i,j}'s are circulants, e.g.
G_{i,j} =
[0 1 0 ... 0]
[0 0 1 ... 0]
[   ...     ]
[0 0 0 ... 1]
[1 0 0 ... 0]
Efficient Encoding of QC-LDPC Codes
The j-th parity q-tuple can be obtained as
p_j = a_1 G_{1,j} + a_2 G_{2,j} + ... + a_{t-c} G_{t-c,j},
where a_i denotes the i-th q-tuple of the input. The G_{i,j}'s are circulants which can be fully described by their first row, g_{i,j}. Thus, we can calculate each term of p_j as
a_i G_{i,j} = a_{(i-1)q+1} g_{i,j}^{(0)} + a_{(i-1)q+2} g_{i,j}^{(1)} + ... + a_{iq} g_{i,j}^{(q-1)},
where g_{i,j}^{(l)} denotes the l-th right cyclic shift of g_{i,j}.
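A hedged software sketch of this shift-and-accumulate computation of one parity q-tuple (dense numpy arithmetic stands in for the hardware shift registers; all names are ours):

```python
import numpy as np

def cyclic_shift(row, l):
    """l-th right cyclic shift of a length-q binary row vector."""
    return np.roll(row, l)

def parity_tuple(a_blocks, g_first_rows):
    """p_j = Σ_i a_i G_{i,j} over GF(2); a_blocks[i] is the i-th input q-tuple and
    g_first_rows[i] is the first row of the circulant G_{i,j}."""
    q = len(g_first_rows[0])
    p = np.zeros(q, dtype=int)
    for a_i, g in zip(a_blocks, g_first_rows):
        for l, bit in enumerate(a_i):          # bit a_{(i-1)q + l + 1}
            if bit:
                p ^= cyclic_shift(g, l)        # accumulate the l-th shift of g_{i,j}
    return p
```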
Efficient Encoding of QC-LDPC Codes
Efficient Encoding of QC-LDPC Codes
Some other schemes have been proposed, with a wide variety of throughput-size trade-offs.

Table: Various encoder architectures and their complexity.
Scheme | Encoding speed | FFs     | 2-input XOR  | 2-input AND
1      | (t-c)q         | 2cq     | cq           | cq
2      | cq             | (t-c)q  | (t-c)q - 1   | (t-c)q
3      | q              | tq      | O(c²q)       | 0

q: size of the circulants; t: number of horizontally stacked circulants; c: number of vertically stacked circulants.
Deterministic QC-LDPC Codes (Array Codes)
For a prime q and a positive integer m ≤ q, we create the matrix
H =
[I  I        ...  I              ...  I            ]
[I  P        ...  P^{m-1}        ...  P^{q-1}      ]
[I  P²       ...  P^{2(m-1)}     ...  P^{2(q-1)}   ]
[                    ...                           ]
[I  P^{m-1}  ...  P^{(m-1)(m-1)} ...  P^{(m-1)(q-1)}]
where P is the q × q cyclic shift permutation matrix
P =
[0 1 0 ... 0]
[0 0 1 ... 0]
[   ...     ]
[0 0 0 ... 1]
[1 0 0 ... 0]
P^i is P with all columns cyclically shifted to the right by i positions. By convention, P^0 = I and P^∞ = 0.
H represents an (m, q)-regular QC-LDPC code.
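A short sketch of the construction (the block in position (i, j) is P^{i·j}, i = 0, ..., m−1, j = 0, ..., q−1; the parameters q = 5, m = 3 are our example values):

```python
import numpy as np

def array_code_H(q, m):
    """Array-code H with (i, j) block P^{i*j}; q prime, m <= q."""
    shift = lambda k: np.roll(np.eye(q, dtype=int), k % q, axis=1)   # P^k
    return np.block([[shift(i * j) for j in range(q)] for i in range(m)])

H = array_code_H(q=5, m=3)        # (3, 5)-regular, 15 x 25
print(H.shape, H.sum(axis=0))     # column weight 3 everywhere
```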
Deterministic QC-LDPC Codes (Modified Array Codes)
For a prime q and a positive integer m ≤ q, we create the matrix
H =
[I  I  I  ...  I            ...  I            ]
[0  I  P  ...  P^{m-2}      ...  P^{q-2}      ]
[0  0  I  ...  P^{2(m-3)}   ...  P^{2(q-3)}   ]
[                 ...                         ]
[0  0  0  ...  I            ...  P^{(m-1)(q-m)}]
Efficient encoding is possible due to the block upper triangular structure. The code is slightly irregular.
Random Construction of QC-LDPC Codes (Regular Codes)
The mq × nq parity-check matrix is constructed as
H =
[P^{a_11}  P^{a_12}  ...  P^{a_1n}]
[P^{a_21}  P^{a_22}  ...  P^{a_2n}]
[  ...       ...     ...    ...   ]
[P^{a_m1}  P^{a_m2}  ...  P^{a_mn}]
where a_ij ∈ {0, 1, 2, ..., q-1}. If we additionally allow a_ij = ∞, with the convention P^∞ = 0 (the all-zero block), then irregular codes can also be constructed.
Random Construction of QC-LDPC Codes (Irregular Codes)
The number of non-zero blocks in each column of the block parity-check matrix can be chosen according to a degree distribution. Permutation matrices P^i are used; i is usually chosen at random, and if some constraints (e.g., girth, check node degree distribution) are violated, another value of i is chosen. This is very similar to one of Gallager's constructions of irregular codes.
Other Constructions of QC-LDPC Codes
Constructions based on Finite Geometries. Constructions based on Reed-Solomon codes. Constructions based on masking of existing LDPC codes.
Repeat-Accumulate Codes
RA codes: a subclass of LDPC codes. Encoding:
1. A frame of information symbols of length N is repeated q times.
2. A random (but fixed) permutation is applied to the resulting frame.
3. The permuted frame is fed to a rate-1 accumulator with transfer function 1/(1 + D).
For Irregular RA (IRA) codes, each information bit is repeated according to a repetition profile.
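A minimal encoder sketch of these three steps (the systematic output [u, p] and the element-wise repetition are our assumptions; the permutation is random but fixed, as stated above):

```python
import numpy as np

def ra_encode(u, q, perm):
    """u: information bits, q: repetition factor, perm: fixed permutation of length q*len(u)."""
    repeated = np.repeat(u, q)                 # step 1: each bit repeated q times
    interleaved = repeated[perm]               # step 2: fixed pseudo-random permutation
    parity = np.cumsum(interleaved) % 2        # step 3: accumulator 1/(1+D) = running XOR
    return np.concatenate([u, parity])         # systematic RA codeword (assumption)

rng = np.random.default_rng(7)
u = rng.integers(0, 2, size=8)
perm = rng.permutation(8 * 3)
print(ra_encode(u, q=3, perm=perm))
```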
Structured Repeat-Accumulate Codes
The parity-check matrix of an RA code can be written as H = [ H_1  H_2 ], where
H_2 =
[1 0 0 0 ... 0 0]
[1 1 0 0 ... 0 0]
[0 1 1 0 ... 0 0]
[     ...       ]
[0 0 ...  1 1 0 ]
[0 0 0 ...  1 1 ]
We can make RA codes even simpler by enforcing a QC structure on H_1.
Structured IRA Codes
We construct the matrix H_1 as
H_1 =
[P^{a_11}  P^{a_12}  ...  P^{a_1n}]
[P^{a_21}  P^{a_22}  ...  P^{a_2n}]
[  ...       ...     ...    ...   ]
[P^{a_m1}  P^{a_m2}  ...  P^{a_mn}]
where P is a right cyclic shift q × q permutation matrix and a_{i,j} ∈ {0, 1, ..., q-1, ∞} are the corresponding exponents (with P^∞ = 0).
Structured IRA Codes
If we use H_1 as is, the resulting code has poor minimum distance. We use a permuted version of H_1 instead, where the permutation is chosen so that the minimum codeword weight for low-weight inputs is increased. So, the final parity-check matrix is
H = [ ΠH_1  H_2 ].
The resulting code is regular unless we choose to mask out some entries (by choosing a_ij = ∞) in the H_1 matrix, in accordance with a targeted repetition profile.
Appendix - φ(x) and J(σ)
The function φ(x) is defined as follows:
φ(x) = 1 - (1/√(4πx)) ∫_{-∞}^{+∞} tanh(u/2) exp( -(u-x)²/(4x) ) du,  for x > 0,  and φ(0) = 1.
The function J(σ) is defined as follows:
J(σ) = 1 - ∫_{-∞}^{+∞} (1/√(2πσ²)) e^{-(x - σ²/2)² / (2σ²)} log₂(1 + e^{-x}) dx.