Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels
An efficient exhaustion algorithm for error-prone patterns
C.-C. Wang, S. R. Kulkarni, and H. V. Poor
School of Electrical & Computer Engineering, Purdue University
Department of Electrical Engineering, Princeton University
Wang, Kulkarni, & Poor p. 1/24
Content
- Brief introduction: stopping sets in erasure channels; hardness and applications.
- Existing approaches for upper/lower bounding the fixed-code performance.
- A tree-based upper bound for bit error rates.
- Frame error rates and trapping sets.
Stopping Sets (SSs)
Definition: a set of variable nodes such that the induced graph contains no check node of degree 1.
Example: the binary (7,4) Hamming code. [Tanner graph: check nodes j = 1, 2, 3; variable nodes i = 1, ..., 7.]
(2, 5, 7), a valid codeword, vs. (4, 6, 7), a pure stopping set.
Hamming distance vs. stopping distance.
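The definition above is easy to check mechanically. Below is a minimal Python sketch (the function name `is_stopping_set` and the NumPy representation are our own illustration, not from the slides): a nonempty set of variable nodes is a stopping set iff no check node of the induced graph has degree 1.

```python
import numpy as np

def is_stopping_set(H, subset):
    """True iff `subset` (0-indexed columns of H) is a stopping set."""
    H = np.asarray(H)
    # Induced degree of each check node = number of subset bits it touches.
    deg = H[:, sorted(subset)].sum(axis=1)
    # A stopping set is nonempty and induces no degree-1 check node.
    return len(subset) > 0 and not np.any(deg == 1)

# Parity-check matrix of the (7,4) Hamming code from the slide
# (rows = check nodes j = 1..3, columns = variable nodes i = 1..7).
H = np.array([[0, 0, 1, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [1, 0, 0, 0, 1, 1, 1]])

print(is_stopping_set(H, {1, 4, 6}))  # bits (2,5,7), a codeword support -> True
print(is_stopping_set(H, {3, 5, 6}))  # bits (4,6,7), a pure stopping set -> True
print(is_stopping_set(H, {0, 1}))     # bits (1,2) -> False (degree-1 checks)
```

Note that (4, 6, 7) is a stopping set but not a codeword support (its second check has odd induced degree), which is exactly the Hamming-distance vs. stopping-distance gap the slide points at.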
Upper Bounds vs. Exhausting Minimum Stopping Sets
An n = 72 regular (3,6) LDPC code.
[Figure: BER vs. erasure probability ε; curves: V60 Monte Carlo, V60 UB.]
An NP-complete problem (Krishnan et al.)!
A Hard but Useful Problem
As an upper bound: guaranteed worst-case performance in extremely low-BER regimes.
As an SS exhaustion algorithm: code annealing [Wang 06].
[Figure: error probability vs. erasure probability ε; curves: Typical, CL, CL+d2opt, CL+CA, (DA+CA)+CL+CA, (DA+CA)+CL+CA asym., Random Constr., Direct CA.]
Existing Approaches
Ensemble analysis: [Di et al. 02] average BER and FER curves; [Amraoui et al. 04] the scaling law for waterfall regions.
Fixed-code analysis: [Holzlöhner et al. 05] dual adaptive importance sampling; [Stepanov et al. 06] the instanton method.
Enumerating bad patterns: [Richardson 03] error floors of LDPC codes; [Yedidia et al. 01] projection algebra; [Hu et al. 04] the minimum distance of LDPC codes.
All of these enumerations are non-exhaustive.
The Bit-Oriented Detection
[Tanner graph: variable nodes i = 1, ..., 6; check nodes j = 1, ..., 4.]
Using 1 for erasure and 0 for non-erasure:
f_{2,0} = x_2
f_{2,1} = x_2 (x_1 + x_3)(x_4 + x_6)
f_{2,2} = x_2 (x_1 (f^5_{4,1} + f^6_{4,1}) + x_3 (f^4_{3,1} + f^5_{3,1})) (x_4 (f^3_{3,1} + f^5_{3,1}) + x_6 (f^1_{4,1} + f^5_{4,1}))
       = x_2 (x_1 (x_5 + x_6) + x_3 (x_4 + x_5)) (x_4 (x_3 + x_5) + x_6 (x_1 + x_5))
f_2 := lim_{l→∞} f_{2,l} = f_{2,2}
p_2 = E{f_2}
Asymptotically, there are no repeated variables.
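The recursion above can be sanity-checked by brute force. The sketch below assumes the example Tanner graph implied by f_{2,1} and f_{2,2} (checks {1,2,3}, {2,4,6}, {3,4,5}, {1,5,6}; this reading of the figure is our inference): over all 2^6 erasure patterns, the peeling decoder fails on bit 2 exactly when the limit function f_2, written as a sum of irreducible stopping-set monomials, evaluates to 1.

```python
from itertools import product

# Check nodes of the assumed example graph, 0-indexed variable nodes.
checks = [{0, 1, 2}, {1, 3, 5}, {2, 3, 4}, {0, 4, 5}]

def peel(erased):
    """Run the peeling decoder; return the residual (unrecovered) erasures."""
    e = set(erased)
    changed = True
    while changed:
        changed = False
        for c in checks:
            rem = c & e
            if len(rem) == 1:     # degree-1 check: recover that bit
                e -= rem
                changed = True
    return e

def f2(x):
    """f_2 as the boolean sum of its irreducible stopping-set monomials."""
    return bool(x[0] and x[1] and x[5] or x[1] and x[2] and x[3]
                or x[0] and x[1] and x[3] and x[4]
                or x[1] and x[2] and x[4] and x[5])

for x in product([0, 1], repeat=6):
    erased = {i for i, xi in enumerate(x) if xi}
    assert (1 in peel(erased)) == f2(x)   # bit 2 has 0-index 1
print("f_2 matches peeling failure on all 64 erasure patterns")
```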
Enumeration-Based LB & UB
f_2 = x_2 (x_1 (x_5 + x_6) + x_3 (x_4 + x_5)) (x_4 (x_3 + x_5) + x_6 (x_1 + x_5))
    = x_1 x_2 x_6 + x_2 x_3 x_4 + x_1 x_2 x_4 x_5 + x_2 x_3 x_5 x_6
Each product term corresponds to an irreducible stopping set.
LB: graph-based search [Richardson 03], iteration-based relaxation [Yedidia et al. 01]:
f_{2,LB_a} = x_2 x_3 x_4, f_{2,LB_b} = x_1 x_2 x_4 x_5
LB_a = E{f_{2,LB_a}} = ε^3, LB_b = E{f_{2,LB_b}} = ε^4
UB: iteration-based relaxation [Yedidia et al. 01]:
f_{2,2} = x_2 x_3 x_4 + x_1 x_2 x_4 x_5 + x_1 x_2 x_6 + x_2 x_3 x_5 x_6
       ≤ x_2 · 1 · x_4 + 1 · x_2 x_4 · 1 + 1 · x_2 x_6 + x_2 · 1 · 1 · x_6 = x_2 x_4 + x_2 x_6
UB = 2ε^2 − ε^3
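The expectations above are probabilities of unions of monomial events (all variables of a monomial erased), so they follow from inclusion–exclusion. A small Python sketch (the helper `prob_union` is our own illustration) reproduces the relaxed UB = 2ε² − ε³ and confirms it dominates the exact p_2:

```python
from itertools import combinations

def prob_union(monomials, eps):
    """P(union of monomial events) by inclusion-exclusion;
    each monomial is a set of 0-indexed variable indices,
    each variable erased independently with probability eps."""
    total = 0.0
    for k in range(1, len(monomials) + 1):
        for combo in combinations(monomials, k):
            vars_ = set().union(*combo)          # merged support of the combo
            total += (-1) ** (k + 1) * eps ** len(vars_)
    return total

eps = 0.1
ub_terms = [{1, 3}, {1, 5}]                       # relaxed x2*x4 + x2*x6
exact = [{0, 1, 5}, {1, 2, 3}, {0, 1, 3, 4}, {1, 2, 4, 5}]  # f_2 monomials

print(prob_union(ub_terms, eps))   # = 2*eps**2 - eps**3
print(prob_union(exact, eps))      # exact p_2 at eps = 0.1
assert prob_union(exact, eps) <= prob_union(ub_terms, eps)
```

The inequality holds for every ε because each exact monomial event is contained in one of the relaxed ones, mirroring the term-by-term relaxation on the slide.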
An Upper Bound
[Figure: two decoding trees f_a and f_b over variable nodes 1–4; in f_b the repeated variables are decoupled.]
f_a = f_b
E_DE{f_a} < p_e = E{f_a} = E{f_b} ≤ E_DE{f_b}
An Upper Bound (Cont'd)
Theorem 1: Suppose all messages entering any variable node are independent, or equivalently, the youngest common ancestor of every pair of repeated bits is a check node. Then E_DE{f} ≥ E{f}.
Theorem 2: This upper bound is tight in order: E_DE{f}(ε) = O(E{f}(ε)) as ε → 0.
A Pivoting Rule
f = x_0 f|_{x_0=1} + f|_{x_0=0}
[Figure: (a) the original tree containing x_0; (b) the decoupled trees f|_{x_0=1} and f|_{x_0=0}.]
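For the monotone boolean functions arising here, the pivoting identity f = x_0 f|_{x_0=1} + f|_{x_0=0} (boolean sum) can be checked directly; the decomposition is valid because f|_{x_0=0} ≤ f|_{x_0=1} for monotone f. A hedged sketch (the helper `pivot` is our own illustration, applied to the earlier example f_2):

```python
from itertools import product

def f2(x):
    # f_2 from the earlier slide, a monotone function of the erasures.
    return bool(x[0] and x[1] and x[5] or x[1] and x[2] and x[3]
                or x[0] and x[1] and x[3] and x[4]
                or x[1] and x[2] and x[4] and x[5])

def pivot(f, i):
    """Return x_i * f|x_i=1 + f|x_i=0 as a new function (boolean sum)."""
    hi = lambda x: f(x[:i] + (1,) + x[i + 1:])   # cofactor f|x_i=1
    lo = lambda x: f(x[:i] + (0,) + x[i + 1:])   # cofactor f|x_i=0
    return lambda x: bool(x[i] and hi(x) or lo(x))

g = pivot(f2, 1)   # pivot on x_2 (0-index 1)
assert all(f2(x) == g(x) for x in product([0, 1], repeat=6))
print("pivoting identity verified on all 64 inputs")
```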
The Two-Stage Algorithm
Grow a finite tree f_FINITE from f, decoupling repeated variables by pivoting:
p_e = E{f} ≤ E{f_FINITE} ≤ E_DE{f_FINITE}, with the same order.
[Figure: the decoding tree f truncated to f_FINITE; repeated degree-2 variables decoupled.]
A Narrowing Search
Theorem 3 (Asymptotically Tight): With an "optimal" growing module, we have UB(ε) = O(p_e(ε)).
Theorem 4 (Exhaustive Enumeration): Let d := min{size(x) : f_FINITE(x) = 1}. Then the stopping distance is ≥ d. If some such minimum x is an SS, then ALL minimum SSs are among the minimum x's.
The exhaustive list of minimum SSs leads to a lower bound tight in both order and multiplicity.
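For very small codes the object Theorem 4 produces, the stopping distance together with the exhaustive list of minimum SSs, can be obtained by naive search, which makes the theorem concrete (the brute-force `stopping_distance` below is our own illustration, not the narrowing search itself, which avoids this combinatorial blow-up):

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Smallest stopping-set size and ALL stopping sets of that size."""
    H = np.asarray(H)
    n = H.shape[1]
    for size in range(1, n + 1):
        mins = [s for s in combinations(range(n), size)
                if not np.any(H[:, list(s)].sum(axis=1) == 1)]
        if mins:
            return size, mins
    return None

# (7,4) Hamming code from the experiments slide.
H = np.array([[0, 0, 1, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [1, 0, 0, 0, 1, 1, 1]])
d, sets = stopping_distance(H)
print(d, sets)   # stopping distance 3, with an exhaustive list of minimum SSs
```

The two-stage algorithm reaches the same exhaustive list while examining far fewer candidates than the C(n, d) subsets this brute force visits.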
Numerical Experiments
The (7,4,3) Hamming code:
H = [0 0 1 1 1 0 1]
    [0 1 0 1 0 1 1]
    [1 0 0 0 1 1 1]
For all i ∈ [1, 7] and all ε ∈ (0, 1], UB_i(ε)/p_i(ε) ≤ 1.3.
The (23,12,7) Golay code: H = [H' I], in which H' (11×12, bits read row-wise) =
[1 0 0 1 1 1 0 0 0 1 1 1]
[1 0 1 0 1 1 0 1 1 0 0 1]
[1 0 1 1 0 1 1 0 1 0 1 0]
[1 0 1 1 1 0 1 1 0 1 0 0]
[1 1 0 0 1 1 1 0 1 1 0 0]
[1 1 0 1 0 1 1 1 0 0 0 1]
[1 1 0 1 1 0 0 1 1 0 1 0]
[1 1 1 0 0 1 0 1 0 1 1 0]
[1 1 1 0 1 0 1 0 0 0 1 1]
[1 1 1 1 1 1 0 0 0 0 1 1]
[0 1 0 1 1 1 1 1 1 1 1 1]
Numerical Experiments (Cont'd)
The (23,12,7) Golay code: bit 0, (order, multiplicity) = (4*, 75*).
[Figure: BER vs. erasure probability ε; curves: V0 Monte Carlo, V0 UB, V0 C-UB, V0 LB.]
Numerical Experiments (Cont'd)
A fixed, finite LDPC code with variable degree 3, check degree 6, n = 50:
- bit 0, (order, multiplicity) = (4*, 1*)
- bit 26, (order, multiplicity) = (6*, 2*)
- bit 19, (order, multiplicity) = (7*, 105*)
[Figures: BER vs. erasure probability ε for each bit; curves: Monte Carlo, UB, C-UB, LB.]
Frame Error Rates and Irregular Codes
Frame error rate (FER): f_FER = Σ_{i=1}^n f_i (boolean sum).
Efficiency: more cycles = less efficiency (the Golay code is the worst). The larger n, the easier for our algorithm.
Experimental results: consider codes of n = 500–1000.
For regular codes, d_SS ≤ 12 can be exhausted (C(512, 12) ≈ 5.9 × 10^23 trials).
The FER of irregular codes suits the algorithm best: d_SS ≤ 13 can be exhausted (C(512, 13) ≈ 2.5 × 10^25 trials).
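Since f_FER is the boolean sum of the bit-failure indicators, a frame fails iff the peeling decoder leaves any erasure unresolved, i.e. iff the erasure pattern contains a nonempty stopping set. A hedged Monte Carlo sketch of this baseline (our own illustration, not the exhaustion algorithm; it is exactly the kind of simulation the exhaustion-based bounds outperform at low ε):

```python
import numpy as np

def fer_monte_carlo(H, eps, trials=10_000, seed=0):
    """Estimate the FER on a BEC(eps) by peeling random erasure patterns."""
    H = np.asarray(H)
    n = H.shape[1]
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        erased = set(np.flatnonzero(rng.random(n) < eps))
        changed = True
        while changed and erased:          # peeling decoder
            changed = False
            for row in H:
                rem = {i for i in erased if row[i]}
                if len(rem) == 1:          # degree-1 check recovers its bit
                    erased -= rem
                    changed = True
        fails += bool(erased)              # f_FER = boolean sum of the f_i
    return fails / trials

# (7,4) Hamming code as a toy stand-in for an LDPC parity-check matrix.
H = np.array([[0, 0, 1, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [1, 0, 0, 0, 1, 1, 1]])
print(fer_monte_carlo(H, 0.2, trials=2000))
```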
The FER of a Rate-1/2 Irregular Code
λ(x) = 0.416667x + 0.166667x^2 + 0.416667x^5, ρ(x) = x^5; 61% of the variable nodes have degree 2.
A rate-1/2, n = 572 irregular LDPC code, (order, multiplicity) = (13*, 104*).
[Figure: error probability vs. erasure probability ε for the various constructions.]
Summary
Upper bounding and exhausting the bad patterns: an NP-complete problem, but feasible at least for short practical LDPC codes.
Takes advantage of the tree-like structure.
Suitable for both FER and BER, punctured codes, unequal sub-channel analysis, non-sparse codes, etc.
Our algorithm can easily be modified for trapping sets.
A useful tool for finite-code optimization.
A starting point for more efficient algorithms.
Thank you for your attention.
Searching for Trapping Sets
We define the k-out trapping set: the induced graph has exactly k check nodes of degree 1.
Theorem 5: For any fixed k, determining the k-out trapping distance is NP-complete.
Our algorithm can be modified for trapping-set exhaustion.
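The k-out definition generalizes the stopping-set test in one line: instead of requiring zero degree-1 checks in the induced graph, require exactly k of them (the function below is our own illustration of the definition):

```python
import numpy as np

def is_k_out_trapping_set(H, subset, k):
    """True iff `subset` induces exactly k degree-1 check nodes."""
    H = np.asarray(H)
    deg = H[:, sorted(subset)].sum(axis=1)   # induced check-node degrees
    return len(subset) > 0 and int((deg == 1).sum()) == k

# (7,4) Hamming code example: a stopping set is precisely a 0-out trapping set.
H = np.array([[0, 0, 1, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [1, 0, 0, 0, 1, 1, 1]])
print(is_k_out_trapping_set(H, {3, 5, 6}, 0))  # stopping set -> True
print(is_k_out_trapping_set(H, {0, 1}, 2))     # two degree-1 checks -> True
```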
Some Notes
Complexity of evaluating the UB: O(|T| log(|T|)), even for arbitrarily small ε.
A pleasant byproduct: an exhaustive list of minimum SSs, leading to a lower bound tight in both order and multiplicity.
Complexity vs. performance tradeoff.
Punctured vs. shortened codes.
A partitioned approach, a hybrid search: E{f} = Σ_j P(A_j) E{f | A_j}.
Example: A_1 = {x_3 = 0}, A_2 = {x_3 = 1, x_7 = 0}, and A_3 = {x_3 = 1, x_7 = 1}.
The Tree Converting Algorithm
1: repeat
2:   Find the next leaf variable node, say x_j.
3:   if there exists another non-leaf x_j in T then
4:     if the youngest common ancestor of the leaf x_j and the existing non-leaf x_j, denoted yca(x_j), is a check node then
5:       Include the new x_j in T.
6:     else if yca(x_j) is a variable node then
7:       Apply the pivoting construction.
8:     end if
9:   end if
10:  Construct the immediate children of x_j.
11: until the size of T exceeds the preset limit.