Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels

Upper Bounding the Performance of Arbitrary Finite LDPC Codes on Binary Erasure Channels
An efficient exhaustion algorithm for error-prone patterns
C.-C. Wang, S. R. Kulkarni, and H. V. Poor
School of Electrical & Computer Engineering, Purdue University
Department of Electrical Engineering, Princeton University
Wang, Kulkarni, & Poor, p. 1/24

Contents
- Brief introduction: stopping sets in erasure channels; hardness and applications.
- Existing approaches for upper/lower bounding the fixed-code performance.
- A tree-based upper bound for bit error rates.
- Frame error rates and trapping sets.

Stopping Sets (SSs)
Definition: a set of variable nodes such that the induced graph contains no check node of degree 1.
Example: the binary (7,4) Hamming code.
[Tanner graph: variable nodes i = 1..7, check nodes j = 1..3.]
(2, 5, 7), a valid codeword, vs. (4, 6, 7), a pure stopping set.
Hamming distance vs. stopping distance.
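The definition above can be checked mechanically. A minimal sketch (not from the talk), using the (7,4,3) Hamming parity-check matrix that appears later in these slides:

```python
# Parity-check matrix of the (7,4,3) Hamming code from the slides.
H = [[0, 0, 1, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [1, 0, 0, 0, 1, 1, 1]]

def is_stopping_set(H, bits):
    """A set of variable nodes is a stopping set iff no check node has
    degree exactly 1 in the induced subgraph."""
    return all(sum(row[i - 1] for i in bits) != 1 for row in H)

def is_codeword_support(H, bits):
    """The indicator vector of `bits` is a codeword iff every check sums
    to 0 mod 2 over those positions."""
    return all(sum(row[i - 1] for i in bits) % 2 == 0 for row in H)
```

Here (2, 5, 7) passes both tests, while (4, 6, 7) passes only the stopping-set test, which is what the slide calls a "pure" stopping set.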

Upper Bounds vs. Exhausting Minimum Stopping Sets
An n = 72 regular (3,6) LDPC code.
[Plot: BER vs. erasure probability ε; curves "V60: Monte Carlo" and "V60: UB".]
Exhausting minimum stopping sets is an NP-complete problem (Krishnan et al.).

A Hard but Useful Problem
As an upper bound: guaranteed worst-case performance in extremely low BER regimes.
As an SS exhaustion algorithm: code annealing [Wang 06].
[Plot: error probability vs. erasure probability ε for the constructions Typical, CL, CL+d2opt, CL+CA, (DA+CA)+CL+CA, (DA+CA)+CL+CA asym., Random Constr., and Direct CA.]

Existing Approaches
Ensemble analysis: [Di et al. 02] average BER and FER curves; [Amraoui et al. 04] the scaling law for waterfall regions.
Fixed-code analysis: [Holzlöhner et al. 05] dual adaptive importance sampling; [Stepanov et al. 06] the instanton method.
Enumerating bad patterns: [Richardson 03] error floors of LDPC codes; [Yedidia et al. 01] projection algebra; [Hu et al. 04] the minimum distance of LDPC codes.
All of these enumerations are non-exhaustive.

The Bit-Oriented Detection
[Tanner graph: variable nodes x1..x6, check nodes j = 1..4.]
Using 1 for erasure and 0 for non-erasure:
f_{2,0} = x2
f_{2,1} = x2 (x1 + x3)(x4 + x6)
f_{2,2} = x2 (x1 (f^{(5)}_{4,1} + f^{(6)}_{4,1}) + x3 (f^{(4)}_{3,1} + f^{(5)}_{3,1})) (x4 (f^{(3)}_{3,1} + f^{(5)}_{3,1}) + x6 (f^{(1)}_{4,1} + f^{(5)}_{4,1}))
      = x2 (x1 (x5 + x6) + x3 (x4 + x5)) (x4 (x3 + x5) + x6 (x1 + x5)),
where f^{(i)}_{j,l} denotes the erasure indicator for bit i passed through check j after l iterations.
f_2 := lim_{l→∞} f_{2,l} = f_{2,2}
p_2 = E{f_2}
Asymptotically, there are no repeated variables.

Enumeration-Based LB & UB
f_2 = x2 (x1 (x5 + x6) + x3 (x4 + x5)) (x4 (x3 + x5) + x6 (x1 + x5))
    = x1 x2 x6 + x2 x3 x4 + x1 x2 x4 x5 + x2 x3 x5 x6
Each product term corresponds to an irreducible stopping set.
LB: graph-based search [Richardson 03], iteration-based relaxation [Yedidia et al. 01]:
f_{2,LBa} = x2 x3 x4, f_{2,LBb} = x1 x2 x4 x5
LB_a = E{f_{2,LBa}} = ε³, LB_b = E{f_{2,LBb}} = ε⁴
UB: iteration-based relaxation [Yedidia et al. 01], setting some variables to 1:
f_{2,2} = x2 x3 x4 + x1 x2 x4 x5 + x1 x2 x6 + x2 x3 x5 x6
        ≤ x2·1·x4 + 1·x2·x4·1 + 1·x2·x6 + x2·1·1·x6 = x2 x4 + x2 x6
UB = E{x2 x4 + x2 x6} = 2ε² − ε³
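The product terms of f_2 can also be recovered by brute force. A small sketch (the four check neighborhoods {1,2,3}, {2,4,6}, {3,4,5}, {1,5,6} are an assumption, read off the expansion of f_{2,2}): it peels every erasure pattern and keeps the inclusion-minimal patterns for which bit 2 stays erased.

```python
from itertools import combinations

# Check-node neighborhoods inferred from the slide's expansion of f_{2,2}
# (an assumption about the example graph, not stated explicitly there).
checks = [{1, 2, 3}, {2, 4, 6}, {3, 4, 5}, {1, 5, 6}]

def peel(erased):
    """Iterative erasure (peeling) decoder: repeatedly resolve any bit
    that is the only erased neighbor of some check. The residual set is
    the maximal stopping set contained in `erased`."""
    erased = set(erased)
    progress = True
    while progress:
        progress = False
        for c in checks:
            hit = c & erased
            if len(hit) == 1:
                erased -= hit
                progress = True
    return erased

# All erasure patterns for which bit 2 stays undecoded ...
failing = [set(s) for r in range(1, 7)
           for s in combinations(range(1, 7), r)
           if 2 in peel(s)]
# ... and the inclusion-minimal ones: the irreducible SSs through bit 2.
minimal = [s for s in failing if not any(t < s for t in failing)]
```

The minimal failing patterns come out as {1,2,6}, {2,3,4}, {1,2,4,5}, {2,3,5,6}, exactly the monomials of f_2 above.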

An Upper Bound
[Two computation trees: f_a, in which repeated bits share a variable-node ancestor, and f_b, the pivoted tree in which every pair of repeated bits meets at a check node.]
f_a = f_b, yet
E_DE{f_a} < p_e = E{f_a} = E{f_b} ≤ E_DE{f_b}.
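The gap E_DE{f_a} < E{f_a} can be reproduced numerically. A sketch under the same assumed 6-bit example: it compares the exact E{f_{2,2}}, with repeated bits tied together, against the density-evolution value in which every leaf of the tree is an independent copy. Because the repeated bits here share variable-node ancestors, DE under-estimates, which is exactly what the pivoting rule on the next slide repairs.

```python
from fractions import Fraction
from itertools import product

def tree_f(v):
    """f_{2,2} as a computation tree over 13 leaf slots:
    v[0] = x2;  v[1..6] feed branch A;  v[7..12] feed branch B."""
    a = (v[1] & (v[2] | v[3])) | (v[4] & (v[5] | v[6]))
    b = (v[7] & (v[8] | v[9])) | (v[10] & (v[11] | v[12]))
    return v[0] & a & b

# Bit index carried by each leaf slot; note the repeats (x5 four times).
leaves = [2, 1, 5, 6, 3, 4, 5, 4, 3, 5, 6, 1, 5]
eps = Fraction(1, 2)   # illustrative erasure probability

def prob(bits):
    """Probability of a 0/1 erasure pattern under i.i.d. BEC(eps)."""
    p = Fraction(1)
    for b in bits:
        p *= eps if b else 1 - eps
    return p

# Exact E{f}: leaves carrying the same bit index are tied together.
exact = sum(prob(x) for x in product((0, 1), repeat=6)
            if tree_f([x[b - 1] for b in leaves]))

# DE value: every leaf is treated as an independent copy.
de = sum(prob(v) for v in product((0, 1), repeat=13) if tree_f(v))
```

At ε = 1/2 the exact probability is 1/4 while the naive DE value is 1521/8192 ≈ 0.186, so DE on the raw tree is not an upper bound.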

An Upper Bound (Cont'd)
Theorem 1: Suppose all messages entering any variable node are independent, or equivalently, the youngest common ancestor of every pair of repeated bits is a check node. Then E{f} ≤ E_DE{f}.
Theorem 2: This upper bound is tight in order: E_DE{f}(ε) = O(E{f}(ε)) as ε → 0.

A Pivoting Rule
f = x0 · f|_{x0=1} + f|_{x0=0}
[(a) Original tree rooted at f, with repeated copies of x0; (b) decoupled trees: x0 multiplying f|_{x0=1}, plus f|_{x0=0}, with the copies of x0 fixed to constants 1 and 0.]
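For the monotone Boolean functions arising in erasure decoding, this pivoting rule is the Shannon expansion with the complemented-variable factor dropped. A quick sketch (illustrative only) verifying the identity on the running example f_{2,1}:

```python
from itertools import product

def pivot(f, k):
    """Pivoting rule for a monotone Boolean f on n-tuples:
    f = x_k * f|{x_k = 1} + f|{x_k = 0}   ('+' is Boolean OR)."""
    def g(x):
        x_on = x[:k] + (1,) + x[k + 1:]
        x_off = x[:k] + (0,) + x[k + 1:]
        return (x[k] & f(x_on)) | f(x_off)
    return g

# Running example f_{2,1} = x2 (x1 + x3)(x4 + x6), 0-indexed tuples.
f = lambda x: x[1] & (x[0] | x[2]) & (x[3] | x[5])
g = pivot(f, 1)   # pivot on x2
```

Monotonicity matters: for non-monotone f the dropped factor (1 − x_k) on the second term would be needed.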

The Two-Stage Algorithm
With the same order,
p_e = E{f} ≤ E{f_FINITE} ≤ E_DE{f_FINITE}.
[Diagram sequence: the infinite computation tree f, its finite truncation f_FINITE, and the equivalent pivoted tree f̃_FINITE used for the DE evaluation.]

A Narrowing Search
Theorem 3 (Asymptotically Tight): With an "optimal" growing module, we have UB(ε) = O(p_e(ε)).
Theorem 4 (Exhaustive Enumeration): Let d := min{size(x) : f_FINITE(x) = 1}. Then the stopping distance is ≥ d. If there exists such a minimum x that is an SS, then ALL minimum SSs are among the minimum x's. The exhaustive list of minimum SSs leads to a lower bound tight in both order and multiplicity.


Numerical Experiments
The (7,4,3) Hamming code:
H = [ 0 0 1 1 1 0 1 ]
    [ 0 1 0 1 0 1 1 ]
    [ 1 0 0 0 1 1 1 ]
For all i ∈ {1, ..., 7} and all ε ∈ (0, 1], UB_i(ε)/p_i(ε) ≤ 1.3.
The (23,12,7) Golay code: H = [H' I], in which H' is the 11 × 12 matrix
    [ 1 0 0 1 1 1 0 0 0 1 1 1 ]
    [ 1 0 1 0 1 1 0 1 1 0 0 1 ]
    [ 1 0 1 1 0 1 1 0 1 0 1 0 ]
    [ 1 0 1 1 1 0 1 1 0 1 0 0 ]
    [ 1 1 0 0 1 1 1 0 1 1 0 0 ]
    [ 1 1 0 1 0 1 1 1 0 0 0 1 ]
    [ 1 1 0 1 1 0 0 1 1 0 1 0 ]
    [ 1 1 1 0 0 1 0 1 0 1 1 0 ]
    [ 1 1 1 0 1 0 1 0 0 0 1 1 ]
    [ 1 1 1 1 1 1 0 0 0 0 1 1 ]
    [ 0 1 0 1 1 1 1 1 1 1 1 1 ]

Numerical Experiments (Cont'd)
The (23,12,7) Golay code, bit 0, (order, multiplicity) = (4*, 75*).
[Plot: BER vs. erasure probability ε; curves V0: MC S, V0: UB, V0: C UB, V0: LB.]

Numerical Experiments (Cont'd)
A fixed, finite LDPC code with variable degree 3, check degree 6, n = 50; bit 0, (order, multiplicity) = (4*, 1*).
[Plot: BER vs. erasure probability ε; curves V0: MC S, V0: UB, V0: C UB, V0: LB.]

Numerical Experiments (Cont'd)
The same code; bit 26, (order, multiplicity) = (6*, 2*).
[Plot: curves V26: MC S, V26: UB, V26: C UB, V26: LB.]

Numerical Experiments (Cont'd)
The same code; bit 19, (order, multiplicity) = (7*, 10^5*).
[Plot: curves V19: MC S, V19: UB, V19: C UB, V19: LB.]

Frame Error Rates and Irregular Codes
Frame error rate (FER): f_FER = Σ_{i=1}^n f_i (Boolean sum).
Efficiency: more cycles means less efficiency (the Golay code is the worst case); the larger n, the easier for our algorithm.
Experimental results: consider codes of n = 500-1000.
For regular codes, stopping sets of size ≤ 12 can be exhausted (C(512, 12) ≈ 5.9 × 10^23 trials).
The FER of irregular codes suits the algorithm best: sizes ≤ 13 can be exhausted (C(512, 13) ≈ 2.5 × 10^25 trials).

The FER of a Rate-1/2 Irregular Code
λ(x) = 0.416667 x + 0.166667 x² + 0.416667 x⁵, ρ(x) = x⁵; 61% of the variable nodes have degree 2.
A rate-1/2, n = 572 irregular LDPC code, (order, multiplicity) = (13*, 104*).
[Plot: error probability vs. erasure probability ε for Typical, CL, CL+d2opt, CL+CA, (DA+CA)+CL+CA, (DA+CA)+CL+CA asym., Random Constr., and Direct CA.]
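The quoted rate and degree profile follow from λ and ρ. A sketch (treating the decimals 0.416667 and 0.166667 as the exact fractions 5/12 and 1/6, an assumption) converting the edge-perspective distribution to node fractions and the design rate; the ensemble's degree-2 node fraction is 5/8 = 62.5%, consistent with the slide's ≈61% figure for the realized length-572 code.

```python
from fractions import Fraction

# Edge-perspective degree distributions from the slide; the key is the
# node degree d, so lambda(x)'s x^(d-1) coefficient is stored under d.
lam = {2: Fraction(5, 12), 3: Fraction(1, 6), 6: Fraction(5, 12)}
rho = {6: Fraction(1)}                         # rho(x) = x^5

inv_lam = sum(c / d for d, c in lam.items())   # integral of lambda over [0,1]
inv_rho = sum(c / d for d, c in rho.items())   # integral of rho over [0,1]

design_rate = 1 - inv_rho / inv_lam            # 1 - (1/6)/(1/3) = 1/2
deg2_node_fraction = (lam[2] / 2) / inv_lam    # node-perspective share of degree 2
```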

Summary
Upper bounding and exhausting the bad patterns:
- An NP-complete problem, yet feasible for at least short practical LDPC codes.
- Takes advantage of the tree-like structure.
- Suitable for both FER and BER, punctured codes, unequal sub-channel analysis, non-sparse codes, etc.
- Our algorithm can easily be modified for trapping sets.
- A useful tool for finite code optimization.
- A starting point for more efficient algorithms.

Thank you for your attention.

Searching for Trapping Sets
We define the k-out trapping set as one whose induced graph has exactly k check nodes of degree 1.
Theorem 5: For any fixed k, determining the k-out trapping distance is NP-complete.
Our algorithm can be modified for trapping-set exhaustion.
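The k-out definition generalizes the stopping-set test directly. A minimal sketch (not from the talk), reusing the Hamming-code H from the numerical experiments; k = 0 recovers an ordinary stopping set.

```python
# Parity-check matrix of the (7,4,3) Hamming code used earlier in the talk.
H = [[0, 0, 1, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [1, 0, 0, 0, 1, 1, 1]]

def degree_one_checks(H, bits):
    """Number of check nodes with degree exactly 1 in the subgraph
    induced by the variable nodes in `bits`."""
    return sum(1 for row in H if sum(row[i - 1] for i in bits) == 1)

def is_k_out_trapping_set(H, bits, k):
    """A k-out trapping set: the induced graph has exactly k degree-1
    check nodes (k = 0 is the stopping-set condition)."""
    return degree_one_checks(H, bits) == k
```

For instance, (4, 6, 7) is 0-out (a stopping set), while (6, 7) is a 1-out trapping set under this H.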

Some Notes
- Complexity of evaluating the UB: O(|T| log |T|), even for arbitrarily small ε.
- A pleasant byproduct: an exhaustive list of minimum SSs, leading to a lower bound tight in both order and multiplicity.
- Complexity vs. performance tradeoff.
- Punctured vs. shortened codes.
- A partitioned approach, i.e., a hybrid search: E{f} = Σ_j Pr{A_j} E{f | A_j}. Example: A_1 = {x3 = 0}, A_2 = {x3 = 1, x7 = 0}, A_3 = {x3 = 1, x7 = 1}.

The Tree Converting Algorithm
1: repeat
2:   Find the next leaf variable node, say x_j.
3:   if there exists another, non-leaf copy of x_j in T then
4:     if the youngest common ancestor of the leaf x_j and the existing non-leaf x_j, denoted yca(x_j), is a check node then
5:       Include the new x_j in T.
6:     else if yca(x_j) is a variable node then
7:       Do the pivoting construction.
8:     end if
9:   end if
10:  Construct the immediate children of x_j.
11: until the size of T exceeds the preset limit.