Spatially Coupled LDPC Codes Kenta Kasai Tokyo Institute of Technology 30 Aug, 2013
We already have very good codes: efficiently decodable, asymptotically capacity-approaching codes.
Irregular LDPC codes — Achievability: the proof is limited to the BEC, but they likely approach capacity on any memoryless channel. Drawback: non-universal, and high error floors.
Polar codes — Achievability: proved for any memoryless channel. Drawback: poor finite-length performance.
Optimal Finite-Length Error-Rate Approximation
log M(n, P_B) = nC(ϵ) − √(nV(ϵ)) Q^{−1}(P_B) + O(log n)
P_B = Q( √(n/V(ϵ)) (C(ϵ) − R) ) (1 + o(1)),  R := (1/n) log M(n, P_B)
M(n, P_B): the maximum number of codewords of length-n codes that achieve block error rate P_B.
ϵ: channel parameter; C(ϵ): channel capacity; V(ϵ): channel dispersion, e.g.
V(ϵ) = ϵ(1−ϵ) log²((1−ϵ)/ϵ) for BSC(ϵ), and V(ϵ) = ϵ(1−ϵ) for BEC(ϵ).
Example: fix BEC(ϵ = 0.5) and BSC(ϵ = 0.11), so that C(ϵ) = 0.5, and target P_B = 10^{−3}. How long must a code be to operate within 1% of capacity? n = 10^5 is already sufficient.
Example: state-of-the-art finite-length codes. Fix R and P_B over BEC(ϵ).
[Figure: channel capacity C(ϵ) = 1 − ϵ (0.50 to 0.70) versus block length n (10² to 10⁸ bits), comparing the finite-length optimal code, finite polar codes, finite and infinite (2,4) non-binary LDPC codes over GF(2⁸), and finite and infinite (3,6) binary LDPC codes.]
Uncoupled LDPC Code
Sparse parity-check matrix (column weight 2, row weight 4):
H =
[1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]
[0 1 0 0 0 0 0 0 0 0 1 0 1 1 0 0]
[0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1]
[0 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0]
[1 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0]
[0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0]
[0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0]
C = {x ∈ F_2^n : Hx^T = 0}
Sparse graph representation (variable degree 2, check degree 4).
Efficient locally optimal decoding of an uncoupled LDPC code
x_5 + x_7 + x_10 + x_12 = 0  (*)
Suppose we observe x_5 = ?, x_7 = 1, x_10 = 1, x_12 = 0 through the BEC(ϵ). Then from (*) we recover x_5 = 1 + 1 + 0 = 0. Iterate this until decoding gets stuck. This locally optimal decoding is efficient, with computational cost O(n).
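The erasure-filling step above can be sketched in a few lines. This is an illustrative toy, assuming 0-based bit indices and a parity-check matrix given as lists of bit positions; it is not the slides' code.

```python
# A minimal sketch of the erasure-filling ("peeling") decoder described
# above: repeatedly find a parity check with exactly one erased bit and
# solve for it from the parity constraint.
def peel(H, y):
    """H: list of checks (each a list of bit indices); y: 0/1 or None (erased)."""
    x = list(y)
    progress = True
    while progress:                        # iterate until decoding is stuck
        progress = False
        for row in H:
            erased = [j for j in row if x[j] is None]
            if len(erased) == 1:           # exactly one unknown in this check
                j = erased[0]
                x[j] = sum(x[i] for i in row if i != j) % 2
                progress = True
    return x

# The slide's single check x5 + x7 + x10 + x12 = 0 with observations
# x5 = ?, x7 = 1, x10 = 1, x12 = 0 (relabelled to positions 0..3):
print(peel([[0, 1, 2, 3]], [None, 1, 1, 0]))  # -> [0, 1, 1, 0]
```

On a full sparse matrix this loop costs O(n) per sweep, matching the slide's claim.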
Locally optimal decoder performance analysis of (d_l, d_r)-codes over BEC(ϵ)
Erasure probability out of a check node of degree d_r: q^(t) = 1 − (1 − p^(t−1))^{d_r−1}
Erasure probability out of a variable node of degree d_l: p^(t) = ϵ (q^(t))^{d_l−1}
Putting these together: p^(t) = ϵ (1 − (1 − p^(t−1))^{d_r−1})^{d_l−1} = f(g(p^(t−1)); ϵ)
How good the (d_l, d_r)-code is, is measured by the threshold ϵ^BP(d_l, d_r) = sup{ϵ : lim_{t→∞} p^(t) = 0}.
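The recursion above is straightforward to run numerically. A sketch that iterates p ← f(g(p); ϵ) and locates ϵ^BP by bisection (iteration counts and tolerances here are arbitrary choices):

```python
# Density-evolution sketch for the (d_l, d_r)-regular ensemble over BEC(eps).
def de_converges(eps, dl, dr, iters=20000):
    p = eps
    for _ in range(iters):
        q = 1 - (1 - p) ** (dr - 1)   # check-to-variable erasure probability
        p = eps * q ** (dl - 1)       # variable-to-check erasure probability
    return p < 1e-6

def bp_threshold(dl, dr):
    lo, hi = 0.0, 1.0
    for _ in range(30):               # bisection on eps
        mid = (lo + hi) / 2
        if de_converges(mid, dl, dr):
            lo = mid
        else:
            hi = mid
    return lo

print(round(bp_threshold(3, 6), 3))   # close to the known value 0.4294
```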
Globally optimal decoder performance analysis
x: codeword of a (d_l, d_r) code; y: received channel output.
x̂_i^MAP(y) = argmax_{x_i ∈ F_2} P(x_i | y)
ϵ^MAP = sup{ϵ : lim (1/n) Σ_{i=1}^n Pr(X_i ≠ X̂_i^MAP(Y)) = 0}
Globally optimal decoding is computationally complex, with cost O(2^{nR}).
Graphical interpretation of ϵ^BP(d_l, d_r) via the potential U(x; ϵ)
Potential function: U(x; ϵ) = x g(x) − G(x) − F(g(x); ϵ)
ϵ^BP(d_l, d_r) = sup{ϵ : U′(x; ϵ) > 0 for all x ∈ (0, 1]}
ϵ^POT(d_l, d_r) := sup{ϵ : min_{x ∈ [0,1]} U(x; ϵ) > 0} = ϵ^MAP(d_l, d_r)
Example: (d_l = 3, d_r = 6) uncoupled code. [Plots of U(x; ϵ) at ϵ^BP(d_l, d_r) and at ϵ^POT(d_l, d_r), reprinted from [Yedla et al.]]
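The potential threshold can be evaluated numerically from these definitions. A sketch for the BEC, using the standard integrals G(x) = ∫₀ˣ g and F(x; ϵ) = ϵ x^{d_l}/d_l (the grid size and bisection depth are arbitrary choices):

```python
# Numerical sketch of eps_POT for the (d_l, d_r) ensemble over the BEC, with
# g(x) = 1 - (1-x)^(d_r-1), G(x) = x - (1 - (1-x)^d_r)/d_r,
# F(x; eps) = eps * x^d_l / d_l, and U(x; eps) = x g(x) - G(x) - F(g(x); eps).
def potential(x, eps, dl, dr):
    g = 1 - (1 - x) ** (dr - 1)
    G = x - (1 - (1 - x) ** dr) / dr
    return x * g - G - eps * g ** dl / dl

def pot_threshold(dl, dr, grid=2000):
    xs = [i / grid for i in range(1, grid + 1)]
    lo, hi = 0.0, 1.0
    for _ in range(30):                 # bisect on the sign of min U
        eps = (lo + hi) / 2
        if min(potential(x, eps, dl, dr) for x in xs) > 0:
            lo = eps
        else:
            hi = eps
    return lo

print(round(pot_threshold(3, 6), 3))    # close to eps_MAP(3,6) ~ 0.4881
```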
How to couple L uncoupled (d_l, d_r) codes into a (d_l, d_r, L) coupled code:
1. Make L copies of the (d_l, d_r) uncoupled code, placed at positions 1, 2, 3, …, L.
2. Re-connect edges randomly among w neighboring copies.
Locally optimal decoder performance analysis of (d_l, d_r, L)-coupled codes over BEC(ϵ)
x_i: check-node erasure probability of the i-th uncoupled code. DEMO.
x_i^(t+1) = (1/w) Σ_{k=0}^{w−1} f( (1/w) Σ_{j=0}^{w−1} g(x_{i+j−k}^(t)); ϵ_{i−k} )
Vector form: x^(t+1) = A^T f(Ag(x^(t)); ϵ)  (A is some upper-triangular matrix)
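The coupled recursion above can be sketched directly, treating out-of-range sections as shortened (fixed to zero) and using a single channel parameter ϵ for all sections; the chain length, window, and iteration cap below are illustrative choices.

```python
# Coupled density evolution x_i <- (1/w) sum_k f((1/w) sum_j g(x_{i+j-k}); eps)
# for a (d_l, d_r, L, w) chain over BEC(eps).
def coupled_de(eps, dl, dr, L=30, w=3, max_iters=5000):
    g = lambda x: 1 - (1 - x) ** (dr - 1)
    f = lambda y: eps * y ** (dl - 1)
    x = [eps] * L
    for _ in range(max_iters):
        xs = lambda i: x[i] if 0 <= i < L else 0.0   # shortened boundaries
        x = [sum(f(sum(g(xs(i + j - k)) for j in range(w)) / w)
                 for k in range(w)) / w
             for i in range(L)]
        if max(x) < 1e-8:
            return 0.0                  # the decoding wave cleared the chain
    return max(x)

# Threshold saturation: eps = 0.45 is above the uncoupled BP threshold
# (~0.4294) yet the coupled chain still decodes; eps = 0.49 exceeds
# eps_MAP (~0.4881) and the wave stalls in the bulk.
print(coupled_de(0.45, 3, 6) == 0.0, coupled_de(0.49, 3, 6) > 0.1)
```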
(d_l, d_r, L, w) coupled codes achieve ϵ^POT(d_l, d_r)
Definition: potential function of (d_l, d_r, L, w) coupled codes
U(x; ϵ) = g(x)^T x − G(x) − F(Ag(x); ϵ)
Theorem [Yedla et al., 2012]: x ∈ [0,1]^L is a stationary point of U(x; ϵ) if and only if x is a fixed point of the BP iteration x = A^T f(Ag(x); ϵ). For ϵ < ϵ^POT(d_l, d_r), if w is large enough, the only stationary point of U(x; ϵ) is 0.
History of Coupled LDPC Codes
Felström and Zigangirov constructed coupled codes as LDPC convolutional codes and found they were better than uncoupled codes.
Lentmaier found that (4, 8, L) coupled codes had low error rates at ϵ above the BP threshold of (4, 8) uncoupled codes.
Sridharan et al. found that the BP thresholds of coupled codes are very close to the MAP thresholds of the uncoupled codes.
History of Coupled LDPC Codes (cont.)
Kudekar et al. proved lim_{L→∞} ϵ^BP(d_l, d_r, L) = ϵ^MAP(d_l, d_r).
Yedla et al. proved lim_{w→∞} ϵ^BP(d_l, d_r, L, w) = ϵ^MAP(d_l, d_r) = ϵ^POT(d_l, d_r).
Kudekar et al. reported empirical evidence that this phenomenon also occurs on MBIOS channels other than the BEC.
Kudekar et al. reported empirical evidence that this phenomenon also occurs on channels with memory and on multiple-access channels.
Practical Definition of Coupled Codes via Protographs
Protograph: a protograph is a bipartite graph defined by a set of variable nodes V = {v_1, …, v_n}, a set of check nodes C = {c_1, …, c_m}, and a set of edges E.
Base matrix: a protograph is defined by a base matrix B ∈ F_2^{m×n}, e.g.
      v1 v2 v3 v4 v5 v6
c1:    1  1  1  0  0  0
c2:    0  1  1  1  0  0
c3:    1  0  0  0  1  1
c4:    0  0  0  1  1  1
Protograph Codes
A protograph code is defined by a parity-check matrix H ∈ F_2^{mM×nM}: C := {x ∈ F_2^{nM} : Hx^T = 0}.
The parity-check matrix H is derived from the base matrix B by replacing each 1 with an M × M random binary permutation matrix and each 0 with the M × M zero matrix. The parameter M is referred to as the lifting number.
      v1   v2   v3   v4   v5   v6
c1:   P1   P2   P3   0    0    0
c2:   0    P4   P5   P6   0    0
c3:   P7   0    0    0    P8   P9
c4:   0    0    0    P10  P11  P12
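The lifting operation can be sketched as follows — a pure-Python toy with a small lifting number (in practice M ≈ 1000, as on the later slides):

```python
import random

# Protograph lifting: replace each 1 of the base matrix B by a random M x M
# permutation matrix and each 0 by the M x M zero matrix.
def lift(B, M, seed=0):
    rng = random.Random(seed)
    m, n = len(B), len(B[0])
    H = [[0] * (n * M) for _ in range(m * M)]
    for r in range(m):
        for c in range(n):
            if B[r][c]:
                perm = list(range(M))
                rng.shuffle(perm)            # random permutation block
                for i in range(M):
                    H[r * M + i][c * M + perm[i]] = 1
    return H

# Base matrix of the protograph on this slide
B = [[1, 1, 1, 0, 0, 0],
     [0, 1, 1, 1, 0, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1]]
H = lift(B, M=4)
# The lifted code inherits the protograph degrees: columns 2, rows 3.
print(set(sum(col) for col in zip(*H)), set(sum(row) for row in H))
```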
(d_l, d_r, L) coupled codes
The base matrix B(d_l, d_r, L) is an (L + d_l − 1) × kL binary band matrix, where k := d_r/d_l: each column has d_l consecutive 1s, so that interior rows have weight d_r.
Example B(d_l = 4, d_r = 12, L = 7): a 10 × 21 band matrix in which the i-th group of k = 3 columns has 1s in rows i through i + 3; the first and last d_l − 1 rows have weights 3, 6, 9 and 9, 6, 3 respectively, and the four middle rows have weight 12.
Example parity-check matrix for (d_l = 4, d_r = 12, L = 7)
H consists of blocks P_{i,j}, which are M × M random permutation matrices (M ≈ 1000), arranged in the band pattern of B(4, 12, 7). Denote a codeword by (x_1, …, x_21). The parity-check equations read
Σ_{j=1}^{d_r} P_{i,j} x_{(i−d_l)k+j} = 0  for i = 1, …, 10,  (1)
where terms whose subscripts fall outside {1, …, 21} are dropped.
Rate of (d_l, d_r) uncoupled codes: R(d_l, d_r) = (k−1)/k, k := d_r/d_l.
Rate of (d_l, d_r, L) coupled codes: R(d_l, d_r, L) = (k−1)/k − (d_l−1)/(kL).
Encoding of Coupled Codes
(Same parity-check matrix as the previous slide, (d_l = 4, d_r = 12, L = 7), with parity-check equations (1).)
The computational cost of sequential encoding is O(LM), while the computational cost of the termination is O(M²).
Termination
Termination computes the values of the parity bits x_17, x_18, x_19, x_20, x_21 so that the constraints c_6, c_7, c_8, c_9, c_10 are satisfied. Termination is equivalent to solving the block linear system
[P_6,11  P_6,12  0       0       0     ]   [x_17]   [s_6 ]
[P_7,8   P_7,9   P_7,10  P_7,11  P_7,12]   [x_18]   [s_7 ]
[P_8,5   P_8,6   P_8,7   P_8,8   P_8,9 ] · [x_19] = [s_8 ]
[P_9,2   P_9,3   P_9,4   P_9,5   P_9,6 ]   [x_20]   [s_9 ]
[P_10,1  P_10,2  P_10,3  0       0     ]   [x_21]   [s_10]
where s_i denotes the syndrome of c_i, which contains the information of the previously decided bits. Solving this system involves a 5M × 5M matrix that is not sparse, so the termination requires O(M²) computational cost.
Modified (d_l, d_r, L) codes
Modified (d_l, d_r, L) codes are protograph codes: their protograph is obtained by removing d_l − 2 check nodes from the protograph of the (d_l, d_r, L) code.
The design rate is (kL − (L+1))/(kL) = (k−1)/k − 1/(kL).
The parity-check matrix of the modified (4, 12, 7) code is the matrix (1) with the last d_l − 2 = 2 block rows (c_9 and c_10) removed.
The modified coupled codes still have the same BP threshold as the unmodified coupled codes.
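For concreteness, the three design rates compared on these slides can be tabulated; the formulas are exactly the ones above.

```python
# Design rates from the slides, with k = d_r / d_l:
#   uncoupled: (k-1)/k
#   coupled:   (k-1)/k - (d_l - 1)/(k L)   (rate loss from termination)
#   modified:  (k-1)/k - 1/(k L)           (d_l - 2 check nodes removed)
def design_rates(dl, dr, L):
    k = dr // dl
    return ((k - 1) / k,
            (k - 1) / k - (dl - 1) / (k * L),
            (k - 1) / k - 1 / (k * L))

uncoupled, coupled, modified = design_rates(4, 12, 7)
print(round(uncoupled, 4), round(coupled, 4), round(modified, 4))
# 2/3, 11/21 and 13/21: the modified code recovers most of the rate loss.
```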
Sliding Window Decoding Performance Approaches Full Decoding Performance
Theorem [Iyengar et al., 2012]: Consider windowed decoding (WD) of the (d_l, d_r, w, L) spatially coupled ensemble over the binary erasure channel. For a target erasure rate δ < δ*, there exists a positive integer W_min(δ) such that for window size W ≥ W_min(δ) the WD threshold satisfies a bound of the form
ϵ^WD(d_l, d_r, w, W, δ) ≥ ϵ^BP(d_l, d_r, L = ∞, w) − O(e^{−(B/A)(W/w)}),
where A, B (and further constants C, D, δ*) are strictly positive and depend only on the ensemble parameters d_l, d_r and w.
This theorem says that the WD thresholds approach the BP threshold ϵ^BP(d_l, d_r, L = ∞, w) at least exponentially fast in the ratio W/w of the window size to the coupling width, for a fixed target erasure probability δ < δ*.
Multi-Dimensional Extension
Background: spatially coupled (SC) codes are constructed by coupling many regular low-density parity-check codes in a chain.
Strength: spatial coupling improves the BP threshold.
Drawback: the decoding chain of SC codes aborts when facing burst erasures.
Goal: in this paper, we introduce multi-dimensional (MD) SC codes to circumvent burst erasures.
Definition: 1D-SC (d_l, d_r, L, w, Z) Codes
Consider L sections on the 1D discrete torus Z_L. Put M bit nodes of degree d_l and (d_l/d_r)M check nodes of degree d_r at each section i ∈ {0, …, L−1}. Connect bit nodes at section i and check nodes at section i + j (j ∈ {0, …, w−1}) randomly with Md_l/w edges. Shorten all the bit nodes at sections i ∈ Z = {0, …, w−1}.
Figure: (d_l = 3, d_r = 6, L = 5, w = 2, Z = {0, 1}) code.
We Introduce a Simple Representation of the Connections to Emphasize the Symmetry of the Decoding Messages
(Examples for w = 2 and w = 3.) In the right figure, two nodes are connected if the corresponding sections can be reached within 2 edges in the left figure.
Definition: (d_l, d_r, L, w, Z) 2D-SC Code
Consider L² sections on the 2D torus Z_L². Put M bit nodes of degree d_l and (d_l/d_r)M check nodes of degree d_r at each section i ∈ {0, …, L−1}². Connect bit nodes at section i and check nodes at section i + j (j ∈ {0, …, w−1}²) randomly with Md_l/w² edges. Shorten all the bit nodes at sections i ∈ Z ⊂ Z_L².
Example figures: w = 2 and w = 3.
Density Evolution Predicts the BP Erasure Probability
p_i^(l): erasure probability at section i in round l.
p_i^(0) = 0 (i ∈ Z), ϵ (i ∉ Z), ϵ + α (a burst erasure occurs at section i)
p_i^(l) = p_i^(0) [ (1/w^D) Σ_{j∈{0,…,w−1}^D} ( 1 − ( 1 − (1/w^D) Σ_{k∈{0,…,w−1}^D} p_{i+j−k}^(l−1) )^{d_r−1} ) ]^{d_l−1}
We are interested in the BP threshold ϵ* = sup{ϵ > 0 : lim_{l→∞} P_b^(l) = 0}, with BP erasure probability P_b^(l) = (1/(L^D − #Z)) Σ_{i ∈ Z_L^D \ Z} (p_i^(l))^{d_l}.
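A sketch of this recursion for D = 2 on the torus, with a line-segment zero domain; the section count, window, and channel parameter below are small illustrative choices, not the slides' values.

```python
from itertools import product

# 2D density evolution on the torus Z_L^2 with a line-segment zero domain:
# p_i <- p_i^(0) * [ (1/w^2) sum_j (1 - (1 - (1/w^2) sum_k p_{i+j-k})^(d_r-1)) ]^(d_l-1)
def de_2d(eps, dl, dr, L=11, w=2, iters=1500):
    offs = list(product(range(w), repeat=2))           # {0,...,w-1}^2
    secs = list(product(range(L), repeat=2))
    Z = {(a, b) for a in range(w) for b in range(L)}   # line-segment zero domain
    p0 = {s: 0.0 if s in Z else eps for s in secs}
    p = dict(p0)
    for _ in range(iters):
        new = {}
        for (a, b) in secs:
            outer = 0.0
            for (j0, j1) in offs:
                avg = sum(p[((a + j0 - k0) % L, (b + j1 - k1) % L)]
                          for (k0, k1) in offs) / w ** 2
                outer += 1 - (1 - avg) ** (dr - 1)
            new[(a, b)] = p0[(a, b)] * (outer / w ** 2) ** (dl - 1)
        p = new
        if max(p.values()) < 1e-8:
            return 0.0
    return max(p.values())

# eps = 0.45 is above the uncoupled BP threshold (~0.4294), yet the zero
# domain seeds a decoding wave that clears the whole torus:
print(de_2d(0.45, 3, 6) == 0.0)
```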
2D-SC Codes with a Line-Segment Zero Domain are Equivalent to 1D-SC Codes
Proposition: ϵ*_2D(d_l, d_r, L, w, Z_2D) = ϵ*_1D(d_l, d_r, L, w, Z_1D), where
Z_2D = {(i, j) : i ∈ {0, 1, …, w−1}, j ∈ [L]} and Z_1D = {0, 1, …, w−1}.
Example: the left figure shows the initial constellation p^(0)_{(i1,i2)} of a 2D-SC code. There are L² = 201² sections; the black line in the middle represents the zero domain Z; the red sections are set to ϵ = 0.480. We demonstrate the density evolution of (3,6) 2D-SC codes.
Burst Erasures are a Problem for SC Codes
SC codes have a structure well suited to windowed encoding and decoding; interleaving over all coded bits would destroy that structure. Burst erasures stop the decoding chain. Increasing the coupling number L does not help; increasing the randomized window size w does help, but a large w is not preferable. We assume an interleaver within each section, so that burst erasures can be viewed as the output of BEC(ϵ + α), and we assume burst erasures occur randomly at some O(1) sections.
1D-SC decoding is a chain reaction, like toppling dominoes: if one of the dominoes is too heavy to be toppled, the decoding chain reaction aborts.
Example: (d_l = 3, d_r = 6, L = 101, w = 4, Z = {−1, 0, +1}), BEC(ϵ = 0.48) plus burst erasures at BEC(ϵ + α = 0.52).
[Figure: erasure probability p_i^(l) versus section i at iterations 0, 50, 100, 300, 700, 1100.]
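The domino picture can be reproduced with a 1D averaged density-evolution recursion of the kind used on these slides, placing one heavily erased section in the middle of the chain. The parameters here are scaled-down illustrations, not the slide's exact ones.

```python
# 1D coupled density evolution with one burst section: background BEC(0.2)
# plus a single fully erased section in the middle; chain ends shortened.
def de_burst(eps_bg, eps_burst, dl, dr, L=41, w=2, iters=500):
    ch = [eps_bg] * L
    ch[L // 2] = eps_burst                          # the "heavy domino"
    x = list(ch)
    for _ in range(iters):
        xs = lambda i: x[i] if 0 <= i < L else 0.0  # shortened boundaries
        x = [ch[i] * (sum(1 - (1 - sum(xs(i + j - k) for k in range(w)) / w)
                          ** (dr - 1) for j in range(w)) / w) ** (dl - 1)
             for i in range(L)]
    return max(x)

# Consistent with the necessary condition eps + alpha <= w^D * eps_BP(d_l, d_r)
# stated on the next slide: a fully erased section (eps + alpha = 1) stops the
# w = 2 chain (2 * 0.4294 < 1) but not the w = 3 chain (3 * 0.4294 > 1).
print(de_burst(0.2, 1.0, 3, 6, w=2) > 0.5, de_burst(0.2, 1.0, 3, 6, w=3) < 1e-6)
```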
2D-SC Codes are More Robust to Burst Erasures than 1D-SC Codes
[Figure: BP threshold ϵ* versus the number of coupled neighboring sections (w or w²), comparing 1D and 2D codes with 1 or 2 burst section erasures against the uncoupled BP and MAP thresholds.]
Theorem (necessary condition for decoding a burst erasure): the MD (d_l, d_r, L, w, Z) code of dimension D cannot recover a burst section erasure if ϵ + α > ϵ^BP(d_l, d_r) w^D, where ϵ^BP(d_l, d_r) is the BP threshold of uncoupled (d_l, d_r) codes.
A Square Zero Domain Mitigates the Rate Loss
2D-SC codes have L² sections, but 2D-SC codes with a line-segment zero domain have rate loss O(1/L), which is worse than 1D-SC codes. The rate loss of 2D-SC codes with a square zero domain of size z turns out to be O(1/L²). We demonstrate this by density evolution: the square zero domain size z has to be set large enough for decoding to grow, since in the discrete world the zero-domain surface needs to be flat to grow, and a small zero domain degrades the BP threshold.
A Square Zero Domain Mitigates the Rate Loss (cont.)
[Figure: BP threshold ϵ* (0.40 to 0.49) versus square zero domain size z (10 to 100), showing the 2D coupled BP threshold against the uncoupled MAP threshold.]
Conclusion We proposed multi-dimensional SC codes. 2D-SC codes achieve the same BP threshold as 1D-SC codes when line-segment zero domains are used as the shortened domain. 2D-SC codes are more robust against burst section erasures than 1D-SC codes.
Good points of spatially coupled LDPC codes
Minimum distance and waterfall tradeoff: irregular LDPC codes often suffer from a dilemma between minimum distance and waterfall performance. Spatially coupled LDPC codes have, at the same time, large minimum distance and capacity-achieving waterfall performance.
Universality: Sason et al. proved that, in the limit of large column and row weight, regular LDPC codes achieve the capacity of any MBIOS channel under MAP decoding:
lim_{d_l,d_r→∞} ϵ^MAP(d_l, d_r) = ϵ^Shannon.
Hence, coupled codes are expected to universally achieve capacity.
Drawback of coupled codes
Drawback: infinite column and row weights are required to strictly achieve capacity, which leads to infinite computation per iteration.
Bad news: Sason and Urbanke showed that codes achieving a fraction 1 − ε of capacity under MAP decoding have average degree at least O(ln(1/ε)). This result is valid only for codes without punctured bits.
Capacity-Achieving Codes with Bounded Degree
MacKay-Neal (MN) codes: Murayama et al. and Tanaka and Saad observed, by the replica method, that MN codes are capacity-achieving under MAP decoding with bounded degree for the BSC and the AWGNC.
Hsu-Anastasopoulos (HA) codes: Hsu and Anastasopoulos proved that HA codes achieve the capacity of any MBIOS channel under MAP decoding with bounded degree.
Coupled MN codes achieve the Shannon limit
For any d_l, d_r, d_g ≥ 2, we observed that the BP thresholds of spatially coupled (d_l, d_r, d_g)-MN codes are very close to the Shannon limit for the BEC. Obata et al. proved that ϵ^POT_MN(d_l = 2, d_r, d_g = 2) = ϵ^Shannon_MN(d_l = 2, d_r, d_g = 2).
Protograph of a spatially coupled (8,4,5)-MN code.
Coupled HA codes achieve the Shannon limit
For any d_l, d_r, d_g ≥ 2, we observed that the BP thresholds of spatially coupled (d_l, d_r, d_g)-HA codes are very close to the Shannon limit for the BEC. Obata et al. proved that ϵ^POT_HA(d_l, d_r = 2, d_g = 2) = ϵ^Shannon_HA(d_l, d_r = 2, d_g = 2).
Protograph of a spatially coupled (4,8,5)-HA code.
Uncoupled Codes and How to Couple Them
(d_l, d_r)-regular codes: density evolution updates scalars x^(l) ∈ R (l = 0, 1, …).
(d_l, d_r, L, w)-coupled codes: positions 0, …, L−1 with shortened sections at both ends; density evolution updates vectors x^(l) ∈ R^L (l = 0, 1, …).
(d_l, d_r, d_g, L, w) Coupled Hsu-Anastasopoulos (HA) Codes Achieve Capacity under BP Decoding with Bounded Degree
They achieve the capacity of the BEC with bounded degree (proved). Numerical results show universality over BMS channels, e.g., the BAWGNC [Mitchell et al., '12]. (Protograph: sections 0, …, L−1 with shortened sections at both ends; node degrees d_l, d_r, d_g.)
Block Codes and Rateless Codes
Block codes are designed to generate n fixed-length coded bits from k information bits so that the coding rate R = k/n approaches the capacity C with vanishing decoding error probability.
Rateless codes are designed to generate a limitless sequence of coded bits from k information bits so that receivers can recover the k information bits from an arbitrary n := (1 + α)k/C received bits, with vanishing overhead α and vanishing decoding error probability.
Two preceding rateless codes: 1. Raptor codes. 2. Coupled rateless LDGM codes.
Raptor Codes Have Unbounded Degree
Scheme [Shokrollahi, '03]: a high-rate regular LDPC pre-code; each output bit is the sum of d randomly selected pre-coded bits with probability Ω_d, where the output degree distribution Ω(x) = Σ_d Ω_d x^d is optimized.
Strength: Raptor codes achieve the capacity of BEC(ϵ).
Drawbacks: Raptor codes need an unbounded maximum degree of Ω(x) to achieve capacity, which leads to a computational complexity problem at both encoders and decoders. Being optimized only for the BEC, Raptor codes are not universal for BMS channels.
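The output-generation step can be sketched in a few lines. The degree distribution below is a toy example for illustration, not an optimized Raptor distribution.

```python
import random

# LT-style rateless output generation as described on the slide: each output
# symbol is the XOR of d randomly selected pre-coded bits, with d drawn from
# an output degree distribution Omega = {degree: probability}.
def lt_encode(bits, omega, n_out, seed=1):
    rng = random.Random(seed)
    degs = list(omega)
    probs = [omega[d] for d in degs]
    out = []
    for _ in range(n_out):
        d = rng.choices(degs, probs)[0]          # sample output degree d ~ Omega
        idx = rng.sample(range(len(bits)), d)    # choose d distinct pre-coded bits
        val = 0
        for i in idx:
            val ^= bits[i]
        out.append((sorted(idx), val))           # (neighbor set, coded bit)
    return out

rng = random.Random(0)
precoded = [rng.randint(0, 1) for _ in range(8)]
coded = lt_encode(precoded, {1: 0.1, 2: 0.5, 3: 0.4}, n_out=20)
print(len(coded), max(len(i) for i, _ in coded))
```

An encoder can keep emitting such symbols indefinitely, which is exactly the rateless property; the complexity issue arises when the optimized Ω must contain very large degrees.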
Coupled Rateless LDGM Codes Have Non-zero Error Probability
Scheme [Aref and Urbanke, '11]: a coupled rateless LDGM code with no precode.
Strength: achieves the area threshold of the underlying code; coupled rateless LDGM codes are likely universal.
Drawback: degree-0 information bit nodes exist, which bounds the decoding error probability away from 0.
We Propose Using Coupled HA Codes for Rateless Coding
Do these codes achieve the capacity of the BEC, i.e., achieve zero overhead? If so, how small can the degrees d_l, d_r, d_g be?
Density Evolution Predicts the Error Probability
i: section, l: BP iteration round.
p_i^(l): erasure probability of BP messages sent from bit nodes to check nodes.
s_i^(l): erasure probability of BP messages sent from bit nodes to channel nodes.
p_i^(0) = s_i^(0) = 1 (i ∈ [0, L−1]), 0 (i ∉ [0, L−1])
p_i^(l+1) = (1/w) Σ_{j=0}^{w−1} [1 − (1 − (1/w) Σ_{k=0}^{w−1} p_{i+j−k}^(l))^{d_r−1}]^{d_l−1} Λ(1 − (1−ϵ)(1 − (1/w) Σ_{k=0}^{w−1} s_{i+j−k}^(l))^{d_g−1})
s_i^(l+1) = (1/w) Σ_{j=0}^{w−1} [1 − (1 − (1/w) Σ_{k=0}^{w−1} p_{i+j−k}^(l))^{d_r−1}]^{d_l} Λ(1 − (1−ϵ)(1 − (1/w) Σ_{k=0}^{w−1} s_{i+j−k}^(l))^{d_g−1})
where Λ(x) = exp[−l_avg(1 − x)].
We are interested in the minimum overhead α such that the decoding erasure probability goes to 0. From density evolution, the overhead threshold α*_L is calculated as
α*_L := inf{α > 0 : P_b^(∞) = 0},  P_b^(l) := (1/L) Σ_{i=0}^{L−1} p_i^(l),
l_avg = d_g (1+α) L R_pre(L) / ((1−ϵ)(L + w − 1)).
Because of the rate-loss effect, we need large L to have α*_L → 0. Do the proposed codes achieve zero overhead threshold, lim_{L→∞} α*_L = 0? If so, how small can the degrees d_l, d_r, d_g be?
We observed that the proposed codes achieve zero overhead threshold for some parameters d_l, d_r, d_g. How small can they be?
We numerically observed that the proposed codes achieve zero overhead threshold lim_{L→∞} α*_L = 0 if d_l ≥ 3, d_g ≥ 3 and d_r ≥ 4. For d_l = 2, we numerically observed zero overhead threshold only for some parameter tuples (d_l = 2, d_r, d_g).
[Figure: overhead threshold α*_L versus L (10¹ to 10³) for (d_l, d_r, d_g) = (2,4,2), (2,3,2), (2,3,3), (2,4,3).]
Necessary Condition for Capacity-Achieving Coupled Precoded Regular Rateless Codes
Theorem: capacity-achieving (d_l = 2, d_r, d_g, L, w) precoded rateless codes have to satisfy
d_g ≥ d_r ln(d_r − 1)/(d_r − 2).
This implies that d_r > 2 or d_g > 2 is necessary.
Outline of proof: Taylor expansion around 0 gives p^(l+1) = P_L p^(l) + o(p^(l)), where P_L is an L × L band matrix with
(P_L)_{i,j} = ((w − |i−j|)/w²)(d_r − 1)λ(ϵ) for |i−j| ≤ w, and 0 for |i−j| > w.
Evaluating the spectral radius ρ(P_L) of P_L gives a necessary condition:
1 > ρ(P_L) = max_{x∈R^L, x≠0} x^T P_L x / x^T x ≥ 1^T P_L 1 / 1^T 1 → (d_r − 1) e^{−l_avg(1−ϵ)} as L → ∞.
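The limiting value of the Rayleigh-quotient bound can be checked numerically, using the band matrix (P_L)_{i,j} = ((w − |i−j|)/w²)(d_r − 1)λ(ϵ) with λ(ϵ) = e^{−l_avg(1−ϵ)}, which is our reading of the slide's definition; the parameter values below are arbitrary.

```python
from math import exp

# Check that 1^T P_L 1 / L approaches (d_r - 1) e^{-l_avg (1 - eps)} as L
# grows, for the band matrix (P_L)_{i,j} = (w-|i-j|)/w^2 * (d_r-1) * lambda(eps).
def rayleigh_ones(L, w, dr, lavg, eps):
    lam = exp(-lavg * (1 - eps))                 # lambda(eps)
    total = sum((w - abs(i - j)) / w ** 2 * (dr - 1) * lam
                for i in range(L) for j in range(L) if abs(i - j) <= w)
    return total / L                             # Rayleigh quotient at x = 1

limit = (6 - 1) * exp(-3.0 * (1 - 0.4))          # d_r = 6, l_avg = 3, eps = 0.4
print(abs(rayleigh_ones(400, 3, 6, 3.0, 0.4) - limit) < 0.02)
```

Only the O(w) boundary rows of P_L have row sums below (d_r − 1)λ(ϵ), which is why the all-ones Rayleigh quotient converges to that value.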
Conclusion
Achievements: we empirically observed that precoded rateless codes achieve the capacity over the BEC, and derived a necessary condition for capacity-achieving codes.
Ongoing work: prove that the proposed codes achieve the capacity of the BEC via a potential function (completed); extend to BMS channels and prove universal capacity-achievability.