Bounds on Achievable Rates of LDPC Codes Used Over the Binary Erasure Channel

Ohad Barak, David Burshtein and Meir Feder
School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel

(To be published in the IEEE Transactions on Information Theory, vol. 50, no. 10, October 2004; submitted July 2002, revised July 2004. This research was supported by the Israel Science Foundation, grant No. 22/-.)

Abstract - We derive upper bounds on the maximum achievable rate of low-density parity-check (LDPC) codes used over the binary erasure channel (BEC) under Gallager's decoding algorithm, given their right-degree distribution. We demonstrate the bounds on the ensemble of right-regular LDPC codes and compare them with an explicit left-degree distribution constructed from the given right-degree.

Index Terms - Low-density parity-check (LDPC) codes, binary erasure channel (BEC), iterative decoding.

I Introduction

Low-density parity-check (LDPC) codes were invented by Gallager in 1963 [1], but were relatively ignored for three decades. Their revival started when they were rediscovered in the mid-1990s, and since then they have attracted a great deal of interest.

The binary erasure channel (BEC), presented by Elias [2] already in 1955, has lately become increasingly popular, as it may be used to model communication over packet-loss networks, such as the Internet. Luby et al. [3] suggested an iterative algorithm for decoding LDPC codes over the BEC and showed that the proposed scheme can approach channel capacity arbitrarily closely. This algorithm is Gallager's soft decoding algorithm [1] when applied to the BEC.

Throughout the paper we shall consider irregular LDPC codes with left and right edge-degree distribution polynomials

    λ(x) = Σ_{i=l_min}^{l_max} λ_i x^{i-1},    ρ(x) = Σ_{i=r_min}^{r_max} ρ_i x^{i-1},

respectively, where λ_i (ρ_i) is the fraction of edges in the bipartite graph that have left (right) degree i. The same code profile may alternatively be described in terms of node-degree distribution polynomials (the degree distribution from the node perspective). The left and right node-degree distribution polynomials are

    λ̃(x) = Σ_{i=l_min}^{l_max} λ̃_i x^i,    ρ̃(x) = Σ_{i=r_min}^{r_max} ρ̃_i x^i,

respectively, where λ̃_i (ρ̃_i) is the fraction of left (right) nodes in the bipartite graph that have degree i. We denote the number of left (variable) nodes by n and the number of right (check) nodes by n - k. The designed rate of the code is

    R = k/n = 1 - ∫ρ/∫λ    (1)

where ∫λ ≜ ∫_0^1 λ(x)dx = Σ_i λ_i/i and ∫ρ ≜ ∫_0^1 ρ(x)dx = Σ_i ρ_i/i.

We shall also consider a BEC with input X taking values from {0, 1} and output Y taking values from {0, ε, 1}. Let the loss fraction (probability of erasure) be δ; thus P{Y = ε | X = 0} = P{Y = ε | X = 1} = δ and P{Y = X} = 1 - δ. The LDPC code is transmitted over the BEC and is iteratively decoded using Gallager's soft-decoding message-passing algorithm, also named belief propagation (which for the BEC assumes a particularly simple form, described, for instance, in [4], [3]).
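As a concrete illustration of the notation above (an addition to this transcription, not part of the paper), the following Python sketch evaluates the design rate (1) and the node-perspective fractions (defined formally in Appendix A, Eq. (18)) for a given pair of edge-perspective degree distributions; the example profile at the bottom is arbitrary and chosen only for the demonstration.

```python
# Sketch: design rate and node-perspective profile of an irregular LDPC
# ensemble, given the edge-perspective distributions lambda(x) and rho(x).
# A distribution is stored as {node degree i: fraction of edges of that degree}.

def edge_integral(dist):
    """int_0^1 of the edge polynomial, i.e. sum_i (fraction_i / i)."""
    return sum(frac / i for i, frac in dist.items())

def design_rate(lam, rho):
    """Design rate R = 1 - (int rho)/(int lambda), Eq. (1)."""
    return 1.0 - edge_integral(rho) / edge_integral(lam)

def node_perspective(dist):
    """Convert edge fractions to node fractions, as in Eq. (18)."""
    norm = edge_integral(dist)
    return {i: (frac / i) / norm for i, frac in dist.items()}

if __name__ == "__main__":
    lam = {2: 0.4, 3: 0.3, 6: 0.3}   # lambda(x) = 0.4x + 0.3x^2 + 0.3x^5 (example)
    rho = {6: 1.0}                   # right-regular, rho(x) = x^5
    print("design rate R       :", design_rate(lam, rho))
    print("node-perspective lam:", node_perspective(lam))
    print("average right degree:", 1.0 / edge_integral(rho))
```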

It has been shown [3] that the belief-propagation decoding scheme can correct a fraction δ of erasures (losses) in the channel if and only if

    δ λ(1 - ρ(1 - x)) < x    ∀x ∈ (0, 1].    (2)

This inequality shall henceforth be called the successful decoding condition.

The paper is organized as follows. In Section II we derive a measure for how close the rate of an LDPC code is to the capacity of the worst BEC over which the code is successfully decodable, given some right-degree distribution. In Section III we use this measure to derive the zero-order bound on the maximal fraction of erasures that can be corrected by a rate-R LDPC code with a given ρ(x). This bound has already been shown before [5, Theorem 1]. In Section IV we derive the first-order bound, which always improves on the zero-order bound. In Section V we derive the second-order bound, which is sometimes better than the first-order bound. In Section VI we demonstrate the bounds on the ensemble of right-regular codes, and compare them to the achievable rate of an actual code sequence. Section VII concludes the paper.

II How close is the code rate to capacity?

In this section we show the relation between the fulfillment of the successful decoding condition and a limit on the code rate. We start with the following lemma:

Lemma 1: For any LDPC code characterized by degree distributions λ(x) and ρ(x) and rate R = 1 - ∫ρ/∫λ, given a BEC with erasure probability δ, define the function

    f(x) = 1 - ρ^{-1}(1 - x) - δ λ(x).    (3)

Then

    (C - R)/(1 - R) = a_r ∫_0^1 f(x) dx    (4)

where ρ^{-1}(·) is the inverse function of ρ(·), a_r = 1/∫ρ is the average degree of the right (check) nodes and C = 1 - δ is the capacity of the channel.

Proof: Note that since λ(x) and ρ(x) are both polynomials with non-negative coefficients that sum to one (and they are not constant), they are strictly monotonically increasing functions of x for 0 ≤ x ≤ 1. Hence they are both invertible over the above range [3]. In addition, they both equal 0 for x = 0, and 1 for x = 1. We define z = ρ^{-1}(1 - x) or, equivalently, ρ(z) = 1 - x; thus dρ(z)/dz = -dx/dz. We get

    ∫_0^1 ρ^{-1}(1 - x) dx = ∫_0^1 z (dρ(z)/dz) dz = [z ρ(z)]_0^1 - ∫_0^1 ρ(z) dz = 1 - ∫ρ.    (5)

Integrating (3) over [0, 1] and substituting (5) and (1) results in

    ∫_0^1 f(x) dx = ∫ρ - δ ∫λ = ∫ρ [1 - (1 - C)/(1 - R)];

dividing both sides by ∫ρ = 1/a_r proves the lemma. ∎

Implication: From this lemma we can derive a measure of how close the rate of the code is to the capacity of the worst BEC for which the code is successfully decodable. This measure is in terms of the average degree of the check (right) nodes, a_r, and of the function f(x) defined here. Note that defining a new variable x = 1 - ρ(1 - x̂) and substituting it into the successful decoding condition, δ λ(1 - ρ(1 - x̂)) < x̂, yields

    δ λ(x) < 1 - ρ^{-1}(1 - x),    (6)

or, equivalently,

    f(x) > 0,    0 < x ≤ 1.    (7)

First we see from (7) that for a successfully decodable code, f(x) is positive over the range (0, 1], verifying that the rate must be less than the capacity. But in addition, via (4), any lower bound on the integral ∫_0^1 f(x)dx provides an upper bound on the maximal fraction of erasures that can be corrected by a code of a given rate, and we see that the closer the non-negative integral ∫_0^1 f(x)dx is to zero, the closer the rate R is to the capacity C.

Consider the following design problem. We wish to design a code so as to maximize its rate R subject to the requirement that it should be successfully decodable over a BEC with erasure probability δ. Thus, given the right-degree distribution ρ(x), it is desired to find a left-degree distribution λ(x) so that the integral of f(x) over the range [0, 1] is minimized, subject to (7). The smaller this integral is, the higher the achievable code rate is. In the sequel we shall see that these constraints on λ(x) limit the maximum achievable rate away from the channel capacity even farther than has been stated thus far.
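The identity (4) is easy to check numerically. The sketch below (an illustration added to this transcription, not taken from the paper) evaluates both sides of (4) for a right-regular profile ρ(x) = x^D, for which ρ^{-1}(u) = u^{1/D}; the left-degree polynomial is an arbitrary example, since (4) is an identity that holds for any profile.

```python
# Sketch: numerical verification of Lemma 1, Eq. (4), for rho(x) = x^D.
import numpy as np

D = 5                                    # rho(x) = x^D, so rho^{-1}(u) = u**(1/D)
delta = 0.25                             # erasure probability
lam_coeffs = {2: 0.5, 3: 0.3, 7: 0.2}    # example edge-perspective lambda

lam = lambda x: sum(c * x**(i - 1) for i, c in lam_coeffs.items())
int_lam = sum(c / i for i, c in lam_coeffs.items())
int_rho = 1.0 / (D + 1)
a_r = 1.0 / int_rho                      # average right degree
R = 1.0 - int_rho / int_lam              # design rate, Eq. (1)
C = 1.0 - delta                          # BEC capacity

f = lambda x: 1.0 - (1.0 - x)**(1.0 / D) - delta * lam(x)   # Eq. (3)

x = np.linspace(0.0, 1.0, 200001)
lhs = (C - R) / (1.0 - R)
rhs = a_r * np.mean(f(x))                # crude quadrature of a_r * int_0^1 f(x) dx
print(lhs, rhs)                          # the two values agree up to quadrature error
```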

III The zero-order bound

Theorem 1 (The zero-order bound) [5, Theorem 1]: The rate of an LDPC code with a fixed right edge-degree distribution ρ(·), which is successfully decodable under iterative decoding over a BEC with an erasure probability δ, never exceeds the following upper bounds:

1. The simple bound

    R ≤ 1 - δ / (1 - (1 - δ)^{a_r})    (8)

2. The tighter bound

    R ≤ 1 - δ / (1 - ρ̃(1 - δ))    (9)

where ρ̃(·) is the corresponding right node-degree distribution polynomial.

In general, the idea behind the zero-order bound is that the expression [1 - ρ^{-1}(1 - x)]/δ exceeds 1 over some interval near x = 1, while λ(x) ≤ 1. Hence the area of the part of the curve that exceeds 1 gives a value by which R is bounded away from C. In Figure 1 we illustrate this value by the painted region. The complete proof of these bounds is deferred to Appendix A.

We note that Burshtein et al. [6] presented an upper bound on the achievable rate of LDPC codes over a general binary-input symmetric-output channel under maximum-likelihood (ML) decoding. Let X ∈ {0, 1} denote the channel input, let Y denote the channel output, and let φ(y) = P(Y = y | X = 0) = P(Y = -y | X = 1). They defined a crossover probability ɛ of the channel (the probability of estimation error in optimal decoding),

    ɛ = (1/2) ∫ min(φ(y), φ(-y)) dy,

and proved that for a right-regular code with right node-degree d = D + 1, the rate is bounded as follows:

    R ≤ 1 - (1 - C)/h(ɛ_d)

where ɛ_d = (1/2)(1 - (1 - 2ɛ)^d) and h(·) is the binary entropy function. Applying this bound to the specific case of a BEC with loss fraction δ, we substitute δ/2 for the crossover probability ɛ and thus have ɛ_d = (1/2)(1 - (1 - δ)^d); hence setting ρ(x) = x^{d-1} in the zero-order bound (9) yields

    R ≤ 1 - (1 - C)/(2ɛ_d).

Recalling that h(x) ≥ 2x in [0, 1/2], the zero-order bound is tighter than the one stated in [6]. This is not surprising, as it considers iterative decoding and not ML decoding, and as it only considers the special case of transmission over a BEC.

Recently, it has been shown in [7] that the zero-order bound also applies to ML decoding over the BEC and is valid for every particular code with the given (right) profile. This result does not necessarily apply to the new bounds we present in Section IV and in Section V.
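For reference, the zero-order bounds (8) and (9) are straightforward to evaluate from ρ(x) alone. The short sketch below is an illustration added to this transcription (not the authors' code); it computes both bounds. For right-regular profiles the two coincide, since then ρ̃(1 - δ) = (1 - δ)^{a_r} exactly, while for irregular ρ(x) the bound (9) is strictly tighter.

```python
# Sketch: zero-order bounds (8) and (9) from a right edge-degree distribution.
def zero_order_bounds(rho, delta):
    """rho: {check degree i: edge fraction rho_i}. Returns (simple (8), tighter (9))."""
    int_rho = sum(c / i for i, c in rho.items())
    a_r = 1.0 / int_rho                                   # average right degree
    rho_node = {i: (c / i) / int_rho for i, c in rho.items()}
    rho_tilde = sum(fr * (1.0 - delta)**i for i, fr in rho_node.items())  # rho~(1-delta)
    simple  = 1.0 - delta / (1.0 - (1.0 - delta)**a_r)    # Eq. (8)
    tighter = 1.0 - delta / (1.0 - rho_tilde)             # Eq. (9)
    return simple, tighter

# Right-regular example, rho(x) = x^5 (check degree 6): the two bounds coincide.
print(zero_order_bounds({6: 1.0}, delta=0.25))
# Irregular example (arbitrary): the tighter bound (9) is below the simple bound (8).
print(zero_order_bounds({4: 0.5, 8: 0.5}, delta=0.25))
```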

IV The first-order bound

The zero-order bound is actually not new: Shokrollahi proved a completely equivalent inequality [5, Theorem 1]. Our approach, however, differs from the one given in that paper, and it enables us to tighten the bound. We start by asserting a few auxiliary lemmas.

Lemma 2: Let ρ(·) be a right degree distribution, let ρ^{-1}(·) be its inverse function, and let

    y(x) = 1 - ρ^{-1}(1 - x).    (10)

Then both y(x) and dy/dx are monotonically increasing.

Proof: Defining ŷ ≜ y, we write

    1 - x = ρ(1 - ŷ) = Σ_{i≥2} ρ_i (1 - ŷ)^{i-1}

and differentiate it with respect to x:

    1 = (dŷ/dx) Σ_{i≥2} (i - 1) ρ_i (1 - ŷ)^{i-2}.

Recalling that the ρ_i are all non-negative and that they sum to 1 (therefore at least one of them is strictly positive), we conclude that dŷ/dx is strictly positive over the range (0, 1). Differentiating the expression once again, we obtain

    0 = (d²ŷ/dx²) Σ_{i≥2} (i - 1) ρ_i (1 - ŷ)^{i-2} - (dŷ/dx)² Σ_{i≥3} (i - 1)(i - 2) ρ_i (1 - ŷ)^{i-3}.

Having proved that dŷ/dx is strictly positive, we conclude that d²y/dx² is also strictly positive over (0, 1), i.e., y(x) is convex. ∎

Lemma 3: Let y(x) be defined as in Lemma 2, with 0 < δ < 1. Suppose that dy/dx|_{x=0} < δ. Then there exists exactly one tangent to y(x) that passes through the point (1, δ), and this tangent touches y(x) within the range (0, 1).

Proof: We recall that y(0) = 0 and that y(1) = 1 > δ. Given that y(x) is convex (Lemma 2) and that its slope is less than δ at x = 0, it is easy to see that there exists a unique tangent to y(x) that passes through (1, δ), and that the tangent point lies within the relevant interval (0, 1). ∎

Notes pertaining to finding the tangent point of Lemma 3 can be found in Appendix B.

Lemma 2 and Lemma 3 allow us to state a tighter bound on the achievable rate:

Theorem 2 (The first-order bound): For any LDPC code successfully decodable under iterative decoding over a BEC with erasure probability δ, if the first derivative of y(x) (as defined in Lemma 2) at x = 0 is less than δ, then the code rate R is bounded as follows.

1. The simple bound

    R ≤ 1 - δ / [1 - (1 - y_a)^{a_r} + (a_r/2)(δ - y_a)(1 - x_a)]    (11)

2. The tighter bound

    R ≤ 1 - δ / [1 - ρ̃(1 - y_a) + (a_r/2)(δ - y_a)(1 - x_a)]    (12)

where (x_a, y_a) is the tangent point defined in Lemma 3, and a_r is the average right degree 1/∫ρ.

Proof: We denote by a(x) the tangent line defined in Lemma 3, scaled by 1/δ:

    a(x) = [(δ - y_a)x + (y_a - δ x_a)] / [δ(1 - x_a)].

We recall that λ(x) is convex for x ∈ (0, 1). In addition, λ(1) = 1 = a(1), and δ λ(x_a) < y(x_a) = y_a, i.e., λ(x_a) < a(x_a). We conclude that λ(x) < a(x) for x ∈ [x_a, 1).

Recalling (3), we obtain

    f(x) ≥ 1 - ρ^{-1}(1 - x) - δ a(x),    x_a ≤ x ≤ 1.

Thus we have

    ∫_0^1 f(x) dx ≥ ∫_{x_a}^1 [ 1 - ρ^{-1}(1 - x) - ((δ - y_a)x + (y_a - δ x_a))/(1 - x_a) ] dx
                  (a)= (1 - x_a) y_a + ρ̃(ρ^{-1}(1 - x_a)) ∫ρ - (1 - x_a)(δ + y_a)/2
                  = ρ̃(1 - y_a) ∫ρ - (1 - x_a)(δ - y_a)/2

where (a) follows by (20) in Appendix A. The tighter bound (12) now follows from Lemma 1 after rewriting (4) as

    R = 1 - δ / (1 - a_r ∫_0^1 f(x) dx),    (13)

whereas the simple bound (11) then follows from (21) in Appendix A. ∎

Note that the above expressions hold for any point (x_a, y_a) on the curve y(x). Thus it is easy to see that the zero-order bound is a special case of the first-order bound, obtained when y_a = δ. However, (x_a, y_a) is chosen to be the above-mentioned tangent point in order to obtain the tightest bound on R, and it is easy to see that this choice necessarily improves on the zero-order bound. The painted area in Figure 2 denotes the difference between the zero-order and first-order lower bounds on ∫_0^1 f(x)dx.

Note: In the last theorem we excluded the case in which the first derivative of y(x) at x = 0 is at least δ. In this case, simply choosing λ(x) = x meets the condition for successful decoding (7). As this choice maximizes ∫λ given the fixed right-degree distribution ρ(x), it yields the highest code rate possible, which in this case is R = 1 - ∫ρ/∫λ = 1 - 2∫ρ. From a design point of view this is not an interesting case, and the resulting code will tolerate a higher erasure probability than the considered one. Practically, it means that ρ(x) has been chosen ineffectively, as it is not suited to the relatively low value of δ; it could have been chosen such that the achievable rate would have been higher and closer to the capacity. This can also be seen as follows. It is easy to verify that dy/dx|_{x=0} = 1/ρ'(1). On the other hand, the stability condition [8], which is a necessary condition for successful iterative decoding, states that δ λ'(0) ρ'(1) < 1 should hold. It is also known [8] that for any sequence of capacity-achieving degree distributions over the BEC, the stability condition becomes tight. Thus, as λ'(0) < 1 for these capacity-achieving ensembles, dy/dx|_{x=0} = 1/ρ'(1) ≈ δ λ'(0) < δ.
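For right-regular codes the tangent point of Lemma 3 has the closed form derived in Appendix B, which makes the first-order bound easy to evaluate. The following sketch (an illustration added to this transcription, not the authors' code) computes the tighter first-order bound (12) for ρ(x) = x^D and compares it with the zero-order bound (9); it requires δ > 1/D, as assumed in Theorem 2.

```python
# Sketch: first-order bound (12) for a right-regular code rho(x) = x^D,
# using the closed-form tangent point of Appendix B.
def first_order_bound(D, delta):
    assert delta * D > 1.0, "Theorem 2 requires dy/dx(0) = 1/D < delta"
    u   = (1.0 - delta) * D / (D - 1.0)      # u = 1 - y_a, the root of Eq. (22)
    y_a = 1.0 - u
    x_a = 1.0 - u**D                         # Eq. (23)
    a_r = D + 1.0                            # average right degree
    rho_tilde = u**(D + 1)                   # node-perspective rho~(1 - y_a)
    denom = 1.0 - rho_tilde + 0.5 * a_r * (delta - y_a) * (1.0 - x_a)
    return 1.0 - delta / denom               # Eq. (12)

def zero_order_bound(D, delta):              # Eq. (9) for rho(x) = x^D
    return 1.0 - delta / (1.0 - (1.0 - delta)**(D + 1))

for D in (5, 10, 20):
    print(D, zero_order_bound(D, 0.25), first_order_bound(D, 0.25))
```

For δ = 0.25 and D = 5 this gives a first-order bound of about 0.657, compared with a zero-order bound of about 0.696 and a channel capacity of 0.75.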

V The second-order bound

Before stating the second-order bound, we need the following lemma.

Lemma 4: Let λ(x), ρ(x) be the degree distributions of an LDPC code used over a BEC. Let a = (1/δ) dy/dx|_{x=0} (where y(x) is defined above). Suppose that a < 1 and that the code is successfully decodable over a BEC with erasure probability δ. Let b(x) = ax + (1 - a)x². Then λ(x) ≤ b(x) for all x ∈ [0, 1].

Proof: We prove the lemma by contradiction. Let us assume that the statement of the lemma does not hold. Thus there exists some point x₀ ∈ (0, 1) such that λ(x₀) > b(x₀). We recall that λ(0) = b(0) = 0 and that λ(1) = b(1) = 1. Hence there exist x₁ ∈ (0, x₀) and x₂ ∈ (x₀, 1) such that

    λ'(x₁) > b'(x₁),    λ'(x₂) < b'(x₂).    (14)

The derivative of the parabola b(x) is the straight line

    b'(x) = a + 2(1 - a)x.

As λ'(0) ≤ a (the stability condition, [5], [8]), we have that

    λ'(0) ≤ b'(0).    (15)

Recalling that all the derivatives of λ(x) are positive, monotonically increasing and convex in (0, 1], the following must hold:

    λ'(x₁) (a)= λ'(α x₂)
           (b)≤ α λ'(x₂) + (1 - α) λ'(0)
           (c)≤ α b'(x₂) + (1 - α) b'(0)
           (d)= b'(α x₂) = b'(x₁)

where in (a) we defined α = x₁/x₂ < 1, (b) follows by the convexity of λ'(x), (c) follows from (14) and (15), and (d) holds as b'(x) is a straight line. This is impossible, as it contradicts (14). ∎

Theorem 3 (The second-order bound): Consider an LDPC code with left and right degree distributions λ(x), ρ(x), respectively, successfully decodable under iterative decoding over a BEC with erasure probability δ. Let b(x) be the parabola defined in Lemma 4, and let x_b be the minimum value of x̂_b such that δ b(x) ≤ y(x) for all x ∈ [x̂_b, 1], where y(x) is defined in Lemma 2. Let a(x) be the (1/δ)-scaled tangent line defined in the proof of Theorem 2. We define c(x) to be the convex envelope of a(x) over the interval [x_a, 1] and of the parabola b(x) over the interval [x_b, 1]. Suppose further that dy/dx|_{x=0} < δ. Then

    R ≤ 1 - δ / (1 - a_r ∫_α^1 f_b(x) dx)    (16)

where f_b(x) = y(x) - δ c(x), α = min(x_a, x_b) and a_r denotes the average right degree 1/∫ρ.

Note: The existence of an interval [x_b, 1] in which δ b(x) ≤ y(x) is guaranteed, as δ b(1) = δ < 1 = y(1).

The proof of the theorem follows from (13), recalling that λ(x) is a convex function that satisfies both λ(x) ≤ b(x) and λ(x) ≤ a(x) (the latter on [x_a, 1]), and using the definition of a convex envelope (e.g., [9, p. 25]). The painted area in Figure 3 denotes the difference between the first-order and second-order lower bounds on ∫_0^1 f(x)dx.

It is not guaranteed that the second-order bound improves on the first-order bound; this depends on whether the parabola b(x) lies below a(x) over some interval. Otherwise, c(x) = a(x) and the two bounds coincide. As we demonstrate in the next section, both cases are possible.
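Evaluating Theorem 3 exactly requires the convex envelope c(x). The sketch below is a simplified numerical variant added to this transcription (not the paper's exact procedure): it replaces c(x) by the pointwise minimum of a(x) and b(x) on [x_a, 1] and by b(x) alone on [0, x_a], and clips the integrand at zero. Since λ(x) lies below this pointwise upper bound wherever it is used, the resulting upper bound on R is still valid; it is at least as tight as the first-order bound, but may be slightly weaker than Theorem 3 itself.

```python
# Sketch: a simplified second-order-style bound for rho(x) = x^D
# (pointwise minimum instead of the convex envelope of Theorem 3).
import numpy as np

def second_order_style_bound(D, delta, grid=200001):
    assert delta * D > 1.0                            # needed by Lemmas 3 and 4
    y = lambda x: 1.0 - (1.0 - x)**(1.0 / D)          # y(x) of Lemma 2
    u = (1.0 - delta) * D / (D - 1.0)                 # tangent point (Appendix B)
    y_a, x_a = 1.0 - u, 1.0 - u**D
    a_line = lambda x: ((delta - y_a) * x + (y_a - delta * x_a)) / (delta * (1.0 - x_a))
    a_par = 1.0 / (delta * D)                         # a = (1/delta) dy/dx(0)
    b_par = lambda x: a_par * x + (1.0 - a_par) * x**2   # parabola of Lemma 4
    x = np.linspace(0.0, 1.0, grid)
    upper = np.where(x >= x_a, np.minimum(a_line(x), b_par(x)), b_par(x))  # >= lambda(x)
    integrand = np.maximum(0.0, y(x) - delta * upper)    # pointwise lower bound on f(x)
    I = np.mean(integrand)                               # ~ int_0^1 of the lower bound
    a_r = D + 1.0
    return 1.0 - delta / (1.0 - a_r * I)

print(second_order_style_bound(D=5, delta=0.25))
```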

VI Examples: right-regular codes

We examined the behavior of these bounds for right-regular codes, and compared them to the rate achieved by some specific code profiles. The code profile we consider is a variant of the right-regular sequence introduced in [5], [10].

For a given right degree d = D + 1 (i.e., right generator polynomial ρ(x) = x^D) and erasure probability δ, we set the left generator polynomial to

    λ(x) = Σ_{i=1}^{I} a_i x^i + b x^{I+1}    (17)

where the a_i are the first I Taylor coefficients of the function (1/δ)(1 - (1 - x)^{1/D}) (which are all positive), I is the highest integer such that Σ_{i=1}^{I} a_i ≤ 1, and b is set such that λ(1) = 1. The rate R can then be calculated by R = 1 - ∫ρ/∫λ. Since (2) can be written as λ(x) < (1/δ)(1 - ρ^{-1}(1 - x)) for all x ∈ (0, 1], it can easily be seen that the above sequence meets this requirement by definition. It has been shown that the designed rate of this sequence approaches the capacity as D → ∞.

Figure 4 shows the maximum achievable rate of right-regular codes according to each of the bounds versus the right degree, for two values of the loss fraction δ. The upper bounds are also compared with the rate achieved by the code sequence defined in (17); this sequence produced higher rates than the code sequence in [5]. The figure demonstrates that while all bounds approach the channel capacity C = 1 - δ as D → ∞, the first-order bound is below the zero-order bound for all right degrees. We can also see that the second-order bound improves on the first-order bound only for small values of D (for larger values of D, where the second-order bound does not improve, we omitted it from the figure). One may observe that the improvement of our new bounds over the previously known bound (the zero-order bound) is substantial for low right degrees, exactly where achievable rates are farthest from channel capacity, and that they are very close to the achievable rate of the code sequence defined in (17).
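The profile (17) is easy to generate numerically: the Taylor coefficients of (1/δ)(1 - (1 - x)^{1/D}) satisfy a_1 = 1/(δD) and a_{k+1} = a_k (k - 1/D)/(k + 1), and are all positive. The sketch below (an illustration added to this transcription, consistent with the construction described above but not the authors' original code) builds λ(x) of (17) and evaluates its design rate.

```python
# Sketch: the left generator polynomial of Eq. (17) for rho(x) = x^D and
# erasure probability delta, and its design rate R = 1 - int(rho)/int(lambda).
def profile_and_rate(D, delta, max_terms=100000):
    a_k = 1.0 / (delta * D)          # a_1: first Taylor coefficient of (1/delta)(1-(1-x)^(1/D))
    coeffs, total, k = [], 0.0, 1
    while total + a_k <= 1.0 and k <= max_terms:
        coeffs.append(a_k)           # coefficient of x^k
        total += a_k
        a_k *= (k - 1.0 / D) / (k + 1.0)   # a_{k+1} = a_k (k - 1/D)/(k + 1)
        k += 1
    I = len(coeffs)                  # highest I with sum_{i<=I} a_i <= 1
    b = 1.0 - total                  # coefficient of x^(I+1), so that lambda(1) = 1
    int_lam = sum(c / (i + 1.0) for i, c in enumerate(coeffs, start=1)) + b / (I + 2.0)
    int_rho = 1.0 / (D + 1.0)
    return I, b, 1.0 - int_rho / int_lam

for D in (5, 10, 20):
    I, b, R = profile_and_rate(D, delta=0.25)
    print("D =", D, " I =", I, " rate =", round(R, 4), " capacity =", 0.75)
```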

VII Conclusion

We derived improved upper bounds on the rate of LDPC codes used over the BEC under iterative decoding, for which reliable communication is achievable, given their right-degree distribution. While our novel bounds are not provably tight, they are practically tight. This was demonstrated for several right-regular degree profiles, by showing that they are close to the rate of an actual left-degree profile.

Appendices

A Proof of Theorem 1

The node-perspective left and right degree distributions are the sets {λ̃_i}, {ρ̃_i}, respectively, where

    λ̃_i = (λ_i/i)/∫λ ;    ρ̃_i = (ρ_i/i)/∫ρ,    (18)

or, by means of the polynomials,

    λ̃(x) = ∫_0^x λ(x̂)dx̂ / ∫λ ;    ρ̃(x) = ∫_0^x ρ(x̂)dx̂ / ∫ρ.    (19)

Claim: For any 0 < α < 1 and β = 1 - ρ^{-1}(1 - α),

    ∫_α^1 [1 - ρ^{-1}(1 - x)] dx = (1 - α)β + ρ̃(1 - β) ∫ρ.    (20)

Proof: Let z(x) = ρ^{-1}(1 - x); thus z(1) = 0, z(α) = 1 - β and dx/dz = -dρ(z)/dz. We obtain

    ∫_α^1 ρ^{-1}(1 - x) dx = ∫_0^{1-β} z (dρ(z)/dz) dz = [z ρ(z)]_0^{1-β} - ∫_0^{1-β} ρ(z) dz
                           (a)= (1 - β)(1 - α) - ρ̃(1 - β) ∫ρ,

where in (a) we utilize (19). Subtracting this from ∫_α^1 dx = 1 - α yields (20). ∎

We now prove the zero-order bound, starting with the tighter one (9) and then proceeding to (8).

Proof (of Theorem 1): We start with (3) and note that λ(x) ≤ 1. Thus

    f(x) ≥ 1 - ρ^{-1}(1 - x) - δ,

in which the right-hand side is non-negative for x ≥ 1 - ρ(1 - δ). As f(x) is non-negative over the whole range [0, 1], we have

    ∫_0^1 f(x) dx ≥ ∫_{1-ρ(1-δ)}^1 [1 - ρ^{-1}(1 - x) - δ] dx
                  (a)= [ρ(1 - δ) δ + ρ̃(1 - δ) ∫ρ] - δ ρ(1 - δ)
                  = ρ̃(1 - δ) ∫ρ,

where (a) follows by (20) with α = 1 - ρ(1 - δ), for which β = δ. Now (9) follows by (4). To prove the weaker bound (8) we recall [5] that, since the ρ̃_i form a probability distribution, we have

    ρ̃(x) = Σ_i ρ̃_i x^i ≥ x^{Σ_i i ρ̃_i} = x^{1/∫ρ} = x^{a_r},    (21)

which leads to the desired bound. ∎

B Finding the Tangent Point of Lemma 3

In this appendix we show how the tangent point of Theorem 2 can be found by solving a polynomial equation. Presenting (10) as x = 1 - ρ(1 - y), the tangent point (x_a, y_a) is determined by the two equations

    x_a = 1 - ρ(1 - y_a),
    (1 - x_a)/(δ - y_a) = ρ'(1 - y_a),

which, by substituting u = 1 - y_a, can be presented as

    ρ(u) = (δ - 1 + u) ρ'(u)    (22)

and

    x_a = 1 - ρ(u),    y_a = 1 - u.    (23)

In order to calculate the tangent point (x_a, y_a), one first solves (22), which is a polynomial equation in u, and finds its unique solution over (0, 1); the existence of this unique solution has already been proved in Lemma 3. The tangent point is then calculated using (23). In the particular case of the right-regular codes discussed in Section VI, where ρ(x) = x^D and D + 1 is the right degree, there exists a closed-form solution for the tangent point, namely

    x_a = 1 - ((1 - δ)D/(D - 1))^D,    y_a = 1 - (1 - δ)D/(D - 1).
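In the general (not right-regular) case, (22) is a polynomial equation that can be solved numerically. The short sketch below (an illustration added to this transcription, not the authors' procedure) finds the root of (22) by bisection on (1 - δ, 1), where the sign change is guaranteed under the assumption of Lemma 3, and then recovers (x_a, y_a) via (23).

```python
# Sketch: solving Eq. (22) for the tangent point of Lemma 3 by bisection,
# for a general right edge-degree distribution rho given as {degree i: rho_i}.
def tangent_point(rho, delta, tol=1e-12):
    p  = lambda u: sum(c * u**(i - 1) for i, c in rho.items())            # rho(u)
    dp = lambda u: sum(c * (i - 1) * u**(i - 2) for i, c in rho.items())  # rho'(u)
    assert delta * dp(1.0) > 1.0, "Lemma 3 requires dy/dx(0) = 1/rho'(1) < delta"
    g = lambda u: p(u) - (delta - 1.0 + u) * dp(u)   # Eq. (22) rearranged to g(u) = 0
    lo, hi = 1.0 - delta, 1.0                        # g(lo) > 0 > g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    u = 0.5 * (lo + hi)
    return 1.0 - p(u), 1.0 - u                       # (x_a, y_a), Eq. (23)

# Right-regular check against the closed form u = (1-delta) D/(D-1):
print(tangent_point({6: 1.0}, delta=0.25))   # expect x_a ~ 0.2758, y_a = 0.0625
```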

Acknowledgment

The authors would like to thank Simon Litsyn and Avner Dor for some fruitful discussions. They would also like to thank Igal Sason for his constructive comments.

References

[1] R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, Massachusetts, 1963.

[2] P. Elias, "Coding for two noisy channels," in Information Theory, Third London Symposium, London, England, 1955, Butterworth's Scientific Publications.

[3] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient erasure correcting codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 569-584, Feb. 2001.

[4] A. Shokrollahi, "Capacity-achieving sequences," in IMA Volumes in Mathematics and its Applications, B. Marcus and J. Rosenthal, Eds., vol. 123, 2001, pp. 153-166.

[5] M. A. Shokrollahi, "New sequences of linear time erasure codes approaching the channel capacity," in Proceedings of AAECC-13, 1999, vol. 1719 of Lecture Notes in Computer Science, pp. 65-76.

[6] D. Burshtein, M. Krivelevich, S. Litsyn, and G. Miller, "Upper bounds on the rate of LDPC codes," IEEE Transactions on Information Theory, vol. 48, no. 9, pp. 2437-2449, Sept. 2002.

[7] I. Sason and R. Urbanke, "Parity-check density versus performance of binary linear block codes over memoryless symmetric channels," IEEE Transactions on Information Theory, vol. 49, no. 7, pp. 1611-1635, July 2003.

[8] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 619-637, Feb. 2001.

[9] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, second edition, 1993.

[10] P. Oswald and M. A. Shokrollahi, "Capacity-achieving sequences for the erasure channel," IEEE Transactions on Information Theory, vol. 48, no. 12, pp. 3017-3028, Dec. 2002.

List of Figures

1  Zero-order bound
2  First-order bound
3  Second-order bound
4  Bounds on LDPC codes that are right-regular, for two values of the loss fraction δ, compared to the rates achieved by the code profiles defined in (17)

[Figure 1: Zero-order bound. The curve [1 - ρ^{-1}(1 - x)]/δ is plotted versus x for ρ(x) = x^D with D = 5 and δ = 0.25.]

[Figure 2: First-order bound. The curve [1 - ρ^{-1}(1 - x)]/δ and the tangent line a(x) are plotted versus x for ρ(x) = x^D with D = 5 and δ = 0.25.]

[Figure 3: Second-order bound. The curve [1 - ρ^{-1}(1 - x)]/δ, the tangent line a(x) and the parabola b(x) are plotted versus x for ρ(x) = x^D with D = 5 and δ = 0.25.]

[Figure 4: Bounds on LDPC codes that are right-regular, for two values of the loss fraction δ (δ = 0.15, top panel; δ = 0.25, bottom panel), compared to the rates achieved by the code profiles defined in (17). Each panel plots the code rate R versus D for the zero-order, first-order and second-order bounds and for the actual left-degree profile.]