EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes


Pierre Garrigues, EECS Department, UC Berkeley, garrigue@eecs.berkeley.edu. May 13, 2005.

Abstract. The class of low-density parity-check (LDPC) codes was introduced by Gallager in his thesis in 1961 [2]. Until the invention of turbo codes they were largely neglected, but they have recently received a great deal of interest, due to the possibility of designing LDPC codes within a fraction of a dB of the Shannon limit. In this project we review techniques used to analyze the performance of LDPC codes under a message-passing decoding algorithm, namely belief propagation (BP) [4][5]. This analysis gives us the tools to design good LDPC codes. We first take the simpler example of transmission over the class of binary erasure channels (BEC). We then see how the results for the BEC extend to the much more general class of symmetric channels.

1 Low-Density Parity-Check Codes

1.1 Definition

The class of codes called low-density parity-check codes was first introduced by Gallager in his thesis in 1961 [2]. In the period between Gallager's thesis and the invention of turbo codes, LDPC codes were largely neglected. The interest in these codes stems from their near-Shannon performance, their simple description and implementation, and the possibility of conducting a rigorous theoretical analysis of their properties.

It is helpful to describe low-density parity-check codes in terms of a bipartite graph. We refer to the nodes on the left as the message nodes and to the nodes on the right as the check nodes. A bipartite graph with n nodes on the left and m nodes on the right gives rise to a linear code of dimension k ≥ n − m and block length n in the following way: the bits of a codeword are indexed by the message nodes. A binary vector x = (x_1, ..., x_n) is a codeword if and only if Hx^T = 0, where H is the m × n incidence matrix of the graph, whose rows are indexed by the check nodes and whose columns are indexed by the message nodes. In other words, (x_1, ..., x_n) is a codeword if and only if for each check node the exclusive-or of its incident message nodes is zero.

1.2 Regular and Irregular LDPC codes

Regular LDPC codes are those for which all nodes of the same type have the same degree. Let d_v and d_c be the degrees of the variable nodes and the check nodes respectively. The number of edges emanating from the variable nodes is n d_v. It is equal to the number of edges emanating from the check nodes, which is m d_c. Thus the design rate of a regular LDPC code is

    r = (n − m)/n = 1 − d_v/d_c

For irregular LDPC codes, the degrees of each set of nodes are chosen according to some distribution. We now introduce a notation that will lead to compact descriptions of the main results. We say that a polynomial γ(x) of the form

    γ(x) = Σ_{i≥2} γ_i x^{i−1}

is a degree distribution if γ(x) has nonnegative coefficients and γ(1) = 1.
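As a small illustration, the incidence-matrix view and the edge-counting argument above can be checked directly. The matrix below is a hypothetical toy example (not from the text) of a (d_v = 2, d_c = 4)-regular code with n = 6 and m = 3:

```python
# Hypothetical toy parity-check matrix: every column (variable node) has
# degree d_v = 2 and every row (check node) has degree d_c = 4.
H = [[1, 1, 0, 1, 1, 0],
     [1, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 1, 1]]

def is_codeword(H, x):
    # x is a codeword iff each check node XORs its incident bits to zero
    return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

n, m = len(H[0]), len(H)
dv = sum(row[0] for row in H)   # column weight
dc = sum(H[0])                  # row weight
assert n * dv == m * dc         # both sides count the total number of edges
design_rate = 1 - dv / dc       # equals (n - m)/n = 0.5 here
print(design_rate, is_codeword(H, [1, 0, 0, 1, 0, 0]))
```

Note that the rows of H may be linearly dependent (here the third row is the sum of the first two), which is why the dimension is only bounded below by n − m; the true rate can exceed the design rate.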
Let λ_i and ρ_i be the fractions of edges emanating from variable nodes of degree i and from check nodes of degree i respectively. Then the polynomials λ(x) and ρ(x), defined like γ(x), are degree distributions that specify the λ_i's and ρ_i's. Let E be the total number of edges in the bipartite graph of an irregular LDPC code having degree distribution pair (λ, ρ). Then the number of variable nodes is

    n = Σ_{i≥2} λ_i E / i = E ∫_0^1 λ(x) dx

and similarly the number of check nodes is

    m = Σ_{i≥2} ρ_i E / i = E ∫_0^1 ρ(x) dx

Thus we get the equality

    n / ∫_0^1 λ(x) dx = m / ∫_0^1 ρ(x) dx

and the design rate of the irregular LDPC code is

    r = (n − m)/n = 1 − ∫_0^1 ρ(x) dx / ∫_0^1 λ(x) dx

There is a heuristic reason to consider irregular LDPC codes. A variable node would like to have a high degree in order to obtain as much information as possible from the other nodes. On the other hand, a check node would prefer a low degree in order to be able to correct errors. There is thus a trade-off, since the degree pair (λ, ρ) is coupled for a given design rate. However, it makes sense to have a few variable nodes of high degree that will be corrected first, the variable nodes of lower degree being corrected afterwards by an iterative decoding algorithm.

1.3 The decoding algorithm

Received sequences are decoded using the belief-propagation (BP) algorithm. BP is an iterative algorithm that works as follows. At each iteration, messages are passed along the edges of the graph from variable nodes to their incident check nodes and back. The messages are typically real valued but can also take on the values ±∞, reflecting the situation where some bits are known with absolute certainty. We will use the binary map 0 → 1 and 1 → −1 throughout. Generally, messages sent in the lth iteration will be denoted by m^(l). By m^(l)_vc we denote the message sent from the variable node v to its incident check node c, while by m^(l)_cv we denote the message sent from the check node c to its incident variable node v. Each message represents a log-likelihood ratio

    log [p(x = 1 | y) / p(x = −1 | y)]

where x is the random variable describing the bit value and y is the random variable describing all the information incorporated into this message. Let m_0 be the log-likelihood ratio of the codeword bit x = ±1 associated with the variable node v, conditioned only on the channel observation of this bit.
The update equations for the messages under belief propagation are the following:

    m^(l)_vc = m_0                                        if l = 0
    m^(l)_vc = m_0 + Σ_{c' ∈ C_v \ {c}} m^(l)_c'v         if l ≥ 1     (1)

    m^(l)_cv = γ^{−1} ( Σ_{v' ∈ V_c \ {v}} γ(m^(l−1)_v'c) )     (2)

where C_v is the set of check nodes incident to variable node v, V_c is the set of variable nodes incident to check node c, and γ is the following hyperbolic change of measure:

    γ(x) = (sgn(x), −log tanh(|x|/2))

After a certain number of iterations, we can make a decoding decision based on the value of the log-likelihood ratio. It is important to note that if the neighborhood of size 2l around a node is tree-like, then after l iterations BP computes the maximum a posteriori (MAP) estimate of the bit corresponding to that node, where the known information is all the variable nodes and check nodes in the neighborhood of size 2l. If there are loops, this result no longer holds. There is nevertheless a concentration result showing that the bit-error probability concentrates around that of the tree-like case, so in the sequel we will assume that we are in that particular case.

1.4 The design problem

We consider the problem of transmitting data over a noisy channel. Shannon gives a limit on the rate at which we can transmit reliably over a channel [1]. It is the capacity C, defined as

    C = max_{p(x)} I(X; Y)

where X is the input, Y is the output, and I(X; Y) is the mutual information of X and Y. We look for the distribution of X that maximizes the mutual information between X and Y, so as to maximize our chances of recovering X from Y. Shannon proved that there exist codes with rates arbitrarily close to capacity that achieve a probability of error arbitrarily close to 0. However, his proof gives no practical mechanism for building such codes. We will see in the sequel that LDPC codes with rates close to capacity can achieve a low probability of error.

Given a channel with capacity C, we want to design a code with rate close to capacity achieving a low probability of error. This is not very practical, and we consider instead the dual problem.
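As an aside, the update rules (1) and (2) can be sketched in a few lines, using the equivalent "tanh rule" form of the γ transform (the check update becomes 2·atanh of a product of tanh(·/2) terms). The matrix H and the received LLRs below are hypothetical toy values, not from the text:

```python
import math

# Toy parity-check matrix (hypothetical example)
H = [[1, 1, 0, 1, 1, 0],
     [1, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 1, 1]]

def bp_decode(H, llr0, iterations=10):
    m, n = len(H), len(H[0])
    edges = [(c, v) for c in range(m) for v in range(n) if H[c][v]]
    msg_vc = {e: llr0[e[1]] for e in edges}          # l = 0: m_vc = m0
    for _ in range(iterations):
        # check update (2) in tanh form
        msg_cv = {}
        for (c, v) in edges:
            prod = 1.0
            for (c2, v2) in edges:
                if c2 == c and v2 != v:
                    prod *= math.tanh(msg_vc[(c, v2)] / 2)
            prod = min(max(prod, -0.999999), 0.999999)   # avoid atanh overflow
            msg_cv[(c, v)] = 2 * math.atanh(prod)
        # variable update (1): m0 plus extrinsic check messages
        for (c, v) in edges:
            msg_vc[(c, v)] = llr0[v] + sum(msg_cv[(c2, v)]
                                           for (c2, v2) in edges
                                           if v2 == v and c2 != c)
    # decision from the total log-likelihood ratio (positive LLR -> bit 0)
    totals = [llr0[v] + sum(msg_cv[(c, v2)] for (c, v2) in edges if v2 == v)
              for v in range(n)]
    return [0 if t >= 0 else 1 for t in totals]
```

For instance, bp_decode(H, [-1.0, 2.0, 2.0, 2.0, 2.0, 2.0]) recovers the all-zero codeword even though the first channel observation points the wrong way: the two checks incident to that variable both push its total LLR back to positive.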
Given a class of codes of rate r, we are interested in the worst channel over which we can still transmit data reliably. To do so we need to consider classes of channels that are parameterized, so that we can compare them. For example, the binary erasure channel is parameterized by the probability of erasure, the binary symmetric channel by the crossover probability, and the binary additive white Gaussian noise channel by the noise standard deviation.

More precisely, let C_n(λ, ρ) be the ensemble of LDPC codes with degree distribution pair (λ, ρ) and block length n. We claim that there exists a threshold δ*, i.e. a maximum channel parameter, with the following properties. For any ε > 0 and δ < δ*, there exist a length n(ε, δ) and a number l(ε, δ) such that almost every code in C_n(λ, ρ) with n > n(ε, δ) has bit-error probability smaller than ε, assuming that transmission takes place over the channel with parameter δ and that l(ε, δ) iterations of message-passing decoding are performed. Conversely, for any fixed upper bound on the number of iterations, if transmission takes place over a channel with parameter δ > δ*, then almost every code in C_n(λ, ρ) has bit-error probability larger than some constant γ = γ(δ), where γ does not depend on the number of iterations performed.

δ* is a function of (λ, ρ), so we want to maximize δ* over the pairs (λ, ρ) with a given design rate. Once we find the threshold for a class of LDPC codes, we can compare it to the optimal threshold for a code of the same rate. For example, if we consider codes of rate one half over the binary symmetric channel, the parameter of the worst usable channel is the solution of

    1 − h(δ_opt) = 1/2

which gives δ_opt ≈ 0.110. We can judge the performance of a code by how close δ* is to δ_opt.

2 LDPC codes for the Binary Erasure Channel

In this section we carry out the analysis that determines δ* in the particular case of the binary erasure channel (BEC). We will see in the next section that many of the ideas extend to the much more general class of symmetric channels, which is a rather surprising fact.

2.1 Density evolution for the BEC

Since the BEC is symmetric, we can assume that the all-one codeword was sent. Let δ be the parameter of the BEC, i.e. the probability of an erasure. At the channel output we either get the symbol one or an erasure. We will successfully decode the received sequence to the all-one codeword if the fraction of nodes that are still erasures converges to zero with the number of BP iterations that we run.
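As a quick aside, the optimal-threshold computation from Section 1.4 (solving 1 − h(δ_opt) = r for the BSC) is easy to reproduce numerically; the bisection below is a sketch:

```python
import math

def h(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_optimal_threshold(r, tol=1e-12):
    # h is increasing on [0, 1/2]; bisect for h(delta) = 1 - r
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < 1 - r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(bsc_optimal_threshold(0.5), 4))   # about 0.11 for rate one half
```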
Thus, the parameter that we have to keep track of is x_l, the expected fraction of erasure messages passed in the lth iteration, assuming tree-like neighborhoods for each variable node. Let us now derive a recursion for x_l by writing the update equations of BP. In the case of the BEC, BP takes a more intuitive form, so for clarity we will derive the update equations from these simpler rules rather than from the formulas (1) and (2), which work for every class of channel.

A message from a check node c to a variable node v will be an erasure if at least one of the incoming messages at that check node, other than the one from v to c, is an erasure. Assuming a tree-like neighborhood, and since

the BEC is memoryless, all the messages coming into the check node c are independent. Furthermore, we assumed that the all-one codeword was sent, so they are also identically distributed, the probability of a message being an erasure being x_{l−1}. Conditioning on the degree of c, the message from c to v is an erasure with probability

    Σ_{i≥2} ρ_i (1 − (1 − x_{l−1})^{i−1}) = 1 − ρ(1 − x_{l−1})

A message from a variable node v to a check node c will be an erasure if the received value at that node is an erasure and all incoming messages are erasures. Conditioning on the degree of v, the probability that the message from v to c is an erasure is

    Σ_{i≥2} λ_i δ (1 − ρ(1 − x_{l−1}))^{i−1} = δ λ(1 − ρ(1 − x_{l−1}))

So the expected fraction of erasure messages emitted in the lth iteration is given by

    x_l = x_l(δ) = δ λ(1 − ρ(1 − x_{l−1}))     (3)

2.2 Stability

The threshold δ* is the supremum of all values δ, 0 ≤ δ ≤ 1, such that x_l(δ) converges to zero as l tends to infinity. We remark that 0 is a fixed point of the recursion, and zero has to be stable for x_l(δ) to be able to converge to zero. If we expand the recursion (3) into a Taylor series around zero we get

    x_l = δ λ'(0) ρ'(1) x_{l−1} + O(x_{l−1}^2)

Thus, if λ'(0)ρ'(1) > 1/δ, then there exists a constant ξ = ξ(λ, ρ, δ), ξ > 0, such that x_l(δ) > ξ for all l. On the other hand, if λ'(0)ρ'(1) < 1/δ, then there exists a constant ξ = ξ(λ, ρ, δ), ξ > 0, such that if x_l(δ) ≤ ξ for some l, then x_l(δ) converges to 0 as l tends to infinity. This immediately gives the bound

    δ* ≤ 1 / (λ'(0) ρ'(1))

For the BEC, it can be proven that this bound is tight for any capacity-approaching sequence (λ_n, ρ_n) [7].

2.3 Fixed-point characterization of the threshold for the BEC

The stability condition is a local result around the fixed point zero. However, we can derive global results from the update equation. Let us introduce the

function f such that

    x_l(δ) = f(x_{l−1}, δ) = δ λ(1 − ρ(1 − x_{l−1}))

We have f(0, δ) = 0 and f(1, δ) = δ, and f is clearly an increasing function of x. Since x_0 = δ, we can conclude that x_l(δ) is nonincreasing and therefore converges to a point, call it x_∞, that satisfies x_∞ = f(x_∞, δ). So, if there are no fixed points in the interval (0, δ], then x_l(δ) converges to zero. Thus we have the following fixed-point characterization of the threshold:

    δ* = sup{δ ∈ [0, 1] : x = f(x, δ) has no solution x in (0, δ]}

f(x, δ) is an increasing function of δ, so the threshold is precisely the point at which other fixed points appear between 0 and δ. This explains why the rate of convergence of x_l(δ) is highly variable. Indeed, if we pick a channel parameter arbitrarily close to the threshold, there will be some point x arbitrarily close to f(x, δ). The convergence is slow around this almost fixed point, but once it is passed, x_l(δ) converges to zero more quickly.

2.4 Capacity-achieving sequences

Another formulation of the fixed-point characterization of the threshold is the following: we decode successfully after an initial fraction of erasures δ if

    δ λ(1 − ρ(1 − x)) < x   for x ∈ (0, δ]     (4)

An information-theoretic argument shows that if δ satisfies this condition, then δ can be at most 1 − r, where r is the rate of the code defined by the degree pair (λ, ρ), since it is impossible to communicate reliably at rates above capacity. It is interesting to note that the same result can be derived by elementary integration. We call a sequence (λ_n, ρ_n) of rate r capacity-achieving if for every δ < 1 − r there exists n_0 = n_0(δ) such that for all n ≥ n_0 we have

    δ λ_n(1 − ρ_n(1 − x)) < x   for x ∈ (0, 1]

Such sequences do exist. Let us look at the example of the Heavy-Tail/Poisson sequences, also known as Tornado codes. Fix a parameter D and let

    λ_D(x) = (1/H(D)) Σ_{i=1}^{D} x^i / i

where H(D) is the harmonic sum Σ_{i=1}^{D} 1/i.
Let ρ_D(x) = e^{μ(x−1)}, where μ is the unique solution to the equation

    (1/μ)(1 − e^{−μ}) = ((1 − r)/H(D)) (1 − 1/(D+1))
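Assuming the formulas above, the pair (λ_D, ρ_D) can be constructed numerically and its threshold checked against the fixed-point condition of Section 2.3, which gives δ* = inf over x in (0, 1] of x / λ(1 − ρ(1 − x)). The grid size and the bisection below are implementation choices, not from the text:

```python
import math

def harmonic(D):
    return sum(1.0 / i for i in range(1, D + 1))

def make_tornado(r, D):
    # Heavy-Tail/Poisson pair: lambda_D(x) = (1/H(D)) sum_{i=1}^D x^i / i,
    # rho_D(x) = exp(mu (x - 1)) with mu solving the rate equation above
    HD = harmonic(D)
    lam = lambda x: sum(x ** i / i for i in range(1, D + 1)) / HD
    target = (1 - r) / HD * (1 - 1.0 / (D + 1))
    lo, hi = 1e-9, 1e3                  # (1 - e^-mu)/mu decreases in mu
    for _ in range(200):
        mid = (lo + hi) / 2
        if (1 - math.exp(-mid)) / mid > target:
            lo = mid
        else:
            hi = mid
    mu = (lo + hi) / 2
    rho = lambda x: math.exp(mu * (x - 1))
    return lam, rho

def bec_threshold(lam, rho, grid=20000):
    # fixed-point characterization: delta* = inf_x x / lambda(1 - rho(1 - x))
    return min(x / lam(1 - rho(1 - x))
               for x in (i / grid for i in range(1, grid + 1)))

lam, rho = make_tornado(r=0.5, D=100)
tornado_thr = bec_threshold(lam, rho)          # close to capacity 1 - r = 0.5
regular_thr = bec_threshold(lambda x: x ** 2,  # (3,6)-regular ensemble,
                            lambda x: x ** 5)  # threshold near 0.429
print(tornado_thr, regular_thr)
```

The comparison makes the point of this section concrete: at design rate one half, the irregular Tornado pair with D = 100 is already within about one percent of capacity, while the (3,6)-regular ensemble is stuck near 0.43.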

Since ∫_0^1 λ_D(x)dx = (1/H(D))(1 − 1/(D+1)) and ∫_0^1 ρ_D(x)dx = (1/μ)(1 − e^{−μ}), the sequence (λ_D, ρ_D)_{D≥1} gives rise to codes of rate at least r. Further, we have

    δ λ_D(1 − ρ_D(1 − x)) = δ λ_D(1 − e^{−μx}) ≤ −(δ/H(D)) log(e^{−μx}) = δμx / H(D)

Hence successful decoding is possible if the fraction of erasures is no more than H(D)/μ. This quantity equals

    ((1 − r)/(1 − e^{−μ})) (1 − 1/(D+1)) > (1 − r)(1 − 1/D)

Hence we have that

    (1 − r)(1 − 1/D) λ_D(1 − ρ_D(1 − x)) < x   for x ∈ (0, (1 − r)(1 − 1/D)]

This shows that the sequence is capacity-achieving, since (1 − r)(1 − 1/D) tends to 1 − r as D tends to infinity.

3 Density Evolution for Symmetric Channels

In this section we extend the results obtained for the BEC to the much more general case of symmetric channels.

3.1 A necessary condition on the maximum check node degree

Gallager proved in his thesis a theorem that imposes, at least for the binary symmetric channel (BSC), a necessary condition on LDPC codes that achieve capacity: their maximum check degree d_c must tend to infinity. Although this bounds the performance of LDPC codes away from capacity, the gap is extremely small and converges to zero exponentially fast in d_c. Thus LDPC codes still perform very well on the BSC. Let us state the theorem.

Let C ∈ C_n(λ, ρ) be an LDPC code of rate r and block length n. Let C be used over a BSC with crossover probability δ and assume that each codeword is used with equal probability. If r > 1 − h(δ)/h(p̄), where

    p̄ = (1 + (1 − 2δ)^{d_c}) / 2

then the block- or bit-error probability is bounded away from zero by a constant which is independent of n.

To prove the theorem, consider a systematic encoding of the code. Let (x_1, ..., x_n) be the transmitted codeword and (y_1, ..., y_n) the received sequence. The x_i, 1 ≤ i ≤ nr, are the information digits, and the

x_i, nr < i ≤ n, are the parity checks. Each codeword is used with equal probability, so H(x_1, ..., x_n) = nr. (x_1, ..., x_n) is transmitted over the BSC, so H(y_1, ..., y_n | x_1, ..., x_n) = n h(δ). Let us find an upper bound on the entropy of (y_1, ..., y_n). Since we consider a systematic encoding of the code, each check node in the bipartite graph has exactly one edge going to a variable node y_i, nr < i ≤ n, where y_i is a parity check, and all its other edges are connected to variable nodes y_i, 1 ≤ i ≤ nr. Thus, specifying (y_1, ..., y_n) is equivalent to specifying (y_1, ..., y_{nr}) and the values of the n(1 − r) check nodes. A check node equals 0 if an even number of the variable nodes it is connected to were corrupted during transmission, and this happens with probability p̄. Thus we have the upper bound

    H(y_1, ..., y_n) ≤ nr + n(1 − r) h(p̄)

We have the equality

    I(x_1:n ; y_1:n) = H(y_1:n) − H(y_1:n | x_1:n) = H(x_1:n) − H(x_1:n | y_1:n)

From our upper bound on H(y_1, ..., y_n) we get the inequality

    H(x_1:n | y_1:n) ≥ nr + n h(δ) − nr − n(1 − r) h(p̄) = n(h(δ) − (1 − r) h(p̄)) > 0

which is strictly positive by our assumption that r > 1 − h(δ)/h(p̄). We conclude by Fano's inequality that the block- or bit-error probability is bounded away from zero by a constant which is independent of n.

3.2 Density evolution

In the case of the BEC, if the input was the all-one codeword, the possible outputs of the channel were either the symbol one or an erasure. Thus, to analyze the performance of LDPC codes under a message-passing algorithm, we only needed to keep track of the fraction of erasures, which is a real parameter. In the more general case of symmetric channels, the output of the channel can be discrete and finite, discrete and countable, or even continuous. We thus need to keep track of the density of the log-likelihood ratio of the bit value at each variable node, which makes the problem more complicated than tracking a single real number.
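One simple way to approximate this density tracking is Monte Carlo (sampled) density evolution: represent each density by a population of LLR samples and push the samples through the message update rules. The sketch below does this for a (3,6)-regular ensemble on the binary-input AWGN channel; the operating points σ = 0.8 and σ = 1.0, the sample size and the iteration count are hypothetical choices, not values from the text:

```python
import math, random

def monte_carlo_de(sigma, dv=3, dc=6, iters=40, n=8000, seed=1):
    # Under the all-one codeword, received LLRs are N(2/sigma^2, 4/sigma^2).
    rng = random.Random(seed)
    m = 2.0 / sigma ** 2
    v_msgs = [rng.gauss(m, math.sqrt(2 * m)) for _ in range(n)]  # l=0: m_vc = m0
    for _ in range(iters):
        c_msgs = []
        for _ in range(n):
            # check update: tanh rule over dc - 1 independent draws
            prod = 1.0
            for _ in range(dc - 1):
                prod *= math.tanh(rng.choice(v_msgs) / 2)
            prod = min(max(prod, -0.9999999), 0.9999999)   # atanh overflow guard
            c_msgs.append(2 * math.atanh(prod))
        # variable update: fresh channel LLR plus dv - 1 check draws
        v_msgs = [rng.gauss(m, math.sqrt(2 * m)) +
                  sum(rng.choice(c_msgs) for _ in range(dv - 1))
                  for _ in range(n)]
    return sum(1 for x in v_msgs if x < 0) / n   # empirical error probability

p_good = monte_carlo_de(0.8)   # below the (3,6) threshold: error driven to ~0
p_bad = monte_carlo_de(1.0)    # above the threshold: the error stalls
print(p_good, p_bad)
```

Run on the two sides of the (3,6) BIAWGN threshold (around σ ≈ 0.88), the sampled densities reproduce the threshold behavior described in Section 1.4: the error probability collapses for σ = 0.8 and stays bounded away from zero for σ = 1.0.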
Nevertheless, it remains possible to conduct a rigorous analysis and obtain results similar to those for the BEC. Let us now derive the equivalent of (3), called density evolution as in [4][5]. The symbols P_l and Q_l will be shorthand for the densities of the log-likelihood ratios of the random variables m^(l)_vc and m^(l)_cv respectively. If z is a random variable with distribution φ, we denote by Γ(φ) the distribution of γ(z). With probability ρ_i, the sum in (2) has (i − 1) terms, corresponding to the edges connecting c to all its neighbors other than v. These terms are independent under the tree-like assumption, so in this case the density of m^(l)_cv is Γ^{−1}(Γ(P_{l−1})^{⊗(i−1)}), where ⊗ denotes convolution. By the total probability theorem, the density of m^(l)_cv equals

    Q_l = Γ^{−1}(ρ(Γ(P_{l−1})))

where the polynomial ρ is evaluated with multiplication replaced by convolution.

A recursion for P_l in terms of Q_l is derived similarly. The density of the message passed from check node c to variable node v at round l is equal to Q_l. At v, the incoming messages from all check nodes other than c are added to m_0, the received value for v, and the result is sent back to c. Since by the independence assumption the random variables describing these messages are independent, the density of this message equals

    P_l = P_0 ⊗ λ(Q_l)

where P_0 is the initial density of log-likelihood ratios, assuming that the all-one codeword was transmitted, and λ is again evaluated using convolutions. Combining these last two equations, we obtain the desired recursion for P_l in terms of P_{l−1}:

    P_l = P_0 ⊗ λ(Γ^{−1}(ρ(Γ(P_{l−1}))))     (5)

Zero was a fixed point of (3). The distribution with all its mass at +∞ is a fixed point of (5), and we call it Δ_∞. Since we sent the all-one codeword, we would like the log-likelihood ratio to be very large. Thus, if P_l converges to Δ_∞, we are certain that the bit that was sent was a one.

3.3 Symmetric densities

We say that the density f is symmetric if f(x) = e^x f(−x) for x ∈ R. Let Δ_z be the density such that Δ_z(x) = δ(x − z), with Δ_∞ corresponding to the distribution defined in the previous section. Δ_0 and Δ_∞ are symmetric distributions. Moreover, we claim that in the case of a symmetric channel, P_l is a symmetric density. Let us see why this is true for P_0. Since the channel is symmetric, p(y | x = 1) = p(−y | x = −1), and therefore the log-likelihood ratio L(y) = log [p(y | x = 1) / p(y | x = −1)] satisfies

    L(y) = log [p(y | x = 1) / p(−y | x = 1)] = −L(−y)

Writing P_0 for the density of L(y) under x = 1, and noting that p(y | x = 1) = e^{L(y)} p(−y | x = 1), we get (suppressing the Jacobian factors, which are the same on both sides)

    P_0(−u) = p(L^{−1}(−u) | x = 1) = p(−L^{−1}(u) | x = 1) = e^{−u} p(L^{−1}(u) | x = 1) = e^{−u} P_0(u)

so P_0(u) = e^u P_0(−u), and we conclude that P_0 is a symmetric density.

Let F be the distribution function associated with the density f. We define the error-probability operator

    P_e(f) = (F(0⁻) + F(0)) / 2
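As a concrete check (using the binary-input AWGN channel as an example; the value of σ is arbitrary), the channel LLR density under the all-one codeword is Gaussian with mean m = 2/σ² and variance 2m, and it satisfies the symmetry condition exactly, while P_e(P_0) is just the raw channel error rate Φ(−1/σ):

```python
import math

sigma = 0.8
m = 2.0 / sigma ** 2                 # BIAWGN LLR mean; the variance is 2m

def p0(x):
    # Gaussian density N(m, 2m)
    return math.exp(-(x - m) ** 2 / (4 * m)) / math.sqrt(4 * math.pi * m)

# symmetry: f(x) = e^x f(-x) holds pointwise for this density
for x in (0.5, 1.0, 3.0, 7.5):
    assert abs(p0(x) - math.exp(x) * p0(-x)) < 1e-12

# error-probability operator applied to P_0: the mass below zero, Phi(-1/sigma)
pe0 = 0.5 * math.erfc((1 / sigma) / math.sqrt(2))
print(pe0)   # about 0.106 for sigma = 0.8
```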

For a symmetric density, P_e(f) = 0 if and only if f = Δ_∞. Indeed, P_e(f) = 0 imposes that f(x) = 0 for x ≤ 0, and then by the definition of symmetry f(x) = e^x f(−x) = 0 for x > 0 as well; thus f has no mass at any finite point, i.e. f = Δ_∞. The property of symmetry is preserved by all the operations involved in (5), and since P_0 is symmetric, we conclude that P_l is symmetric for each l. Because P_e is a continuous operator, P_e(P_l) converges to zero if and only if P_l converges to Δ_∞.

We now claim the following lemma: a symmetric density f is uniquely determined by the values {P_e(f ⊗ g_z) : z ≥ 0}, where

    g_z = (1/(1 + e^z)) Δ_{−z} + (e^z/(1 + e^z)) Δ_z

To prove this, one checks that the derivative with respect to z of (1 + e^z) P_e(f ⊗ g_z) allows us to recover the values of F, the distribution associated with f.

3.4 Stability of Δ_∞

To get the stability results for the BEC, we derived an asymptotic expansion of (3) around zero. We are now interested in the stability of Δ_∞, so we make an asymptotic expansion around Δ_∞. The derivation is in [5], and it yields a stability condition similar to the one for the BEC. Let

    r = −log ∫_R P_0(x) e^{−x/2} dx

If λ'(0)ρ'(1) > e^r, then there exists a constant ξ = ξ(λ, ρ, P_0), ξ > 0, such that P_e(P_l) > ξ for all l ∈ N. On the other hand, if λ'(0)ρ'(1) < e^r, then there exists a constant ξ = ξ(λ, ρ, P_0), ξ > 0, such that if P_e(P_l(P_0)) ≤ ξ for some l ∈ N, then P_e(P_l) converges to zero as l tends to infinity.

As for the BEC, this condition gives rise to an upper bound on the threshold. Assume that the channel family is parameterized by the real parameter δ and assume further that there exists a unique number δ̄ such that

    ∫_R P_0^δ̄(x) e^{−x/2} dx = 1 / (λ'(0)ρ'(1))

where P_0^δ is the density of the received values corresponding to the channel with parameter δ. Then δ* ≤ δ̄. It is interesting to note that for cycle codes the stability condition determines the threshold exactly. However, it is not known whether this remains the case for arbitrary capacity-achieving degree distributions.
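For the BIAWGN channel (used here purely as an example), the integral ∫ P_0(x) e^{−x/2} dx can be computed in closed form by completing the square: the integrand is e^{−m/4} times a N(0, 2m) density, which gives e^{−1/(2σ²)}. The numerical check below is a sketch; the integration grid is an implementation choice:

```python
import math

sigma = 0.8
m = 2.0 / sigma ** 2                 # BIAWGN LLR density is N(m, 2m)

def p0(x):
    return math.exp(-(x - m) ** 2 / (4 * m)) / math.sqrt(4 * math.pi * m)

# Riemann sum of int P_0(x) exp(-x/2) dx over a wide grid
dx = 0.001
e_minus_r = sum(p0(k * dx) * math.exp(-k * dx / 2) * dx
                for k in range(-40000, 40001))
closed_form = math.exp(-1 / (2 * sigma ** 2))
# the stability condition requires lambda'(0) rho'(1) < e^r
stability_bound = 1.0 / e_minus_r
print(e_minus_r, closed_form, stability_bound)
```

Note that e^{−r} is the Bhattacharyya constant of the channel, so the stability condition takes the same form as for the BEC with the erasure probability replaced by this constant.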
3.5 Convergence of P_l

In this section we prove that P_l converges to a fixed point of (5) under the independence assumption, i.e. assuming that the neighborhoods of size 2l of the variable nodes are tree-like. We saw that for every l, the density P_l is symmetric.

The set of symmetric densities is compact, so to prove that P_l converges it suffices to prove that the set of cluster points of P_l consists of a single point. The density of the bit log-likelihood ratio, conditioned on all the information contained in P_l together with an independent observation of the same bit passed through a channel such that p(y | x = 1) = g_z(y), is P_l ⊗ g_z. Since this density again corresponds to a maximum a posteriori estimate conditioned on information which is nondecreasing in l, we conclude that P_e(P_l ⊗ g_z) is a nonincreasing sequence and thus converges. Suppose P_l has two distinct cluster points P^1 and P^2. Then we have, for each z ≥ 0, P_e(P^1 ⊗ g_z) = P_e(P^2 ⊗ g_z). By the lemma proved previously, we conclude that P^1 = P^2, which is a contradiction. Thus P_l converges, and by a continuity argument its limit is a fixed point of (5).

3.6 Fixed point characterization of the threshold

We state here a fixed-point characterization of the threshold similar to the one for the BEC. We just saw that for any symmetric density P_0, the sequence of densities P_l converges to a symmetric distribution P which is a fixed-point solution of (5). Therefore, if there does not exist a symmetric density P such that 0 < P_e(P ⊗ g_z) ≤ P_e(P_0 ⊗ g_z) for all z ≥ 0 and such that P is a fixed point of (5), then P_e(P_l) converges to zero as l tends to infinity, or, equivalently, P = Δ_∞.

3.7 Design of capacity-approaching sequences

Unlike for the BEC, no explicit example of a capacity-approaching sequence is known for general symmetric channels. However, using optimization techniques such as differential evolution, it is possible to design very good codes. The non-homogeneous character of the convergence of the message density to Δ_∞ is used to speed up the optimization of the degree pair (λ, ρ). It is actually useful to remember the points at which the convergence was slow, i.e. the almost fixed points of (5).
Then, when we examine a slightly different degree pair, we take those almost fixed points as inputs and check whether we still have convergence to Δ_∞.

References

[1] C. E. Shannon, "A Mathematical Theory of Communication," Bell Syst. Tech. J., July 1948.

[2] R. G. Gallager, Low-Density Parity-Check Codes, Cambridge, MA: MIT Press.

[3] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved Low-Density Parity-Check Codes Using Irregular Graphs," IEEE Trans. Inform. Theory, vol. 47, no. 2, February 2001.

[4] T. J. Richardson and R. L. Urbanke, "The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, February 2001.

[5] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, February 2001.

[6] T. J. Richardson and R. L. Urbanke, Modern Coding Theory, book in progress.

[7] A. Shokrollahi, "Capacity-achieving sequences," in Codes, Systems, and Graphical Models (IMA Volumes in Mathematics and its Applications), Springer-Verlag.


More information

On the Block Error Probability of LP Decoding of LDPC Codes

On the Block Error Probability of LP Decoding of LDPC Codes On the Block Error Probability of LP Decoding of LDPC Codes Ralf Koetter CSL and Dept. of ECE University of Illinois at Urbana-Champaign Urbana, IL 680, USA koetter@uiuc.edu Pascal O. Vontobel Dept. of

More information

An Introduction to Low Density Parity Check (LDPC) Codes

An Introduction to Low Density Parity Check (LDPC) Codes An Introduction to Low Density Parity Check (LDPC) Codes Jian Sun jian@csee.wvu.edu Wireless Communication Research Laboratory Lane Dept. of Comp. Sci. and Elec. Engr. West Virginia University June 3,

More information

Convergence analysis for a class of LDPC convolutional codes on the erasure channel

Convergence analysis for a class of LDPC convolutional codes on the erasure channel Convergence analysis for a class of LDPC convolutional codes on the erasure channel Sridharan, Arvind; Lentmaier, Michael; Costello Jr., Daniel J.; Zigangirov, Kamil Published in: [Host publication title

More information

Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity

Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity Henry D. Pfister, Member, Igal Sason, Member, and Rüdiger Urbanke Abstract We present two sequences of ensembles of non-systematic

More information

Time-invariant LDPC convolutional codes

Time-invariant LDPC convolutional codes Time-invariant LDPC convolutional codes Dimitris Achlioptas, Hamed Hassani, Wei Liu, and Rüdiger Urbanke Department of Computer Science, UC Santa Cruz, USA Email: achlioptas@csucscedu Department of Computer

More information

On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble

On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble 5 Conference on Information Sciences and Systems The Johns Hopkins University March 16 18 5 On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble C.-C. Wang S.R. Kulkarni and H.V. Poor

More information

Revision of Lecture 5

Revision of Lecture 5 Revision of Lecture 5 Information transferring across channels Channel characteristics and binary symmetric channel Average mutual information Average mutual information tells us what happens to information

More information

LDPC Codes. Intracom Telecom, Peania

LDPC Codes. Intracom Telecom, Peania LDPC Codes Alexios Balatsoukas-Stimming and Athanasios P. Liavas Technical University of Crete Dept. of Electronic and Computer Engineering Telecommunications Laboratory December 16, 2011 Intracom Telecom,

More information

Graph-based Codes and Iterative Decoding

Graph-based Codes and Iterative Decoding Graph-based Codes and Iterative Decoding Thesis by Aamod Khandekar In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology Pasadena, California

More information

B I N A R Y E R A S U R E C H A N N E L

B I N A R Y E R A S U R E C H A N N E L Chapter 3 B I N A R Y E R A S U R E C H A N N E L The binary erasure channel (BEC) is perhaps the simplest non-trivial channel model imaginable. It was introduced by Elias as a toy example in 954. The

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation

Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 2, FEBRUARY 2001 657 Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation Sae-Young Chung, Member, IEEE,

More information

Low-Density Parity-Check Codes

Low-Density Parity-Check Codes Department of Computer Sciences Applied Algorithms Lab. July 24, 2011 Outline 1 Introduction 2 Algorithms for LDPC 3 Properties 4 Iterative Learning in Crowds 5 Algorithm 6 Results 7 Conclusion PART I

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects

More information

Graph-based codes for flash memory

Graph-based codes for flash memory 1/28 Graph-based codes for flash memory Discrete Mathematics Seminar September 3, 2013 Katie Haymaker Joint work with Professor Christine Kelley University of Nebraska-Lincoln 2/28 Outline 1 Background

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Lecture 3: Channel Capacity

Lecture 3: Channel Capacity Lecture 3: Channel Capacity 1 Definitions Channel capacity is a measure of maximum information per channel usage one can get through a channel. This one of the fundamental concepts in information theory.

More information

Successive Cancellation Decoding of Single Parity-Check Product Codes

Successive Cancellation Decoding of Single Parity-Check Product Codes Successive Cancellation Decoding of Single Parity-Check Product Codes Mustafa Cemil Coşkun, Gianluigi Liva, Alexandre Graell i Amat and Michael Lentmaier Institute of Communications and Navigation, German

More information

On Achievable Rates and Complexity of LDPC Codes over Parallel Channels: Bounds and Applications

On Achievable Rates and Complexity of LDPC Codes over Parallel Channels: Bounds and Applications On Achievable Rates and Complexity of LDPC Codes over Parallel Channels: Bounds and Applications Igal Sason, Member and Gil Wiechman, Graduate Student Member Abstract A variety of communication scenarios

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Lecture 14 February 28

Lecture 14 February 28 EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Prelude Information transmission 0 0 0 0 0 0 Channel Information transmission signal 0 0 threshold

More information

Asynchronous Decoding of LDPC Codes over BEC

Asynchronous Decoding of LDPC Codes over BEC Decoding of LDPC Codes over BEC Saeid Haghighatshoar, Amin Karbasi, Amir Hesam Salavati Department of Telecommunication Systems, Technische Universität Berlin, Germany, School of Engineering and Applied

More information

Lecture 11: Polar codes construction

Lecture 11: Polar codes construction 15-859: Information Theory and Applications in TCS CMU: Spring 2013 Lecturer: Venkatesan Guruswami Lecture 11: Polar codes construction February 26, 2013 Scribe: Dan Stahlke 1 Polar codes: recap of last

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

X 1 : X Table 1: Y = X X 2

X 1 : X Table 1: Y = X X 2 ECE 534: Elements of Information Theory, Fall 200 Homework 3 Solutions (ALL DUE to Kenneth S. Palacio Baus) December, 200. Problem 5.20. Multiple access (a) Find the capacity region for the multiple-access

More information

ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES. Advisor: Iryna Andriyanova Professor: R.. udiger Urbanke

ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES. Advisor: Iryna Andriyanova Professor: R.. udiger Urbanke ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES RETHNAKARAN PULIKKOONATTU ABSTRACT. Minimum distance is an important parameter of a linear error correcting code. For improved performance of binary Low

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Fountain Uncorrectable Sets and Finite-Length Analysis

Fountain Uncorrectable Sets and Finite-Length Analysis Fountain Uncorrectable Sets and Finite-Length Analysis Wen Ji 1, Bo-Wei Chen 2, and Yiqiang Chen 1 1 Beijing Key Laboratory of Mobile Computing and Pervasive Device Institute of Computing Technology, Chinese

More information

Joint Iterative Decoding of LDPC Codes and Channels with Memory

Joint Iterative Decoding of LDPC Codes and Channels with Memory Joint Iterative Decoding of LDPC Codes and Channels with Memory Henry D. Pfister and Paul H. Siegel University of California, San Diego 3 rd Int l Symposium on Turbo Codes September 1, 2003 Outline Channels

More information

Decomposition Methods for Large Scale LP Decoding

Decomposition Methods for Large Scale LP Decoding Decomposition Methods for Large Scale LP Decoding Siddharth Barman Joint work with Xishuo Liu, Stark Draper, and Ben Recht Outline Background and Problem Setup LP Decoding Formulation Optimization Framework

More information

Decoding Codes on Graphs

Decoding Codes on Graphs Decoding Codes on Graphs 2. Probabilistic Decoding A S Madhu and Aditya Nori 1.Int roduct ion A S Madhu Aditya Nori A S Madhu and Aditya Nori are graduate students with the Department of Computer Science

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

LDPC Code Ensembles that Universally Achieve Capacity under BP Decoding: A Simple Derivation

LDPC Code Ensembles that Universally Achieve Capacity under BP Decoding: A Simple Derivation LDPC Code Ensembles that Universally Achieve Capacity under BP Decoding: A Simple Derivation Anatoly Khina EE Systems Dept., TAU Tel Aviv, Israel Email: anatolyk@eng.tau.ac.il Yair Yona Dept. of EE, UCLA

More information

The Least Degraded and the Least Upgraded Channel with respect to a Channel Family

The Least Degraded and the Least Upgraded Channel with respect to a Channel Family The Least Degraded and the Least Upgraded Channel with respect to a Channel Family Wei Liu, S. Hamed Hassani, and Rüdiger Urbanke School of Computer and Communication Sciences EPFL, Switzerland Email:

More information

On Bit Error Rate Performance of Polar Codes in Finite Regime

On Bit Error Rate Performance of Polar Codes in Finite Regime On Bit Error Rate Performance of Polar Codes in Finite Regime A. Eslami and H. Pishro-Nik Abstract Polar codes have been recently proposed as the first low complexity class of codes that can provably achieve

More information

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018

EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 Please submit the solutions on Gradescope. EE376A: Homework #3 Due by 11:59pm Saturday, February 10th, 2018 1. Optimal codeword lengths. Although the codeword lengths of an optimal variable length code

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 13, 2015 Lucas Slot, Sebastian Zur Shannon s Noisy-Channel Coding Theorem February 13, 2015 1 / 29 Outline 1 Definitions and Terminology

More information

On the Design of Fast Convergent LDPC Codes for the BEC: An Optimization Approach

On the Design of Fast Convergent LDPC Codes for the BEC: An Optimization Approach On the Design of Fast Convergent LDPC Codes for the BEC: An Optimization Approach Vahid Jamali, Yasser Karimian, Student Member, IEEE, Johannes Huber, Fellow, IEEE, and Mahmoud Ahmadian, Member, IEEE 1

More information

Noisy channel communication

Noisy channel communication Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei COMPSCI 650 Applied Information Theory Apr 5, 2016 Lecture 18 Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei 1 Correcting Errors in Linear Codes Suppose someone is to send

More information

Multi-Edge Type LDPC Codes

Multi-Edge Type LDPC Codes 1 Multi-Edge Type LDPC Codes Tom Richardson Senior member, IEEE Rüdiger Urbanke Abstract We introduce multi-edge type LDPC codes, a generalization of the concept of irregular LDPC codes that yields improvements

More information

LECTURE 13. Last time: Lecture outline

LECTURE 13. Last time: Lecture outline LECTURE 13 Last time: Strong coding theorem Revisiting channel and codes Bound on probability of error Error exponent Lecture outline Fano s Lemma revisited Fano s inequality for codewords Converse to

More information

4216 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Density Evolution for Asymmetric Memoryless Channels

4216 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER Density Evolution for Asymmetric Memoryless Channels 4216 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 12, DECEMBER 2005 Density Evolution for Asymmetric Memoryless Channels Chih-Chun Wang, Sanjeev R. Kulkarni, Fellow, IEEE, and H. Vincent Poor,

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Iterative Decoding for Wireless Networks

Iterative Decoding for Wireless Networks Iterative Decoding for Wireless Networks Thesis by Ravi Palanki In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology Pasadena, California

More information

Introducing Low-Density Parity-Check Codes

Introducing Low-Density Parity-Check Codes Introducing Low-Density Parity-Check Codes Sarah J. Johnson School of Electrical Engineering and Computer Science The University of Newcastle Australia email: sarah.johnson@newcastle.edu.au Topic 1: Low-Density

More information

Spatially Coupled LDPC Codes

Spatially Coupled LDPC Codes Spatially Coupled LDPC Codes Kenta Kasai Tokyo Institute of Technology 30 Aug, 2013 We already have very good codes. Efficiently-decodable asymptotically capacity-approaching codes Irregular LDPC Codes

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

P(c y) = W n (y c)p(c)

P(c y) = W n (y c)p(c) ENEE 739C: Adanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 9 (draft; /7/3). Codes defined on graphs. The problem of attaining capacity. Iteratie decoding. http://www.enee.umd.edu/

More information

One Lesson of Information Theory

One Lesson of Information Theory Institut für One Lesson of Information Theory Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

On the Design of Raptor Codes for Binary-Input Gaussian Channels

On the Design of Raptor Codes for Binary-Input Gaussian Channels 1 On the Design of Raptor Codes for Binary-Input Gaussian Channels Zhong Cheng, Jeff Castura, and Yongyi Mao, Member, IEEE Abstract This paper studies the problem of Raptor-code design for binary-input

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

Capacity of a channel Shannon s second theorem. Information Theory 1/33

Capacity of a channel Shannon s second theorem. Information Theory 1/33 Capacity of a channel Shannon s second theorem Information Theory 1/33 Outline 1. Memoryless channels, examples ; 2. Capacity ; 3. Symmetric channels ; 4. Channel Coding ; 5. Shannon s second theorem,

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Quasi-cyclic Low Density Parity Check codes with high girth

Quasi-cyclic Low Density Parity Check codes with high girth Quasi-cyclic Low Density Parity Check codes with high girth, a work with Marta Rossi, Richard Bresnan, Massimilliano Sala Summer Doctoral School 2009 Groebner bases, Geometric codes and Order Domains Dept

More information

6.02 Fall 2011 Lecture #9

6.02 Fall 2011 Lecture #9 6.02 Fall 2011 Lecture #9 Claude E. Shannon Mutual information Channel capacity Transmission at rates up to channel capacity, and with asymptotically zero error 6.02 Fall 2011 Lecture 9, Slide #1 First

More information

THE seminal paper of Gallager [1, p. 48] suggested to evaluate

THE seminal paper of Gallager [1, p. 48] suggested to evaluate IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 11, NOVEMBER 2004 2657 Extrinsic Information Transfer Functions: Model and Erasure Channel Properties Alexei Ashikhmin, Member, IEEE, Gerhard Kramer,

More information

Factor Graphs and Message Passing Algorithms Part 1: Introduction

Factor Graphs and Message Passing Algorithms Part 1: Introduction Factor Graphs and Message Passing Algorithms Part 1: Introduction Hans-Andrea Loeliger December 2007 1 The Two Basic Problems 1. Marginalization: Compute f k (x k ) f(x 1,..., x n ) x 1,..., x n except

More information

LP Decoding Corrects a Constant Fraction of Errors

LP Decoding Corrects a Constant Fraction of Errors LP Decoding Corrects a Constant Fraction of Errors Jon Feldman Columbia University (CORC Technical Report TR-2003-08) Cliff Stein Columbia University Tal Malkin Columbia University Rocco A. Servedio Columbia

More information

arxiv:cs/ v1 [cs.it] 16 Aug 2005

arxiv:cs/ v1 [cs.it] 16 Aug 2005 On Achievable Rates and Complexity of LDPC Codes for Parallel Channels with Application to Puncturing Igal Sason Gil Wiechman arxiv:cs/587v [cs.it] 6 Aug 5 Technion Israel Institute of Technology Haifa

More information

Error-Correcting Codes:

Error-Correcting Codes: Error-Correcting Codes: Progress & Challenges Madhu Sudan Microsoft/MIT Communication in presence of noise We are not ready Sender Noisy Channel We are now ready Receiver If information is digital, reliability

More information

Information Theory - Entropy. Figure 3

Information Theory - Entropy. Figure 3 Concept of Information Information Theory - Entropy Figure 3 A typical binary coded digital communication system is shown in Figure 3. What is involved in the transmission of information? - The system

More information

Bifurcations in iterative decoding and root locus plots

Bifurcations in iterative decoding and root locus plots Published in IET Control Theory and Applications Received on 12th March 2008 Revised on 26th August 2008 ISSN 1751-8644 Bifurcations in iterative decoding and root locus plots C.M. Kellett S.R. Weller

More information

Linear and conic programming relaxations: Graph structure and message-passing

Linear and conic programming relaxations: Graph structure and message-passing Linear and conic programming relaxations: Graph structure and message-passing Martin Wainwright UC Berkeley Departments of EECS and Statistics Banff Workshop Partially supported by grants from: National

More information

Single-Gaussian Messages and Noise Thresholds for Low-Density Lattice Codes

Single-Gaussian Messages and Noise Thresholds for Low-Density Lattice Codes Single-Gaussian Messages and Noise Thresholds for Low-Density Lattice Codes Brian M. Kurkoski, Kazuhiko Yamaguchi and Kingo Kobayashi kurkoski@ice.uec.ac.jp Dept. of Information and Communications Engineering

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

LECTURE 10. Last time: Lecture outline

LECTURE 10. Last time: Lecture outline LECTURE 10 Joint AEP Coding Theorem Last time: Error Exponents Lecture outline Strong Coding Theorem Reading: Gallager, Chapter 5. Review Joint AEP A ( ɛ n) (X) A ( ɛ n) (Y ) vs. A ( ɛ n) (X, Y ) 2 nh(x)

More information

Two-Bit Message Passing Decoders for LDPC. Codes Over the Binary Symmetric Channel

Two-Bit Message Passing Decoders for LDPC. Codes Over the Binary Symmetric Channel Two-Bit Message Passing Decoders for LDPC 1 Codes Over the Binary Symmetric Channel Lucile Sassatelli, Member, IEEE, Shashi Kiran Chilappagari, Member, IEEE, Bane Vasic, arxiv:0901.2090v3 [cs.it] 7 Mar

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions EE/Stat 376B Handout #5 Network Information Theory October, 14, 014 1. Problem.4 parts (b) and (c). Homework Set # Solutions (b) Consider h(x + Y ) h(x + Y Y ) = h(x Y ) = h(x). (c) Let ay = Y 1 + Y, where

More information