On the Block Error Probability of LP Decoding of LDPC Codes


On the Block Error Probability of LP Decoding of LDPC Codes

Ralf Koetter, CSL and Dept. of ECE, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA, koetter@uiuc.edu
Pascal O. Vontobel, Dept. of EECS, Massachusetts Institute of Technology, Cambridge, MA 02139, USA, pascal.vontobel@ieee.org

Abstract — In his thesis, Wiberg showed the existence of thresholds for families of regular low-density parity-check codes under min-sum algorithm decoding. He also derived analytic bounds on these thresholds. In this paper, we formulate similar results for linear programming decoding of regular low-density parity-check codes.

I. INTRODUCTION

The goal of this paper is to shed some light on the connection between min-sum algorithm (MSA) decoding and the formulation of decoding as a linear program. In particular, we address the problem of bounding the performance of linear programming (LP) decoding with respect to word error rate. The bounds reflect similar analytic bounds for MSA decoding of low-density parity-check (LDPC) codes due to Wiberg [1] and establish the existence of an SNR threshold for LP decoding. While highly efficient and structured computer-based evaluation techniques, such as density evolution (see e.g. [2], [3], [4], [5], [6]), provide excellent bounds on the performance of iterative decoding techniques, to the best of our knowledge the best analytic performance bound in the case of MSA decoding is still the bound given by Wiberg in his thesis, based on the weight distribution of a tree-like neighborhood of a vertex in a graph. A similar bound was also derived by Lentmaier et al. [5]. We derive the equivalent bound for LP decoding of regular LDPC codes.

II. NOTATION AND BASICS

In this paper we are interested in binary LDPC codes, where a binary LDPC code C of length n is defined as the nullspace of a sparse binary parity-check matrix H, i.e. C = {x ∈ F_2^n : H x^T = 0^T}.
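As a concrete sanity check of this nullspace definition, the following sketch enumerates the code defined by a small toy parity-check matrix (the matrix below is our own illustrative example, with constant column weight 2 and row weight 3; it is not from the paper):

```python
import itertools

# Toy parity-check matrix (illustrative only): every column has
# Hamming weight J = 2 and every row has Hamming weight K = 3.
H = [
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
]
M, N = len(H), len(H[0])
J = sum(row[0] for row in H)   # column weight
K = sum(H[0])                  # row weight

# The code C is the nullspace of H over F_2: all x with H x^T = 0 (mod 2).
code = [x for x in itertools.product((0, 1), repeat=N)
        if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]

rate = (len(code).bit_length() - 1) / N   # log2|C| / N
print(len(code), rate)
```

Here |C| = 8, so the rate is 3/6 = 0.5, which is indeed at least the generic lower bound 1 − J/K = 1/3 (the rows of this particular H are linearly dependent over F_2, so the rate exceeds the bound).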
In particular, we focus on the case of regular codes: an LDPC code C is called (J,K)-regular if each column of H has Hamming weight J and each row has Hamming weight K. The rate of a (J,K)-regular code is lower bounded by 1 − J/K. To an M × N parity-check matrix H we can naturally associate a bipartite graph, the so-called Tanner graph T(H). This graph contains two classes of nodes: variable nodes V_v and check nodes V_c. Both variable nodes and check nodes are identified with subsets of the integers: variable nodes are denoted as V_v = {0, 1, ..., N−1} and check nodes are denoted as V_c = {0, 1, ..., M−1}. Whenever we want to express that an integer belongs to the set of variable nodes we write i ∈ V_v; similarly, when an integer belongs to the set of check nodes we write j ∈ V_c. The Tanner graph T(H) contains an edge (i,j) between node i ∈ V_v and node j ∈ V_c if and only if the entry h_{j,i} of H is non-zero. The set of neighbors of a node i ∈ V_v is denoted as ∂i; similarly, the set of neighbors of a node j ∈ V_c is denoted as ∂j. In the following, E = {(i,j) : i ∈ V_v, j ∈ ∂i} = {(i,j) : j ∈ V_c, i ∈ ∂j} will be the set of edges in the Tanner graph T(H).

The convex hull of a set A ⊆ R^n is denoted by conv(A). If A is a subset of F_2^n then conv(A) denotes the convex hull of the set A after A has been canonically embedded in R^n. The inner product between vectors x and y is denoted as ⟨x, y⟩ = Σ_l x_l y_l. Finally, we define B^K as the set of all binary vectors of length K and even weight.

In the rest of this paper we assume that the all-zeros word was transmitted, an assumption without any essential loss of generality because we only consider binary linear codes that are used for data transmission over a binary-input output-symmetric channel. Given a received vector y, we define the vector λ = (λ_0, λ_1, ..., λ_{N−1}) of log-likelihood ratios by

  λ_i = log( P_{Y|X}(y_i|0) / P_{Y|X}(y_i|1) ).

III. LP DECODING

Maximum likelihood (ML) decoding may be cast as a linear program once we have translated the problem into R^N.
To this end we embed the code into R^N by straightforward identification of F_2 = {0, 1} with {0, 1} ⊂ R. In other words, a code C is identified with a subset of {0,1}^N ⊂ R^N.

Maximum Likelihood Decoding
  Minimize:   ⟨λ, x⟩
  Subject to: x ∈ conv(C)

This description is usually not practical, since the polytope conv(C) is typically very hard to describe by hyperplanes or as a convex combination of points. Given a parity-check matrix H, the linear program is relaxed to [7], [8]

LP Decoding
  Minimize:   ⟨λ, x⟩
  Subject to: x ∈ P(H)

Here, P(H) is the so-called fundamental polytope [7], [8], [9], [10], which is defined as

  P(H) = ∩_{j=0}^{M−1} conv(C_j),   where   C_j = {x ∈ F_2^n : h_j x^T = 0 mod 2}

and h_j is the j-th row of H. Since 0 is always a feasible point, i.e. 0 ∈ P(H) holds, zero is an upper bound on the value of the LP in LP decoding. In fact, we can turn this statement around by saying that whenever the value of the linear program equals zero, the all-zeros codeword will be among the solutions to the LP. Thus, motivated by the assumption that the all-zeros codeword was transmitted, we focus our attention on showing that, under suitable conditions, the value of the LP is zero, which implies that the all-zeros codeword will be found as a solution. For simplicity we only consider channels where the channel output is a continuous random variable. In this case a zero value of the LP implies that the zero word is the unique solution with probability one.

The main idea now is to show that the value of the dual linear program is zero. This technique, dubbed a dual witness by Feldman et al. in [11], will then imply correct decoding. First, however, we need to establish the dual linear program. To this end, for each (i,j) ∈ E, we associate the variable τ_{i,j} with the edge between variable node i and check node j in the Tanner graph T(H). In other words, we have a variable τ_{i,j} if and only if the entry h_{j,i} is non-zero. For each j ∈ V_c we define the vector τ_j that collects all the variables {τ_{i,j}}_{i∈∂j}. Also, for each j ∈ V_c, we associate the variable θ_j with the check node j. We have

Dual LP
  Maximize:   Σ_{j=0}^{M−1} θ_j
  Subject to: θ_j ≤ ⟨x, τ_j⟩        for all j ∈ V_c and all x ∈ B^K,
              Σ_{j∈∂i} τ_{i,j} = λ_i   for all i ∈ V_v.

The dual program has a number of nice interpretations. Any θ_j is bounded from above by zero and can only equal zero if the vector τ_j has minimal correlation with the all-zeros codeword.
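The roles of B^K and θ_j can be made concrete with a small sketch (the values of τ_j below are made up for illustration; the facts checked — |B^K| = 2^{K−1}, and θ_j ≤ 0 because the all-zeros vector lies in B^K — follow directly from the definitions above):

```python
import itertools

def B(K):
    """All binary vectors of length K with even Hamming weight."""
    return [b for b in itertools.product((0, 1), repeat=K) if sum(b) % 2 == 0]

# Half of all length-K binary vectors have even weight: |B^K| = 2^(K-1).
for K in (3, 4, 5):
    assert len(B(K)) == 2 ** (K - 1)

# For a check node j, the dual constraint reads theta_j <= <x, tau_j> for
# all x in B^K, so the best feasible choice is the minimum correlation.
def theta(tau_j):
    return min(sum(xi * ti for xi, ti in zip(x, tau_j)) for x in B(len(tau_j)))

tau_j = [0.7, -0.2, 0.1]   # made-up edge values for one check node of degree 3
# Since 0 is in B^K, theta_j can never exceed zero; here it is -0.1 (from x = 011).
assert theta(tau_j) <= 0
print(theta(tau_j))
```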
Thus the dual program will only attain the value zero if we find an assignment to the τ_{i,j} such that the local all-zeros words are among the best words for all j. We are constrained in setting the τ-values by the second, equality, constraint. In the formal dual program the constraint Σ_{j∈∂i} τ_{i,j} = λ_i is an inequality; however, there always exists a maximizing assignment of dual variables that satisfies this condition with equality. In a generalized LDPC code setting, the local code B^K would have to be replaced by the corresponding code.

IV. MSA DECODING

While MSA decoding is not the focus of interest in this paper, it turns out that the MSA lies at the core of the proof technique that we will use. The MSA is an algorithm that is run until a predetermined criterion is reached. With each edge in the graph we associate two messages: one message going towards the check node and one directed towards the variable node. Let the two messages be denoted by µ_{i,j} and ν_{j,i}, respectively, where, as in the case of the single variable τ_{i,j} in the section above, variables are only defined if the entry h_{j,i} is non-zero. The update rules of the MSA are then

Min-Sum Algorithm (MSA)
  Initialize all variables ν_{j,i} to zero.
  For all (i,j) ∈ E, let µ_{i,j} := λ_i + Σ_{j'∈∂i\{j}} ν_{j',i}.
  For all (i,j) ∈ E, let ν_{j,i} := ( Π_{i'∈∂j\{i}} sign(µ_{i',j}) ) · min{ |µ_{i',j}| : i' ∈ ∂j\{i} }.

Rather than the quantity ν_{j,i} we will consider its negative value. Moreover, we keep track of the messages that were sent by message numbers in the superscript. Thus we modify the MSA update equations as

Modified Min-Sum Algorithm (modified MSA)
  Initialize all variables ν^{(1)}_{j,i} to zero.
  For all (i,j) ∈ E, let µ^{(s)}_{i,j} := λ_i − Σ_{j'∈∂i\{j}} ν^{(s)}_{j',i}.
  For all (i,j) ∈ E, let ν^{(s+1)}_{j,i} := −( Π_{i'∈∂j\{i}} sign(µ^{(s)}_{i',j}) ) · min{ |µ^{(s)}_{i',j}| : i' ∈ ∂j\{i} }.

Clearly, the sign change leaves the algorithmic update steps essentially unchanged. Note that, e.g., when all {µ^{(s)}_{i,j}}_{i∈∂j} are non-negative then all {ν^{(s+1)}_{j,i}}_{i∈∂j} will be non-positive. Still, we may, e.g., write µ^{(s)}_{i,j} + Σ_{j'∈∂i\{j}} ν^{(s)}_{j',i} = λ_i, which more closely reflects the structure of the dual program above.

We will need the notion of a computation tree (CT) [1]. We can distinguish two types of CTs, rooted either at a variable node or at a check node. Our CTs will be rooted at check nodes, which is more natural when dealing with the dual program. A CT of depth L consists of all nodes in the universal cover of the Tanner graph that are reachable in L steps. In particular, we will most of the time assume that the leaves in the CT are variable nodes.

Assume we have run the MSA for L iterations, corresponding to a CT of depth L. For the moment let us also assume that the underlying graph has girth larger than 4L. Based on the iterations of the MSA and a fixed CT root node j_0 ∈ V_c we can assign values to the dual variables in the following way. We assign values to τ_{i,j} according to the distance of the edge (i,j) to the root node of the CT: if (i,j) is at distance 2l+1 from the root node j_0 then τ_{i,j} is assigned the value µ^{(L−l)}_{i,j}, and if (i,j) is at distance 2l+2 from the root node j_0 then τ_{i,j} is assigned the value ν^{(L−l)}_{j,i}.^3 Let us denote this assignment to the variables τ_{i,j} as τ(j_0, L).^4 Note that the assignment τ(j_0, L) does not satisfy the constraints of the dual linear program, i.e. by itself it is not dual feasible. In particular, any edge at distance more than 2L from the root is assigned the value 0, and hence at any variable node i at distance more than 2L from the root we do not satisfy the constraint Σ_{j∈∂i} τ_{i,j} = λ_i, unless λ_i happens to be 0. However, we have the following lemma.

Lemma 1: For each j_0 ∈ V_c let an assignment τ(j_0, L) be given based on L iterations of the MSA. The sum τ^{(L)} = Σ_{j_0∈V_c} τ(j_0, L) is a multiple of a dual feasible point. More precisely, for the number T_L = Σ_{l=1}^{L} J[(J−1)(K−1)]^{l−1}, the vector (1/T_L) τ^{(L)} is dual feasible.

Proof: Each variable node i ∈ V_v is part of Σ_{l=1}^{L} J[(J−1)(K−1)]^{l−1} CTs for different root nodes j_0, and so one can verify that we must have Σ_{j∈∂i} τ^{(L)}_{i,j} = ( Σ_{l=1}^{L} J[(J−1)(K−1)]^{l−1} ) λ_i. Using the abbreviation T_L = Σ_{l=1}^{L} J[(J−1)(K−1)]^{l−1}, we see that (1/T_L) τ^{(L)} satisfies the equality constraints of the dual program.

The above lemma gives a structured way to derive dual feasible points for LP decoding from the messages passed during the operation of the MSA. However, these points are not very good, since the overall assignment τ^{(L)} is again dominated by the leaves of the CT, with all the pertaining problems.

---
^3 Edges incident to the root are said to be at distance one. If the distance of the edge (i,j) to the root j_0 is larger than 2L then τ_{i,j} := 0.
^4 The j_0 indicates that the assignment is based on the CT rooted at node j_0.
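The modified MSA recursions can be sketched in code as follows (the toy Tanner graph and LLR values are our own made-up illustration; the sketch also verifies the identity µ^{(s)}_{i,j} + Σ_{j'∈∂i\{j}} ν^{(s)}_{j',i} = λ_i noted above):

```python
import math

# Toy (2,3)-regular parity-check matrix (illustrative only).
H = [[1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 0, 1, 0, 1]]
edges = [(i, j) for j, row in enumerate(H) for i, c in enumerate(row) if c]
d_i = {i: [j for (i2, j) in edges if i2 == i] for i in range(len(H[0]))}  # ∂i
d_j = {j: [i for (i, j2) in edges if j2 == j] for j in range(len(H))}     # ∂j

lam = [0.9, -0.3, 1.2, 0.4, 0.8, -0.1]   # made-up log-likelihood ratios

def modified_msa(lam, steps, init=0.0):
    """Run the modified MSA; 'init' sets the initial check-to-variable messages."""
    nu = {(j, i): init for (i, j) in edges}
    for _ in range(steps):
        mu = {(i, j): lam[i] - sum(nu[(j2, i)] for j2 in d_i[i] if j2 != j)
              for (i, j) in edges}
        # Identity from the text: mu_{i,j} + sum_{j' in ∂i\{j}} nu_{j',i} = lambda_i.
        for (i, j) in edges:
            s = mu[(i, j)] + sum(nu[(j2, i)] for j2 in d_i[i] if j2 != j)
            assert math.isclose(s, lam[i])
        nu = {(j, i): -math.prod(math.copysign(1.0, mu[(i2, j)])
                                 for i2 in d_j[j] if i2 != i)
                      * min(abs(mu[(i2, j)]) for i2 in d_j[j] if i2 != i)
              for (i, j) in edges}
    return mu, nu

mu, nu = modified_msa(lam, steps=3)
```

With a large positive initialization offset (init = −U, as used later in the paper), all µ messages stay strictly positive and all ν messages stay non-positive, matching the sign remark above.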
The problem becomes obvious when we write out the assignment τ^{(L)}_{i,j} as a function of the MSA messages directly. If we perform L steps of iterative decoding, for any edge (i,j) ∈ E we can write

  τ^{(L)}_{i,j} = µ^{(L)}_{i,j} + (J−1)ν^{(L)}_{j,i} + (J−1)(K−1)µ^{(L−1)}_{i,j} + (J−1)^2(K−1)ν^{(L−1)}_{j,i} + (J−1)^2(K−1)^2 µ^{(L−2)}_{i,j} + ··· .

Written in the form of a telescoping sum we get

  τ^{(L)}_{i,j} = µ^{(L)}_{i,j} + (J−1)[ ν^{(L)}_{j,i} + (K−1)[ µ^{(L−1)}_{i,j} + (J−1)[ ν^{(L−1)}_{j,i} + (K−1)[ µ^{(L−2)}_{i,j} + ··· ]]]].

While the above sums show that the dual feasible point can easily be computed alongside the MSA recursions, they also show the problem that the messages µ^{(l)}_{i,j} and ν^{(l)}_{j,i} are weighted exponentially more heavily for small values of l. We will have to attenuate the influence of the leaves in the CTs in order to make interesting statements. To this end, let α be a vector with positive entries of length L, and let a generalized assignment τ(j_0, L, α) to the dual variables be derived from τ(j_0, L) by multiplying the message on each edge at distance 2l+1 or 2l+2 by α_l.^5 In other words, values assigned to edges at distance three or four from the root node are multiplied by α_1, values at distance five and six are multiplied by α_2, etc. Again we can form a multiple of a dual feasible point, as is shown in the next lemma.

Lemma 2: For each j_0 ∈ V_c let an assignment τ(j_0, L, α) be given based on L iterations of the MSA. The sum τ^{(L)}(α) = Σ_{j_0∈V_c} τ(j_0, L, α) is a multiple of a dual feasible point.

Proof: Each variable node i ∈ V_v is part of Σ_{l=1}^{L} J[(J−1)(K−1)]^{l−1} CTs for different root nodes j_0. Because all edges incident to a variable node are attenuated in the same way, one can verify that we must have Σ_{j∈∂i} τ^{(L)}_{i,j}(α) = ( Σ_{l=1}^{L} α_{l−1} J[(J−1)(K−1)]^{l−1} ) λ_i. Using the abbreviation T_L(α) = Σ_{l=1}^{L} α_{l−1} J[(J−1)(K−1)]^{l−1}, we see that (1/T_L(α)) τ^{(L)}(α) satisfies the equality constraints of the dual program.

Optimizing over the vector α gives us some freedom, and we want to choose the vector α appropriately.
First we have to learn more about the dual feasible point that we construct in this way. While we kept the feasibility of an assignment τ^{(L)}(α) by identically scaling the values τ_{i,j} that are adjacent to a variable node in a CT, we scale values τ_{i,j} that are adjacent to check nodes differently. Given a vector α, the dual feasible point may easily be computed together with the messages of the MSA. To this end, define a vector β with components β_l = α_l / α_{l−1}. Writing again the dual variables τ^{(L)}_{i,j}(α) as functions of the µ^{(l)}_{i,j} and ν^{(l)}_{j,i} we get

  τ^{(L)}_{i,j}(α) = µ^{(L)}_{i,j} + (J−1)ν^{(L)}_{j,i} + α_1(J−1)(K−1)µ^{(L−1)}_{i,j} + α_1(J−1)^2(K−1)ν^{(L−1)}_{j,i} + α_2(J−1)^2(K−1)^2 µ^{(L−2)}_{i,j} + ··· .

Written in the form of a telescoping sum we obtain

  τ^{(L)}_{i,j}(α) = µ^{(L)}_{i,j} + (J−1)[ ν^{(L)}_{j,i} + β_1(K−1)[ µ^{(L−1)}_{i,j} + (J−1)[ ν^{(L−1)}_{j,i} + β_2(K−1)[ µ^{(L−2)}_{i,j} + ··· ]]]].

A particularly interesting choice for β_l is β_l = 1/(K−1). The main reason for this choice is given in the following lemma.

Lemma 3: Let K > 2 and fix some j ∈ V_c. Assume the MSA yields messages where µ^{(l)}_{i,j} is positive for all i ∈ ∂j for some l. Then the inner product Σ_{i∈∂j} b_i ( µ^{(l)}_{i,j} + ν^{(l+1)}_{j,i} ) is non-negative for all b ∈ B^K; in particular, it equals zero for b = 0.^6

Proof: Recall that ν^{(l)}_{j,i} is negative for all (i,j) ∈ E; this is in line with the modification of the MSA. One can easily verify the following fact about the vector containing µ^{(l)}_{i,j} + ν^{(l+1)}_{j,i} for all i ∈ ∂j: there is at most one negative entry, and the absolute value of this entry is matched by the smallest positive entry. The statement follows.

With the choice α_l = (K−1)^{−l}, which results in β_l = 1/(K−1), we get the following expression for the dual feasible point:

  τ^{(L)}_{i,j}(α) = µ^{(L)}_{i,j} + (J−1)ν^{(L)}_{j,i} + (J−1)µ^{(L−1)}_{i,j} + (J−1)^2 ν^{(L−1)}_{j,i} + ··· + (J−1)^{L−1} µ^{(1)}_{i,j} + (J−1)^L ν^{(1)}_{j,i},

or, after regrouping,

  τ^{(L)}_{i,j}(α) = µ^{(L)}_{i,j} + (J−1)^L ν^{(1)}_{j,i} + Σ_{l=1}^{L−1} (J−1)^{L−l} [ µ^{(l)}_{i,j} + ν^{(l+1)}_{j,i} ].

We are still in a situation where the initialization message ν^{(1)}_{j,i} is weighted by a factor that grows exponentially fast in L. However, we note that, once the MSA has converged, µ^{(l)}_{i,j} also grows exponentially fast in l, and this offsets, to some extent, the exponential weighting. In order to exploit this fact more systematically, we initialize the MSA's check-to-variable messages ν^{(1)}_{j,i}, (i,j) ∈ E, with −U, where U is a large enough positive number. With this initialization we can guarantee, for K > 2 and for all (i,j) ∈ E, that the value of µ^{(l)}_{i,j} is strictly positive.^7 Thus we can apply Lemma 3. It remains to offset the term (J−1)^L ν^{(1)}_{j,i} against µ^{(L)}_{i,j}. To this end we consider a CT of depth L rooted at check node j_0.

---
^5 An edge that is incident to a node j is said to be at distance one from j; α_0 is set to one.
^6 We assume that the indices of b are given by ∂j.
Consider the event A_{j_0} that the all-zeros word on this CT is more likely than any word that corresponds to a local non-zero word assigned to the root node.^8

Lemma 4: Let K > 2 and assume the event A_{j_0} is true. Moreover, assume that we initialize the MSA with check-to-variable messages ν^{(1)}_{j,i} = −U, (i,j) ∈ E, for a large enough number U. Then the inner product Σ_{i∈∂j_0} b_i ( µ^{(L)}_{i,j_0} + (J−1)^L ν^{(1)}_{j_0,i} ) is non-negative for all b ∈ B^K.

Proof: We exploit the fact that summaries sent by the MSA can be identified with cost differences of log-likelihood ratios. Consider a message µ^{(L)}_{i,j_0} on edge (i,j_0). This message may be written as µ^{(L)}_{i,j_0} = ρ_i − (J−1)^L ν^{(1)}_{j_0,i} for some ρ_i. Since the MSA propagates cost summaries along edges, we can interpret ρ_i as the summary of the cost due to the λ_i inside the subtree that emerges along the edge (i,j_0). Similarly, −(J−1)^L ν^{(1)}_{j_0,i} is the cost contributed by the leaf nodes of this subtree. Here we use the fact that the minimal codeword which accounts for a one-assignment on edge (i,j_0) contains exactly (J−1)^L leaf nodes with a one-assignment. But then, writing ∂j_0 = {i_1, ..., i_K}, the vector

  ( µ^{(L)}_{i_1,j_0}, ..., µ^{(L)}_{i_K,j_0} ) + (J−1)^L ( ν^{(1)}_{j_0,i_1}, ..., ν^{(1)}_{j_0,i_K} )

equals the vector ρ = (ρ_{i_1}, ..., ρ_{i_K}). The event A_{j_0} is true only if the inner product ⟨ρ, b⟩ is positive for all b ∈ B^K \ {0}. Hence event A_{j_0} implies the claim of the lemma.

Let τ be the averaged assignment (1/T_L(α)) τ^{(L)}(α) to the dual variables obtained from the MSA messages with ν^{(1)}_{j,i} set to −U. Lemmas 3 and 4 imply that the sum Σ_{i∈∂j_0} b_i τ_{i,j_0} has a non-negative value for any b ∈ B^K and, in particular, the value equals zero for b = 0. It follows that θ_{j_0} in the dual LP can be chosen as zero. For each check node j for which event A_j is true, we can thus be sure that the correlation of any codeword in B^K with τ_j is non-negative. If we can be sure that the event A_j is true for all check nodes, we would thus have exhibited a dual witness for the optimality of the all-zeros codeword.
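The structural fact behind Lemma 3 — for positive incoming µ-messages at a check node, the vector of µ_{i,j} + ν_{j,i} has at most one negative entry, whose magnitude is matched by the smallest positive entry, so every even-weight correlation is non-negative — can be spot-checked numerically (a single hypothetical check node of degree K with random positive messages; our own illustration):

```python
import itertools
import random

random.seed(1)
K = 5   # check-node degree, K > 2

for _ in range(200):
    mu = [random.uniform(0.1, 5.0) for _ in range(K)]   # positive messages
    # Modified-MSA check update: nu_i = -min over the *other* |mu| values
    # (all mu are positive here, so the sign product is +1).
    nu = [-min(m for k, m in enumerate(mu) if k != i) for i in range(K)]
    s = [m + n for m, n in zip(mu, nu)]
    # At most one entry of s is negative ...
    assert sum(1 for v in s if v < 0) <= 1
    # ... and every even-weight binary b has non-negative correlation with s.
    for b in itertools.product((0, 1), repeat=K):
        if sum(b) % 2 == 0:
            assert sum(bi * vi for bi, vi in zip(b, s)) >= -1e-9
```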
We have to estimate the probability of the event A_j and set it in relation to the number of checks in the graph T(H). In order to estimate the latter we employ a result by Gallager [12] that guarantees the existence of (J,K)-regular graphs in which we can conduct L steps of MSA decoding without closing any cycles, provided that L satisfies

  L ≤ log N / log((J−1)(K−1)) − κ,        (1)

where the term κ in this expression is independent of N. Finally, we can estimate the probability of the event A_j from the known weight distribution of the code on the CT, provided the underlying graph has girth at least 4L. The minimal codewords have weight 2(1 + (J−1) + (J−1)^2 + ··· + (J−1)^{L−1}), and there are a total of

  (K(K−1)/2) (K−1)^{2(J−1)} (K−1)^{2(J−1)^2} ··· (K−1)^{2(J−1)^{L−1}} = (K/2) (K−1)^{1+2[(J−1)+(J−1)^2+···+(J−1)^{L−1}]}

minimal-tree codewords. Based on a union bound we thus get an expression

  P(Ā_j) < (K(K−1)/2) γ^2 [ (K−1)γ ]^{2[(J−1)+(J−1)^2+···+(J−1)^{L−1}]},        (2)

where Ā_j denotes the complement of A_j, which means that P(Ā_j) decreases doubly exponentially in L if the Bhattacharyya parameter γ satisfies γ < 1/(K−1). Thus we have proved the following theorem:

Theorem 5: Let a sequence of (J,K)-regular LDPC codes be given that satisfies equation (1). Under LP decoding this sequence achieves an arbitrarily small probability of error on any memoryless channel for which the Bhattacharyya parameter γ satisfies γ < 1/(K−1). For such a channel the word error probability P_W decreases as

  P_W < η_1 η_2^{ N^{log(J−1)/log((J−1)(K−1))} }

for some positive parameters η_1 and η_2 < 1.

Proof: Most of the proof is contained in the arguments leading up to the theorem. In order to see the explicit form of the word error rate we employ a union bound over the NJ/K check nodes, combining (1) and (2). We find that the word error rate is bounded by

  P_W < (NJ/K) (K(K−1)/2) γ^2 [ (K−1)γ ]^{2[(J−1)+(J−1)^2+···+(J−1)^{L−1}]}   with   L = log N / log((J−1)(K−1)) − κ,

where κ does not depend on N. The statement of the theorem is obtained by simplifying this expression.

We conclude this paper with an intriguing observation concerning the AWGN channel. In [10] it is proved that no (J,K)-regular LDPC code can achieve an error probability behavior better than P_W ≥ η_3 η_4^{ N^{log(J−1)/log((J−1)(K−1))} } for constants η_3 > 0 and η_4 < 1 that are independent of N.

---
^7 We may choose U as any number greater than −min_i λ_i /(J−1).
^8 Event A_{j_0} is defined on the CT without the change in initialization.
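As a numerical illustration of the condition γ < 1/(K−1) in Theorem 5: using the standard Bhattacharyya parameter of the binary symmetric channel, γ = 2√(p(1−p)), the worked numbers below (our own example, for K = 6) show which crossover probabilities meet the condition:

```python
import math

def bsc_bhattacharyya(p):
    # Bhattacharyya parameter of a BSC with crossover probability p.
    return 2.0 * math.sqrt(p * (1.0 - p))

# Theorem 5 requires gamma < 1/(K-1); e.g. for (J,K) = (3,6): gamma < 1/5.
K = 6
threshold = 1.0 / (K - 1)

# Largest crossover probability meeting the condition:
# 2*sqrt(p(1-p)) = 1/(K-1)  =>  p = (1 - sqrt(1 - 1/(K-1)^2)) / 2.
p_max = (1.0 - math.sqrt(1.0 - threshold ** 2)) / 2.0
print(round(p_max, 6))   # about 0.010102

assert bsc_bhattacharyya(0.99 * p_max) < threshold
```

The condition is thus quite conservative for (3,6)-regular codes (crossover probabilities up to only about 1%), in line with the paper's aim of an analytic rather than a tight numerical threshold.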
The result of the theorem thus shows that there exist sequences of LDPC codes whose error probability behavior under LP decoding is boxed in, as a function of N, between

  η_3 η_4^{ N^{log(J−1)/log((J−1)(K−1))} } ≤ P_W ≤ η_1 η_2^{ N^{log(J−1)/log((J−1)(K−1))} }.

REFERENCES

[1] N. Wiberg, Codes and Decoding on General Graphs. PhD thesis, Linköping University, Sweden, 1996.
[2] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical loss-resilient codes," in Proc. 29th Annual ACM Symp. on Theory of Computing, pp. 150–159, 1997.
[3] M. G. Luby, M. Mitzenmacher, and M. A. Shokrollahi, "Analysis of random processes via and-or tree evaluation," in Proc. 9th Annual ACM-SIAM Symp. on Discrete Algorithms, pp. 364–373, 1998.
[4] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. on Inform. Theory, vol. IT-47, no. 2, pp. 599–618, Feb. 2001.
[5] M. Lentmaier, D. V. Truhachev, D. J. Costello, Jr., and K. Zigangirov, "On the block error probability of iteratively decoded LDPC codes," in 5th ITG Conference on Source and Channel Coding, Erlangen, Germany, Jan. 14–16, 2004.
[6] H. Jin and T. Richardson, "Block error iterative decoding capacity for LDPC codes," in Proc. IEEE Intern. Symp. on Inform. Theory, Adelaide, Australia, pp. 52–56, Sep. 4–9, 2005.
[7] J. Feldman, Decoding Error-Correcting Codes via Linear Programming. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 2003. Available online under http://www.columbia.edu/~jf2189/pubs.html.
[8] J. Feldman, M. J. Wainwright, and D. R. Karger, "Using linear programming to decode binary linear codes," IEEE Trans. on Inform. Theory, vol. IT-51, no. 3, pp. 954–972, Mar. 2005.
[9] R. Koetter and P. O. Vontobel, "Graph covers and iterative decoding of finite-length codes," in Proc. 3rd Intern. Symp. on Turbo Codes and Related Topics, Brest, France, pp. 75–82, Sept. 1–5, 2003.
[10] P. O. Vontobel and R. Koetter, "Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes," submitted to IEEE Trans. Inform. Theory, available online under http://www.arxiv.org/abs/cs.IT/0512078, Dec. 2005.
[11] J. Feldman, T. Malkin, C. Stein, R. A. Servedio, and M. J. Wainwright, "LP decoding corrects a constant fraction of errors," in Proc. IEEE Intern. Symp. on Inform. Theory, Chicago, IL, USA, p. 68, June 27–July 2, 2004.
[12] R. G. Gallager, Low-Density Parity-Check Codes. M.I.T. Press, Cambridge, MA, 1963. Available online under http://web.mit.edu/gallager/www/pages/ldpc.pdf.