Introduction to Low-Density Parity Check Codes. Brian Kurkoski


Introduction to Low-Density Parity Check Codes Brian Kurkoski kurkoski@ice.uec.ac.jp

Outline: Low Density Parity Check Codes
Review of block codes
History
Low Density Parity Check Codes
Gallager's LDPC code construction
Bipartite graphs: the Tanner graph
Overview of decoding algorithms
Bit flipping decoding
The soldier counting problem
Message passing decoding

The Communications Problem [Figure 1: Encoder -> Channel -> Decoder.] The problem is to communicate reliably over a noisy channel. Let x be the transmitted data, and y be the received sequence:
x = (1 0 0 0)
y = (1 0 0 1)
The approach to solve the problem is to add redundancy. The first K symbols carry the information; the remaining N-K symbols are redundant:
x = (1 0 0 0 1 1 1)
y = (1 0 0 1 1 1 1)
x^ = (1 0 0 0 1 1 1)
Using this redundancy, the decoder can estimate the original data.

The Communications Problem The problem is solved by using error-correcting codes (ECC). Consider a major class of ECC: linear block codes over the binary field F2. A block code has length N and 2^K codewords, which form a subspace of (F2)^N. The code's rate is R = K/N. A block code C is defined by an (N-K)-by-N parity check matrix H. The code is the set of length-N vectors x = (x1 x2 ... xN) such that:
x H^t = 0
Let u = (u1 u2 ... uK) be a length-K vector of information, and let G be a K-by-N generator matrix for C: uG = x. Arithmetic is performed over the binary field F2, where addition is modulo 2 (1 + 1 = 0). All codes have an important property, the minimum distance.

Parity Check Matrices A binary, linear error-correcting code can be defined by an (N-K)-by-N parity-check matrix H, whose rows are h1, h2, ..., h_{N-K}. For example, the (7,4) Hamming code has:
H =
1 0 1 1 1 0 0
1 1 0 1 0 1 0
1 1 1 1 0 0 1
This code has block length N = 7 and information length K = 4. The codebook C is the set of length-N words x which satisfy:
x H^t = 0

Codes Defined by Parity Check Matrices The (7,4) Hamming codebook has 2^4 = 16 codewords:
C = {0000000, 1000111, 0100110, ...}
It is easy to check whether a length-N vector y = (y1 y2 ... yN) is a codeword: y H^t = 0 if and only if y is a codeword. For the (7,4) Hamming code, the rows of H give three parity checks:
h1: y1 + y3 + y4 + y5 = 0
h2: y1 + y2 + y4 + y6 = 0
h3: y1 + y2 + y3 + y4 + y7 = 0
Verify that y = (1 0 0 0 1 1 1) is a (7,4) Hamming codeword:
h1: 1 + 0 + 0 + 1 = 0
h2: 1 + 0 + 0 + 1 = 0
h3: 1 + 0 + 0 + 0 + 1 = 0
All three checks are satisfied, so y is a codeword.
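The membership test above is mechanical, so it is easy to sketch in code. A minimal example (the function name `is_codeword` is illustrative, not from the slides), using the Hamming parity-check matrix given on this slide:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code from the slide.
H = np.array([
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 1],
])

def is_codeword(y, H):
    """y is a codeword iff the syndrome y @ H^T is all-zero over F2."""
    return not np.any((y @ H.T) % 2)

print(is_codeword(np.array([1, 0, 0, 0, 1, 1, 1]), H))  # True: a codeword
print(is_codeword(np.array([1, 0, 0, 1, 1, 1, 1]), H))  # False: bit 4 flipped
```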

The Communications Problem Given a received word y = (y1 y2 ... yN), the decoder's goal is to find the maximum likelihood decision:
x^ = arg max over x in C of P(y|x)
Decoder complexity is a serious restriction on the use of error correcting codes: evaluating the above equation directly is impractical, because it is exponentially difficult in the code dimension. Various types of codes — Reed-Solomon codes, BCH codes, convolutional codes — are used in practice not only because they are good codes, but because their decoders have reasonable complexity.

Reducing the Gap to Capacity [Figure: bit error probability of rate R = 1/2 codes versus SNR Eb/N0 (dB) on the AWGN channel, showing the gap between practical codes (1980s, 1993, 2001) and the Shannon limit (1947-1948) narrowing over time, at the cost of high decoder complexity.]

History of Low-Density Parity Check Codes
1948 Shannon published his famous paper on the capacity of channels with noise.
1963 Robert Gallager wrote his Ph.D. dissertation, "Low Density Parity Check Codes." He introduced LDPC codes, analyzed them, and gave some decoding algorithms. Because computers at that time were not very powerful, he could not verify that his codes could approach capacity.
1982 Michael Tanner considered Gallager's LDPC codes and his own structured codes. He introduced the notion of representing a code by a bipartite graph, now often called a Tanner graph.
1993 Turbo codes were introduced. They exceeded the performance of all known codes, and had low decoding complexity.
1995 Interest in Gallager's LDPC codes was renewed, led by David MacKay and many others. It was shown that LDPC codes can essentially achieve Shannon capacity on AWGN and binary erasure channels.

Low-Density Parity Check Codes A low-density parity check (LDPC) code is a linear block code whose parity check matrix has a small number of ones. The example H is the (3,4)-regular 15-by-20 matrix used in the construction slides below: five rows of four consecutive ones, stacked with two pseudo-random column permutations of those rows, so every row has four ones and every column has three. The number of 1s in an LDPC code's matrix grows linearly with the block length N. The number of 1s in a typical dense parity check matrix grows as N(N-K).

LDPC Code Constructions Note that all linear codes have a parity check matrix, and thus can be decoded using message-passing decoding. However, not all codes are well-suited for this decoding algorithm.
Semi-random constructions:
Regular LDPC codes (1962, Gallager)
Irregular LDPC codes (Richardson and Urbanke; Luby et al.)
MacKay codes
Structured constructions:
Finite-geometry based constructions (Kou, Lin, Fossorier)
Quasi-cyclic LDPC codes (Lin)
Combinatorial LDPC codes (Vasic et al.)
LDPC array codes (Fan)

Regular LDPC Codes The parity check matrix for a (j,k)-regular LDPC code has j ones in each column and k ones in each row. Gallager's construction technique:
1. The code parameters N, K, j, k are given.
2. Construct the matrix H1 with N/k rows and N columns, where row r has k consecutive ones starting in column rk + 1:
H1 =
1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1
3. Let pi1(H1), ..., pi_{j-1}(H1) be pseudo-random column permutations of H1.
4. Construct the regular LDPC check matrix by stacking the j submatrices:
H = [ H1 ; pi1(H1) ; ... ; pi_{j-1}(H1) ]
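The four steps above can be sketched directly. This is a minimal illustration of Gallager's construction (the function name and seeding are my own choices, not from the slides):

```python
import numpy as np

def gallager_ldpc(N, j, k, seed=0):
    """Sketch of Gallager's (j,k)-regular construction: build H1 with
    N/k rows of k consecutive ones, then stack j column-permuted copies."""
    assert N % k == 0
    rng = np.random.default_rng(seed)
    H1 = np.zeros((N // k, N), dtype=int)
    for r in range(N // k):
        H1[r, r * k:(r + 1) * k] = 1
    blocks = [H1] + [H1[:, rng.permutation(N)] for _ in range(j - 1)]
    return np.vstack(blocks)

H = gallager_ldpc(20, 3, 4)
print(H.shape)        # (15, 20)
print(H.sum(axis=1))  # every row has k = 4 ones
print(H.sum(axis=0))  # every column has j = 3 ones
```

Each stacked block contributes exactly one 1 per column, which is why the column weight comes out to j automatically.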

Regular LDPC Code Example This code has N = 20, K = 5, j = 3, k = 4. H is the 15-by-20 matrix built by stacking H1 from the previous slide with two pseudo-random column permutations of it; every row has k = 4 ones and every column has j = 3 ones. The number of ones is N j = (N-K) k. From this, it is easy to show that the rate of the code is:
R = 1 - j/k
In this example, R = 1 - 3/4 = 1/4.

Bipartite Graphs A simple undirected graph G = (V, E) is called a bipartite graph if there exists a partition of the vertex set into V1 and V2 so that both V1 and V2 are independent sets. We often write G = (V1 + V2, E) to denote a bipartite graph with partitions V1 and V2. [Figure: an undirected graph (V, E) with vertex set V and edge set E, and a bipartite graph (V1 + V2, E) with node sets V1, V2 and edge set E.]

Graph Representations of Codes: Tanner Graph A Tanner graph is a bipartite graph which represents the parity check matrix of an error correcting code. H is the (N-K)-by-N parity check matrix. The Tanner graph has:
N bit nodes (or variable nodes), represented by circles.
N-K check nodes, represented by squares.
There is an edge between bit node i and check node j if there is a one in row j and column i of H.
Example, with check nodes A, B and bit nodes 1-4:
H =
A: 1 1 0 0
B: 1 0 1 1
[Figure: the Tanner graph, with check A connected to bits 1 and 2, and check B connected to bits 1, 3 and 4.]
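Reading edges off H is a one-liner, which makes the matrix-to-graph correspondence concrete. A small sketch using the 2-by-4 example above (the helper name `tanner_edges` is illustrative):

```python
import numpy as np

# The 2-by-4 example from the slide: check nodes A, B; bit nodes 1-4.
H = np.array([[1, 1, 0, 0],
              [1, 0, 1, 1]])

def tanner_edges(H):
    """One edge (check j, bit i) for every H[j][i] == 1."""
    return [(j, i) for j in range(H.shape[0])
                   for i in range(H.shape[1]) if H[j, i]]

# Check 0 (A) touches bits 0 and 1; check 1 (B) touches bits 0, 2, 3.
print(tanner_edges(H))  # -> [(0, 0), (0, 1), (1, 0), (1, 2), (1, 3)]
```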

Cycles and Girth of Tanner Graphs A cycle of length L in a Tanner graph is a path of L edges which closes back on itself. The girth of a Tanner graph is the minimum cycle length of the graph. [Figure: example cycles of length L = 4 and L = 6.] Example:
H =
1 1 0 0 0 0
1 1 1 0 0 0
0 1 1 1 0 0
0 0 1 0 1 0
0 0 0 1 1 1
Rows 1 and 2 both check bits 1 and 2, so this Tanner graph contains a cycle of length 4.
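The length-4 case is easy to test mechanically: two check nodes form a 4-cycle exactly when their rows share ones in two or more columns, i.e. when H H^T has an off-diagonal entry of at least 2. A sketch (the function name is my own; this detects only 4-cycles, not the full girth):

```python
import numpy as np

# The 5-by-6 example matrix from the slide.
H = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 1, 1],
])

def has_length4_cycle(H):
    """Two checks sharing two or more bits form a 4-cycle, so the
    overlap matrix H @ H.T has an off-diagonal entry >= 2."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

print(has_length4_cycle(H))  # True: rows 1 and 2 share bits 1 and 2
```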

Decoding Algorithms Gallager's bit flipping algorithm for LDPC codes. Message-passing algorithms: the soldier counting algorithm; probabilistic decoding of LDPC codes.

Bit Flipping Decoding: Channel Gallager's bit-flipping algorithm is for decoding on the binary symmetric channel (BSC). The BSC has transition probability p: each bit is received correctly with probability 1-p and flipped with probability p. [Figure: Encoder -> BSC(p) -> bit-flipping decoder.]
Transmitted sequence: x = 0 0 0 0 0 0
Received sequence: y = 0 0 1 0 1 0
z is the noise sequence: y = x + z. LDPC codes are linear codes, so the sum x1 + x2 of two codewords is again a codeword; for analyzing the decoder, considering the all-zeros codeword is sufficient.

Bit Flipping Decoding: Example Code Consider the following parity check matrix H:
H =
1 1 0 0 0 0 0
0 1 1 0 0 0 0
0 1 1 1 1 0 0
0 0 0 1 1 0 0
0 0 0 0 1 1 0
0 0 0 0 1 0 1
This code has N = 7 bits and 6 parity checks. It has rate R = 1/7, and only two codewords {0000000, 1111111} (H is one possible parity check matrix for the repetition code). [Figure: the Tanner graph corresponding to this parity check matrix.]

Bit Flipping Decoding: Decoding Algorithm Gallager's Bit-Flipping Algorithm:
1. Compute each parity check for the received y.
2. For each bit, count the number of failing parity checks.
3. For the bit(s) with the largest number of failed parity checks, flip the associated input bit of y.
4. Repeat steps 1-3 until all the parity checks are satisfied, or a stopping condition is reached.
First iteration:
x = 0 0 0 0 0 0 0
y = 0 1 0 0 1 0 0
Failed-check counts per bit: 1 2 1 1 3 1 1
Bit 5 fails the most checks (3), so it is flipped.

Bit Flipping Decoding: Decoding Algorithm (continued) Second iteration, after flipping bit 5:
x = 0 0 0 0 0 0 0
y = 0 1 0 0 0 0 0
Failed-check counts per bit: 1 3 2 1 1 0 0
Bit 2 fails the most checks (3), so it is flipped.

Bit Flipping Decoding: Decoding Algorithm (continued) Third iteration, after flipping bit 2:
x = 0 0 0 0 0 0 0
y = 0 0 0 0 0 0 0
Every failed-check count is 0: all parity checks are satisfied. VALID CODEWORD!
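The whole worked example fits in a few lines of code. A minimal sketch of the bit-flipping decoder, using H as reconstructed from the slides' example (the function name and iteration cap are my own choices); running it on y = 0 1 0 0 1 0 0 reproduces the flips of bit 5 and then bit 2:

```python
import numpy as np

# Parity-check matrix of the length-7 example, as reconstructed
# from the slides.
H = np.array([
    [1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 0, 1],
])

def bit_flip_decode(y, H, max_iters=20):
    y = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ y) % 2      # a 1 marks a failed parity check
        if not syndrome.any():
            return y                # valid codeword
        # Count failed checks touching each bit; flip the worst bit(s).
        fails = H.T @ syndrome
        y[fails == fails.max()] ^= 1
    return y

y = np.array([0, 1, 0, 0, 1, 0, 0])
print(bit_flip_decode(y, H))  # -> [0 0 0 0 0 0 0]
```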

Message Passing Problem: Soldier Counting The soldier counting problem: each soldier in a row wants to know the total number of soldiers, but each soldier can only communicate with his neighbors. How can the total be communicated to every soldier? Solution: message passing.
1. When a soldier receives a number from his left, he adds one (for himself) and passes the result to his right.
2. Messages passing from the right are handled in the same way.
3. A soldier with only one neighbor starts by passing the number one to that neighbor.
A soldier who has received a number from each side adds them, plus one for himself, to get the total: 2 + 3 + 1 = 6. Soldier figures due to William Ryan.

Soldiers in a Y For soldiers in more complicated formations, a solution is still possible. The soldier with three neighbors receives messages U1 and U2 from two of them. The message that he sends to the third is V3 = U1 + U2 + 1. Important: the incoming message U3 is not used in computing V3. This is the sum-product update rule.

Soldiers in a Loop For soldiers in a loop, there is no simple message-passing solution.

Message-Passing Decoding This important decoding algorithm has various names: message-passing decoding, sum-product decoding, probabilistic decoding; it is an instance of belief propagation. Instead of flipping bits, the algorithm passes probability messages, or soft information. Message-passing decoding works not just for the BSC but for a variety of channels, for example the AWGN channel and the binary erasure channel.

Probabilistic Decoding Probabilistic decoding is an instance of message passing. It is an effective way to decode LDPC codes. [Figure: Encoder -> x1 ... xN -> Channel -> y1 ... yN -> Decoder; the decoder's inputs are the posterior probabilities P(x1|y1), ..., P(xN|yN).]

Message Passing Messages are probabilities P(xi):
u_ij = P(xi) is the message passed from bit node i to check node j.
v_ij = P(xi) is the message passed from check node j to bit node i.
The bit node computes its output u from its input messages v and the channel posterior P(xi|yi). The check node computes its output v from its input messages u. [Figure: messages u11, v11, u23, v23, u53, v53, u54, v54 on a Tanner graph.]

The Sum-Product Update Rule [Kschischang et al.]: The message sent from a node N on an edge e is the product of the local function at N with all messages received at N on edges other than e, summarized for the variable associated with e. One message must be computed for each edge at each node, per iteration.

BSC Channel A Posteriori Probability P(xi|yi) Binary symmetric channel: errors are independent. The a priori probabilities are P(x=0) = P(x=1) = 0.5, and P(y=0) = P(y=1) = 0.5 because of the symmetry of the channel. An error occurs with probability p:
P(y=1|x=0) = p
P(y=0|x=0) = 1-p
Using Bayes' rule:
P(x=0|y=1) = P(y=1|x=0) P(x=0) / P(y=1) = p (0.5 / 0.5) = p
P(x=0|y=0) = 1-p

AWGN Channel A Posteriori Probability P(xi|yi) Additive white Gaussian noise (AWGN) channel: yi = xi + zi, where the zi are independent and zi ~ N(0, sigma^2):
f_Z(z) = (1 / sqrt(2 pi sigma^2)) e^{-z^2 / (2 sigma^2)}
The bits xi in {0, 1} are mapped to channel symbols {-1, +1}. Then:
P(x=0|y) = P(y|x=0) P(x=0) / P(y) = k e^{-(y+1)^2 / (2 sigma^2)}
P(x=1|y) = k e^{-(y-1)^2 / (2 sigma^2)}
where k = 1 / (2 sqrt(2 pi sigma^2) P(y)). In practice, find k from the normalization P(x=0|y) + P(x=1|y) = 1.
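The normalization trick means k never needs to be computed explicitly: it cancels when the two exponentials are divided by their sum. A small sketch, assuming the 0 -> -1, 1 -> +1 mapping above (the function name and the example y = 0.8 are my own):

```python
import math

def awgn_posteriors(y, sigma):
    """P(x=0|y) and P(x=1|y) for BPSK mapping 0 -> -1, 1 -> +1,
    with a uniform prior; the normalizer k cancels in the ratio."""
    p0 = math.exp(-(y + 1) ** 2 / (2 * sigma ** 2))
    p1 = math.exp(-(y - 1) ** 2 / (2 * sigma ** 2))
    return p0 / (p0 + p1), p1 / (p0 + p1)

# A received value near +1 gives strong evidence for x = 1.
p0, p1 = awgn_posteriors(0.8, sigma=1.0)
print(round(p0, 3), round(p1, 3))  # -> 0.168 0.832
```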

Bit Node Function At the bit node, we have several different estimates of a bit x: v1 = P(x=0|y1), v2 = P(x=0|y2), v3 = P(x=0|y3). What is the combined estimate u4 = P(x=0|y1 y2 y3)? Consider a system in which the same bit x is observed through three independent noisy channels, producing y1, y2, y3. Using the identity
P(x|y1 y2 y3) / P(x) is proportional to [P(x|y1)/P(x)] [P(x|y2)/P(x)] [P(x|y3)/P(x)],
we can show, with the uniform prior P(x=0) = P(x=1) = 0.5:
P(x=0|y1, y2, y3) = prod_i P(x=0|yi) / [ prod_i P(x=0|yi) + prod_i P(x=1|yi) ]
Or:
u4 = v1 v2 v3 / ( v1 v2 v3 + (1-v1)(1-v2)(1-v3) )
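The closed form at the end of the slide translates directly into code. A sketch for any number of incoming estimates, assuming the uniform prior used above (`bit_node` is an illustrative name):

```python
import math

def bit_node(v_list):
    """Combine independent estimates v_i = P(x=0|y_i) into
    P(x=0 | all observations), assuming a uniform prior on x."""
    num = math.prod(v_list)
    den = num + math.prod(1 - v for v in v_list)
    return num / den

# Three moderate votes for x = 0 combine into a stronger belief.
print(bit_node([0.9, 0.8, 0.6]))
```

Note how agreement is rewarded: each individual estimate is at most 0.9, but the combined estimate exceeds 0.98.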

Check Node Function At the check node, we know x1 + x2 + ... + xn = 0. What is vn = P(xn=0)? What is P(xn=1)? Let rho_k = P(xk=0) - P(xk=1), and let x(k) = sum of x1 through xk. Since x1 + x2 + ... + xn = 0, xn = x(n-1): xn = 0 exactly when x(n-1) = 0, and xn = 1 exactly when x(n-1) = 1, so P(xn=0) = P(x(n-1)=0). By the law of total probability:
P(x(n-1)=0) = P(x(n-2)=0, x_{n-1}=0) + P(x(n-2)=1, x_{n-1}=1)
By independence:
P(x(n-1)=0) = P(x(n-2)=0) P(x_{n-1}=0) + P(x(n-2)=1) P(x_{n-1}=1)
Applying this recursively, the rho values multiply, and one can show:
vn = (1/2) [ (2 u1 - 1)(2 u2 - 1) ... (2 u_{n-1} - 1) + 1 ]
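The product formula is one line of code. A sketch (`check_node` is an illustrative name), with two sanity checks: if both inputs are certainly 0 the parity bit is certainly 0, and if exactly one input is certainly 1 the parity bit is certainly 1:

```python
import math

def check_node(u_list):
    """P(x_n = 0) given x_1 + ... + x_n = 0 and incoming
    estimates u_i = P(x_i = 0) for i = 1 .. n-1."""
    return 0.5 * (math.prod(2 * u - 1 for u in u_list) + 1)

print(check_node([1.0, 1.0]))  # -> 1.0 (both inputs 0, so x_n = 0)
print(check_node([1.0, 0.0]))  # -> 0.0 (one input is 1, so x_n = 1)
```

A useful property falls out of the formula: if any incoming estimate is completely uninformative (u_i = 0.5, so 2u_i - 1 = 0), the output is 0.5 as well.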

Message Passing Decoding Algorithm
1. Initialize the check-to-bit messages v_ij to P(xi) = 0.5.
2. Compute the bit-to-check messages u_ij from the v_ij (on the first iteration, this uses only the channel posteriors P(xi|yi)).
3. Compute the check-to-bit messages v_ij from the u_ij.
4. At each bit node, compute the tentative estimate x^i. If x^ H^t = 0, then stop decoding: x^ is a valid codeword.
5. Otherwise, repeat steps 2-4 until a maximum number of iterations has been reached.
[Figure: the initial messages v11 = v23 = v53 = v54 = 0.5 on the Tanner graph, alongside the channel posteriors P(x1|y1), ..., P(xi|yi).]
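The five steps above, combined with the bit-node and check-node formulas from the previous slides, can be sketched as a complete probability-domain decoder. This is a minimal, unoptimized illustration for a BSC(p), using the length-7 example H as reconstructed from the bit-flipping slides; the function name and structure are my own, and real implementations work in the log-likelihood domain for numerical stability:

```python
import numpy as np

def sum_product_decode(y, H, p, max_iters=50):
    """Probability-domain sum-product decoding on a BSC(p).
    All messages are probabilities of the form P(x_i = 0)."""
    M, N = H.shape
    P0 = np.where(y == 0, 1.0 - p, p)   # channel posteriors P(x_i=0|y_i)
    V = np.full((M, N), 0.5)            # step 1: check-to-bit messages
    for _ in range(max_iters):
        # Step 2: bit-to-check messages, excluding the target check.
        U = np.zeros((M, N))
        for j in range(M):
            for i in range(N):
                if H[j, i]:
                    others = [V[j2, i] for j2 in range(M)
                              if H[j2, i] and j2 != j]
                    a = P0[i] * np.prod(others)
                    b = (1 - P0[i]) * np.prod([1 - t for t in others])
                    U[j, i] = a / (a + b)
        # Step 3: check-to-bit messages, excluding the target bit.
        for j in range(M):
            for i in range(N):
                if H[j, i]:
                    prod = np.prod([2 * U[j, i2] - 1 for i2 in range(N)
                                    if H[j, i2] and i2 != i])
                    V[j, i] = 0.5 * (prod + 1)
        # Step 4: tentative hard decision from all incoming messages.
        xhat = np.zeros(N, dtype=int)
        for i in range(N):
            ins = [V[j, i] for j in range(M) if H[j, i]]
            a = P0[i] * np.prod(ins)
            b = (1 - P0[i]) * np.prod([1 - t for t in ins])
            xhat[i] = 0 if a >= b else 1
        if not ((H @ xhat) % 2).any():
            return xhat                 # valid codeword: stop
    return xhat                         # step 5: iteration limit reached

H = np.array([
    [1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 0, 1],
])
y = np.array([0, 1, 0, 0, 1, 0, 0])
print(sum_product_decode(y, H, p=0.1))  # -> [0 0 0 0 0 0 0]
```

On this small example the decoder corrects the same two errors as the bit-flipping decoder, but using soft probabilities rather than hard counts.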