Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008


Outline

1. Marginalization of a Function
   - Factor Graphs
   - The Sum-Product Algorithm
2. Codes on Graphs
   - The Iverson Function
   - Graph of a Code
   - Decoding on a Graph Using the Sum-Product Algorithm
3. LDPC Codes
   - Desirable Properties
   - Construction of LDPC Codes

Symbol MAP decoding

Assume that we want to minimize the error probability of a single symbol of the codeword. In this case, we must do symbol-MAP decoding. The rule is:

x̂_i = arg max_{x_i} p(x_i | y)

where

p(x_i | y) = Σ_{x ∈ C_i(x_i)} p(x | y)

and C_i(x_i) denotes the set of codewords whose i-th symbol equals x_i. The above is a marginalization problem.

Marginalization (1/2)

We associate with a function f(x_1, x_2, ..., x_n) its n marginals, defined as follows:

f_i(x_i) = Σ_{x_1} · · · Σ_{x_{i−1}} Σ_{x_{i+1}} · · · Σ_{x_n} f(x_1, x_2, ..., x_n)

For each value of x_i, these are obtained by summing the function f over all of its arguments consistent with x_i. A more convenient notation for the above is:

f_i(x_i) = Σ_{~x_i} f(x_1, ..., x_n)

where Σ_{~x_i} denotes summation over all variables except x_i.

Marginalization (2/2)

If x_i ∈ X, then the complexity of the above summation grows as |X|^{n−1}. If f can be factored as a product of functions, the computation can be simplified. For example, consider the function:

f(x_1, x_2, x_3) = g_1(x_1, x_2) g_2(x_1, x_3)

The marginal f_1(x_1) can be computed as:

f_1(x_1) = Σ_{~x_1} f(x_1, x_2, x_3) = Σ_{x_2} Σ_{x_3} g_1(x_1, x_2) g_2(x_1, x_3)
         = ( Σ_{x_2} g_1(x_1, x_2) ) ( Σ_{x_3} g_2(x_1, x_3) )
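The saving from factoring can be checked numerically. The sketch below compares brute-force marginalization of f against the factored form; the local functions g_1 and g_2 and the alphabet X are arbitrary toy choices for illustration, not taken from the slides:

```python
import itertools

# Toy alphabet and illustrative local functions (hypothetical choices)
X = [0, 1, 2]
def g1(x1, x2): return x1 + 2 * x2 + 1
def g2(x1, x3): return x1 * x3 + 1
def f(x1, x2, x3): return g1(x1, x2) * g2(x1, x3)

# Direct marginalization: sum f over all arguments except x1,
# i.e. |X|^(n-1) terms per value of x1.
direct = {x1: sum(f(x1, x2, x3) for x2, x3 in itertools.product(X, X))
          for x1 in X}

# Factored computation: (sum over x2 of g1) * (sum over x3 of g2),
# i.e. only O(|X|) terms per inner sum.
factored = {x1: sum(g1(x1, x2) for x2 in X) * sum(g2(x1, x3) for x3 in X)
            for x1 in X}

assert direct == factored
```

Both computations agree on every value of x_1, while the factored version touches far fewer terms as the number of variables grows.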

Factor Graphs (1/2)

The above procedure can be graphically represented with a factor graph. The nodes are viewed as processors which compute a function whose arguments label the incoming edges. The edges are channels by which these processors exchange data.

Factor Graphs (2/2)

Every factor corresponds to a unique node, and every variable to a unique edge or half edge. The factors g_i are called local functions or constraints; the function f is called the global function. Edges connect nodes; half edges connect variables with nodes. A cycle of length λ is a path that includes λ edges and closes back on itself. The girth of a graph is the minimum cycle length of the graph. In a normal factor graph, no variable appears in more than two factors.

The Sum-Product Algorithm (1/3)

We will now introduce an algorithm for the efficient computation of the marginals of a function described as a (normal) factor graph. This algorithm works when the graph is cycle-free, and yields the marginal function corresponding to each variable associated with an edge. The Sum-Product Algorithm is a message passing algorithm, because at each iteration, messages are passed along the edges of the graph.

The Sum-Product Algorithm (2/3)

Consider the node representing the factor g(x_1, ..., x_n) of a global function f(x_1, ..., x_m). The message passed along the edge corresponding to x_i is:

μ_{g→x_i}(x_i) = Σ_{~x_i} g(x_1, ..., x_n) Π_{λ≠i} μ_{x_λ→g}(x_λ)

which is the product of g and all messages towards g along all edges except x_i, summed over all the variables except x_i.

The Sum-Product Algorithm (3/3)

The messages μ_{x_j→g}(x_j) are either the values of x_j, if we have a half edge, or the message coming from the node at the other end of the edge. The marginal of the global function with respect to x_i is given by the product of all messages exchanged by the SPA over the edge corresponding to x_i:

f_{x_i}(x_i) = Π_j μ_{g_j→x_i}(x_i)

The Sum-Product Algorithm (Example)

Consider a burglar alarm which is sensitive not only to burglary, but also to earthquakes. There are 3 binary variables: a, b, e (for alarm, burglary, and earthquake, respectively). A value of 1 indicates that the corresponding event has occurred. We want to infer the probability of the two possible causes, given that the alarm went off: p(b | a = 1) and p(e | a = 1). These can be computed by marginalizing p(b, e | a = 1). Assuming that b, e are independent, we have:

p(b, e | a = 1) = p(a = 1 | b, e) p(b) p(e) / p(a = 1) ∝ p(a = 1 | b, e) p(b) p(e)

The Sum-Product Algorithm (Example)

So, we have factored the function we want to marginalize, and the corresponding factor graph has the factor nodes:

f_b(b) ≜ p(b),  f_e(e) ≜ p(e),  f(b, e) ≜ p(a = 1 | b, e)

We are given the following data:

f_b(0) = 0.9,  f_b(1) = 0.1,  f_e(0) = 0.9,  f_e(1) = 0.1

and

f(0, 0) = 0.001,  f(1, 0) = 0.368,  f(0, 1) = 0.135,  f(1, 1) = 0.607

The Sum-Product Algorithm (Example)

The messages from nodes f_b and f_e to node f will be:

μ_{f_b→b}(b) = (0.9, 0.1)    μ_{f_e→e}(e) = (0.9, 0.1)

Once node f has received the above messages, it can compute the messages for nodes f_b and f_e:

μ_{f→b}(b) = Σ_e f(b, e) μ_{f_e→e}(e)
           = (0.001 · 0.9 + 0.135 · 0.1, 0.368 · 0.9 + 0.607 · 0.1)
           = (0.0144, 0.3919)

where the first component corresponds to b = 0 and the second to b = 1.

The Sum-Product Algorithm (Example)

Similarly, we can compute μ_{f→e}(e) = (0.0377, 0.1822). The marginals sought can now be written as:

p(b | a = 1) ∝ μ_{f_b→b}(b) · μ_{f→b}(b) = (0.01296, 0.03919)

and

p(e | a = 1) ∝ μ_{f_e→e}(e) · μ_{f→e}(e) = (0.03393, 0.01822)

After scaling these vectors so that the sum of their elements is 1, we have:

p(b | a = 1) = (0.249, 0.751)    p(e | a = 1) = (0.651, 0.349)
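The whole alarm example fits in a few lines of code. The sketch below reproduces the message computations and the final normalization exactly as derived above (vectors are indexed by the variable's value, 0 or 1):

```python
# Priors p(b) and p(e), and the conditional p(a = 1 | b, e) from the slides.
f_b = (0.9, 0.1)
f_e = (0.9, 0.1)
f = {(0, 0): 0.001, (1, 0): 0.368,
     (0, 1): 0.135, (1, 1): 0.607}

# Leaf-to-factor messages are just the priors.
mu_b_to_f, mu_e_to_f = f_b, f_e

# Factor-to-variable messages: sum out the other variable,
# weighted by the incoming message.
mu_f_to_b = tuple(sum(f[(b, e)] * mu_e_to_f[e] for e in (0, 1)) for b in (0, 1))
mu_f_to_e = tuple(sum(f[(b, e)] * mu_b_to_f[b] for b in (0, 1)) for e in (0, 1))

def normalize(v):
    s = sum(v)
    return tuple(x / s for x in v)

# Marginals: product of the two messages on each edge, then scale to sum to 1.
p_b = normalize(tuple(f_b[b] * mu_f_to_b[b] for b in (0, 1)))
p_e = normalize(tuple(f_e[e] * mu_f_to_e[e] for e in (0, 1)))

print([round(x, 4) for x in mu_f_to_b])  # [0.0144, 0.3919]
print([round(x, 3) for x in p_b])        # [0.249, 0.751]
print([round(x, 3) for x in p_e])        # [0.651, 0.349]
```

The printed values match the messages and marginals computed on the slides.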

The Iverson Function

Let P denote a proposition that may be either true or false. The Iverson function is defined as:

[P] = 1 if P is true, 0 if P is false

If we have n propositions, we have the factorization:

[P_1 and P_2 and ... and P_n] = [P_1][P_2] · · · [P_n]

Graph of a Code (1/4)

Consider the parity-check matrix

H = [ 1 0 0 1 0 1 1 ]
    [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

of a (7,4,3) Hamming code. All codewords satisfy the following parity checks:

x_1 + x_4 + x_6 + x_7 = 0
x_2 + x_4 + x_5 + x_6 = 0
x_3 + x_5 + x_6 + x_7 = 0

Graph of a Code (2/4)

Using the Iverson function, we can express membership of a codeword x in the code C as follows:

[x ∈ C] = [H x^T = 0]

In our case, the above function can be factored as follows:

[x ∈ C] = [x_1 + x_4 + x_6 + x_7 = 0][x_2 + x_4 + x_5 + x_6 = 0][x_3 + x_5 + x_6 + x_7 = 0]

A Tanner graph is a graphical representation of a linear block code corresponding to the set of parity checks that specify the code. Each symbol (variable node) is represented by a filled circle, and every parity check by a check node.
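For a code this small, the Iverson description of membership can be checked exhaustively. The sketch below builds the parity-check matrix from the slides and confirms that [x ∈ C] = [Hx^T = 0] accepts exactly the 2^4 = 16 codewords of the (7,4,3) Hamming code:

```python
import itertools

# Parity-check matrix of the (7,4,3) Hamming code from the slides.
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def iverson(p):
    return 1 if p else 0

def in_code(x):
    # [x in C] = [H x^T = 0]: every parity check must hold mod 2.
    return iverson(all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
                       for row in H))

# Enumerate all 2^7 binary words and keep those the checks accept.
codewords = [x for x in itertools.product((0, 1), repeat=7) if in_code(x)]
assert len(codewords) == 16                                 # 2^4 codewords
assert min(sum(c) for c in codewords if any(c)) == 3        # min distance 3
```

The minimum weight of a nonzero codeword equals the minimum distance of a linear code, so the last assertion confirms the "3" in (7,4,3).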

Graph of a Code (3/4)

Tanner graphs are bipartite, meaning that variable nodes can only be connected to check nodes, and vice versa. A Tanner graph may contain cycles, but since the graph is bipartite, the minimum possible cycle length (girth) is 4. For the Hamming code defined above, the corresponding Tanner graph has 7 variable nodes, 3 check nodes, and one edge for each 1 in H.

Graph of a Code (4/4)

Since a given code can be represented by several parity-check matrices, the same code can be represented by several Tanner graphs. Some representations may have cycles while others may be cycle-free.
- Each variable node corresponds to one bit of the codeword, i.e., to one column of H.
- Each check node corresponds to one parity-check equation, i.e., to one row of H.
- A connection between variable node j and check node i exists if and only if H_ij = 1.

Decoding on a Graph: Using the Sum-Product Algorithm (1/2)

Consider the symbol-MAP decoding problem stated earlier:

p(x | y) ∝ p(y | x) p(x)

For a stationary memoryless channel, we have:

p(y | x) = Π_{i=1}^{n} p(y_i | x_i)

Using the Iverson function and assuming that the a priori distribution of codewords is uniform, we can write:

p(x) = [x ∈ C] · (1 / |C|)

Decoding on a Graph: Using the Sum-Product Algorithm (2/2)

From the above we get that:

p(x | y) ∝ [x ∈ C] Π_{i=1}^{n} p(y_i | x_i)

which is in a factored form, so it can be represented by a normal factor graph. To compute p(x_i | y) we can marginalize the above function using the sum-product algorithm on the graph.
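For a code as small as the Hamming example, the marginalization can be carried out by brute force instead of the SPA, which makes the decoding rule easy to verify. The sketch below assumes a binary symmetric channel with a hypothetical crossover probability of 0.1; the channel model and the received word are illustrative choices, not from the slides:

```python
import itertools

# Parity-check matrix of the (7,4,3) Hamming code from the slides.
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def in_code(x):
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

codewords = [x for x in itertools.product((0, 1), repeat=7) if in_code(x)]

def p_channel(y_i, x_i, eps=0.1):
    # BSC likelihood p(y_i | x_i) with assumed crossover probability eps.
    return 1 - eps if y_i == x_i else eps

def symbol_marginals(y):
    # p(x_i | y) ∝ Σ_{x ∈ C_i(x_i)} Π_i p(y_i | x_i), uniform prior over C.
    marg = [[0.0, 0.0] for _ in y]
    for x in codewords:
        likelihood = 1.0
        for y_i, x_i in zip(y, x):
            likelihood *= p_channel(y_i, x_i)
        for i, x_i in enumerate(x):
            marg[i][x_i] += likelihood
    return [[a / (a + b), b / (a + b)] for a, b in marg]

# Received word: the all-zero codeword with one bit flipped.
y = (0, 0, 0, 1, 0, 0, 0)
decoded = tuple(0 if m[0] >= m[1] else 1 for m in symbol_marginals(y))
assert decoded == (0, 0, 0, 0, 0, 0, 0)
```

Symbol-MAP decoding corrects the single flipped bit, as expected for a code of minimum distance 3. The SPA computes the same marginals without enumerating the codebook.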

The Sum-Product Algorithm on Graphs with Cycles

On a cycle-free graph, the SPA yields the exact APP distribution of the codeword symbols in a finite number of steps. However, codes whose Tanner graphs are cycle-free have rather poor performance. On a graph with cycles, the algorithm may not converge, or it may converge to a wrong result. If short cycles are avoided, in most practical cases the algorithm does converge and yields the correct answer. For LDPC codes, it can be shown that as the block length n grows large, the local neighborhood of a node becomes cycle-free with high probability, so the cycle-free analysis holds asymptotically.

Low-Density Parity-Check Codes (1/2)

LDPC codes are long linear block codes. As the name implies, their parity-check matrix has a low density of non-zero entries. Specifically, for a regular LDPC code, H contains a small, fixed number of 1s in each column, denoted w_c, and a small, fixed number of 1s in each row, denoted w_r. For irregular LDPC codes, the values of w_c and w_r are not constant. Since each column corresponds to one bit of the codeword, w_c tells us in how many parity-check equations that bit participates. Accordingly, since each row corresponds to a parity-check equation, w_r tells us how many bits participate in each equation. If the block length is n, we say that H characterizes an (n, w_c, w_r) LDPC code.

Low-Density Parity-Check Codes (2/2)

If there are m parity-check equations, each involving w_r bits, and each of the n coded symbols participates in w_c equations, it must hold that:

n w_c = m w_r  ⟹  m = n w_c / w_r

where m is the number of rows of H. If H is full rank, then the rate of the code is:

(n − m) / n = 1 − w_c / w_r

which yields the constraint w_c < w_r. The actual rate of the code might be higher than the above, if the parity checks are not independent. We call ρ ≜ 1 − w_c / w_r the design rate of the code.
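This edge-counting argument can be written as a minimal sketch:

```python
def ldpc_parameters(n, w_c, w_r):
    # Count edges of the Tanner graph two ways: n*w_c edges leave the
    # variable nodes and m*w_r edges enter the check nodes.
    assert (n * w_c) % w_r == 0, "n*w_c must be divisible by w_r"
    m = n * w_c // w_r
    design_rate = 1 - w_c / w_r
    return m, design_rate

# Parameters of the Gallager example used later in the slides.
m, rho = ldpc_parameters(n=12, w_c=2, w_r=4)
assert (m, rho) == (6, 0.5)
```

With n = 12, w_c = 2, and w_r = 4 this gives m = 6 checks and design rate 1/2, matching the construction example that follows.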

Desirable Properties

The Tanner graph corresponding to the code should have a large girth, for good convergence properties of the iterative decoding algorithm. Regularity of the code eases implementation. For good performance at high SNR on the AWGN channel, the minimum Hamming distance must be large; LDPC codes are known to achieve large values of d_min. The techniques for the design of parity-check matrices of LDPC codes can be classified under two main categories:
1. Random constructions.
2. Algebraic constructions.

Random Constructions (1/2)

These are based on generating a parity-check matrix randomly filled with 0s and 1s, while satisfying some constraints. After selecting values for the parameters n, ρ, and w_c, we can compute the value of w_r. For a regular code, the row and column weights of H must be exactly w_r and w_c, respectively. Additional constraints can be included, e.g., the number of 1s in common between any two columns (or rows) should not exceed one, in order to avoid length-4 cycles.

Random Constructions (2/2)

A method for the random construction of H was developed by Gallager: the parity-check matrix H of a regular (n, w_c, w_r) LDPC code has the form:

H = [ H_1    ]
    [ H_2    ]
    [ ...    ]
    [ H_{w_c} ]

H_1 has n columns and n/w_r rows, contains a single 1 in each column, and contains 1s in its i-th row from column (i − 1)w_r + 1 to column i·w_r. All other submatrices are obtained by randomly permuting the columns of H_1.

Random Constructions (Example)

For example, for ρ = 1/2, w_c = 2, and n = 12, we have:

ρ = 1 − w_c / w_r  ⟹  w_r = 4

Using the method mentioned above, we have:

H_1 = [ 1 1 1 1 0 0 0 0 0 0 0 0 ]
      [ 0 0 0 0 1 1 1 1 0 0 0 0 ]
      [ 0 0 0 0 0 0 0 0 1 1 1 1 ]

The column weight of H_1 is 1, the row weight is w_r, and the matrix is (n/w_r) × n. We will need w_c − 1 = 1 permutation of this matrix to create H.

Random Constructions (Example)

One possible permutation is:

H_2 = [ 1 0 0 1 0 0 1 0 0 1 0 0 ]
      [ 0 1 0 0 1 0 0 1 0 0 1 0 ]
      [ 0 0 1 0 0 1 0 0 1 0 0 1 ]

So, the final matrix will be:

H = [ H_1 ] = [ 1 1 1 1 0 0 0 0 0 0 0 0 ]
    [ H_2 ]   [ 0 0 0 0 1 1 1 1 0 0 0 0 ]
              [ 0 0 0 0 0 0 0 0 1 1 1 1 ]
              [ 1 0 0 1 0 0 1 0 0 1 0 0 ]
              [ 0 1 0 0 1 0 0 1 0 0 1 0 ]
              [ 0 0 1 0 0 1 0 0 1 0 0 1 ]
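Gallager's construction translates directly into code. The sketch below is an illustrative implementation of the description above: it builds H_1, stacks w_c − 1 randomly column-permuted copies beneath it, and checks the regular row and column weights (the seed and permutation mechanism are implementation choices, not from the slides):

```python
import random

def gallager_H(n, w_c, w_r, seed=0):
    # Random Gallager-style parity-check matrix, as sketched in the slides.
    assert n % w_r == 0
    rows_per_block = n // w_r
    # H1: row i has 1s in columns i*w_r .. (i+1)*w_r - 1.
    H1 = [[1 if i * w_r <= j < (i + 1) * w_r else 0 for j in range(n)]
          for i in range(rows_per_block)]
    rng = random.Random(seed)
    H = [row[:] for row in H1]
    for _ in range(w_c - 1):
        perm = list(range(n))
        rng.shuffle(perm)
        # Each extra block is H1 with its columns randomly permuted.
        H += [[row[perm[j]] for j in range(n)] for row in H1]
    return H

H = gallager_H(n=12, w_c=2, w_r=4)
assert len(H) == 6 and len(H[0]) == 12
assert all(sum(row) == 4 for row in H)                        # row weight w_r
assert all(sum(row[j] for row in H) == 2 for j in range(12))  # col weight w_c
```

Each block contributes exactly one 1 per column, so the stacked matrix is (12, 2, 4)-regular regardless of which permutations are drawn.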

Algebraic Constructions (1/2)

Algebraic LDPC codes are more easily decodable than random codes. A simple algebraic construction works as follows: choose p > (w_c − 1)(w_r − 1) and consider the p × p matrix J obtained from the identity matrix I_p by cyclically shifting its rows by one position to the right:

J = [ 0 1 0 ... 0 ]
    [ 0 0 1 ... 0 ]
    [ ...         ]
    [ 0 0 0 ... 1 ]
    [ 1 0 0 ... 0 ]

The λ-th power of J is obtained from I_p by cyclically shifting its rows by (λ mod p) positions to the right.

Algebraic Constructions (2/2)

Construct the matrix:

H = [ J^0  J^0        J^0          ...  J^0                 ]
    [ J^0  J^1        J^2          ...  J^{w_r−1}           ]
    [ J^0  J^2        J^4          ...  J^{2(w_r−1)}        ]
    [ ...                                                   ]
    [ J^0  J^{w_c−1}  J^{2(w_c−1)} ...  J^{(w_c−1)(w_r−1)}  ]

where J^0 = I_p. This matrix has w_c·p rows and w_r·p columns. The number of 1s in each row and column is exactly w_r and w_c, respectively. It can be proven that no length-4 cycles are present.
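The construction above can be sketched in a few lines. The parameters below (w_c = 3, w_r = 4, p = 7) are hypothetical choices satisfying p > (w_c − 1)(w_r − 1) = 6; the final loop verifies the no-4-cycle claim by checking that any two columns of H share at most one row:

```python
def circulant_power(p, k):
    # J^k: identity with rows cyclically shifted k positions to the right,
    # so row i has its single 1 in column (i + k) mod p.
    return [[1 if (i + k) % p == j else 0 for j in range(p)] for i in range(p)]

def algebraic_H(w_c, w_r, p):
    assert p > (w_c - 1) * (w_r - 1)
    # Block (i, j) of H is J^(i*j), giving w_c*p rows and w_r*p columns.
    H = []
    for i in range(w_c):
        block_row = [circulant_power(p, (i * j) % p) for j in range(w_r)]
        for r in range(p):
            H.append([blk[r][c] for blk in block_row for c in range(p)])
    return H

H = algebraic_H(w_c=3, w_r=4, p=7)
assert len(H) == 21 and len(H[0]) == 28
assert all(sum(row) == 4 for row in H)                        # row weight w_r
assert all(sum(row[j] for row in H) == 3 for j in range(28))  # col weight w_c

# No length-4 cycles: any two columns overlap in at most one check.
cols = [[row[j] for row in H] for j in range(28)]
for a in range(len(cols)):
    for b in range(a + 1, len(cols)):
        assert sum(x & y for x, y in zip(cols[a], cols[b])) <= 1
```

Two columns sharing two rows would form a length-4 cycle in the Tanner graph; the condition p > (w_c − 1)(w_r − 1) rules this out, which the exhaustive pairwise check confirms for this instance.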