Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes


Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes

Institute of Electronic Systems
Signal and Information Processing in Communications

Nana Traore, Shashi Kant, Tobias Lindstrøm Jensen


The Faculty of Engineering and Science, Aalborg University, 9th Semester

TITLE: Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes
PROJECT PERIOD: P9, 4th September, 2006 – 4th January, 2007
PROJECT GROUP: 976
GROUP MEMBERS: Nana Traore, Shashi Kant, Tobias Lindstrøm Jensen
SUPERVISORS: Ingmar Land, Joachim Dahl; External: Lars Kristensen from Rohde-Schwarz
NUMBER OF COPIES: 7

ABSTRACT: Two iterative decoding methods, the message passing algorithm (MPA) and linear programming (LP) decoding, are studied and explained for arbitrary LDPC and binary linear block codes. The MPA variants, the sum-product and the max-product/min-sum algorithm, perform local decoding operations to compute the marginal functions of the global code constraint defined by the parity check matrix of the code. These local operations are studied and the algorithm is exemplified. LP decoding is based on an LP relaxation. An alternative formulation of the LP decoding problem is explained and proved. An improved LP decoding method with better error correcting performance is studied and exemplified. The performance of the two methods is also compared under the BEC.

REPORT PAGE COUNT: 100
APPENDIX PAGE COUNT: 6
TOTAL PAGE COUNT: 106


Preface

This 9th semester report serves as documentation for the project work of group 976 in the period from 4th September, 2006 to 4th January, 2007. It complies with the demands at Aalborg University for the SIPCom specialization at 9th semester with the theme "Systems and Networks".

Structure

The report is divided into a number of chapters whose contents are outlined here. The Introduction contains the introduction to the project and the problem scope. System Model describes the model considered in the report, along with assumptions and the different types of decoding. The chapter Binary Linear Block Codes describes linear block codes and explains the concepts of factor graphs and LDPC codes. In Message Passing Algorithm, different message passing algorithms are given along with examples; performance for decoding in the BEC is proven and exemplified. Linear Programming Decoding is about the formulation of the decoding problem as a linear programming problem; interpretations of two different formulations are given, decoding in the BEC is examined, and a possible improvement of the method is described using an example. In the chapter Comparison, the two decoding methods, message passing algorithm and linear programming decoding, are compared. Conclusion summarizes everything and points out the results.

Reading Guidelines

The chapters System Model and Binary Linear Block Codes are considered background information for the rest of the report. A reader familiar with these concepts can skip these chapters and still understand the report.

Nomenclature

References to literature are denoted by brackets as [] and may also contain a reference to a specific page. The number in the brackets refers to the bibliography, which can be found at the back of the main report. References to figures (and tables) are denoted by Figure/Table x.y and equations by equation (x.y), where x is a chapter number and y is a counting variable for the corresponding element in the chapter. A vector is denoted by bold face $\mathbf{a}$ and a matrix by $\mathbf{A}$, always capital. Stochastic variables are in upper case $X$, deterministic variables in lower case $x$, both referring to the same variable. From section to section it is considered whether it is more convenient to treat the variable as stochastic or deterministic. Be aware of the difference between the subscripts in $\mathbf{x}_1$, which means the first vector, and $x_1$, which means the first symbol in the vector $\mathbf{x}$. Words in italics are used in the text for emphasis.

Enclosed Material

A CD-ROM containing MATLAB source code is enclosed in the back of the report. Furthermore, Postscript, DVI and PDF versions of this report are also included on the CD-ROM. A version with hyperlinks is also included in DVI and PDF.

Nana Traore, Shashi Kant, Tobias Lindstrøm Jensen


Contents

Preface
1 Introduction

Basic Concepts

2 System Model
   2.1 Basic System Model
   2.2 Channels
   2.3 Assumptions
   2.4 Log Likelihood Ratios
   2.5 Scaling of LLR
   2.6 Types of Decoding
3 Binary Linear Block Codes
   3.1 Definitions
   3.2 Factor Graphs
   3.3 Low Density Parity Check Codes

Iterative Decoding Techniques

4 Message Passing Algorithm
   4.1 Message Passing Algorithm and Node Operations
       4.1.1 Example of Node Operations
   4.2 Definitions and Notations
   4.3 Sum-Product Algorithm
       4.3.1 Maximum Likelihood
       4.3.2 The General Formulation of APP
       4.3.3 The General Formulation of Extrinsic A-Posteriori Probability
       4.3.4 Intrinsic, A-posteriori and Extrinsic L-values
       4.3.5 Sum-Product Message Update Rules
       4.3.6 Example for Sum-Product Algorithm
   4.4 Max-Product / Min-Sum Algorithm
       4.4.1 Update Rules of Max-Product/Min-Sum Algorithm
   4.5 Message Passing Algorithm for the BEC
       4.5.1 Node Operations and Algorithm
       4.5.2 Stopping Sets
5 Linear Programming Decoding
   5.1 Introduction
       5.1.1 Maximum Likelihood Decoding for LP
   5.2 Linear Programming Formulation
       5.2.1 Problem Formulation
       5.2.2 Solution of LP
       5.2.3 Scaling of λ (noise)
   5.3 Geometric Interpretation
       5.3.1 The Local Codeword Constraint gives a Convex Hull
       5.3.2 Possible Solutions
       5.3.3 Description of the Polytope P
   5.4 Alternative Formulation
       5.4.1 Exemplification of the Alternative Formulation
       5.4.2 The Alternative Formulation in General
       5.4.3 Special Properties for a Degree 3 Check Equation
   5.5 Pseudocodewords and Decoding in the BEC
       5.5.1 Pseudocodewords
       5.5.2 Decoding in the BEC
       5.5.3 Multiple Optima in the BSC
   5.6 Improving Performance using Redundant Constraints
       5.6.1 Background Information
       5.6.2 Algorithm of Redundant Parity Check Cuts
       5.6.3 Example
       5.6.4 Results
6 Comparison
   6.1 Optimal for Cycle Free Graphs
   6.2 Estimation/Scaling of Noise
   6.3 Decoding in the BEC
   6.4 Word Error Rate (WER) Comparison Under BSC
   6.5 Improvement by Adding Redundant Parity Checks
   6.6 Complexity
7 Conclusion

Bibliography

Appendix
A Irregular Linear Codes
B Proof of the Box-plus Operator


1 Introduction

Channel coding is a fundamental technique for transmitting digital data reliably over a noisy channel. If a user wants to send information reliably from a mobile phone to another user, the information has to be transmitted through some channel. However, a channel like the air is considered unreliable because the transmission path varies, noise is introduced, and there is also interference from other users. So, if the transmitted data is [001] and the received data is [011], as shown in Figure 1.1, then the received data was corrupted by the noisy channel. What could be the solution to cope with these problems?

Figure 1.1: Data transmitted and received across a channel without channel coding ([001] in, [011] out).

The solution to combat the noisy channel is channel coding, which allows the information to be transmitted and received reliably. In other words, channel coding is a technique which introduces redundancy into the information before its transmission across a channel. This redundancy is exploited at the receiver in order to detect or correct the errors introduced by the noisy channel. Figure 1.2 shows a basic channel coding system. For example, a repetition code of length 3 can be chosen. It means that the repetition code will transmit every bit

of the information three times. If the source transmits the information word, say [001], then the encoded data will be [000 000 111]. Therefore, the redundancy is introduced by the encoder. If this encoded data or codeword is sent across a noisy channel which may flip some of the bits, then the received word may differ from the codeword. This received word is passed to the decoder, which estimates the information word [001] by exploiting the redundancy: if there are more 0s than 1s in a block of three received bits, the decoder estimates the corresponding information bit as 0, otherwise as 1. Using this scheme, as long as no more than 1 out of every block of 3 bits is flipped, the original information can be recovered. Thus, the information from the source can be transmitted successfully across the channel to the receiver's sink.

Figure 1.2: The basic channel coding model (source → encoder → channel → decoder → sink).

Channel codes are also known as error-correcting codes. The study of channel codes began with the pioneering work of Hamming [7] and Shannon [17]. In this project, two iterative methods, the message passing algorithm (MPA) and linear programming (LP) decoding, are studied and compared theoretically for the decoding of LDPC (Low Density Parity Check) codes. However, simple binary linear block codes are usually considered to explain the concepts behind these iterative decoding methods; the LDPC codes serve as the motivation for decoding with these two methods. LDPC codes were invented in 1960 by R. Gallager [6], but they remained obscure before the advent of Turbo-codes in 1993. After the invention of Turbo-codes [2], LDPC codes have become one of the most intensely studied areas in coding. It has also been shown recently that LDPC codes can outperform Turbo-codes while keeping lower complexity [15]. The decoding of LDPC codes is a highly active research area. These are the motivations behind studying the decoding of LDPC codes using iterative techniques.
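To make the repetition code example concrete, here is a minimal sketch (our own illustration, not code from the report's enclosed CD-ROM) of the length-3 repetition encoder and its majority-vote decoder:

```python
# Minimal sketch of the length-3 repetition code example above:
# encode by repeating each bit 3 times, decode by majority vote
# within each block of 3 received bits.

def encode(info_bits):
    """Repeat every information bit three times."""
    return [b for b in info_bits for _ in range(3)]

def decode(received_bits):
    """Majority vote over each block of three received bits."""
    blocks = [received_bits[i:i + 3] for i in range(0, len(received_bits), 3)]
    return [1 if sum(block) > 1 else 0 for block in blocks]

codeword = encode([0, 0, 1])           # [0, 0, 0, 0, 0, 0, 1, 1, 1]
corrupted = codeword.copy()
corrupted[1] = 1                       # at most one flip per block of 3
assert decode(corrupted) == [0, 0, 1]  # the single flip is corrected
```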

2 System Model

In this chapter, we consider the system model and the different channel models that are used throughout this report. Moreover, we introduce commonly used assumptions and definitions that are heavily used in the following chapters.

2.1 Basic System Model

Consider the system shown in Figure 2.1.

Figure 2.1: The basic system model considered in this report: source → channel encoding ($\mathbf{U} \to \mathbf{X}$) → channel ($\mathbf{X} \to \mathbf{Y}$) → channel decoding ($\mathbf{Y} \to \hat{\mathbf{X}}$) → inverse encoding ($\hat{\mathbf{X}} \to \hat{\mathbf{U}}$) → sink.

The following definitions are used. All vectors are column vectors unless specified otherwise. The assumptions about the random variables are described in section 2.3.

Information word: $\mathbf{U} = [U_1 \, U_2 \, \ldots \, U_K]^T \in \mathbb{F}_2^K$

Codeword: $\mathbf{X} = [X_1 \, X_2 \, \ldots \, X_N]^T \in C$, where $\mathbf{X} \in \mathbb{F}_2^N$ and $C$ is the set of all possible (legal) codewords, $|C| = 2^K$, defined as
$$C = \{\mathbf{x} : \mathbf{x} = \mathrm{enc}(\mathbf{u});\ \mathbf{u} \in \mathbb{F}_2^K\}$$

Received word: $\mathbf{Y} = [Y_1 \, Y_2 \, \ldots \, Y_N]^T$; the domain of $\mathbf{Y}$ depends on the channel used

Estimated codeword: $\hat{\mathbf{X}} = [\hat{X}_1 \, \hat{X}_2 \, \ldots \, \hat{X}_N]^T \in \mathbb{F}_2^N$

Estimated information word: $\hat{\mathbf{U}} = [\hat{U}_1 \, \hat{U}_2 \, \ldots \, \hat{U}_K]^T \in \mathbb{F}_2^K$

Code rate: $R = K/N$

In this report, only binary codes are considered. The domain of $\mathbf{Y}$ is defined by the channel used. In the following section 2.2, the different channels and the corresponding output alphabets for $\mathbf{Y}$ are described. Considering Figure 2.1, the information words are generated by the (stochastic) source. Each information word is one-to-one mapped (encoded) to a codeword in the set $C$. The codeword is transmitted across the channel. The received word is decoded to the estimated codeword $\hat{\mathbf{X}}$ and mapped back to an information word as $\hat{\mathbf{U}} = \mathrm{enc}^{-1}(\hat{\mathbf{X}})$. A transmission is successful when $\hat{\mathbf{U}} = \mathbf{U}$, or equivalently $\hat{\mathbf{X}} = \mathbf{X}$. Random variables are in upper case, deterministic variables in lower case, both referring to the same variable. From section to section it is considered whether it is more convenient to treat the variable as random or deterministic.

2.2 Channels

In this report, we consider the following three channels: the Binary Symmetric Channel (BSC), the Binary Erasure Channel (BEC) and the Binary Input Additive White Gaussian Noise Channel (BI-AWGNC).

Figure 2.2: The BSC with crossover probability $\epsilon$ ($0 \to 0$ and $1 \to 1$ with probability $1 - \epsilon$, crossover with probability $\epsilon$).

Figure 2.3: The BEC with erasure probability $\delta$ (each input erased to $\Delta$ with probability $\delta$, passed unchanged with probability $1 - \delta$).

For the BSC in Figure 2.2, each input value is flipped at the output with probability $\epsilon$, so $Y \in \{0, 1\}$. In the BEC of Figure 2.3, each input value can be erased ($\Delta$) with probability $\delta$, mapping $X$ into $Y \in \{0, \Delta, 1\}$. For the (normalized) BI-AWGNC in Figure 2.4, the input $X$ is mapped into $\tilde{X} \in \{+1, -1\}$ and white Gaussian noise is added, resulting in the output $Y = \tilde{X} + W$, where $W \sim \mathcal{N}(0, N_0/2E_s)$. The conditional distribution of $Y$ is
$$\Pr(y \mid x) = \Pr\nolimits_W(y - \tilde{x}) = \frac{1}{\sqrt{2\pi (N_0/2E_s)}} \exp\left( -\frac{(y - \tilde{x})^2}{N_0/E_s} \right) \qquad (2.1)$$

Figure 2.4: The normalized BI-AWGNC: BPSK mapping $X = 0 \mapsto +1$, $X = 1 \mapsto -1$, followed by addition of the noise $W$.

The output alphabet of the BI-AWGNC is $y \in \mathbb{R}$. The Signal to Noise Ratio (SNR) per code symbol is $E_s/N_0$.

2.3 Assumptions

The assumptions made for a system model may not hold in a real system. However, it is common to use some assumptions which help in the analysis of the system. It is assumed that the source is independently and identically distributed (i.i.d.). So, if the variables $\{U_k\}$ are independent, then
$$\Pr(\mathbf{U}) = \prod_{k=1}^{K} \Pr(U_k) \qquad (2.2)$$
If the variables $\{U_k\} \in \mathbb{F}_2$ are identically (uniformly) distributed, then
$$\Pr(U_k) = \frac{1}{2}, \qquad k = 1, 2, \ldots, K \qquad (2.3)$$
Combining equations (2.2) and (2.3) yields that all sequences are equiprobable:
$$\Pr(\mathbf{U}) = \frac{1}{2^K} \qquad (2.4)$$
It is also assumed that the channel is a memoryless channel without feedback. Memoryless means that the noise (crossover probability, erasure probability) affects each symbol sent across the channel independently. Combining this with equation (2.2) yields
$$\Pr(\mathbf{Y} \mid \mathbf{X}) = \prod_{n=1}^{N} \Pr(Y_n \mid X_n) \qquad (2.5)$$
Moreover, a channel without feedback means that $\Pr(Y_n \mid \mathbf{X}) = \Pr(Y_n \mid X_n)$.
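The memoryless assumption of equation (2.5) means that each symbol can be simulated independently. The following is a minimal sketch (our own illustration; the erasure symbol $\Delta$ is represented by None) of the three channels of section 2.2:

```python
# Minimal sketch of the three memoryless channels of section 2.2:
# each code symbol is affected independently, matching equation (2.5).
import random

def bsc(x, eps):
    """Binary symmetric channel: flip each bit with probability eps."""
    return [b ^ 1 if random.random() < eps else b for b in x]

def bec(x, delta):
    """Binary erasure channel: erase each bit (None = erasure) w.p. delta."""
    return [None if random.random() < delta else b for b in x]

def bi_awgn(x, es_n0):
    """Normalized BI-AWGNC: BPSK map 0 -> +1, 1 -> -1, add N(0, N0/2Es)."""
    sigma = (1.0 / (2.0 * es_n0)) ** 0.5
    return [(1.0 if b == 0 else -1.0) + random.gauss(0.0, sigma) for b in x]

x = [0, 0, 1, 1, 0, 1, 0]
print(bsc(x, 0.1), bec(x, 0.3), bi_awgn(x, es_n0=2.0), sep="\n")
```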

2.4 Log Likelihood Ratios

It is common to use a ratio to describe a (binary) variable taking one of two values. Let us define the log likelihood ratio (LLR) or L-value as
$$l = \lambda = L(y \mid x) = \ln\left( \frac{\Pr(y \mid x = 0)}{\Pr(y \mid x = 1)} \right) \qquad (2.6)$$
The L-value describes how certain it is that $x$ is 0 or 1. If $l$ is positive, then $\Pr(y \mid x = 0) > \Pr(y \mid x = 1)$, and the estimate should be $\hat{x} = 0$. The higher the magnitude of $l$, the higher the reliability of the symbol. A hard decision rule on $l$ is
$$\hat{x} = \begin{cases} 0 & l \ge 0 \\ 1 & l < 0 \end{cases} \qquad (2.7)$$
For the different channels, the following L-values are obtained.

BSC
$$L_{\text{BSC}}(y \mid x) = \begin{cases} \ln\frac{1-\epsilon}{\epsilon} & \text{for } y = 0 \\[2pt] \ln\frac{\epsilon}{1-\epsilon} & \text{for } y = 1 \end{cases} \qquad (2.8)$$

BEC
$$L_{\text{BEC}}(y \mid x) = \begin{cases} +\infty & \text{for } y = 0 \\ 0 & \text{for } y = \Delta \\ -\infty & \text{for } y = 1 \end{cases} \qquad (2.9)$$

BI-AWGNC
$$L_{\text{BI-AWGNC}}(y \mid x) = 4 \frac{E_s}{N_0}\, y \qquad (2.10)$$

2.5 Scaling of LLR

Whether the LLR may be scaled can be important for the channel considered during decoding. Under the BSC with crossover probability $\epsilon$ (which also describes the noise in the channel), scaling the LLR $\lambda_n$ by a constant $\beta$ gives
$$\beta\,\lambda_n = \beta \ln\left( \frac{\Pr(y_n \mid x_n = 0)}{\Pr(y_n \mid x_n = 1)} \right) = \begin{cases} \beta \ln\frac{1-\epsilon}{\epsilon} & \text{for } y_n = 0 \\[2pt] \beta \ln\frac{\epsilon}{1-\epsilon} & \text{for } y_n = 1 \end{cases} \qquad (2.11)$$
So, we can always scale $\lambda_n$ to $\pm 1$ by a suitable choice of $\beta$. Thus, it is not important to know the crossover probability (noise) of the BSC when using a decoding algorithm which allows scaling of the LLR.

Similarly, under the BEC the scaling of the LLR is also possible:
$$\beta\,\lambda_n = \beta \ln\left( \frac{\Pr(y_n \mid x_n = 0)}{\Pr(y_n \mid x_n = 1)} \right) = \begin{cases} +\infty & \text{for } y_n = 0 \\ 0 & \text{for } y_n = \Delta \\ -\infty & \text{for } y_n = 1 \end{cases} \qquad (2.12)$$
This observation is also valid for the AWGN channel. If the LLR $\lambda_n$ for the AWGN channel is multiplied by $\beta$,
$$\beta\,\lambda_n = \beta \left( 4 \frac{E_s}{N_0}\, y_n \right) \qquad (2.13)$$
So, scaling merely multiplies the signal to noise ratio $E_s/N_0$ by $\beta$, which implies that knowledge of the noise is not necessary when determining $\lambda_n$ for decoding.

2.6 Types of Decoding

When the codeword $\mathbf{x}$ is transmitted across a channel, the received word $\mathbf{y}$ is decoded to yield $\hat{\mathbf{x}}$. What could be the technique to decode $\mathbf{y}$ in order to estimate the transmitted codeword $\mathbf{x}$? One possible way is to maximize the a-posteriori probability (MAP) and then guess the transmitted codeword:
$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in C} \Pr(\mathbf{x} \mid \mathbf{y}) \qquad (2.14)$$
The decoding described in equation (2.14) is called block-wise MAP decoding. It is block-wise because the entire block $\mathbf{y}$ is decoded to $\hat{\mathbf{x}}$. If the maximization of the a-posteriori probability (APP) is done symbol-wise rather than block-wise, then it is called symbol-wise MAP:
$$\hat{x}_n = \arg\max_{x_n \in \mathbb{F}_2} \underbrace{\Pr(x_n \mid \mathbf{y})}_{\text{APP}}, \qquad n = 1, 2, \ldots, N \qquad (2.15)$$
where $\hat{x}_n \in \mathbb{F}_2$ is an estimated code symbol. Now, if Bayes' rule is applied to (2.14),
$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in C} \frac{\Pr(\mathbf{x}, \mathbf{y})}{\Pr(\mathbf{y})} \qquad (2.16)$$
$$\phantom{\hat{\mathbf{x}}} = \arg\max_{\mathbf{x} \in C} \frac{\Pr(\mathbf{y} \mid \mathbf{x})}{\Pr(\mathbf{y})} \Pr(\mathbf{x}) \qquad (2.17)$$

If all sequences are equiprobable as in equation (2.4), and the encoder is a one-to-one mapping, then $\Pr(\mathbf{x})$ is constant. Since the maximization is only done over $\mathbf{x}$, the received word $\mathbf{y}$ can also be considered a constant:
$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in C} \Pr(\mathbf{y} \mid \mathbf{x}) \qquad (2.18)$$
Equation (2.18) is the block-wise maximum likelihood (ML) decoder. Similarly, the symbol-wise ML decoder can be derived from the symbol-wise MAP decoder. Starting from the symbol-wise maximum a-posteriori probability (symbol-wise MAP),
$$\hat{x}_n = \arg\max_{x_n \in \{0,1\}} \Pr(x_n \mid \mathbf{y}), \qquad n = 1, 2, \ldots, N \qquad (2.19)$$
$$= \arg\max_{x_n \in \{0,1\}} \frac{\Pr(x_n, \mathbf{y})}{\Pr(\mathbf{y})} \qquad \{\text{Bayes' rule}\} \qquad (2.20)$$
$$= \arg\max_{x_n \in \{0,1\}} \frac{\Pr(\mathbf{y} \mid x_n)\, \Pr(x_n)}{\Pr(\mathbf{y})} \qquad (2.21)$$
Since the code symbols are equiprobable,
$$= \arg\max_{x_n \in \{0,1\}} \underbrace{\Pr(\mathbf{y} \mid x_n)}_{\text{symbol-wise ML}}\ \underbrace{\frac{\Pr(x_n)}{\Pr(\mathbf{y})}}_{\text{constant} \,=\, \alpha} \qquad (2.22)$$
The block-wise and symbol-wise ML decoders are the ones mostly considered in this report. The four possible decoders using MAP/ML and symbol/block-wise are shown in Table 2.1.

Decoder type     | Formula
ML block-wise    | $\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in C} \Pr(\mathbf{y} \mid \mathbf{x})$
ML symbol-wise   | $\hat{x}_n = \arg\max_{x_n \in \{0,1\}} \Pr(\mathbf{y} \mid x_n), \quad n = 1, 2, \ldots, N$
MAP block-wise   | $\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in C} \Pr(\mathbf{x} \mid \mathbf{y})$
MAP symbol-wise  | $\hat{x}_n = \arg\max_{x_n \in \{0,1\}} \Pr(x_n \mid \mathbf{y}), \quad n = 1, 2, \ldots, N$

Table 2.1: Four different decoders.
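For small codes, the decoders of Table 2.1 can be implemented by exhaustive search. Below is a minimal sketch (our own illustration, not the report's code) of the channel L-values (2.8)–(2.10) and of the block-wise ML decoder; for the BSC with $\epsilon < 0.5$, maximizing $\Pr(\mathbf{y} \mid \mathbf{x})$ is equivalent to minimizing the Hamming distance between $\mathbf{y}$ and $\mathbf{x}$.

```python
# Minimal sketch: channel L-values from equations (2.8)-(2.10) and the
# block-wise ML decoder of Table 2.1 via exhaustive search over C.
import math

INF = float("inf")

def llr_bsc(y, eps):
    return math.log((1 - eps) / eps) if y == 0 else math.log(eps / (1 - eps))

def llr_bec(y):
    return {0: INF, None: 0.0, 1: -INF}[y]      # None denotes an erasure

def llr_bi_awgn(y, es_n0):
    return 4.0 * es_n0 * y

def ml_blockwise_bsc(y, code):
    """Block-wise ML for the BSC (eps < 0.5): minimum Hamming distance."""
    return min(code, key=lambda x: sum(a != b for a, b in zip(x, y)))

code = [(0, 0, 0), (1, 1, 1)]                    # length-3 repetition code
print(ml_blockwise_bsc((0, 1, 0), code))         # -> (0, 0, 0)
```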

3 Binary Linear Block Codes

The goal of this chapter is first to give definitions of important notions of binary linear block codes. Then, the representation of a factor graph and all its components will be described in section 3.2. Finally, section 3.3 will expound the low density parity check codes. Moreover, it should be noted that all vectors used in this report are column vectors, but some equations are also shown for row vectors; the distinction is made explicit where relevant.

3.1 Definitions

1. A $(N, K)$ binary linear block code is a finite set $C \subseteq \mathbb{F}_2^N$ of codewords $\mathbf{x}$. Each codeword is binary with length $N$, and $C$ contains $2^K$ codewords. The linearity of the code means that any linear combination of codewords is again a codeword [12]. This is an example of a $(7, 4)$ binary block code:
$$C = \left\{ \begin{array}{llll} (0000000), & (1000101), & (0100110), & (0010011), \\ (0001111), & (1100011), & (1010110), & (1001010), \\ (0110101), & (0101001), & (0011100), & (1110000), \\ (1101100), & (1011001), & (0111010), & (1111111) \end{array} \right\} \qquad (3.1)$$

2. A code is generated by a generator matrix $G \in \mathbb{F}_2^{K \times N}$ and an information word $\mathbf{u} \in \mathbb{F}_2^K$ according to the formula
$$\mathbf{x} = \mathbf{u}\, G \qquad (3.2)$$
where $\mathbf{x}$ and $\mathbf{u}$ are row vectors. We work in $\mathbb{F}_2$, so modulo-2 arithmetic is applied.

As only column vectors are used in the report, equation (3.2) becomes
$$\mathbf{x} = G^T \mathbf{u} \qquad (3.3)$$
where $\mathbf{x}$ and $\mathbf{u}$ are column vectors. The set of all $\mathbf{u}$ consists of all binary vectors of length $K$. For our example in equation (3.1),
$$\mathbf{u} \in \left\{ \begin{array}{llll} (0000), & (1000), & (0100), & (0010), \\ (0001), & (1100), & (1010), & (1001), \\ (0110), & (0101), & (0011), & (1110), \\ (1101), & (1011), & (0111), & (1111) \end{array} \right\} \qquad (3.4)$$
The generator matrix $G$ corresponding to the code $C$ is
$$G = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} \qquad (3.5)$$
Note that for a linear code, the rows of the generator matrix are $K$ linearly independent codewords. The rate of a code is given by the relation
$$R = K/N \qquad (3.6)$$
After the definition of the generator matrix, the dual matrix of this generator matrix can also be defined. $H$ is called the parity check matrix and is dual to $G$ because $H \in \mathbb{F}_2^{M \times N}$ and satisfies
$$H\, G^T = 0 \qquad (3.7)$$
$$H\, \mathbf{x} = \mathbf{0}_{M \times 1} \qquad \forall\, \mathbf{x} \in C \qquad (3.8)$$
The rank of $H$ is $N - K$ and often $M \ge N - K$. The matrix $H$ of our example in equation (3.1) is
$$H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} \qquad (3.9)$$
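As a consistency check of the example, the following minimal sketch (our own; $G$ and $H$ as given above) encodes all 16 information words by $\mathbf{x} = G^T \mathbf{u}$, verifies $H\,\mathbf{x} = \mathbf{0}$ over $\mathbb{F}_2$, and finds the minimum Hamming weight over the nonzero codewords (the minimum distance defined next):

```python
# Minimal sketch: encode the (7,4,3) Hamming code example, verify the
# parity checks, and find d_min by enumeration, cf. equation (3.10).
import itertools

G = [[1,0,0,0,1,0,1],
     [0,1,0,0,1,1,0],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [0,1,1,1,0,1,0],
     [0,0,0,1,1,1,1]]

def encode(u):                        # x = G^T u over F2
    return [sum(g * uk for g, uk in zip(col, u)) % 2 for col in zip(*G)]

def syndrome(x):                      # H x over F2
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

codewords = [encode(u) for u in itertools.product([0, 1], repeat=4)]
assert all(syndrome(x) == [0, 0, 0] for x in codewords)
d_min = min(sum(x) for x in codewords if any(x))
print(d_min)                          # -> 3
```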

3. Finally, let us define the minimum distance $d_{\min}$. The minimum distance of a linear code is the minimum Hamming weight over all nonzero codewords of the code, where the Hamming weight of a codeword is the number of 1s in this codeword [8]. For example, the Hamming weight of $(1000101)$ is 3.
$$d_{\min} = \min_{\mathbf{x} \in C \setminus \{\mathbf{0}\}} w_H(\mathbf{x}) \qquad (3.10)$$
Thus, for our example in equation (3.1), the minimum distance is 3. The code $C$ of equation (3.1) is a $(7, 4, 3)$ linear block code. This code, called a Hamming code, will often be used as an example in the report, particularly in the following section 3.2.

3.2 Factor Graphs

A factor graph is a representation with edges and nodes of the parity check equation
$$H\, \mathbf{x} = \mathbf{0}_{M \times 1} \qquad (3.11)$$
Each variable node corresponds to a code symbol of the codeword and each check node represents one check equation. An edge links a variable node to a check node; it corresponds to a 1 in the parity check matrix $H$. A factor graph is a bipartite graph, which means that there are two kinds of nodes and nodes of the same kind are never connected directly by an edge. Taking again the example of the $(7, 4, 3)$ Hamming code, and letting $x_n$ denote a code symbol of the codeword $\mathbf{x}$, the set of parity check equations is
$$H\, \mathbf{x} = \mathbf{0} \iff \begin{cases} \text{chk}(A): & x_1 \oplus x_2 \oplus x_4 \oplus x_5 = 0 \\ \text{chk}(B): & x_2 \oplus x_3 \oplus x_4 \oplus x_6 = 0 \\ \text{chk}(C): & x_4 \oplus x_5 \oplus x_6 \oplus x_7 = 0 \end{cases} \qquad (3.12)$$
Figure 3.1 shows two factor graphs of the $(7, 4, 3)$ Hamming code. These factor graphs have 3 check nodes and 7 variable nodes; for example, there is an edge between each of the variables $x_1$, $x_2$, $x_4$, $x_5$ and the check node $A$. As can be seen in Figure 3.1, there are cycles in this graph, for instance the cycle formed by $x_2$, $A$, $x_4$ and $B$. The presence of cycles in a factor graph can sometimes make the decoding difficult, as will be shown later in the report. Of course, graphs without cycles also exist and can be represented as a tree graph.

Figure 3.1: Factor graphs of the $(7, 4, 3)$ Hamming code (variable nodes $x_1, \ldots, x_7$; check nodes $A$, $B$, $C$).

Figure 3.2, corresponding to the following parity check matrix, is an example of a factor graph without cycles:
$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix} \qquad (3.13)$$

Figure 3.2: A factor graph without cycles (variable nodes $x_1, \ldots, x_5$; check nodes $A$, $B$).

When using a factor graph, it is frequently useful to have the sets of neighbours of both variable and check nodes. So, let us define these sets of neighbours. The set of indices of the variable nodes in the neighbourhood of a check node $m$ is called $\mathcal{N}(m)$. For instance, for the $(7, 4, 3)$ Hamming code in Figure 3.1,
$$\mathcal{N}(B) = \{2, 3, 4, 6\}$$
In the same way, $\mathcal{M}(n)$ is defined as the set of check nodes linked to the variable node $n$. According to Figure 3.1,
$$\mathcal{M}(4) = \{A, B, C\}$$
Factor graphs are a convenient graphical way to represent the setting of decoding and are also used for the message passing algorithm, which will be described in chapter 4.
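The neighbour sets can be read directly off the parity check matrix, since an edge between check $m$ and variable $n$ exists exactly where $H_{m,n} = 1$. A minimal sketch (illustration only):

```python
# Minimal sketch of the neighbour sets N(m) and M(n) of section 3.2,
# read off the parity check matrix of the (7,4,3) Hamming code.
H = [[1,1,0,1,1,0,0],   # check A (index 0)
     [0,1,1,1,0,1,0],   # check B (index 1)
     [0,0,0,1,1,1,1]]   # check C (index 2)

def N(m):               # variable node indices attached to check m
    return {n + 1 for n, h in enumerate(H[m]) if h}

def M(n):               # check node indices attached to variable n
    return {m for m, row in enumerate(H) if row[n - 1]}

print(N(1))   # check B -> {2, 3, 4, 6}
print(M(4))   # variable x4 -> {0, 1, 2}, i.e. {A, B, C}
```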

3.3 Low Density Parity Check Codes

In the following section, Low Density Parity Check (LDPC) codes will be introduced and described. LDPC codes were discovered by Robert Gallager in the 60s. They were forgotten for almost 30 years before being rediscovered thanks to their most important advantage: they allow data transmission rates close to the Shannon limit, the theoretical limit [21]. A design of an LDPC code which comes within 0.0045 dB of the Shannon limit has been found [4]. This discovery motivates the interest of researchers in LDPC codes and decoders, as the decoding gives a really small probability of lost information. LDPC codes are now becoming the standard in error correction for applications such as mobile phones and satellite transmission of digital television [18][21].

LDPC codes can be classified into two categories, regular and irregular LDPC codes. A regular LDPC code is characterized by two values, $d_v$ and $d_c$: $d_v$ is the number of ones in each column of the parity check matrix $H \in \mathbb{F}_2^{M \times N}$, and $d_c$ is the number of ones in each row. There are two different rates for LDPC codes. The true rate is the usual rate
$$R = K/N \qquad (3.14)$$
The second rate is called the design rate:
$$R_d = 1 - \frac{d_v}{d_c} \qquad (3.15)$$
The relation between these two rates is
$$R \ge R_d \qquad (3.16)$$

Let us prove equation (3.16). The number of ones in the parity check matrix $H$ is $M d_c = N d_v$. In $H$, some check equations can be repeated, so the number of rows $M$ can be greater than or equal to $N - K$. Thus,
$$R = \frac{K}{N} = \frac{N - (N - K)}{N} = 1 - \frac{N - K}{N} \ge 1 - \frac{M}{N} = 1 - \frac{d_v}{d_c}$$
For instance, the following is a regular LDPC-style parity check matrix with its parameters:
$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix} \qquad (3.17)$$
$d_v = 2$; $d_c = 3$; $M = 4$; $N = 6$. Since every column contains an even number of ones, the rows of $H$ sum to zero over $\mathbb{F}_2$ and are linearly dependent, so $\mathrm{rank}(H) = 3$; $K = N - \mathrm{rank}(H) = 3$; and $R = 1/2 \ge R_d = 1/3$, consistent with equation (3.16). Note that a real LDPC code, as its name says, has a small number of ones per row and column compared to its very large dimensions. LDPC codes work well for code lengths $N > 1000$ [11]; for instance, the dimensions of a code can be $N = 1000$, $M = 500$ with degrees $d_v = 5$, $d_c = 10$.

An irregular LDPC code is a code with varying numbers of ones across its rows and columns. Irregular codes are known to perform better than regular ones [10]. Owing to this difference, new variables are defined for irregular LDPC codes (see appendix A). The following example is not a real irregular LDPC code, but it is an irregular linear code and helps to illustrate what an irregular LDPC code can be:
$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix} \qquad (3.18)$$
We can see in Figure 3.3 that the number of ones in some columns is 3 and in others it is 2. We have the same situation for the rows: some rows have 3 ones and others have 2 ones.

Figure 3.3: Irregular LDPC code (variable nodes $x_1, \ldots, x_6$; check nodes $A$, $B$, $C$, $D$, $E$).
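The true rate and the design rate of a given $H$ can be compared numerically; $K = N - \mathrm{rank}(H)$ requires Gaussian elimination over $\mathbb{F}_2$. A minimal sketch (our own helper, not the report's code), applied to the regular example of equation (3.17):

```python
# Minimal sketch: true rate R = K/N with K = N - rank(H) over F2,
# compared with the design rate R_d = 1 - dv/dc of equation (3.15).
def rank_f2(H):
    rows = [int("".join(map(str, r)), 2) for r in H]
    rank = 0
    for bit in reversed(range(len(H[0]))):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank

H = [[1,1,1,0,0,0],
     [1,0,0,1,1,0],
     [0,1,0,1,0,1],
     [0,0,1,0,1,1]]      # the (dv=2, dc=3)-regular example of (3.17)
N, M = len(H[0]), len(H)
K = N - rank_f2(H)
print(K / N, 1 - 2 / 3)  # R = 0.5 >= R_d = 0.333..., cf. (3.16)
```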


4 Message Passing Algorithm

Linear block and LDPC codes can be decoded iteratively on the factor graph by the Message Passing Algorithm (MPA), also known as the Sum-Product (belief/probability propagation) or Max-Product (Min-Sum) algorithm [11]. As described in section 3.2, a factor graph represents a factorization of the global code constraint $H\,\mathbf{x} = \mathbf{0}$ into local code constraints, represented by the connections between variable and check nodes. These nodes perform local decoding operations and exchange messages along the edges of the factor graph. An extrinsic message is a soft value for a symbol computed without using the direct observation of that symbol in the local decoding operation.

The message passing algorithm is an iterative decoding technique. In the first iteration, the incoming messages received from the channel at the variable nodes are passed directly along the edges to the neighbouring check nodes, because there are no incoming (extrinsic) messages from the check nodes yet. The check nodes perform local decoding operations to compute outgoing (extrinsic) messages from the incoming messages received from the neighbouring variable nodes. Thereafter, these new outgoing messages are sent back along the edges to the neighbouring variable nodes. One complete iteration thus means that one outgoing (extrinsic) message has passed in both directions along every edge. One iteration is illustrated in Figure 4.1 for the $(7, 4, 3)$ Hamming code by showing the messages in both directions along every edge. The variable-to-check ($\mu_{vc}$) and check-to-variable ($\mu_{cv}$) messages are extrinsic and are shown in the same figure. After every complete iteration, it is checked whether a valid codeword has been found or not.

Figure 4.1: One complete iteration in the factor graph of the $(7, 4, 3)$ Hamming code. The messages $\mu_{ch}(1), \ldots, \mu_{ch}(7)$ come from the channel; messages such as $\mu_{vc}(1, A)$ and $\mu_{cv}(A, 1)$ are the extrinsic messages exchanged along the edges.

If the estimated code symbols form a valid codeword, i.e.
$$H\, \hat{\mathbf{x}} = \mathbf{0}$$
where $\hat{\mathbf{x}}$ is the estimated codeword, then the iterations are terminated; otherwise they continue. After the first complete iteration, the variable nodes perform local decoding operations in the same way to compute the outgoing (extrinsic) messages from the incoming messages received from both the channel and the neighbouring check nodes. In this way, the iterations continue to update the extrinsic messages until a valid codeword is found or some other stopping criterion is fulfilled.

4.1 Message Passing Algorithm and Node Operations

Considering any factor graph in general, the message passing algorithm is listed below in order to give an overview. The extrinsic messages computed by the local decoding operations at the variable nodes are denoted $\mu_{vc}$ (message from variable to check), while those computed at the check nodes are denoted $\mu_{cv}$ (message from check to variable).

1. The initial message coming from the channel at variable node $n$ is denoted $\mu_{ch}(n)$.

2. The extrinsic message from variable node $n$ to check node $m$ is
$$\mu_{vc}(n, m) = \mathrm{fct}_v\big( \mu_{ch}(n),\ \{\mu_{cv}(m', n) : m' \in \mathcal{M}(n) \setminus m\} \big) \qquad (4.1)$$
where $n$ is a variable node, $\mathcal{M}(n)$ is the set of check nodes neighbouring the variable node $n$, and $\mathcal{M}(n) \setminus m$ is this set with $m$ excluded. The new or updated extrinsic message $\mu_{vc}(n, m)$, computed by the local decoding operation (function) $\mathrm{fct}_v$, is sent to the check node $m$. Therefore, the incoming extrinsic message $\mu_{cv}(m, n)$ from check node $m$ is not used when updating the message $\mu_{vc}(n, m)$.

3. The extrinsic message from check node $m$ to variable node $n$ is
$$\mu_{cv}(m, n) = \mathrm{fct}_c\big( \{\mu_{vc}(n', m) : n' \in \mathcal{N}(m) \setminus n\} \big) \qquad (4.2)$$
where $\mathrm{fct}_c$ is the local decoding operation at a check node and $\mathcal{N}(m) \setminus n$ is the set of variable nodes neighbouring the check node $m$, with $n$ excluded.

4. The final message computed at variable node $n$ in order to estimate the code symbol is
$$\mu_v(n) = \mathrm{fct}_v\big( \mu_{ch}(n),\ \{\mu_{cv}(m, n) : m \in \mathcal{M}(n)\} \big) \qquad (4.3)$$

5. The estimation of a code symbol $X_n$ can be done by hard decision:
$$\hat{x}_n = \begin{cases} 0 & \text{if } \Pr(X_n = 0 \mid \mu_v(n)) \ge \Pr(X_n = 1 \mid \mu_v(n)) \\ 1 & \text{else} \end{cases} \qquad (4.4)$$

6. If these symbol-wise estimated code symbols are stacked to form the vector $\hat{\mathbf{x}}$ of length $N$, then it can be checked whether $\hat{\mathbf{x}}$ is a valid codeword:
$$H\, \hat{\mathbf{x}} = \mathbf{0} \qquad (4.5)$$

7. If equation (4.5) is satisfied, or the current iteration number equals some defined maximum number of iterations, then stop; otherwise repeat steps 2 to 7.

4.1.1 Example of Node Operations

Considering the factor graph of the $(7, 4, 3)$ Hamming code shown in Figure 4.1, the steps of the algorithm above can be illustrated as follows.

At the variable node 2: The initial message at the variable node 2 is $\mu_{ch}(2)$. In general, the incoming messages at node 2 are
$$\mu_{ch}(2),\ \mu_{cv}(A, 2),\ \mu_{cv}(B, 2) \qquad (4.6)$$
and the outgoing (extrinsic) messages from node 2 are
$$\mu_{vc}(2, A),\ \mu_{vc}(2, B) \qquad (4.7)$$
The local decoding operation at node 2 to compute the outgoing extrinsic message, say $\mu_{vc}(2, B)$, is
$$\mu_{vc}(2, B) = \mathrm{fct}_v\big( \mu_{ch}(2),\ \mu_{cv}(A, 2) \big) \qquad (4.8)$$
So, the message $\mu_{cv}(B, 2)$ is excluded from the computation of $\mu_{vc}(2, B)$.

At the check node B: The incoming (extrinsic) messages at the check node $B$ are
$$\mu_{vc}(2, B),\ \mu_{vc}(3, B),\ \mu_{vc}(4, B),\ \mu_{vc}(6, B) \qquad (4.9)$$
and the outgoing (extrinsic) messages from the check node $B$ are
$$\mu_{cv}(B, 2),\ \mu_{cv}(B, 3),\ \mu_{cv}(B, 4),\ \mu_{cv}(B, 6) \qquad (4.10)$$
The local decoding operation at the check node $B$ to compute the outgoing extrinsic message, say $\mu_{cv}(B, 2)$, is
$$\mu_{cv}(B, 2) = \mathrm{fct}_c\big( \mu_{vc}(3, B),\ \mu_{vc}(4, B),\ \mu_{vc}(6, B) \big) \qquad (4.11)$$
It can be noticed that the message $\mu_{vc}(2, B)$ is excluded from the computation of $\mu_{cv}(B, 2)$. It shall be noted that the messages $\mu$ can be expressed either in terms of probabilities or in terms of log likelihood ratios (LLR).
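To tie steps 1–7 together, here is a minimal sketch (our own illustration, not the report's enclosed MATLAB code) of the algorithm in the L-value domain, using the standard sum-product rules: variable nodes add incoming L-values, and check nodes combine them with the box-plus operator, which is proved in Appendix B.

```python
# Minimal sketch of steps 1-7 in the L-value domain for a code given by H.
import math

def boxplus(values):
    """Check node rule: 2 * atanh( prod_i tanh(l_i / 2) )."""
    p = 1.0
    for l in values:
        p *= math.tanh(l / 2.0)
    p = max(min(p, 1.0 - 1e-12), -1.0 + 1e-12)   # numerical safety
    return 2.0 * math.atanh(p)

def decode(H, llr_ch, max_iter=50):
    M, N = len(H), len(H[0])
    edges = [(m, n) for m in range(M) for n in range(N) if H[m][n]]
    mu_cv = {e: 0.0 for e in edges}   # no extrinsic input in iteration 1
    for _ in range(max_iter):
        # step 2: variable -> check, excluding the message on edge (m, n)
        mu_vc = {(m, n): llr_ch[n] + sum(mu_cv[(m2, n)] for m2 in range(M)
                 if H[m2][n] and m2 != m) for (m, n) in edges}
        # step 3: check -> variable, excluding the message on edge (m, n)
        mu_cv = {(m, n): boxplus([mu_vc[(m, n2)] for n2 in range(N)
                 if H[m][n2] and n2 != n]) for (m, n) in edges}
        # steps 4-5: final L-values and hard decision, cf. (4.3)-(4.4)
        mu_v = [llr_ch[n] + sum(mu_cv[(m, n)] for m in range(M) if H[m][n])
                for n in range(N)]
        x_hat = [0 if l >= 0 else 1 for l in mu_v]
        # steps 6-7: stop as soon as H x_hat = 0, cf. (4.5)
        if all(sum(h * x for h, x in zip(row, x_hat)) % 2 == 0 for row in H):
            break
    return x_hat
```

For example, decode(H, llr) with the $H$ of equation (3.9) and channel L-values from equations (2.8)–(2.10) returns a hard-decision estimate $\hat{\mathbf{x}}$.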

4.2 Definitions and Notations

Some terms and notations are introduced here for the extrinsic, a-posteriori and intrinsic probabilities, purely to simplify further derivations and proofs. These short notations should become easy to assimilate as the report goes on. If a codeword $\mathbf{x}$ of length $N$ is sent across a memoryless channel and $\mathbf{y}$ is the received word, the extrinsic a-posteriori probability after decoding for the $n$-th symbol taking the value $b \in \{0, 1\}$ is
$$p_{e,n}^{b} = \Pr(X_n = b \mid \mathbf{y}_{\setminus n}), \qquad n = 1, 2, \ldots, N;\ b = 0, 1 \qquad (4.12)$$
where $\mathbf{y}_{\setminus n} = [y_1 \, y_2 \ldots y_{n-1} \, y_{n+1} \ldots y_N]^T$, i.e. $y_n$ is excluded from the received word $\mathbf{y}$. The a-posteriori probability (APP) after decoding is given in equation (4.13) to show the difference between the APP and the extrinsic a-posteriori probability:
$$p_{p,n}^{b} = \Pr(X_n = b \mid \mathbf{y}), \qquad n = 1, 2, \ldots, N;\ b = 0, 1 \qquad (4.13)$$
So, the difference between the APP and the extrinsic a-posteriori probability is simply that $y_n$ is excluded from the received word $\mathbf{y}$ in the formulation of the extrinsic a-posteriori probability. One more definition is introduced here, the intrinsic probability before decoding, defined as
$$p_{ch,n}^{b} = \Pr(y_n \mid X_n = b), \qquad n = 1, 2, \ldots, N;\ b = 0, 1 \qquad (4.14)$$
The channel is assumed to be a binary-input memoryless symmetric channel. The channel properties are reiterated here to justify the independence assumption in the factor graph. Binary-input means that the transmitted data are discrete symbols from the Galois field $\mathbb{F}_2$, i.e. $\{0, 1\}$; memoryless means that each symbol is affected independently by the noise in the channel; and symmetric means that the noise in the channel affects the 0s and 1s in the same way. As there is no direct connection between any two variable nodes in the factor graph, the decoding of a code symbol can be considered at each variable node independently. This means that the local decoding operations can be performed independently at both the variable and the check nodes of the factor graph. If the factor graph is cycle free, the independence assumption is valid; if the factor graph has cycles, the assumption remains valid only for the few iterations until the messages have travelled around the cycles.

The MPA is optimal (i.e., performs maximum likelihood (ML) decoding) for codes whose factor graph is cycle free, and sub-optimal otherwise, due to the cycles in the factor graph. Even for codes with cycles, the MPA decoder still performs close to the ML decoder [9]. Furthermore, the overall decoding complexity is linear in the code length [15]. These are the motivations behind studying the MPA decoder. In this chapter, the sum-product and max-product/min-sum algorithms are described, and the performance of the message passing algorithm is explained for the binary erasure channel (BEC). The BEC is considered because it makes the concepts behind the update rules easy to explain. The stopping set (a set of code symbols which is not resolvable) is also explained in detail for the BEC.

4.3 Sum-Product Algorithm

The sum-product algorithm was invented by Gallager [6] as a decoding algorithm for LDPC codes and is still the standard algorithm for decoding them [11]. The sum-product algorithm operates on a factor graph and attempts to compute the marginal functions associated with the global function, or global code constraint, by iterative computation of local functions, or local code constraints. In this section, the sum-product algorithm is presented as a method for maximum likelihood symbol-wise decoding. The update rules for independent and isolated variable and check nodes are derived in terms of probabilities and LLRs (L-values). The algorithm is explained in an intuitive way so as to show the concepts behind it.

4.3.1 Maximum Likelihood

A property of the sum-product algorithm is that for cycle free factor graphs it performs maximum likelihood (ML) symbol-wise decoding. In this section, the sum-product formulation is derived from symbol-wise ML decoding (cf. section 2.6 for the types of decoding). Consider a linear block code $C \subseteq \mathbb{F}_2^N$ with $|C| = 2^K$, information word length $K$ and code rate $R = K/N$. The code is defined by a parity check matrix $H \in \mathbb{F}_2^{M \times N}$, $M \ge N - K$.

Formally,
$$C = \{\mathbf{x} : \mathbf{x} = \mathrm{enc}(\mathbf{u}),\ \text{information word } \mathbf{u} \in \mathbb{F}_2^K\} = \{\mathbf{x} \in \mathbb{F}_2^N : H\,\mathbf{x} = \mathbf{0}\}$$
If the code $C$ has a cycle free factor graph and a codeword $\mathbf{x} \in C$ is transmitted through a binary-input memoryless channel, then the sum-product algorithm can decode the received word $\mathbf{y}$ into an optimal (ML) estimate. It is assumed that the code symbols and codewords are equiprobable. In general, if the code symbols are equiprobable, then maximizing the a-posteriori probability (MAP) $\Pr(x_n \mid \mathbf{y})$ and the likelihood (ML) $\Pr(\mathbf{y} \mid x_n)$ give the same result. Let $\hat{x}_n \in \mathbb{F}_2$ denote an estimated code symbol. Then symbol-wise ML decoding is
$$\hat{x}_n = \arg\max_{x_n \in \{0,1\}} \Pr(\mathbf{y} \mid x_n), \qquad n = 1, 2, \ldots, N \qquad (4.15)$$
$$= \arg\max_{x_n \in \{0,1\}} \frac{\Pr(x_n, \mathbf{y})}{\underbrace{\Pr(x_n)}_{\text{constant}}} \qquad \{\text{Bayes' rule; code symbols } x_n \text{ equiprobable}\} \qquad (4.16)$$
$$= \arg\max_{x_n \in \{0,1\}} \Pr(x_n, \mathbf{y}) \qquad (4.17)$$
$$= \arg\max_{x_n \in \{0,1\}} \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \Pr(\mathbf{x}, \mathbf{y}) \qquad (4.18)$$
$$= \arg\max_{x_n \in \{0,1\}} \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \Pr(\mathbf{y} \mid \mathbf{x}) \underbrace{\Pr(\mathbf{x})}_{\text{constant}} = \arg\max_{x_n \in \{0,1\}} \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \Pr(\mathbf{y} \mid \mathbf{x}) \qquad \{\text{codewords } \mathbf{x} \text{ equiprobable}\} \qquad (4.19)$$

Since the channel is assumed to be memoryless,
$$\hat{x}_n = \underbrace{\arg\max_{x_n \in \{0,1\}}}_{\text{Decision}} \underbrace{\sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}}}_{\text{Sum}} \underbrace{\prod_{j=1}^{N} \Pr(y_j \mid x_j)}_{\text{Product}} \qquad (4.20)$$
So, the sum-product formulation can be derived from symbol-wise maximum likelihood decoding for any code. However, if there are cycles in the factor graph of the code, then the factor graph does not have a tree structure and the sum-product algorithm is sub-optimal, although close to the ML decoder [10].

4.3.2 The General Formulation of APP

The general form of the APP follows easily from equations (4.20) and (2.22), i.e.
$$\text{(APP)} \quad p_{p,n}^{x_n} = \Pr(X_n = x_n \mid \mathbf{y}) \qquad (4.21)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{j=1}^{N} \Pr(y_j \mid X_j = x_j) \qquad \{\alpha \text{ is a scaling factor}\} \qquad (4.22)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{j=1}^{N} p_{ch,j}^{x_j} \qquad (4.23)$$
So, the above equation can also be written as
$$p_{p,n}^{0} = \Pr(X_n = 0 \mid \mathbf{y}) = \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n = 0}} \prod_{j=1}^{N} p_{ch,j}^{x_j} \qquad (4.24)$$
$$p_{p,n}^{1} = \Pr(X_n = 1 \mid \mathbf{y}) = \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n = 1}} \prod_{j=1}^{N} p_{ch,j}^{x_j} \qquad (4.25)$$
with the scaling factor $\alpha$ chosen such that $p_{p,n}^{0} + p_{p,n}^{1} = 1$.
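For small codes, equations (4.24)–(4.25) can be evaluated directly. A minimal sketch (our own illustration; the sum over $C$ is exponential in $K$, so this is only for small examples):

```python
# Minimal sketch of the brute-force APP of equations (4.24)-(4.25):
# sum, over all codewords with x_n fixed, of the product of intrinsic
# probabilities p_ch.
def app(code, p_ch, n):
    """p_ch[j][b] = Pr(y_j | X_j = b); returns (p0_p,n, p1_p,n)."""
    p = [0.0, 0.0]
    for x in code:
        prod = 1.0
        for j, xj in enumerate(x):
            prod *= p_ch[j][xj]
        p[x[n]] += prod
    alpha = 1.0 / (p[0] + p[1])        # normalization: p0 + p1 = 1
    return p[0] * alpha, p[1] * alpha

# The length-3 repetition code keeps the numbers easy to follow;
# the (7,4,3) Hamming code of chapter 3 would also work.
code = [(0, 0, 0), (1, 1, 1)]
p_ch = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.3)]
print(app(code, p_ch, 1))              # APP of symbol x_2 -> (0.84, 0.16)
```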

4.3.3 The General Formulation of Extrinsic A-Posteriori Probability

The general formulation of the extrinsic a-posteriori probability can be derived from the general form of the APP using equation (4.22):
$$\text{(APP)} \quad p_{p,n}^{x_n} = \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \Pr(y_n \mid X_n = x_n) \prod_{\substack{j = 1 \\ j \ne n}}^{N} p_{ch,j}^{x_j} \qquad (4.26)$$
$$= \Pr(y_n \mid X_n = x_n)\ \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} p_{ch,j}^{x_j} \qquad (4.27)$$
$$= \underbrace{\Pr(y_n \mid X_n = x_n)}_{p_{ch,n}^{x_n}\ \text{(intrinsic probability)}}\ \underbrace{\alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} \Pr(y_j \mid X_j = x_j)}_{\Pr(X_n = x_n \mid \mathbf{y}_{\setminus n})\ \text{(extrinsic probability)}} \qquad (4.28)$$
So, the extrinsic a-posteriori probability in general is
$$p_{e,n}^{x_n} = \Pr(X_n = x_n \mid \mathbf{y}_{\setminus n}) \qquad (4.29)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} \Pr(y_j \mid X_j = x_j) \qquad (4.30)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n \text{ fixed}}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} p_{ch,j}^{x_j} \qquad (4.31)$$

Moreover, the extrinsic a-posteriori probability can be rewritten as
$$p_{e,n}^{0} = \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n = 0}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} p_{ch,j}^{x_j} \qquad (4.32)$$
$$p_{e,n}^{1} = \alpha \sum_{\substack{\mathbf{x} \in C \\ x_n = 1}} \prod_{\substack{j = 1 \\ j \ne n}}^{N} p_{ch,j}^{x_j} \qquad (4.33)$$
with the scaling factor $\alpha$ chosen such that $p_{e,n}^{0} + p_{e,n}^{1} = 1$.

4.3.4 Intrinsic, A-posteriori and Extrinsic L-values

Before going further, it should be emphasized that the log likelihood ratios (LLRs or L-values) of the intrinsic, a-posteriori and extrinsic quantities are simply the log ratios of the corresponding probability formulations. In fact, L-values have lower complexity and are more convenient to use than messages in terms of probabilities. Moreover, the log likelihood ratio is a special L-value [9].

Intrinsic L-value:
$$l_{ch,n} = L(y_n \mid X_n) = \ln\left( \frac{\Pr(y_n \mid X_n = 0)}{\Pr(y_n \mid X_n = 1)} \right) = \ln\left( \frac{p_{ch,n}^{0}}{p_{ch,n}^{1}} \right) \qquad (4.34)$$
A-posteriori L-value:
$$l_{p,n} = L(X_n \mid \mathbf{y}) = \ln\left( \frac{\Pr(X_n = 0 \mid \mathbf{y})}{\Pr(X_n = 1 \mid \mathbf{y})} \right) = \ln\left( \frac{p_{p,n}^{0}}{p_{p,n}^{1}} \right) \qquad (4.35)$$
Extrinsic L-value:
$$l_{e,n} = L(X_n \mid \mathbf{y}_{\setminus n}) = \ln\left( \frac{\Pr(X_n = 0 \mid \mathbf{y}_{\setminus n})}{\Pr(X_n = 1 \mid \mathbf{y}_{\setminus n})} \right) = \ln\left( \frac{p_{e,n}^{0}}{p_{e,n}^{1}} \right) \qquad (4.36)$$
It should be noted that the intrinsic, a-posteriori and extrinsic probabilities can also be recovered if the respective L-values (LLRs) are given. For convenience, the subscripts are dropped from both the L-values ($l$) and the probabilities ($p$) to derive the relation, i.e. $\Pr(X \mid l)$. Since
$$p^0 + p^1 = 1 \qquad (4.37)$$
and
$$l = \ln\left( \frac{p^0}{p^1} \right) \qquad (4.38)$$

So, from the two relations (4.37) and (4.38),
$$l = \ln \frac{p^0}{1 - p^0} \qquad (4.39)$$
$$(1 - p^0)\, e^l = p^0 \qquad (4.40)$$
$$e^l = (1 + e^l)\, p^0 \qquad (4.41)$$
$$p^0 = \frac{e^l}{1 + e^l} = \frac{e^{l/2}\, e^{l/2}}{e^{l/2}\,\big(e^{-l/2} + e^{l/2}\big)} \qquad (4.42)$$
$$p^0 = \Pr(X = 0 \mid l) = \frac{e^{+l/2}}{e^{-l/2} + e^{+l/2}} \qquad (4.43)$$
Similarly,
$$p^1 = \Pr(X = 1 \mid l) = \frac{e^{-l/2}}{e^{-l/2} + e^{+l/2}} \qquad (4.44)$$
As described, the sum-product algorithm performs local decoding operations at both the variable nodes and the check nodes, individually and independently, in order to update the extrinsic messages iteratively until a valid codeword is found or some other stopping criterion is fulfilled. In the next section, the update rules, which are essentially the sum-product formulation, are given for both variable and check nodes.
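The conversions (4.38) and (4.43)–(4.44) in code form (a minimal sketch, illustration only):

```python
# Minimal sketch of the L-value/probability conversions (4.38), (4.43)-(4.44),
# useful for switching between the two message domains.
import math

def prob_to_llr(p0):
    return math.log(p0 / (1.0 - p0))

def llr_to_prob(l):
    p0 = math.exp(l / 2.0) / (math.exp(-l / 2.0) + math.exp(l / 2.0))
    return p0, 1.0 - p0

l = prob_to_llr(0.9)          # about +2.197, so the hard decision is 0
print(llr_to_prob(l))         # recovers (0.9, 0.1)
```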

4.3.5 Sum-Product Message Update Rules

A property of the message passing algorithm is that the individual and independent variable nodes have repetition code constraints, while the check nodes have single parity check code constraints; this is proved in this subsection. Forney factor graphs (FFG) [11] are used, because of their simplicity, to prove the repetition code constraints at the variable nodes and the single parity check code constraints at the check nodes. It should be emphasized that Forney style factor graphs and the factor graphs/Tanner graphs/Tanner-Wiberg graphs [9] impose the same code constraints at the variable and check nodes; they are the same graphs with different representations. For more information regarding Forney style factor graphs, see [11]. In this subsection, the update rules are shown in terms of both probabilities and log likelihood ratios (L-values) in an intuitive way, considering small examples before generalizing.

Message Update Rules for Variable Nodes

The standard coding model is considered, in which a codeword $\mathbf{x}$ of length $N$ is selected from a code $C \subseteq \mathbb{F}_2^N$, $|C| = 2^K$, and transmitted across a memoryless channel, with the corresponding received word $\mathbf{y}$ of length $N$. A small example is considered, whose code is represented by the parity check matrix
$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix} \qquad (4.45)$$
The Forney style factor graph together with the coding model is shown in Figure 4.2, and the individual variable node 3 of Figure 4.2 is shown in Figure 4.3. The repetition code constraint of a variable node is proved in an intuitive way.

Figure 4.2: Forney style factor graph (FFG) with the coding model of the code defined by the check matrix in equation (4.45): the received symbols $y_n$ enter through the memoryless channel factors $\Pr(y_n \mid x_n)$, the equality (variable) nodes such as $f_3(x_3, x_{3A}, x_{3B})$ replicate the code symbols, and the check functions such as $g_A(x_{1A}, x_{2A}, x_{3A})$ connect them to the parity checks $A$ and $B$.

Figure 4.3: The individual and isolated variable node 3 of Figure 4.2, with indicator function $f_3(x_3, x_{3A}, x_{3B})$.

The indicator function of variable node 3 is defined as $f_3(x_3, x_{3A}, x_{3B})$ [11], where $x_3$, $x_{3A}$ and $x_{3B}$ are the variables, such that
$$f_3(x_3, x_{3A}, x_{3B}) = \begin{cases} 1, & \text{if } x_3 = x_{3A} = x_{3B} \\ 0, & \text{otherwise} \end{cases} \qquad (4.46)$$
Equation (4.46) implies that all the edges connected to a variable node carry the same value in their variables, i.e. $x_3 = x_{3A} = x_{3B} = 0$ or $x_3 = x_{3A} = x_{3B} = 1$, like a repetition code. Thus, the message update rules of the variable nodes can be defined by considering that a variable node imposes a repetition code constraint. Now a small example of a repetition code of length $N = 3$ is considered to explain the message update rules for variable nodes of degree 3; thereafter, the message update rules for variable nodes are generalized. It is also shown that these message update rules are instances of the sum-product formulation. Let the repetition/repeat code be $C = \{000, 111\}$, such that the codeword $\mathbf{x} = [x_1 \, x_2 \, x_3]^T \in C$ is transmitted across the memoryless channel and the received word is $\mathbf{y} = [y_1 \, y_2 \, y_3]^T$. The FFG and factor graph of the repetition code of length $N = 3$ are shown in Figure 4.4; the code is represented by the parity check matrix
$$H = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \qquad (4.47)$$

Figure 4.4: Forney factor graph (FFG) and factor graph of the repetition code of length $N = 3$ (variable nodes $x_1$, $x_2$, $x_3$; check nodes $A$, $B$).

The extrinsic message for the code symbol $x_2$ is considered in order to explain the update rules for the repetition code of length $N = 3$. So, from equations (4.32) and (4.33),
$$\underbrace{\Pr(x_2 = 0 \mid \mathbf{y}_{\setminus 2})}_{\mu_2(0)} = \Pr(x_2 = 0 \mid y_1, y_3) = p_{e,2}^{0} \qquad (4.48)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_2 = 0}} \prod_{\substack{j = 1 \\ j \ne 2}}^{N=3} \Pr(y_j \mid X_j = x_j) \qquad (4.49)$$
$$= \alpha \sum_{\mathbf{x} = [000]} \Pr(y_1 \mid X_1 = 0)\, \Pr(y_3 \mid X_3 = 0) \qquad (4.50)$$
$$= \alpha\, \Pr(y_1 \mid X_1 = 0)\, \Pr(y_3 \mid X_3 = 0) \qquad (4.51)$$
$$p_{e,2}^{0} = \alpha\, \underbrace{p_{ch,1}^{0}}_{\mu_1(0)}\, \underbrace{p_{ch,3}^{0}}_{\mu_3(0)} \qquad (4.52)$$
Similarly,
$$\underbrace{\Pr(x_2 = 1 \mid \mathbf{y}_{\setminus 2})}_{\mu_2(1)} = \Pr(x_2 = 1 \mid y_1, y_3) = p_{e,2}^{1} \qquad (4.53)$$
$$= \alpha \sum_{\substack{\mathbf{x} \in C \\ x_2 = 1}} \prod_{\substack{j = 1 \\ j \ne 2}}^{N=3} \Pr(y_j \mid X_j = x_j) \qquad (4.54)$$
$$= \alpha \sum_{\mathbf{x} = [111]} \Pr(y_1 \mid X_1 = 1)\, \Pr(y_3 \mid X_3 = 1) \qquad (4.55)$$
$$= \alpha\, \Pr(y_1 \mid X_1 = 1)\, \Pr(y_3 \mid X_3 = 1) \qquad (4.56)$$
$$p_{e,2}^{1} = \alpha\, \underbrace{p_{ch,1}^{1}}_{\mu_1(1)}\, \underbrace{p_{ch,3}^{1}}_{\mu_3(1)} \qquad (4.57)$$
The scaling factor $\alpha$ is chosen such that $p_{e,2}^{0} + p_{e,2}^{1} = 1$.
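As a numeric illustration (intrinsic probabilities chosen arbitrarily), the following evaluates (4.52) and (4.57); the L-value form derived next gives the same result:

```python
# Numeric illustration (values ours) of the extrinsic update (4.52)/(4.57)
# for the length-3 repetition code: mu_2 depends only on y_1 and y_3.
mu1 = (0.9, 0.1)                 # (p0_ch,1, p1_ch,1)
mu3 = (0.7, 0.3)                 # (p0_ch,3, p1_ch,3)

p0 = mu1[0] * mu3[0]             # unnormalized p0_e,2
p1 = mu1[1] * mu3[1]             # unnormalized p1_e,2
alpha = 1.0 / (p0 + p1)          # so that p0_e,2 + p1_e,2 = 1
print(p0 * alpha, p1 * alpha)    # -> 0.9545... and 0.0454...
```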

In terms of log likelihood ratios (L-values), the message update rule for variable node 2 is
$$\underbrace{l_{e,2}}_{l_2} = L(x_2 \mid y_1, y_3) = \ln\left( \frac{\Pr(x_2 = 0 \mid y_1, y_3)}{\Pr(x_2 = 1 \mid y_1, y_3)} \right) \qquad (4.58)$$
By using equations (4.51) and (4.56), \qquad (4.59)
$$= \ln \frac{\alpha\, \Pr(y_1 \mid X_1 = 0)\, \Pr(y_3 \mid X_3 = 0)}{\alpha\, \Pr(y_1 \mid X_1 = 1)\, \Pr(y_3 \mid X_3 = 1)} \qquad (4.60)$$
$$= \underbrace{\ln \frac{\Pr(y_1 \mid X_1 = 0)}{\Pr(y_1 \mid X_1 = 1)}}_{l_{ch,1}} + \underbrace{\ln \frac{\Pr(y_3 \mid X_3 = 0)}{\Pr(y_3 \mid X_3 = 1)}}_{l_{ch,3}} \qquad (4.61)$$
$$l_{e,2} = \underbrace{l_{ch,1}}_{l_1} + \underbrace{l_{ch,3}}_{l_3} \qquad (4.62)$$

Summary of the message update rules for a variable node of degree three

It should be noted that there will always be at least one intrinsic message from the channel; the remaining incoming extrinsic messages come from the neighbouring check nodes. Moreover, if the variable node has degree $d_v = 2$, then the only incoming message, the one from the channel, is equal to the outgoing extrinsic message. However, if the variable node has degree at least $d_v = 3$ (see Figure 4.5), general extrinsic message update rules can be given in terms of probabilities and L-values. The update rules are generalized and summarized for a variable node of degree 3 using equations (4.52) and (4.57) for messages in terms of probabilities, and (4.61) for messages in terms of L-values.

Here the new notations VAR and CHK are also introduced [9], which can be used for easy generalization of the update rules. In Figure 4.5 the two incoming messages are ($\mu_1$, $\mu_3$) or ($l_1$, $l_3$) and the outgoing message is $\mu_2$ or $l_2$. These notations are used as follows.

Figure 4.5: Variable node of degree $d_v = 3$: the outgoing extrinsic message is $\mu_2$ ($l_2$) and the two incoming messages are $\mu_1$ ($l_1$) and $\mu_3$ ($l_3$), in terms of probabilities and L-values respectively.

In terms of probabilities:
$$\begin{pmatrix} \mu_2(0) \\ \mu_2(1) \end{pmatrix} = \mathrm{VAR}(\mu_1, \mu_3) = \begin{pmatrix} \alpha\, \mu_1(0)\, \mu_3(0) \\ \alpha\, \mu_1(1)\, \mu_3(1) \end{pmatrix} \qquad (4.63)$$
where $\alpha$ is a scaling factor such that $\mu_2(0) + \mu_2(1) = 1$.

In terms of L-values:
$$l_2 = \mathrm{VAR}(l_1, l_3) = l_1 + l_3 \qquad (4.64)$$
It can be shown that the L-values at the variable node side can be scaled by any constant $\beta$:
$$\beta\, l_2 = \beta\, \mathrm{VAR}(l_1, l_3) = \beta\, l_1 + \beta\, l_3 \qquad (4.65)$$

Generalization of the update rules for a variable node of any degree

After treating the repetition code of length 3, the repetition code of length $N$ can be considered. The generalization is immediate because a repetition code always has exactly two codewords, all-zeros and all-ones. Therefore, in the sum-product formulation of both the extrinsic and the a-posteriori quantities, there is no summation over codewords once one code symbol is fixed, and in terms of log likelihood ratios (LLRs or L-values) the formulation becomes
$$l_{e,n} = \sum_{\substack{j = 1 \\ j \ne n}}^{N} l_{ch,j}$$
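A minimal sketch (illustration only) of this generalized variable node rule: the outgoing extrinsic L-value on one edge is the sum of the channel L-value and all other incoming L-values, in agreement with equations (4.62) and (4.64)–(4.65).

```python
# Minimal sketch of the generalized VAR rule for a variable node of any
# degree: the extrinsic L-value toward one check node is the sum of the
# channel L-value and all other incoming check-to-variable L-values.
def var_update(l_ch, l_cv, exclude):
    """Outgoing L-value toward the check node at index `exclude`."""
    return l_ch + sum(l for i, l in enumerate(l_cv) if i != exclude)

l_ch = 1.2                         # intrinsic L-value from the channel
l_cv = [0.4, -0.3, 2.0]            # incoming extrinsic L-values (d_v = 4)
print(var_update(l_ch, l_cv, 1))   # 1.2 + 0.4 + 2.0 = 3.6
```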


More information

Trellis-based Detection Techniques

Trellis-based Detection Techniques Chapter 2 Trellis-based Detection Techniques 2.1 Introduction In this chapter, we provide the reader with a brief introduction to the main detection techniques which will be relevant for the low-density

More information

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths Kasra Vakilinia, Dariush Divsalar*, and Richard D. Wesel Department of Electrical Engineering, University

More information

Low-density parity-check (LDPC) codes

Low-density parity-check (LDPC) codes Low-density parity-check (LDPC) codes Performance similar to turbo codes Do not require long interleaver to achieve good performance Better block error performance Error floor occurs at lower BER Decoding

More information

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus

More information

Shannon s noisy-channel theorem

Shannon s noisy-channel theorem Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for

More information

Digital Modulation 1

Digital Modulation 1 Digital Modulation 1 Lecture Notes Ingmar Land and Bernard H. Fleury Navigation and Communications () Department of Electronic Systems Aalborg University, DK Version: February 5, 27 i Contents I Basic

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

An Introduction to Low-Density Parity-Check Codes

An Introduction to Low-Density Parity-Check Codes An Introduction to Low-Density Parity-Check Codes Paul H. Siegel Electrical and Computer Engineering University of California, San Diego 5/ 3/ 7 Copyright 27 by Paul H. Siegel Outline Shannon s Channel

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus Turbo Compression Andrej Rikovsky, Advisor: Pavol Hanus Abstract Turbo codes which performs very close to channel capacity in channel coding can be also used to obtain very efficient source coding schemes.

More information

POLAR CODES FOR ERROR CORRECTION: ANALYSIS AND DECODING ALGORITHMS

POLAR CODES FOR ERROR CORRECTION: ANALYSIS AND DECODING ALGORITHMS ALMA MATER STUDIORUM UNIVERSITÀ DI BOLOGNA CAMPUS DI CESENA SCUOLA DI INGEGNERIA E ARCHITETTURA CORSO DI LAUREA MAGISTRALE IN INGEGNERIA ELETTRONICA E TELECOMUNICAZIONI PER L ENERGIA POLAR CODES FOR ERROR

More information

Decoding of LDPC codes with binary vector messages and scalable complexity

Decoding of LDPC codes with binary vector messages and scalable complexity Downloaded from vbn.aau.dk on: marts 7, 019 Aalborg Universitet Decoding of LDPC codes with binary vector messages and scalable complexity Lechner, Gottfried; Land, Ingmar; Rasmussen, Lars Published in:

More information

Factor Graphs and Message Passing Algorithms Part 1: Introduction

Factor Graphs and Message Passing Algorithms Part 1: Introduction Factor Graphs and Message Passing Algorithms Part 1: Introduction Hans-Andrea Loeliger December 2007 1 The Two Basic Problems 1. Marginalization: Compute f k (x k ) f(x 1,..., x n ) x 1,..., x n except

More information

On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear Programming

On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear Programming On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear Programming Byung-Hak Kim (joint with Henry D. Pfister) Texas A&M University College Station International Symposium on Information

More information

Making Error Correcting Codes Work for Flash Memory

Making Error Correcting Codes Work for Flash Memory Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging

More information

5. Density evolution. Density evolution 5-1

5. Density evolution. Density evolution 5-1 5. Density evolution Density evolution 5-1 Probabilistic analysis of message passing algorithms variable nodes factor nodes x1 a x i x2 a(x i ; x j ; x k ) x3 b x4 consider factor graph model G = (V ;

More information

6.451 Principles of Digital Communication II Wednesday, May 4, 2005 MIT, Spring 2005 Handout #22. Problem Set 9 Solutions

6.451 Principles of Digital Communication II Wednesday, May 4, 2005 MIT, Spring 2005 Handout #22. Problem Set 9 Solutions 6.45 Principles of Digital Communication II Wednesda, Ma 4, 25 MIT, Spring 25 Hand #22 Problem Set 9 Solutions Problem 8.3 (revised) (BCJR (sum-product) decoding of SPC codes) As shown in Problem 6.4 or

More information

Coding theory: Applications

Coding theory: Applications INF 244 a) Textbook: Lin and Costello b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1,9-12, and 14-18 c) Weekly exercises: For your convenience d) Mandatory problem: Programming project (counts

More information

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei COMPSCI 650 Applied Information Theory Apr 5, 2016 Lecture 18 Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei 1 Correcting Errors in Linear Codes Suppose someone is to send

More information

2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY Note that conic conv(c) = conic(c).

2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY Note that conic conv(c) = conic(c). 2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY 2007 Pseudo-Codeword Analysis of Tanner Graphs From Projective and Euclidean Planes Roxana Smarandache, Member, IEEE, and Pascal O. Vontobel,

More information

Iterative Decoding for Wireless Networks

Iterative Decoding for Wireless Networks Iterative Decoding for Wireless Networks Thesis by Ravi Palanki In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology Pasadena, California

More information

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes.

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes. 5 Binary Codes You have already seen how check digits for bar codes (in Unit 3) and ISBN numbers (Unit 4) are used to detect errors. Here you will look at codes relevant for data transmission, for example,

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

Capacity-approaching codes

Capacity-approaching codes Chapter 13 Capacity-approaching codes We have previously discussed codes on graphs and the sum-product decoding algorithm in general terms. In this chapter we will give a brief overview of some particular

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

1.6: Solutions 17. Solution to exercise 1.6 (p.13).

1.6: Solutions 17. Solution to exercise 1.6 (p.13). 1.6: Solutions 17 A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for ( N K) to the next order, we find: ( N N/2 ) 2 N 1 2πN/4. (1.40) This approximation

More information

Applications of Linear Programming to Coding Theory

Applications of Linear Programming to Coding Theory University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Dissertations, Theses, and Student Research Papers in Mathematics Mathematics, Department of 8-2010 Applications of Linear

More information

EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES

EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES Master s thesis in electronics systems by Anton Blad LiTH-ISY-EX--05/3691--SE Linköping 2005 EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005 Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each

More information

Convergence of the Sum-Product Algorithm for Short Low-Density Parity-Check Codes

Convergence of the Sum-Product Algorithm for Short Low-Density Parity-Check Codes DIPLOMA THESIS Convergence of the Sum-Product Algorithm for Short Low-Density Parity-Check Codes Institut of Communications and Radio-Frequency Engineering Vienna University of Technology (TU Wien) Telecommunications

More information

Slepian-Wolf Code Design via Source-Channel Correspondence

Slepian-Wolf Code Design via Source-Channel Correspondence Slepian-Wolf Code Design via Source-Channel Correspondence Jun Chen University of Illinois at Urbana-Champaign Urbana, IL 61801, USA Email: junchen@ifpuiucedu Dake He IBM T J Watson Research Center Yorktown

More information

Fountain Uncorrectable Sets and Finite-Length Analysis

Fountain Uncorrectable Sets and Finite-Length Analysis Fountain Uncorrectable Sets and Finite-Length Analysis Wen Ji 1, Bo-Wei Chen 2, and Yiqiang Chen 1 1 Beijing Key Laboratory of Mobile Computing and Pervasive Device Institute of Computing Technology, Chinese

More information

Quasi-cyclic Low Density Parity Check codes with high girth

Quasi-cyclic Low Density Parity Check codes with high girth Quasi-cyclic Low Density Parity Check codes with high girth, a work with Marta Rossi, Richard Bresnan, Massimilliano Sala Summer Doctoral School 2009 Groebner bases, Geometric codes and Order Domains Dept

More information

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing Lecture 7 Agenda for the lecture M-ary hypothesis testing and the MAP rule Union bound for reducing M-ary to binary hypothesis testing Introduction of the channel coding problem 7.1 M-ary hypothesis testing

More information

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Iulian Topor Acoustic Research Laboratory, Tropical Marine Science Institute, National University of Singapore, Singapore 119227. iulian@arl.nus.edu.sg

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information

Successive Cancellation Decoding of Single Parity-Check Product Codes

Successive Cancellation Decoding of Single Parity-Check Product Codes Successive Cancellation Decoding of Single Parity-Check Product Codes Mustafa Cemil Coşkun, Gianluigi Liva, Alexandre Graell i Amat and Michael Lentmaier Institute of Communications and Navigation, German

More information

Message Passing Algorithm with MAP Decoding on Zigzag Cycles for Non-binary LDPC Codes

Message Passing Algorithm with MAP Decoding on Zigzag Cycles for Non-binary LDPC Codes Message Passing Algorithm with MAP Decoding on Zigzag Cycles for Non-binary LDPC Codes Takayuki Nozaki 1, Kenta Kasai 2, Kohichi Sakaniwa 2 1 Kanagawa University 2 Tokyo Institute of Technology July 12th,

More information

Eindhoven University of Technology MASTER. Gauss-Seidel for LDPC. Khotynets, Y. Award date: Link to publication

Eindhoven University of Technology MASTER. Gauss-Seidel for LDPC. Khotynets, Y. Award date: Link to publication Eindhoven University of Technology MASTER Gauss-Seidel for LDPC Khotynets, Y Award date: 2008 Link to publication Disclaimer This document contains a student thesis (bachelor's or master's), as authored

More information

Pseudocodewords of Tanner Graphs

Pseudocodewords of Tanner Graphs SUBMITTED TO IEEE TRANSACTIONS ON INFORMATION THEORY 1 Pseudocodewords of Tanner Graphs arxiv:cs/0504013v4 [cs.it] 18 Aug 2007 Christine A. Kelley Deepak Sridhara Department of Mathematics Seagate Technology

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Lecture 6 I. CHANNEL CODING. X n (m) P Y X

Lecture 6 I. CHANNEL CODING. X n (m) P Y X 6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder

More information

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise 9 THEORY OF CODES Chapter 9 Theory of Codes After studying this chapter you should understand what is meant by noise, error detection and correction; be able to find and use the Hamming distance for a

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

B I N A R Y E R A S U R E C H A N N E L

B I N A R Y E R A S U R E C H A N N E L Chapter 3 B I N A R Y E R A S U R E C H A N N E L The binary erasure channel (BEC) is perhaps the simplest non-trivial channel model imaginable. It was introduced by Elias as a toy example in 954. The

More information

The Turbo Principle in Wireless Communications

The Turbo Principle in Wireless Communications The Turbo Principle in Wireless Communications Joachim Hagenauer Institute for Communications Engineering () Munich University of Technology (TUM) D-80290 München, Germany Nordic Radio Symposium, Oulu,

More information

On the Block Error Probability of LP Decoding of LDPC Codes

On the Block Error Probability of LP Decoding of LDPC Codes On the Block Error Probability of LP Decoding of LDPC Codes Ralf Koetter CSL and Dept. of ECE University of Illinois at Urbana-Champaign Urbana, IL 680, USA koetter@uiuc.edu Pascal O. Vontobel Dept. of

More information

On Generalized EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels

On Generalized EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels 2012 IEEE International Symposium on Information Theory Proceedings On Generalied EXIT Charts of LDPC Code Ensembles over Binary-Input Output-Symmetric Memoryless Channels H Mamani 1, H Saeedi 1, A Eslami

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory

Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Digital Communications III (ECE 154C) Introduction to Coding and Information Theory Tara Javidi These lecture notes were originally developed by late Prof. J. K. Wolf. UC San Diego Spring 2014 1 / 8 I

More information

BOUNDS ON THE MAP THRESHOLD OF ITERATIVE DECODING SYSTEMS WITH ERASURE NOISE. A Thesis CHIA-WEN WANG

BOUNDS ON THE MAP THRESHOLD OF ITERATIVE DECODING SYSTEMS WITH ERASURE NOISE. A Thesis CHIA-WEN WANG BOUNDS ON THE MAP THRESHOLD OF ITERATIVE DECODING SYSTEMS WITH ERASURE NOISE A Thesis by CHIA-WEN WANG Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the

More information

Linear and conic programming relaxations: Graph structure and message-passing

Linear and conic programming relaxations: Graph structure and message-passing Linear and conic programming relaxations: Graph structure and message-passing Martin Wainwright UC Berkeley Departments of EECS and Statistics Banff Workshop Partially supported by grants from: National

More information

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract

More information

Modern Coding Theory. Daniel J. Costello, Jr School of Information Theory Northwestern University August 10, 2009

Modern Coding Theory. Daniel J. Costello, Jr School of Information Theory Northwestern University August 10, 2009 Modern Coding Theory Daniel J. Costello, Jr. Coding Research Group Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556 2009 School of Information Theory Northwestern University

More information

APPLICATIONS. Quantum Communications

APPLICATIONS. Quantum Communications SOFT PROCESSING TECHNIQUES FOR QUANTUM KEY DISTRIBUTION APPLICATIONS Marina Mondin January 27, 2012 Quantum Communications In the past decades, the key to improving computer performance has been the reduction

More information

Investigation of the Elias Product Code Construction for the Binary Erasure Channel

Investigation of the Elias Product Code Construction for the Binary Erasure Channel Investigation of the Elias Product Code Construction for the Binary Erasure Channel by D. P. Varodayan A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED

More information

Block Codes :Algorithms in the Real World

Block Codes :Algorithms in the Real World Block Codes 5-853:Algorithms in the Real World Error Correcting Codes II Reed-Solomon Codes Concatenated Codes Overview of some topics in coding Low Density Parity Check Codes (aka Expander Codes) -Network

More information

Iterative Encoding of Low-Density Parity-Check Codes

Iterative Encoding of Low-Density Parity-Check Codes Iterative Encoding of Low-Density Parity-Check Codes David Haley, Alex Grant and John Buetefuer Institute for Telecommunications Research University of South Australia Mawson Lakes Blvd Mawson Lakes SA

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

ABSTRACT. The original low-density parity-check (LDPC) codes were developed by Robert

ABSTRACT. The original low-density parity-check (LDPC) codes were developed by Robert ABSTRACT Title of Thesis: OPTIMIZATION OF PERMUTATION KEY FOR π-rotation LDPC CODES Nasim Vakili Pourtaklo, Master of Science, 2006 Dissertation directed by: Associate Professor Steven Tretter Department

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

Digital Communications

Digital Communications Digital Communications Chapter 8: Trellis and Graph Based Codes Saeedeh Moloudi May 7, 2014 Outline 1 Introduction 2 Convolutional Codes 3 Decoding of Convolutional Codes 4 Turbo Codes May 7, 2014 Proakis-Salehi

More information