Eindhoven University of Technology — MASTER

Gauss-Seidel for LDPC

Khotynets, Y.
Award date: 2008

Disclaimer: This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commercial gain.

TECHNISCHE UNIVERSITEIT EINDHOVEN
Department of Mathematics and Computing Science

Gauss-Seidel for LDPC

By Yevgen Khotynets

Supervisors: W.H.A. Schilders (TU/e), J.T.M.H. Dielissen (NXP)

Eindhoven, October 2007

Contents

Abstract
Preface
1 Introduction
2 LDPC Decoding
   2.1 Basic concepts of information transmitting
      2.1.1 Matrix representation
      2.1.2 Tanner graph representation
      2.1.3 Practical example of information transmitting
      2.1.4 Log-likelihood ratios
   2.2 The LDPC decoding
      2.2.1 Practical decoding example
   2.3 Belief propagation algorithm
      2.3.1 General formulation
      2.3.2 Detailed example for parity check matrix H
      2.3.3 Theory on convergence
   2.4 Criteria for performance of an LDPC decoder
3 Gauss-Seidel method
   3.1 Basic theory of the Gauss-Seidel method
      3.1.1 General formulation of the Gauss-Seidel method
      3.1.2 Convergence of Gauss-Seidel
      3.1.3 Effect of ordering
      3.1.4 Traditional way of Gauss-Seidel
   3.2 Successive over-relaxation
   3.3 Vector extrapolation
4 Gauss-Seidel for LDPC
   4.1 Gauss-Seidel LDPC iterations
   4.2 Simplified check-node calculations
   4.3 Practical parity check matrices
   4.4 Results on Gauss-Seidel LDPC
5 Different orders for Gauss-Seidel
   5.1 Suggested orders
   5.2 Results and additional remarks
   5.3 Conclusions
6 Excluding check nodes from the decoding procedure
   6.1 Exclude check-nodes based on MIN(m) values observation for one iteration
   6.2 Exclude check-nodes based on MIN(m) values observation for two iterations
   6.3 Exclude check-nodes based on MIN(m) values observation for three iterations
   6.4 Conclusions
7 Numerical observations
   7.1 Convergence observations
   7.2 Vector extrapolation for LDPC
   7.3 Additional numerical observations
   7.4 Conclusions
8 Conclusions and recommendations
   8.1 Different orders of check-nodes processing: conclusions and recommendations
   8.2 Excluding check-nodes from the decoding procedure: conclusions and recommendations
   8.3 Convergence observations for LDPC; vector extrapolation: conclusions and recommendations
References
A Appendix
   A.1 Parity check matrix H1
   A.2 Parity check matrix H2
   A.3 Parity check matrix H3
B Appendix
   B.1 Gauss-Seidel order according to the maximal MIN(m), determined for each iteration (GSMaxM(ei))
      B.1.1 Results on parity check matrix H1
      B.1.2 Results on parity check matrix H2
      B.1.3 Results on parity check matrix H3
      B.1.4 Results on parity check matrix H4

Abstract

Recently, contributions were made in the field of LDPC decoding that make use of the Gauss-Seidel method. In this thesis, several modifications of this method are studied, such as different orders of solving the parity-check equations, vector extrapolation, and lesser-known variants. Numerical results are given, as well as a comparison of the studied methods.

Preface

In 1948, Claude Shannon published his paper in which he formalized the concept of information and defined the bounds for the maximum amount of information that can be transmitted over unreliable channels. The Shannon limit or Shannon capacity of a communication channel is the theoretical maximum information transfer rate of the channel for a particular noise level. Since then, there have been many attempts to design a coding system that could approach the capacity, and it was widely believed that no relatively simple coding system could do so, until the breakthrough in 1993 when turbo codes were introduced. It was shown that turbo codes can perform very close to capacity with only moderate complexity. The turbo decoder works by passing the output of one decoder to the input of the other decoder in a circular fashion. This type of iterative decoding was actually invented earlier by Robert Gallager (1960). Due to the computational effort of implementing the encoder and decoder for such codes, they were forgotten until about 10 years ago. The renewed interest in coding theory led to the rediscovery of the Gallager codes (MacKay). Later, these codes were shown to nearly achieve the channel capacity of channels whose outputs are independent of each other.

In this thesis, in Chapter 1 we give a general introduction. Chapter 2 describes some basic principles of information transmission and the state-of-the-art low-density parity-check (LDPC) decoding algorithm formulation, followed by a numerical example. Chapter 3 provides a theoretical discussion of the Gauss-Seidel method and its modifications. In Chapter 4, we explain how the Gauss-Seidel method is applied in practice in the LDPC decoding field. The check-node calculations are explained in more detail, examples of the practical parity check matrices are given, and the results of the Gauss-Seidel application are shown. After that, different orders of check-node calculations are suggested for the decoding procedure (Chapter 5). In Chapter 6 we show that the number of computations can be reduced by excluding check-nodes from the decoding procedure, according to several criteria.

Chapter 7 provides the results of numerical observations of the algorithm. Vector extrapolation (reduced rank extrapolation) is applied to the LDPC decoding. Chapter 8 summarizes conclusions from the previous chapters and contains recommendations for future work.

Chapter 1

Introduction

Communication is extremely important in our society. Nowadays, with technological advances, we cannot imagine our lives without telephone, internet and television services. In the last century, a revolution in telecommunications has changed communication a lot by providing new media for long distance communication. Telecommunication is the transmission of signals over a distance for the purpose of communication. In modern times, this process typically involves the sending of electromagnetic waves by electronic transmitters, but in earlier times telecommunication may have involved the use of smoke signals or drums. Today, telecommunication is widespread. The devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also numerous networks to connect these devices, including computer networks, radio networks, public telephone networks and television networks. Computer communication via the Internet is one of many examples of telecommunication.

Communication can be seen as a transmission of information from one place (transmitter) to another (receiver). To explain some principles, consider a basic communication system, which is composed of three parts: a transmitter, a channel, and a receiver.

[Figure: basic communication system — transmitter, channel (affected by noise), receiver.]

Transmission of information means sending a stream of bits or bytes from one location to another using some of the

technologies available today, such as copper wire, optical fiber, laser, radio, or infra-red light. Information is sent through the channel from the transmitter to the receiver. The problem in practice is that the received message may contain errors that occur due to noise in the channel or channel distortion. A way to protect the message from errors is so-called channel coding. The latter can help to retrieve the data even when noise is present. To transmit information reliably, errors that occur due to the noise in the channel have to be first detected and then corrected. A low-density parity-check (LDPC) code is a type of error correcting code. LDPC is a method to transmit a message over a noisy transmission channel. Neither LDPC nor any other error correcting code can guarantee perfect transmission, but the probability of error can be made arbitrarily small.

LDPC codes have recently been a very hot topic. Originally, this type of code was invented in the early 1960s by Robert Gallager [3]. Unfortunately, the codes were impractical to implement at that time, and they were largely forgotten. Recently, it was shown that LDPC allows data transmission rates very close to the theoretical maximum (the Shannon limit). The name LDPC originates from the characteristic of the parity-check matrix, which contains only a few ones in comparison to the number of zeros. Low-density parity check codes are attracting much attention from engineers. The codes are very promising and may become a standard error correction scheme in many sectors. The list of applications that are currently being considered for LDPC is very long. To mention a few applications: digital satellite television (A), ultra-high speed wireless local area networks (B), optical communications (C) and hard disk drives (D).

Chapter 2

LDPC Decoding

In this chapter we explain the LDPC decoding process. First, basic concepts of information transmitting are explained. We describe the two ways of representing LDPC codes: the matrix representation and the Tanner graph representation. The log-likelihood ratios used in the decoding process are explained. To illustrate the concepts, we use a practical numerical example of information transmitting. Second, following the practical decoding example, we give a general formulation of the belief propagation algorithm. The latter is followed by a brief review of the theory on the convergence of the algorithm. After that, criteria for the performance of an LDPC decoder are given.

2.1 Basic concepts of information transmitting

To explain the basic ideas of the LDPC algorithm, we consider the following simple example. Suppose we would like to send some binary information (bits) from one point to another. Assume we want to transfer 4 information bits S1, S2, S3, S4: {0, 0, 0, 1}. We send these bits in the form described next, and we calculate the additional parity bits such that the sum of all bits in each row and column is equal to 0 modulo 2. The parity check bits are added in order to be able to correct errors made during the information transmission.¹

¹ The information is sent at a rate R = (number of information bits)/(total number of bits) = 4/9.

The information bits are arranged in a 2 x 2 block, and the parity bits are added:

    S1 S2          S1 S2 P1        0 0 0
    S3 S4   -->    S3 S4 P2   =    0 1 1
                   P3 P4 P5        0 1 1

We now have the six parity check equations:

    S1 + S2 + P1 = 0 mod 2
    S3 + S4 + P2 = 0 mod 2
    P3 + P4 + P5 = 0 mod 2
    S1 + S3 + P3 = 0 mod 2
    S2 + S4 + P4 = 0 mod 2
    P1 + P2 + P5 = 0 mod 2        (2.1)

There are two possible ways to represent an LDPC code: the matrix representation and the Tanner graph representation. These two ways of viewing the code are the focus of the next two subsections.

2.1.1 Matrix representation

For the parity check equations (2.1) the parity check matrix has the following form:

         S1 S2 P1 S3 S4 P2 P3 P4 P5

         1  1  1  0  0  0  0  0  0
         0  0  0  1  1  1  0  0  0
    H =  0  0  0  0  0  0  1  1  1
         1  0  0  1  0  0  1  0  0
         0  1  0  0  1  0  0  1  0
         0  0  1  0  0  1  0  0  1
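As a quick sanity check of this construction, the following small sketch (our illustrative addition, assuming NumPy; it is not part of the thesis text) builds H row by row from the equations in (2.1) and verifies that the example bit pattern (S1, S2, P1, S3, S4, P2, P3, P4, P5) = (0, 0, 0, 0, 1, 1, 0, 1, 1) satisfies all six checks modulo 2.

    import numpy as np

    # Parity check matrix H of the (9, 2, 3) example code.
    # Columns: S1 S2 P1 S3 S4 P2 P3 P4 P5; one row per equation in (2.1).
    H = np.array([
        [1, 1, 1, 0, 0, 0, 0, 0, 0],  # S1 + S2 + P1
        [0, 0, 0, 1, 1, 1, 0, 0, 0],  # S3 + S4 + P2
        [0, 0, 0, 0, 0, 0, 1, 1, 1],  # P3 + P4 + P5
        [1, 0, 0, 1, 0, 0, 1, 0, 0],  # S1 + S3 + P3
        [0, 1, 0, 0, 1, 0, 0, 1, 0],  # S2 + S4 + P4
        [0, 0, 1, 0, 0, 1, 0, 0, 1],  # P1 + P2 + P5
    ])

    # Example bit pattern (S1, S2, P1, S3, S4, P2, P3, P4, P5) from the text.
    c = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1])

    # All six parity check sums must be 0 modulo 2.
    assert np.all(H @ c % 2 == 0)
    print("all parity check equations satisfied")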

The number of columns in H corresponds to the number of bits that we want to transfer (both systematic bits and calculated parity check bits). The positions of the non-zero elements in H encode which bit sums are verified in which equation (so that S1 + S2 + P1 = 0 mod 2 is verified in the first row, S3 + S4 + P2 = 0 mod 2 is verified in the second row, and so on).

A Low Density Parity Check code can be specified by this sparse parity-check matrix H. Low density means that there are far fewer ones in H than zeros. The structure of the code is completely described by the parity check matrix H. The matrix H of size M x N contains the information about the number of parity check equations (M) and about which bits of the codeword vector c of size 1 x N are verified in which parity check equation. The number of ones in the m-th row (1 ≤ m ≤ M) is denoted by K_m, and the number of ones in the n-th column by J_n (1 ≤ n ≤ N). If a code is regular (i.e. having the same number of ones in each row and the same number of ones in each column), then the matrix H is defined by the values (N, J, K), with N being the number of columns of H, J the number of ones per column and K the number of ones per row. The number of rows of H is then obtained as M = N·J/K for a regular code. The example matrix above specifies the regular (9, 2, 3) code.

A vector c is called a codeword for the code specified by the matrix H if and only if

    H c^T = 0 mod 2.        (2.2)

When the equation above is satisfied, we say that c contains consistent information.

2.1.2 Tanner graph representation

Having discussed the matrix representation of an LDPC code, we explain the Tanner graph representation. We consider the code defined by its parity-check matrix H above. The LDPC algorithm is often visualized by a Tanner graph [12], which is a bipartite graph. The two groups of nodes will be referred to as the symbol nodes and the check nodes. The number of symbol nodes corresponds to the number of columns N of the matrix H, i.e. the total number of symbol and parity bits that we transfer. The number of check nodes corresponds to the number of rows M of the matrix H, which is equal to the number of relations in (2.1). The n-th symbol node (1 ≤ n ≤ N) and the m-th check node (1 ≤ m ≤ M) are connected if and only if H(m, n) = 1.

[Figure 2.1: Tanner graph representation for the matrix H.]

Consider the example in Fig. (2.1). The first check node (on the right) is connected to the first, the second and the third symbol nodes, implying that S1 + S2 + P1 = 0 mod 2 is verified in the first parity check equation. Similarly for all the others.

2.1.3 Practical example of information transmitting

We would like to transmit the sequence of information bits and parity check bits. In information theory we transmit signals instead of bits. For the signals, two values of equal power are chosen (e.g. +1, -1), and the chosen mapping is such that 0 -> +1 and 1 -> -1,¹ so that

    (0, 0, 0, 0, 1, 1, 0, 1, 1) -> (1, 1, 1, 1, -1, -1, 1, -1, -1).

¹ We use ... to indicate a binary 0 or 1.

By transmitting our bit sequence through some channel, noise affects the information being received. To model this phenomenon, one often assumes the noise to have a Gaussian distribution. If we use the Additive White Gaussian Noise (AWGN) channel, random values are added to our transmitted values (Gaussian noise, expressed in terms of the normal distribution function). This channel is most commonly used because of its uncomplicated mathematical model. The Gaussian distribution, given by

    p_N(n_i) = (1/sqrt(π N_0)) exp(-n_i^2 / N_0),        (2.3)

and visualized in Fig. (2.2), shows the probability P of finding a deviation from the mean (s_1 = 1 and s_2 = -1). The probability of larger and larger

deviations can be seen to decrease rapidly.

[Figure 2.2: Probability distribution functions for the means s_1 = 1 and s_2 = -1.]

The addition is done according to the model in Fig. (2.3): r_i = s_i + n_i.

[Figure 2.3: Sending the signal: a binary digit v is mapped (M) to a signal s; AWGN n is added, giving r; multiplication by 4/N_0 gives y.]

2.1.4 Log-likelihood ratios

We now explain another important algorithmic issue, the log-likelihood ratios. Consider the scheme of sending the signal through the AWGN channel, as in Fig. (2.3). A binary digit v goes through the mapping M, which maps a 0 to a 1 and a 1 to a -1. It then goes through the Gaussian channel, where the Gaussian noise is added to it. In hardware it is undesirable to work with many multiplications. We therefore prefer to operate in the log domain: by choosing the ratio of likelihoods instead of the likelihoods themselves, the calculations simplify further. For a binary random variable x, let

    L(x) = P(x = 0) / P(x = 1)

be the likelihood ratio of x. Given another random variable y, the conditional likelihood ratio of x is denoted by L(x|y) = P(x = 0 | y) / P(x = 1 | y). Similarly, the log-likelihood ratio of x is ln(L(x)), and the conditional log-likelihood ratio of x given y is ln(L(x|y)). The log-likelihood ratios of the received signals, given equal probability of occurrence, are derived as

    LLR(x) = ln( P(0|y) / P(1|y) )
           = ln( (1/sqrt(π N_0)) exp(-(r - s)^2 / N_0) / ( (1/sqrt(π N_0)) exp(-(r + s)^2 / N_0) ) )
           = 4 r s / N_0 = 2 r / σ^2,        (2.4)

with the signal value being s = 1. The input message into the symbol node n is then defined as

    L_n = (4 E_b R / N_0) (s_n + n_n) = (2 / σ^2) (s_n + n_n) = 2 r_n / σ^2,        (2.5)

where σ^2 = 1 / (2 R E_b/N_0), r_n = s_n + n_n is the received symbol value and σ^2 is the noise variance. R is the rate and E_b/N_0 is the energy per information bit over the spectral noise density. Equations (2.4) and (2.5) show that when the received signal r_n is multiplied by 4/N_0, the result represents a log-likelihood ratio.

2.2 The LDPC decoding

In this section we first introduce the general concept of the LDPC decoding (message passing) algorithm [3]. The symbol node function and the check node function are introduced. We make some basic definitions which are needed for the explicit formulation of the algorithm iteration. After the general formulation has been given, we discuss the details for the example parity check matrix H (section 2.1.1) and explain how values in the t-th iteration are calculated by using the values from the two previous iterations, t-1 and t-2.

2.2.1 Practical decoding example

Clearly, the decoding algorithm must take care of the noise. Let us have a look at a numerical example and assume that after transmitting we receive the following values: (1.5, 1.1, 0.8, 1, -1.1, -0.8, 0.5, -0.4, 1.1). See Fig. (2.4).

[Figure 2.4: Received values on the Tanner graph.]

For the received value 0.8 we want to estimate whether a -1 or a +1 was sent. The probability that a 1 was sent is higher than that of a -1, resulting in the decision that a 1 was sent. Similarly, the value -1.1 at the output of the channel results in the decision that a -1 was sent. If the output of the channel is a positive value, we demap it back to a 0. If a negative value is received, it is demapped to a 1. We note that if a 0 is received, the probability that a -1 was sent is equal to the probability that a 1 was sent (a choice has to be made whether we treat 0 as a 0 or a 1). If we estimate which bit sequence was sent, we get the following logical values:

    S1* = 0   S2* = 0   P1* = 0
    S3* = 0   S4* = 1   P2* = 1
    P3* = 0   P4* = 1   P5* = 0

We can see that there is an error in the received data: P5* is 0 while the original P5 is 1. As a result of this single error, the third parity check equation and the sixth parity check equation are no longer satisfied. We now describe how the LDPC algorithm corrects this error. Below, one iteration of the LDPC algorithm is explained. Let us first consider in which way information about each value can be obtained using a particular equation. We take the third parity check equation in Fig. (2.4) and replace the value 1.1 by A:

    0.5   -0.4   A

We know that the parity check equation has to be satisfied. The value at the first position gives us a 0, the value at the second position gives us a 1. In order to satisfy the parity equation, A should give us a 1. Thus the sign of A has to be negative. Considering any of the parity check equations, the information for one of the values can be obtained by using the other two values which participate in the same parity check equation: the probability of one of the values is obtained by using the other two values in the equation. Successively, this is done for all the values, in an iterative process. This algorithm is known as a belief propagation algorithm. The following formula is used:

    P(k) = min_{v∈K, v≠k} |v| · [XOR_{v∈K, v≠k} {sign(v)}],   k ∈ K,        (2.6)

where XOR is defined as the sign equivalent of the boolean xor-function, i.e. XOR(-, -) = +, and K is the set of all the values that participate in the equation. We can obtain the information about A from the two parity check equations in which A participates:

    third equation (P3 + P4 + P5):   0.5   -0.4   A
    sixth equation (P1 + P2 + P5):   0.8   -0.8   A

From the third parity check equation we learn that the sign of A is -, and the magnitude is calculated (according to (2.6)) as min(|0.5|, |-0.4|) = 0.4. From the sixth parity check equation it follows that A should have the sign -, and the magnitude of A is calculated as min(|0.8|, |-0.8|) = 0.8. Since the received value 1.1 and the parity check equations are mutually independent, we can sum them up (1.1 - 0.4 - 0.8 = -0.1). In a similar way the information about all the other values in the graph can be obtained from the two other values in the same parity check equations. Fig. (2.5) shows the graph when all values are updated in the described way. We further refer to this as one iteration.
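The check-node rule (2.6) is straightforward to express in code. The following sketch (our illustrative addition, in Python; the helper name is ours, not the thesis's) reproduces the two messages for A and the final sum:

    import math

    def check_node_message(others):
        # Min-sum check-node rule (2.6): the message for the remaining value
        # takes the product of the signs and the minimum of the magnitudes
        # of the other values in the parity check equation.
        sign = 1.0
        for v in others:
            sign *= math.copysign(1.0, v)
        return sign * min(abs(v) for v in others)

    # Messages for A (the corrupted P5 position, received value 1.1):
    m3 = check_node_message([0.5, -0.4])   # third equation -> -0.4
    m6 = check_node_message([0.8, -0.8])   # sixth equation -> -0.8
    print(m3, m6, 1.1 + m3 + m6)           # sum: 1.1 - 0.4 - 0.8 = -0.1 -> bit 1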

[Figure 2.5: Values after one iteration of updates, followed by demapping.]

We now see that we have corrected the error which occurred during the transmission. All the parity check equations are satisfied now, and we see that the vector c_new = (S1, S2, ..., P5) is a codeword, since H c_new^T = 0 mod 2.

2.3 Belief propagation algorithm

In the field of LDPC decoding, a general class of iterative algorithms, called message passing algorithms, is used. Based on the Tanner graph representation (section 2.1.2), the reason for the name of these algorithms is that at each iteration, messages are passed from message nodes to check nodes, and from check nodes back to message nodes. The messages from message nodes to check nodes are computed based on the value of the message node and the messages passed to that message node from the connected check nodes. A very important subclass of message passing algorithms, used in practice, is the belief propagation algorithm. This algorithm is described in Robert Gallager's paper [3]. The messages passed along the edges in this algorithm are probabilities (beliefs). To be more precise, the message passed from a message node n to a check node m is the probability that n has a certain value, given the observed value of that message node and all the values passed to n in the previous iteration from check nodes connected to n other than m. On the other hand, the message passed from m to n is the probability that n has a certain value, given all the messages passed to m in the previous round from message nodes other than n. Message passing is explained in detail in the following section. The formulas for these probabilities can be derived. In practice, it is beneficial to work with the log-likelihoods rather than with the probabilities. The log-likelihoods were discussed in section 2.1.4.

In section 2.3.1 we give a general formulation of the belief propagation algorithm. The objective of section 2.3.2 is to provide a practical numerical example of decoding by the algorithm.

2.3.1 General formulation

Consider the Tanner graph representation for an LDPC code specified by the parity check matrix H of size M x N (Fig. (2.6)). The n-th symbol node (1 ≤ n ≤ N) and the m-th check node are connected if and only if H(m, n) = 1. One classical iteration of the LDPC algorithm consists of sending values from the symbol nodes to the check nodes and back.

[Figure 2.6: Tanner graph representation: N symbol nodes connected to M check nodes.]

We make some definitions. Define:

    Λ^{i-1}_{mn} — the value sent from the check node m to the symbol node n in the i-th iteration;

    λ^i_n — the value in the symbol node n, updated after performing the i-th iteration;

    λ^{i-1}_{nm} — the value sent from the symbol node n to the check node m in the i-th iteration.

Further, we introduce the following sets:

    N(m) = {n | n = 1, ..., N; H(m, n) = 1},  1 ≤ m ≤ M,  i.e. the set of the numbers of the symbol nodes connected to the check node m;

    M(n) = {m | m = 1, ..., M; H(m, n) = 1},  1 ≤ n ≤ N,  i.e. the set of the numbers of the check nodes connected to the symbol node n.

Next, we introduce two functions, the symbol node function and the check node function, to explain how the LDPC decoding process works. The symbol node function S takes as arguments all the values Λ_{mn} coming to a symbol node n in some iteration:

    S^t(n) = Σ_{m'∈M(n)} Λ^t_{m'n},   n = 1, 2, ..., N.        (2.7)

Define also

    S^t_m(n) = Σ_{m'∈M(n), m'≠m} Λ^t_{m'n} = S^t(n) - Λ^t_{mn},   n = 1, 2, ..., N,  m ∈ M(n),        (2.8)

where the superscript t denotes the iteration number. The check node function G takes as arguments all the values λ_{nm} coming to a check node m in some iteration:

    G^t_n(m) = min_{n'∈N(m), n'≠n} |λ^t_{n'm}| · [XOR_{n'∈N(m), n'≠n} {sign(λ^t_{n'm})}],   m = 1, 2, ..., M,  n ∈ N(m),        (2.9)

where again the superscript t denotes the iteration number.

Iteration process updates. We consider the t-th iteration of the LDPC decoding algorithm. As written above, there are input values L_n, n = 1, ..., N, one for each symbol node. These values stay constant during the iteration process. The values to be sent from the symbol nodes to the check nodes in the t-th iteration are obtained as:

    λ^{t-1}_{nm} = L_n + S^{t-2}_m(n) = λ^{t-1}_n - Λ^{t-2}_{mn},   n = 1, ..., N,  m ∈ M(n).        (2.10)

The values in the symbol nodes after the t-th iteration are calculated as:

    λ^t_n = L_n + Σ_{m∈M(n)} G^{t-1}_n(m) = L_n + S^{t-1}(n),   n = 1, ..., N.        (2.11)

The values to be sent from the check nodes to the symbol nodes in the t-th iteration are obtained as:

    Λ^{t-1}_{mn} = G^{t-1}_n(m),   m = 1, ..., M,  n ∈ N(m).        (2.12)

After performing the t-th iteration, it is checked whether the vector λ^t = (λ^t_1, λ^t_2, ..., λ^t_N) is a codeword vector, i.e.

    H (λ^t)^T = 0 mod 2,        (2.13)

where the components of λ^t are first estimated (demapped) before the multiplication, in the way described in section 2.2.1. If λ^t satisfies (2.13), the iteration process is stopped.

Initialization. We may start the iterative process as soon as initial values are given. For our three-point scheme to be complete, we put the initial values Λ^{-1}_{mn} = 0, m = 1, ..., M, n ∈ N(m), and we initialize λ^0_n = L_n, n = 1, ..., N.

Schematic representation. Note that the algorithm is a three-point scheme: we use the values from both the (t-1)-st iteration and the (t-2)-nd iteration to obtain the values λ^t_n, n = 1, ..., N. Fig. (2.7) represents the values update in iteration t: the new values λ^t_n are obtained by using the values Λ^{t-2}_{mn} and λ^{t-1}_n, and simultaneously the values Λ^{t-1}_{mn} are updated by using the values Λ^{t-2}_{mn} and λ^{t-1}_n.

The algorithm described in this section is known as the Uniformly most powerful belief propagation algorithm [7],[10], or the LDPC Jacobi algorithm (no data is reused within an iteration).

2.3.2 Detailed example for parity check matrix H

We observe the Tanner graph (Fig. (2.8)) for the code specified by the matrix H, and show how the new values λ^t_n are updated in the iteration process. Suppose we want to obtain the value λ^t_5 by performing the t-th iteration. We obtain M(5) = {2, 5}, N(2) = {4, 5, 6}, N(5) = {2, 5, 8}.

[Figure 2.7: Schematic representation of the t-th iteration: Λ^{t-2}_{mn} and λ^{t-1}_n produce λ^t_n and Λ^{t-1}_{mn}.]

As described above, we can write:

    λ^t_5 = L_5 + Σ_{m∈M(5)} G^{t-1}_5(m) = L_5 + G^{t-1}_5(2) + G^{t-1}_5(5) = L_5 + Λ^{t-1}_{25} + Λ^{t-1}_{55}

          = L_5 + min_{n∈N(2), n≠5} (|λ^{t-1}_n - Λ^{t-2}_{2n}|) · [XOR_{n∈N(2), n≠5} {sign(λ^{t-1}_n - Λ^{t-2}_{2n})}]
                + min_{n∈N(5), n≠5} (|λ^{t-1}_n - Λ^{t-2}_{5n}|) · [XOR_{n∈N(5), n≠5} {sign(λ^{t-1}_n - Λ^{t-2}_{5n})}]

          = L_5 + min(|λ^{t-1}_4 - Λ^{t-2}_{24}|, |λ^{t-1}_6 - Λ^{t-2}_{26}|) · [XOR((λ^{t-1}_4 - Λ^{t-2}_{24}), (λ^{t-1}_6 - Λ^{t-2}_{26}))]
                + min(|λ^{t-1}_2 - Λ^{t-2}_{52}|, |λ^{t-1}_8 - Λ^{t-2}_{58}|) · [XOR((λ^{t-1}_2 - Λ^{t-2}_{52}), (λ^{t-1}_8 - Λ^{t-2}_{58}))].

To give a numerical example, suppose we have received the values at the output of the AWGN channel (as in Fig. (2.4)) and we want to calculate λ^1_5 by performing the 1-st iteration. We have the following initializations:

    λ^0 = (1.5, 1.1, 0.8, 1, -1.1, -0.8, 0.5, -0.4, 1.1),
    Λ^{-1}_{mn} = 0,   m = 1, ..., 6,  n ∈ N(m).

As in the foregoing calculation for λ^t_5 we find that:

    λ^1_5 = L_5 + min(|λ^0_4 - Λ^{-1}_{24}|, |λ^0_6 - Λ^{-1}_{26}|) · [XOR((λ^0_4 - Λ^{-1}_{24}), (λ^0_6 - Λ^{-1}_{26}))]
                + min(|λ^0_2 - Λ^{-1}_{52}|, |λ^0_8 - Λ^{-1}_{58}|) · [XOR((λ^0_2 - Λ^{-1}_{52}), (λ^0_8 - Λ^{-1}_{58}))]
          = -1.1 - 0.8 - 0.4 = -2.3.

[Figure 2.8: Detailed Tanner graph representation for the matrix H with the definitions made: the symbol node functions S (with inputs L_n and outputs λ^t_n), the check node functions G, and the messages Λ^{t-1}, Λ^{t-2} exchanged between the (t-1)-st and t-th iterations.]
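To make the general formulation concrete, the following sketch (our illustrative addition, assuming NumPy; it is not the thesis's implementation) performs one Jacobi (flooding) min-sum iteration, eqs. (2.9)-(2.12), on the example H and reproduces λ^1_5 = -2.3:

    import numpy as np

    def jacobi_iteration(H, L, Lam):
        # One LDPC Jacobi (flooding) iteration, eqs. (2.10)-(2.12).
        # Lam[m, n] holds the previous check-to-symbol messages Λ^{t-2}_{mn}.
        M, N = H.shape
        lam_prev = L + Lam.sum(axis=0)              # λ^{t-1}_n = L_n + S^{t-2}(n)
        new_Lam = np.zeros_like(Lam)
        for m in range(M):
            nodes = np.flatnonzero(H[m])
            msgs = lam_prev[nodes] - Lam[m, nodes]  # λ^{t-1}_{nm}, eq. (2.10)
            for i, n in enumerate(nodes):
                others = np.delete(msgs, i)
                sign = np.prod(np.sign(others))
                new_Lam[m, n] = sign * np.min(np.abs(others))  # G, eq. (2.9)
        lam = L + new_Lam.sum(axis=0)               # λ^t_n, eq. (2.11)
        return lam, new_Lam

    H = np.array([[1,1,1,0,0,0,0,0,0],
                  [0,0,0,1,1,1,0,0,0],
                  [0,0,0,0,0,0,1,1,1],
                  [1,0,0,1,0,0,1,0,0],
                  [0,1,0,0,1,0,0,1,0],
                  [0,0,1,0,0,1,0,0,1]])
    L = np.array([1.5, 1.1, 0.8, 1, -1.1, -0.8, 0.5, -0.4, 1.1])

    lam, Lam = jacobi_iteration(H, L, np.zeros(H.shape))
    print(lam[4])                                   # λ^1_5 = -2.3
    bits = (lam < 0).astype(int)                    # demap: positive -> 0
    print(np.all(H @ bits % 2 == 0))                # codeword check, eq. (2.13)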

2.3.3 Theory on convergence

In this section we give a brief discussion of the current research progress in the theory of convergence for the belief propagation algorithm. We have conducted a literature study to find out how much is known about the convergence of the algorithm. In the literature, the discussion is often restricted to the convergence rate of the belief propagation algorithm, normally in a comparison of one modification of the belief propagation technique to another. The convergence rate indicates whether one decoding technique is more efficient than another. Convergence analysis in the field of LDPC decoding is a very complicated topic. Much research has been conducted by different institutions on the issue, but there is still a lack of mathematical theory related to the convergence of the algorithm. Note that the discussion of the convergence of the algorithm also depends on the type of channel used. In this thesis, the discussion is restricted to the AWGN channel, described earlier in section 2.1.3; this channel model is the one most commonly used in practice. Also note that convergence depends on which modification of belief propagation is used. In our case we conduct some practical experiments aiming to better understand the convergence of the algorithm when Gauss-Seidel iterations are used (section 7.1). The convergence properties of belief propagation have been studied under varying degrees of generality in a number of papers [5],[6],[8],[9],[13],[14].

2.4 Criteria for performance of an LDPC decoder

The performance of an LDPC decoder is normally judged by three different criteria: the bit error rate (BER), the frame error rate (FER) and the average number of iterations. The bit error rate is a measure of transmission quality: it represents the fraction of bits that have errors relative to the total number of bits received in a transmission. The frame error rate is a measure of transmission quality that represents the fraction of frames that have errors relative to the total number of frames received in a transmission. The lower the BER and the FER, the better the transmission quality. For example, a BER of 10^-5 means only one error in 10^5 received bits. Another important criterion for an LDPC decoder is the average number of iterations required to correct the errors that occur during signal transmission. The quality of a signal transmission depends on the E_b/N_0 ratio,

meaning that for a higher E_b/N_0 there will be less noise to corrupt the signal than for a lower E_b/N_0. To test an LDPC decoder, we use one of the prototype matrices (as in section 4.3) and choose a codeword vector of bits, so that condition (2.13) is satisfied. Subsequently, noise is generated for the chosen codeword vector (according to Fig. (2.3)) and the iteration procedure starts. In practical implementations the maximum number of LDPC decoder iterations varies from 30 to 50. Each decoded vector λ^t is a frame.¹ After one frame is decoded, noise is generated again for the chosen codeword and decoding starts again. If the LDPC decoder cannot correct all the bit errors in some frame, then a frame error occurs. The process is continued until a certain number of frame errors is met.

¹ If the received vector λ^0 satisfies (2.13), decoding is not needed.
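This testing procedure can be summarized by a small simulation loop. The sketch below is our own illustration (assuming NumPy and a hypothetical decode function implementing the iterations of section 2.3); for simplicity it transmits the all-zero codeword, which satisfies (2.13) for any H:

    import numpy as np

    def simulate_fer(H, decode, ebn0_db, rate, max_frame_errors=100, max_iters=50):
        # Estimate FER and BER at one E_b/N_0 point, following the test
        # procedure described above. 'decode' is a hypothetical function
        # returning hard bit decisions.
        N = H.shape[1]
        sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (ebn0_db / 10.0)))
        rng = np.random.default_rng()
        frames = frame_errors = bit_errors = 0
        while frame_errors < max_frame_errors:
            r = 1.0 + sigma * rng.standard_normal(N)  # all-zero codeword -> all +1
            llr = 2.0 * r / sigma**2                  # channel LLRs, eq. (2.5)
            bits = decode(H, llr, max_iters)
            frames += 1
            errs = int(np.count_nonzero(bits))        # any decoded 1 is a bit error
            bit_errors += errs
            frame_errors += errs > 0
        return frame_errors / frames, bit_errors / (frames * N)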

Chapter 3

Gauss-Seidel method

Recently, contributions to the field of LDPC decoding were made in which it was realized that the Gauss-Seidel method can be used efficiently when applied to the LDPC algorithm. The focus of the current chapter is to provide the basic theory of the Gauss-Seidel method and its modifications. The modifications of the method can potentially be used to accelerate the rate of convergence of the algorithm. Although the theory of the Gauss-Seidel method is quite standard, there are certain features of the method that are not well known, and this is the main reason for the more extensive discussion here. Also, researchers in the field of LDPC decoding are often not familiar with the Gauss-Seidel method. In later chapters we will discuss a practical application of the ideas based on the methods described in this chapter.

3.1 Basic theory of the Gauss-Seidel method

The Gauss-Seidel method and related techniques are used to find solutions of linear systems. The method was first used by Gauss and independently discovered by Seidel. The Gauss-Seidel method, also known as the method of successive displacements, is used to solve a linear system in an iterative manner. In every iteration, a new value is determined for each of the unknowns. The calculation of this value takes into account the new values of all unknowns which have already been treated in this iteration step. This is in contrast with the Jacobi method, or method of simultaneous displacements, where only values from the previous iteration are used in the computation of the new values.

3.1.1 General formulation of the Gauss-Seidel method

To describe the Gauss-Seidel procedure, consider the linear system

    A x = b,        (3.1)

in which A is an n x n matrix. Let x^(0) be an initial guess for the solution x* of this system. The first iteration of the Gauss-Seidel procedure then starts with the calculation of a new value for the first component:

    a_11 x^(1)_1 = b_1 - Σ_{j=2}^{n} a_1j x^(0)_j.        (3.2)

Having determined x^(1)_1, the second component can be determined from:

    a_22 x^(1)_2 = b_2 - a_21 x^(1)_1 - Σ_{j=3}^{n} a_2j x^(0)_j.        (3.3)

Note that the new value for the first component is used here. The procedure continues, and the final step in the first iteration is to determine a new value for the last component:

    a_nn x^(1)_n = b_n - Σ_{j=1}^{n-1} a_nj x^(1)_j.        (3.4)

Several observations can be made. First, the above procedure cannot be carried out whenever there are zero diagonal elements. In this case, one could consider a re-ordering of the equations and unknowns, in such a way that all diagonal elements are non-zero. The ordering of equations and unknowns is an important issue in the use of Gauss-Seidel methods. A second observation is that the calculation of x^(1)_i involves both old and new values:

    a_ii x^(1)_i = b_i - Σ_{j=1}^{i-1} a_ij x^(1)_j - Σ_{j=i+1}^{n} a_ij x^(0)_j.        (3.5)

Apparently, a new value is used whenever the unknown is multiplied by a matrix element in the lower triangle of the matrix A. This shows that, if we split the matrix A into a lower triangular, a diagonal and an upper triangular part,

    A = D - L - U,        (3.6)

then the Gauss-Seidel step can be cast into the form (provided D^{-1} exists)

    x^(1) = D^{-1} (b + L x^(1) + U x^(0)).        (3.7)
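The componentwise sweep (3.5) takes only a few lines of code. The following sketch is our illustrative addition (assuming NumPy, dense matrices and a residual-based stopping test; the thesis itself does not prescribe an implementation):

    import numpy as np

    def gauss_seidel(A, b, x0, iters=100, tol=1e-10):
        # Gauss-Seidel sweeps, eq. (3.5): new values are used as soon as
        # they are available within the current iteration.
        x = x0.astype(float).copy()
        n = len(b)
        for _ in range(iters):
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]          # requires a_ii != 0
            if np.linalg.norm(b - A @ x) < tol:      # residual-based stop
                break
        return x

    # Example on a strictly diagonally dominant system, so convergence
    # is guaranteed by Corollary 1 below.
    A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]])
    b = np.array([6.0, 8.0, 4.0])
    print(gauss_seidel(A, b, np.zeros(3)))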

The use of L and U is convenient in the following, but it also reflects the fact that the off-diagonal elements of A are often negative in practical applications. Expressions similar to (3.7) hold for subsequent iterations, so that the k-th step can be written in the form

    (D - L) x^(k+1) = b + U x^(k).        (3.8)

From this it is seen that the Gauss-Seidel process transforms the solution of the linear system (3.1) into the solution of a sequence of lower triangular systems.

3.1.2 Convergence of Gauss-Seidel

Expression (3.8) can also be used to analyze the convergence of the sequence of Gauss-Seidel iterates. To see this, b is rewritten in terms of the solution x* of (3.1):

    b = (D - L - U) x*.        (3.9)

Using this in (3.8) leads to

    (D - L) x^(k+1) = (D - L - U) x* + U x^(k),        (3.10)

    (D - L) (x^(k+1) - x*) = U (x^(k) - x*).        (3.11)

If we define the error vector

    e^(k) = x^(k) - x*,        (3.12)

and the iteration matrix

    M = (D - L)^{-1} U,        (3.13)

the foregoing result can be written in the form

    e^(k+1) = M e^(k).        (3.14)

Successive application of this expression leads to

    e^(k) = M^k e^(0).        (3.15)

If the initial error e^(0) is not zero, this expression shows that the Gauss-Seidel procedure will yield the desired solution if and only if the matrices M^k tend to zero. The latter implies that all the eigenvalues of M must be located inside the unit circle in the complex plane. For arbitrary matrices A, this

property is hard to verify. Fortunately, the coefficient matrices occurring in practical problems often have a special structure which can be exploited to establish convergence of the Gauss-Seidel procedure. An example of this is contained in the following theorem. The spectral radius ρ(M) of a matrix M is defined as the maximum modulus of the eigenvalues of M:

    ρ(M) = max_i |λ_i|.        (3.16)

Theorem 1. If A is a strictly or irreducibly diagonally dominant matrix, and M is the iteration matrix of the Gauss-Seidel procedure, then ρ(M) < 1.

Theorem 2. Let A = {a_{ij}}_{i,j=1,...,n} be a strictly or irreducibly diagonally dominant matrix. Then A is non-singular. If all the diagonal entries of A are positive real numbers, then the eigenvalues λ_i of A satisfy

    Re λ_i > 0,   1 ≤ i ≤ n.        (3.17)

To prove Theorem 1, assume that M has an eigenvalue λ with |λ| ≥ 1; one then verifies that the matrix λ(D - L) - U is also strictly or irreducibly diagonally dominant. From Theorem 2 it then follows that this matrix is not singular, so that det(M - λ I_n) ≠ 0. Since this is a contradiction, the assumption made is invalid. Hence, we must have |λ| < 1 for all eigenvalues of M. As a corollary of this theorem, we have:

Corollary 1. Let A be a strictly or irreducibly diagonally dominant n x n matrix. Then the associated Gauss-Seidel process is convergent for any initial approximation x^(0).

Although convergence of the Gauss-Seidel method is guaranteed for the class of matrices in the above corollary, the rate of convergence may be prohibitively small.

3.1.3 Effect of ordering

One important fact about the Gauss-Seidel method was noted above: the new iterate x^(k) depends upon the order in which the equations are examined. The Gauss-Seidel method is sometimes called the method of successive displacements to indicate the dependence of the iterates on the ordering. If we change the ordering, the components of the new iterate, not just their order, will change as well. Reordering the equations can affect the rate at

which the Gauss-Seidel method converges. A poor choice of ordering can decrease the rate of convergence. By contrast, a good choice of ordering can accelerate the convergence rate of the Gauss-Seidel process. This feature may be very attractive for practical LDPC algorithm implementations, which is the focus of Chapter 5.

3.1.4 Traditional way of Gauss-Seidel

The version of the Gauss-Seidel method described in section 3.1.1 was introduced when it became beneficial to use computers for solving systems of linear equations. There are different orders of processing the equations in which the system (3.1) can be solved by the Gauss-Seidel method. One of the first iterative methods for solving a linear system was described by Gauss. He proposed solving a system of equations by repeatedly solving for the component for which the residual was the largest. The order of processing of the equations was thus determined by the unknown that helped to reduce the residual most. The principle of this variant of the Gauss-Seidel method is described below [11]. Find the residuals r_i = b_i - (Ax)_i. After that, find the index i (i = 1, ..., n) for which |r_i| is the largest. Then solve the i-th equation for x_i, keeping all x_j (j ≠ i) constant. Clearly, the residuals will only change in the equations where the new x_i plays a role. The next step is to calculate a new residual vector and repeat the procedure in an iterative way, until all the residuals are below a certain threshold. This method has been used successfully for semiconductor device problems.

3.2 Successive over-relaxation

The convergence of the Gauss-Seidel method can be improved by using relaxation methods. These methods are based upon the observation that it is often desirable to make a larger change in an unknown than is required to reduce the corresponding residual to zero. Since the corrections in the unknowns are usually larger than those in the Gauss-Seidel process, the terminology over-relaxation or successive over-relaxation (SOR) is frequently used. The basic SOR algorithm starts with the choice of an over-relaxation factor ω. Starting from an initial estimate x^(0) for the solution x* of (3.1),

Gauss-Seidel steps are performed. There is one important difference, however, with the Gauss-Seidel process discussed in the previous section. Instead of adding the correction vector x̂^(k+1) - x^(k)_SOR to the previous iterate x^(k)_SOR, we add ω times this correction vector. This process can be expressed in the form

    x^(k+1)_SOR = x^(k)_SOR + ω (x̂^(k+1) - x^(k)_SOR).        (3.18)

Here, x̂^(k+1) denotes the approximation obtained from one Gauss-Seidel step, starting at x^(k)_SOR. The procedure can also be cast into a form similar to that in (3.8), simply by making use of (3.8) in (3.18). The result is

    x^(k+1)_SOR = ω (D - L)^{-1} b + [ (1 - ω) I_n + ω (D - L)^{-1} U ] x^(k)_SOR.        (3.19)

The use of (3.9) leads to

    x^(k+1)_SOR - x* = [ (1 - ω) I_n + ω (D - L)^{-1} U ] (x^(k)_SOR - x*),        (3.20)

so that the error vector e^(k)_SOR satisfies

    e^(k+1)_SOR = M_ω e^(k)_SOR,   and hence   e^(k)_SOR = M_ω^k e^(0)_SOR.        (3.21)

Here, the iteration matrix M_ω is defined by

    M_ω = (1 - ω) I_n + ω (D - L)^{-1} U.        (3.22)

Note that, for ω = 1, this is just the Gauss-Seidel iteration matrix. As in the case of the Gauss-Seidel method, the convergence of the SOR process can be analyzed by investigating properties of the iteration matrix M_ω. For the class of matrices considered in Theorem 2 and Corollary 1, convergence is guaranteed for ω = 1. By continuity, the SOR method will then be convergent for ω in some interval containing unity. In view of the slow convergence of the Gauss-Seidel iterates, it is desirable to find a value of ω which ensures more rapid convergence.
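In the vector form (3.18), SOR is a one-line change on top of a Gauss-Seidel sweep. A minimal sketch (our illustrative addition, in the same style as the Gauss-Seidel sketch above):

    import numpy as np

    def sor(A, b, x0, omega=1.2, iters=100, tol=1e-10):
        # SOR in the vector form (3.18): take a full Gauss-Seidel step,
        # then add omega times the correction. omega = 1 recovers
        # plain Gauss-Seidel.
        x = x0.astype(float).copy()
        n = len(b)
        for _ in range(iters):
            x_hat = x.copy()
            for i in range(n):                     # one Gauss-Seidel sweep
                s = A[i, :i] @ x_hat[:i] + A[i, i + 1:] @ x_hat[i + 1:]
                x_hat[i] = (b[i] - s) / A[i, i]
            x = x + omega * (x_hat - x)            # eq. (3.18)
            if np.linalg.norm(b - A @ x) < tol:
                break
        return x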

3.3 Vector extrapolation

The Gauss-Seidel method can be viewed as a technique for solving linear problems. The method can also be viewed as a technique for transforming the problem of finding the solution of a linear system into the problem of finding a fixed point of a mapping. For the linear problem (3.1) this mapping is defined by

    G(x) = (D - L)^{-1} b + (D - L)^{-1} U x.        (3.23)

The fact that the eigenvalues of the iteration matrix (D - L)^{-1} U are located within the unit circle implies that G is a contraction mapping (in some norm). This means that

    ||G(x) - G(y)|| ≤ γ ||x - y||        (3.24)

for all x and y, and some γ < 1. Consequently, results from the theory of contraction mappings and fixed points can be carried over to Gauss-Seidel methods. Viewing one step of a Gauss-Seidel procedure as a means of defining a mapping G opens new possibilities for solving linear problems. Starting from an initial guess x^(0), subsequent iterates can be constructed by applying the mapping G:

    x^(k) = G(x^(k-1)).        (3.25)

As a result, a sequence x^(0), x^(1), x^(2), ... is formed. This sequence may or may not converge to the desired solution x*, depending on properties of the original system. Since x* is also a fixed point of G, we could attempt to extract information about the location of the fixed point by a closer examination of the sequence of iterates. In fact, a linear combination of these iterates can be used to obtain a more accurate approximation to x*. A class of numerical methods suitable for this purpose is formed by the extrapolation methods. To explain vector extrapolation, let us assume that the sequence converges according to

    x^(k) - x* = C (x^(k-1) - x*),        (3.26)

where C is a constant n x n matrix. Subtracting two subsequent equations of this form leads to the identity

    x^(k+2) - x^(k+1) = C (x^(k+1) - x^(k)).        (3.27)

Expression (3.27) specifies n relations for the n^2 elements of C. It is therefore expected that n identities of the form (3.27) suffice to determine the matrix C uniquely. To show this, let x^(0), ..., x^(n+1) be the first n + 2 vectors in the sequence. Define the vectors of first and second order differences

    u^(k) = Δ x^(k) = x^(k+1) - x^(k),   k = 0, 1, ..., n - 1,        (3.28)

    v^(k) = Δ² x^(k) = x^(k+2) - 2 x^(k+1) + x^(k),   k = 0, 1, ..., n - 1,        (3.29)

and the n x k matrices

    U^(k) = [ u^(0) u^(1) ... u^(k-1) ],        (3.30)

    V^(k) = [ v^(0) v^(1) ... v^(k-1) ].        (3.31)

Since, by virtue of (3.27),

    v^(k) = (x^(k+2) - x^(k+1)) - (x^(k+1) - x^(k)) = (C - I_n) u^(k),        (3.32)

it is found that

    V^(n) = (C - I_n) U^(n).        (3.33)

Assuming V^(n) to be non-singular, we find that

    (C - I_n)^{-1} = U^(n) (V^(n))^{-1},        (3.34)

and consequently

    x* = x^(0) - U^(n) (V^(n))^{-1} u^(0).        (3.35)

This is an expression for the fixed point in terms of the iterates in the vector sequence. The assumption that C - I_n be non-singular implies that the matrix C may not have an eigenvalue equal to unity. Clearly, expression (3.35) is not of much use whenever n is large: too many iterations must be performed before the fixed point is found. A way out of this problem is provided by the observation that (3.35) is equivalent to solving

    V^(n) y = u^(0),        (3.36)

followed by evaluating

    x* = x^(0) - U^(n) y.        (3.37)

If we replace V^(n) and U^(n) in these expressions by V^(k) and U^(k), respectively, it may be expected that the right hand side of (3.37) is a reasonable approximation to x*, provided k is not much smaller than n. Since V^(k) is not a square matrix for k < n, the system in (3.36) is solved in a least squares sense to obtain a k-vector y. This amounts to solving the k x k system

    (V^(k))^T V^(k) y = (V^(k))^T u^(0).        (3.38)

The method described by (3.36)-(3.37) with k = n is referred to as full rank extrapolation, whereas the method which makes use of U^(k) and V^(k) with k < n is termed the reduced rank extrapolation (RRE) method. When the reduced rank extrapolation method is used, the original recipe must be used to generate the vector sequence x^(0), ..., x^(k+1). The minimum number of iterates is 3. The reduced rank extrapolation method is normally applied in cycling mode. This means that, after k + 2 iterates have been determined, an RRE step is performed. Having found the approximation to the fixed point, a new sequence of k + 2 iterates is formed. The procedure is repeated until sufficient accuracy is reached.
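In cycling mode, one RRE cycle generates k + 2 iterates, solves (3.38) in a least squares sense and restarts from the extrapolated point. A minimal sketch (our illustrative addition, assuming NumPy and a fixed-point map G such as the Gauss-Seidel mapping (3.23)):

    import numpy as np

    def rre_cycle(G, x0, k):
        # One reduced rank extrapolation cycle: build k+2 iterates of the
        # map G, form the difference matrices U, V of (3.28)-(3.31), solve
        # the least squares system (3.38), and return the extrapolated
        # point (3.37).
        xs = [x0]
        for _ in range(k + 1):
            xs.append(G(xs[-1]))
        u = [xs[j + 1] - xs[j] for j in range(k + 1)]             # first differences
        U = np.column_stack(u[:k])                                # n x k
        V = np.column_stack([u[j + 1] - u[j] for j in range(k)])  # second differences
        y, *_ = np.linalg.lstsq(V, u[0], rcond=None)              # solves (3.38)
        return xs[0] - U @ y                                      # eq. (3.37)

    def rre(G, x0, k=3, cycles=20, tol=1e-10):
        # Cycling mode: repeat RRE cycles until the update stagnates.
        x = x0
        for _ in range(cycles):
            x_new = rre_cycle(G, x, k)
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x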

Chapter 4

Gauss-Seidel for LDPC

In the previous chapter, we provided the basic theory of the Gauss-Seidel method used to find solutions of linear systems. We also described several modifications of the Gauss-Seidel method. The focus of the current chapter is a practical application of the method to the LDPC decoding process. We start by giving a formulation of the Gauss-Seidel LDPC iterations, followed by a numerical example. Next, the check-node computations are explained in detail and new (simplified) formulas for these computations are derived. Examples of two practical parity check matrices are given; these matrices will be used for the practical simulations. The results of the practical simulations are discussed in the last section of the chapter.

4.1 Gauss-Seidel LDPC iterations

The Jacobi-LDPC iterations (2.10)-(2.12) can be reformulated into Gauss-Seidel iterations. It is assumed that processing is check-node centric [1]: all K_m λ's of one parity check equation are retrieved, after which K_m Λ's are calculated. The updated Λ's are reused within an iteration [4],[15]. Equations (2.11) and (2.12) can be maintained. Expressions (2.7), (2.8) are reformulated as:

    S^t(n)_GS = Σ_{m'∈U(n,m)} Λ^t_{m'n},   n = 1, 2, ..., N,        (4.1)

    S^t_m(n)_GS = Σ_{m'∈R(n,m), m'≠m} Λ^t_{m'n},   n = 1, 2, ..., N,  m ∈ R(n, m).        (4.2)

Consequently, λ^{t-1}_{nm} is redefined as:

    λ^{t-1}_{nm} = L_n + S^{t-1}(n)_GS + S^{t-2}_m(n)_GS = L_n + Σ_{m'∈U(n,m)} Λ^{t-1}_{m'n} + Σ_{m'∈R(n,m), m'≠m} Λ^{t-2}_{m'n}.        (4.3)

In this equation, the set U(n, m) ⊆ M(n) relates to the messages which have already been updated in the current iteration t before processing m, and R(n, m) = M(n)\U(n, m) is the remaining set. Where for the Jacobi iteration U(n, m) = ∅, for Gauss-Seidel U(n, m) is defined by the nesting

    ∅ = U(n, m_1) ⊂ U(n, m_2) ⊂ ... ⊂ U(n, m_{J_n}),        (4.4)

where U(n, m_i) ⊂ U(n, m_{i+1}) and U(n, m_{J_n}) = M(n)\{m_{J_n}}, and m_{J_n} relates to the last equation that processes the symbol node n (inside an iteration).

Consider the Tanner graph in Fig. (4.1) for the code specified by the matrix H. Let us show how the new values λ^t_n are updated in the iteration process of the Gauss-Seidel procedure. Suppose we would like to calculate λ^t_5, to see the difference from the Jacobi procedure updates. Let us assume that the check nodes are processed in the natural order m = 1, 2, ..., M. When the first check node (m = 1) is processed, the new Λ^{t-1}_{12} is calculated:

    Λ^{t-1}_{12} = G^{t-1}_2(1) = min_{n∈N(1), n≠2} |λ^{t-1}_{n1}| · [XOR_{n∈N(1), n≠2} {sign(λ^{t-1}_{n1})}]
                 = min(|λ^{t-1}_{11}|, |λ^{t-1}_{31}|) · XOR[sign(λ^{t-1}_{11}), sign(λ^{t-1}_{31})]
                 = min(|L_1 + Λ^{t-2}_{41}|, |L_3 + Λ^{t-2}_{63}|) · XOR[sign(L_1 + Λ^{t-2}_{41}), sign(L_3 + Λ^{t-2}_{63})].

After the third parity check equation is processed, Λ^{t-1}_{38} is obtained:

    Λ^{t-1}_{38} = G^{t-1}_8(3) = min_{n∈N(3), n≠8} |λ^{t-1}_{n3}| · [XOR_{n∈N(3), n≠8} {sign(λ^{t-1}_{n3})}]
                 = min(|λ^{t-1}_{73}|, |λ^{t-1}_{93}|) · XOR[sign(λ^{t-1}_{73}), sign(λ^{t-1}_{93})]
                 = min(|L_7 + Λ^{t-2}_{47}|, |L_9 + Λ^{t-2}_{69}|) · XOR[sign(L_7 + Λ^{t-2}_{47}), sign(L_9 + Λ^{t-2}_{69})].

The Λ^{t-2}_{12} is replaced by the calculated Λ^{t-1}_{12} (shown bold on the left of Fig. (4.1)). Similarly, when the third check node is processed, Λ^{t-2}_{38} is replaced by the calculated Λ^{t-1}_{38}. These new updated Λ's are used within the current iteration.
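The check-node centric reuse of updated Λ's is what is often called layered decoding. The following sketch (our illustrative addition, assuming NumPy; it is not the thesis's implementation) modifies the earlier Jacobi sketch so that each check node reads the current symbol totals, which already include the Λ's updated earlier in the same iteration:

    import numpy as np

    def gauss_seidel_decode(H, L, max_iters=50):
        # Gauss-Seidel (layered) min-sum LDPC decoding, eqs. (4.1)-(4.3):
        # check nodes are processed in natural order, and each updated Λ
        # is reused immediately within the same iteration.
        M, N = H.shape
        Lam = np.zeros((M, N))        # check-to-symbol messages Λ
        lam = L.copy()                # running symbol totals λ
        for _ in range(max_iters):
            for m in range(M):        # one layer per parity check equation
                nodes = np.flatnonzero(H[m])
                msgs = lam[nodes] - Lam[m, nodes]     # λ_{nm}, eq. (4.3)
                for i, n in enumerate(nodes):
                    others = np.delete(msgs, i)
                    new = np.prod(np.sign(others)) * np.min(np.abs(others))
                    lam[n] += new - Lam[m, n]         # reuse immediately
                    Lam[m, n] = new
            bits = (lam < 0).astype(int)
            if not np.any(H @ bits % 2):              # codeword test (2.13)
                break
        return bits

    # Same example as before: the single error is corrected.
    H = np.array([[1,1,1,0,0,0,0,0,0],[0,0,0,1,1,1,0,0,0],[0,0,0,0,0,0,1,1,1],
                  [1,0,0,1,0,0,1,0,0],[0,1,0,0,1,0,0,1,0],[0,0,1,0,0,1,0,0,1]])
    L = np.array([1.5, 1.1, 0.8, 1, -1.1, -0.8, 0.5, -0.4, 1.1])
    print(gauss_seidel_decode(H, L))  # -> [0 0 0 0 1 1 0 1 1]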


More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

COURSE Iterative methods for solving linear systems

COURSE Iterative methods for solving linear systems COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme

More information

Low-density parity-check (LDPC) codes

Low-density parity-check (LDPC) codes Low-density parity-check (LDPC) codes Performance similar to turbo codes Do not require long interleaver to achieve good performance Better block error performance Error floor occurs at lower BER Decoding

More information

Aalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes

Aalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Aalborg Universitet Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Published in: 2004 International Seminar on Communications DOI link to publication

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects

More information

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes

EE229B - Final Project. Capacity-Approaching Low-Density Parity-Check Codes EE229B - Final Project Capacity-Approaching Low-Density Parity-Check Codes Pierre Garrigues EECS department, UC Berkeley garrigue@eecs.berkeley.edu May 13, 2005 Abstract The class of low-density parity-check

More information

Weaknesses of Margulis and Ramanujan Margulis Low-Density Parity-Check Codes

Weaknesses of Margulis and Ramanujan Margulis Low-Density Parity-Check Codes Electronic Notes in Theoretical Computer Science 74 (2003) URL: http://www.elsevier.nl/locate/entcs/volume74.html 8 pages Weaknesses of Margulis and Ramanujan Margulis Low-Density Parity-Check Codes David

More information

Decoding of LDPC codes with binary vector messages and scalable complexity

Decoding of LDPC codes with binary vector messages and scalable complexity Downloaded from vbn.aau.dk on: marts 7, 019 Aalborg Universitet Decoding of LDPC codes with binary vector messages and scalable complexity Lechner, Gottfried; Land, Ingmar; Rasmussen, Lars Published in:

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

Communication by Regression: Achieving Shannon Capacity

Communication by Regression: Achieving Shannon Capacity Communication by Regression: Practical Achievement of Shannon Capacity Department of Statistics Yale University Workshop Infusing Statistics and Engineering Harvard University, June 5-6, 2011 Practical

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

6. Iterative Methods for Linear Systems. The stepwise approach to the solution...

6. Iterative Methods for Linear Systems. The stepwise approach to the solution... 6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

LDPC Decoder LLR Stopping Criterion

LDPC Decoder LLR Stopping Criterion International Conference on Innovative Trends in Electronics Communication and Applications 1 International Conference on Innovative Trends in Electronics Communication and Applications 2015 [ICIECA 2015]

More information

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise

9 THEORY OF CODES. 9.0 Introduction. 9.1 Noise 9 THEORY OF CODES Chapter 9 Theory of Codes After studying this chapter you should understand what is meant by noise, error detection and correction; be able to find and use the Hamming distance for a

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008 Codes on Graphs Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 27th, 2008 Telecommunications Laboratory (TUC) Codes on Graphs November 27th, 2008 1 / 31

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei COMPSCI 650 Applied Information Theory Apr 5, 2016 Lecture 18 Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei 1 Correcting Errors in Linear Codes Suppose someone is to send

More information

5. Density evolution. Density evolution 5-1

5. Density evolution. Density evolution 5-1 5. Density evolution Density evolution 5-1 Probabilistic analysis of message passing algorithms variable nodes factor nodes x1 a x i x2 a(x i ; x j ; x k ) x3 b x4 consider factor graph model G = (V ;

More information

Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras

Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras e-mail: hari_jethanandani@yahoo.com Abstract Low-density parity-check (LDPC) codes are discussed

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

Low-Complexity Encoding Algorithm for LDPC Codes

Low-Complexity Encoding Algorithm for LDPC Codes EECE 580B Modern Coding Theory Low-Complexity Encoding Algorithm for LDPC Codes Problem: Given the following matrix (imagine a larger matrix with a small number of ones) and the vector of information bits,

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

Solving Linear Systems

Solving Linear Systems Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the

More information

Structured Low-Density Parity-Check Codes: Algebraic Constructions

Structured Low-Density Parity-Check Codes: Algebraic Constructions Structured Low-Density Parity-Check Codes: Algebraic Constructions Shu Lin Department of Electrical and Computer Engineering University of California, Davis Davis, California 95616 Email:shulin@ece.ucdavis.edu

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

THE ANALYTICAL DESCRIPTION OF REGULAR LDPC CODES CORRECTING ABILITY

THE ANALYTICAL DESCRIPTION OF REGULAR LDPC CODES CORRECTING ABILITY Transport and Telecommunication Vol. 5, no. 3, 04 Transport and Telecommunication, 04, volume 5, no. 3, 77 84 Transport and Telecommunication Institute, Lomonosova, Riga, LV-09, Latvia DOI 0.478/ttj-04-005

More information

On the Block Error Probability of LP Decoding of LDPC Codes

On the Block Error Probability of LP Decoding of LDPC Codes On the Block Error Probability of LP Decoding of LDPC Codes Ralf Koetter CSL and Dept. of ECE University of Illinois at Urbana-Champaign Urbana, IL 680, USA koetter@uiuc.edu Pascal O. Vontobel Dept. of

More information

Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes

Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes Institute of Electronic Systems Signal and Information Processing in Communications Nana Traore Shashi Kant Tobias

More information

VHDL Implementation of Reed Solomon Improved Encoding Algorithm

VHDL Implementation of Reed Solomon Improved Encoding Algorithm VHDL Implementation of Reed Solomon Improved Encoding Algorithm P.Ravi Tej 1, Smt.K.Jhansi Rani 2 1 Project Associate, Department of ECE, UCEK, JNTUK, Kakinada A.P. 2 Assistant Professor, Department of

More information

A Short Length Low Complexity Low Delay Recursive LDPC Code

A Short Length Low Complexity Low Delay Recursive LDPC Code A Short Length Low Complexity Low Delay Recursive LDPC Code BASHAR M. MANSOOR, TARIQ Z. ISMAEEL Department of Electrical Engineering College of Engineering, University of Baghdad, IRAQ bmml77@yahoo.com

More information

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths

RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths RCA Analysis of the Polar Codes and the use of Feedback to aid Polarization at Short Blocklengths Kasra Vakilinia, Dariush Divsalar*, and Richard D. Wesel Department of Electrical Engineering, University

More information

Ma/CS 6b Class 24: Error Correcting Codes

Ma/CS 6b Class 24: Error Correcting Codes Ma/CS 6b Class 24: Error Correcting Codes By Adam Sheffer Communicating Over a Noisy Channel Problem. We wish to transmit a message which is composed of 0 s and 1 s, but noise might accidentally flip some

More information

Error Floors of LDPC Coded BICM

Error Floors of LDPC Coded BICM Electrical and Computer Engineering Conference Papers, Posters and Presentations Electrical and Computer Engineering 2007 Error Floors of LDPC Coded BICM Aditya Ramamoorthy Iowa State University, adityar@iastate.edu

More information

Numerical Programming I (for CSE)

Numerical Programming I (for CSE) Technische Universität München WT 1/13 Fakultät für Mathematik Prof. Dr. M. Mehl B. Gatzhammer January 1, 13 Numerical Programming I (for CSE) Tutorial 1: Iterative Methods 1) Relaxation Methods a) Let

More information

ECC for NAND Flash. Osso Vahabzadeh. TexasLDPC Inc. Flash Memory Summit 2017 Santa Clara, CA 1

ECC for NAND Flash. Osso Vahabzadeh. TexasLDPC Inc. Flash Memory Summit 2017 Santa Clara, CA 1 ECC for NAND Flash Osso Vahabzadeh TexasLDPC Inc. 1 Overview Why Is Error Correction Needed in Flash Memories? Error Correction Codes Fundamentals Low-Density Parity-Check (LDPC) Codes LDPC Encoding and

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES. Advisor: Iryna Andriyanova Professor: R.. udiger Urbanke

ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES. Advisor: Iryna Andriyanova Professor: R.. udiger Urbanke ON THE MINIMUM DISTANCE OF NON-BINARY LDPC CODES RETHNAKARAN PULIKKOONATTU ABSTRACT. Minimum distance is an important parameter of a linear error correcting code. For improved performance of binary Low

More information

Mapper & De-Mapper System Document

Mapper & De-Mapper System Document Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation

More information

Cyclic Redundancy Check Codes

Cyclic Redundancy Check Codes Cyclic Redundancy Check Codes Lectures No. 17 and 18 Dr. Aoife Moloney School of Electronics and Communications Dublin Institute of Technology Overview These lectures will look at the following: Cyclic

More information

Turbo Codes are Low Density Parity Check Codes

Turbo Codes are Low Density Parity Check Codes Turbo Codes are Low Density Parity Check Codes David J. C. MacKay July 5, 00 Draft 0., not for distribution! (First draft written July 5, 998) Abstract Turbo codes and Gallager codes (also known as low

More information

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Xiaojie Zhang and Paul H. Siegel University of California, San Diego, La Jolla, CA 9093, U Email:{ericzhang, psiegel}@ucsd.edu

More information

Binary Compressive Sensing via Analog. Fountain Coding

Binary Compressive Sensing via Analog. Fountain Coding Binary Compressive Sensing via Analog 1 Fountain Coding Mahyar Shirvanimoghaddam, Member, IEEE, Yonghui Li, Senior Member, IEEE, Branka Vucetic, Fellow, IEEE, and Jinhong Yuan, Senior Member, IEEE, arxiv:1508.03401v1

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

30.5. Iterative Methods for Systems of Equations. Introduction. Prerequisites. Learning Outcomes

30.5. Iterative Methods for Systems of Equations. Introduction. Prerequisites. Learning Outcomes Iterative Methods for Systems of Equations 0.5 Introduction There are occasions when direct methods (like Gaussian elimination or the use of an LU decomposition) are not the best way to solve a system

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Iulian Topor Acoustic Research Laboratory, Tropical Marine Science Institute, National University of Singapore, Singapore 119227. iulian@arl.nus.edu.sg

More information

Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018

Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018 Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x

More information

Iterative Methods for Ax=b

Iterative Methods for Ax=b 1 FUNDAMENTALS 1 Iterative Methods for Ax=b 1 Fundamentals consider the solution of the set of simultaneous equations Ax = b where A is a square matrix, n n and b is a right hand vector. We write the iterative

More information

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Osamu Watanabe, Takeshi Sawai, and Hayato Takahashi Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology

More information

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION EE229B PROJECT REPORT STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION Zhengya Zhang SID: 16827455 zyzhang@eecs.berkeley.edu 1 MOTIVATION Permutation matrices refer to the square matrices with

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor

Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor Pravin Salunkhe, Prof D.P Rathod Department of Electrical Engineering, Veermata Jijabai

More information

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Agenda NAND ECC History Soft Information What is soft information How do we obtain

More information

Extended MinSum Algorithm for Decoding LDPC Codes over GF (q)

Extended MinSum Algorithm for Decoding LDPC Codes over GF (q) Extended MinSum Algorithm for Decoding LDPC Codes over GF (q) David Declercq ETIS ENSEA/UCP/CNRS UMR-8051, 95014 Cergy-Pontoise, (France), declercq@ensea.fr Marc Fossorier Dept. Electrical Engineering,

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Error Correction and Trellis Coding

Error Correction and Trellis Coding Advanced Signal Processing Winter Term 2001/2002 Digital Subscriber Lines (xdsl): Broadband Communication over Twisted Wire Pairs Error Correction and Trellis Coding Thomas Brandtner brandt@sbox.tugraz.at

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information