EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES


EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES

Master's thesis in electronics systems
by Anton Blad
LiTH-ISY-EX--05/3691--SE
Linköping 2005


EFFICIENT DECODING ALGORITHMS FOR LOW DENSITY PARITY CHECK CODES

Master's thesis in electronics systems at Linköping Institute of Technology
by Anton Blad
LiTH-ISY-EX--05/3691--SE

Supervisor: Oscar Gustafsson
Examiner: Lars Wanhammar
Linköping


Division, Department: Institutionen för systemteknik, Linköping
Language: English
Report category: Examensarbete (Master's thesis)
ISRN: LITH-ISY-EX--05/3691--SE
Title: Effektiva avkodningsalgoritmer för low density parity check-koder (Efficient Decoding Algorithms for Low-Density Parity-Check Codes)
Author: Anton Blad
Keywords: LDPC, Low Density Parity Check, Tanner graph, decoding, sum-product, probability propagation, early decision, threshold


Abstract

Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory. We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability is higher than the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds which allows removing a decision that has proved wrong. By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.

Keywords: LDPC, Low Density Parity Check, Tanner graph, decoding, sum-product, probability propagation, early decision, threshold


Notation

d         Minimum distance of a code.
E_b       The energy used for a code word, averaged over the information bits.
E_s       The energy used for transmitting each symbol.
G         A generator matrix.
H         A parity check matrix.
j         Column weight of the parity check matrix of a regular LDPC code.
K         Number of message bits in a code word.
k         Row weight of the parity check matrix of a regular LDPC code.
M         Number of parity bits in a code word.
M(n)      The neighbours of variable node n.
m, m_i    Denotes a message, or bit i of the message.
N         Number of bits in a code word.
N_0       The power spectral density of the (AWGN) channel noise.
N(m)      The neighbours of check node m.
P_b       The message bit error probability, after decoding.
P_e       The symbol error probability, before decoding.
P_w       The code word error probability, after decoding.
p, p_i    Denotes the parity, or bit i of the parity.
p_n^a     The prior likelihood of bit n assuming the value a.
q_mn^a    The variable-to-check message from variable n to check m.
q_n^a     The pseudo-posterior likelihood of bit n assuming the value a.
R         The rate of a code.
r, r_i    Denotes a received signal, or sample i of the received signal.
r_mn^a    The check-to-variable message from check m to variable n.
t         The threshold of an early-decision decoder.
x, x_i    Denotes a code word, or bit i of the code word.
x^        Denotes the decoder's guess of the sent code word.


Contents

1 Introduction
   Background
   Problem definition
   Outline and reading instructions
2 Error control systems
   Digital communication
      Basic communication elements
      Channels
      Coding
      Two simple codes
      Performance metrics
   Block codes
      Definition of block codes
      Systematic form
      Code rate
      Encoding
   Low-density parity-check codes
      Definition of LDPC codes
      Tanner graphs
      Girth
      Integer lattice codes
      LDPC performance
3 Decoding
   Iterative decoding
   Decoding on cycle-free graphs
   A decoding example
   The decoding algorithm
   Decoding on graphs with cycles
   Message domains
4 Decoder approximations
   Bit reliabilities
   Early decision decoder
   Thresholds in subsequent iterations
      Constant threshold
      Dynamic threshold
   Considering local checks
   Soft thresholds
   Further considerations and suggestions for future work
      Number of iterations
      Undetected errors
      Other codes and parameter sensitivity
5 Conclusion
   Implementability

Chapter 1: Introduction

1.1 Background

Low-density parity-check (LDPC) codes were first discovered by Robert Gallager [1] in the early 1960s. For some reason, though, they were forgotten, and the field lay dormant until the mid-1990s, when the codes were rediscovered by David MacKay and Radford Neal [2]. Since then, the class of codes has been shown to be remarkably powerful, comparable to the best known codes and performing very close to the theoretical limit of error-correcting codes. The nature of the codes also suggests a natural decoding algorithm operating directly on the parity check matrix. This algorithm has relatively low complexity and allows a high degree of parallelization when implemented in hardware, allowing high decoding speeds. The performance comes at a price, however: the memory requirements are very large, and the random nature of the codes leads to high interconnect complexity and routing congestion.

1.2 Problem definition

The aim of this work was to build a software framework with which different modifications of the decoding algorithm could be tested and evaluated. The framework was to include the usual elements of a coding system: encoding of messages, transmission of code words, and decoding. While the focus lay on the algorithms' theoretical properties, their suitability for hardware implementation was not to be neglected. Using this framework, ideas could be tested easily, and the hope was that at least some modification would lead to better decoding algorithm performance.

1.3 Outline and reading instructions

In chapter 2 we describe the basic elements of a digital communication system. We introduce the channel abstraction, and define measurements of performance. Then

we explain the basic ideas of codes, and state the theoretical bound for coding known as the Shannon limit. We continue with a description of general block codes and their properties, followed by a definition of low-density parity-check codes. We also define the Tanner graph, which is a visualization of codes especially suited for low-density parity-check codes. Finally, we compare the performance of different low-density parity-check codes with some conventional coding schemes. The reader already familiar with digital communication and coding can skip this chapter.

Chapter 3 contains the description of the sum-product decoding algorithm. A simple example showing the idea of the algorithm is included. Then the algorithm for the cycle-free case is described, followed by the modifications necessary to adapt the algorithm to codes with cycles. At the end, some alternative message domains are described, suitable for various implementations. The material in this chapter is quite theoretical, and a complete understanding is not necessary for the reader to grasp the contents of chapter 4.

Chapter 4 contains the new ideas of this work. We define a measure of message reliability, and we use this measure to define a new decoding algorithm. Four different modifications are evaluated using a reference code, and the results are shown. Also included are some further considerations and discussions about the results.

Chapter 5, finally, contains the summarized conclusions of this work, and some words about the implementability of the algorithm.

Chapter 2: Error control systems

In this chapter the basic concepts of digital communication and error control coding are introduced. We consider the communication model, vector representation of signals, different channels, theoretical limits, and theoretical code performance measurements. There are numerous books introducing the subject, for example Wicker [3], concentrating on coding, and Anderson [4], treating more of the communication aspect.

2.1 Digital communication

2.1.1 Basic communication elements

A digital communication system is a system in which an endpoint A transmits information to an endpoint B. The system is digital, meaning that the information is represented by a sequence of symbols from a finite discrete alphabet. The sequence is mapped onto an analog signal, which is then transmitted through a cable or using an antenna. During transmission the signal is distorted by noise, so the received signal is not the same as the sent one. The receiver selects the most likely sequence of symbols and delivers it to the receiving endpoint B.

The transmitter and receiver functions are usually performed by different elements. The basic ones are shown in figure 2.1. The modulator maps the source symbols onto the analog signal. The channel transmitter puts the signal on the physical medium. On the receiver side, the channel receiver reconstructs an analog signal, and the demodulator compares the signal to the modulation waveforms and selects the most likely corresponding symbols.

Usually, the modulation scheme is linear, implying that an orthogonal basis can be chosen for the waveforms. Then the signals can be represented as vectors over this basis, where the lengths of the vectors are the square roots of the waveform energies. It can be shown (see e.g. [4], p. 48) that with a matched receiver only the noise in the basis dimensions is detected.

[Figure 2.1. Basic elements of a digital communication system: data source, modulator, transmitter, channel (noise), receiver, demodulator, data sink.]

Throughout this work, we will assume the source and destination alphabet to be binary, A = {0, 1}. Furthermore, we will work exclusively with the binary phase shift key (BPSK) modulation format. The BPSK format is one-dimensional, and we represent the waveforms with the scalars +1 and -1. Conventionally the symbol 0 maps to +1, and the symbol 1 maps to -1.

2.1.2 Channels

Definition 2.1 A channel is defined by an input alphabet A_X, an output alphabet A_Y, and a transition function p_{Y|X}(y | X = x), where x is in A_X and y is in A_Y.

The transition function denotes the probability (when A_Y is discrete) or the probability density (when A_Y is continuous) of the event that symbol y is received, given that symbol x was sent. The channels we consider are memoryless, meaning that the output is independent of earlier uses of the channel.

The two most common types of channels are the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel. The BSC is really a channel abstraction including a hard-decision receiver, and the output alphabet is the same as the input alphabet, A_Y = A_X = {0, 1}. The channel is defined by a probability p of cross-over between the symbols, such that

    p_{Y|X}(0 | X = 0) = p_{Y|X}(1 | X = 1) = 1 - p
    p_{Y|X}(0 | X = 1) = p_{Y|X}(1 | X = 0) = p

For example, if a 0 was sent there is a probability of p that a 1 is received, and a probability of 1 - p that a 0 is received.

The AWGN channel models the noise as a white Gaussian stochastic process with spectral density N_0/2. This noise has infinite power, and is therefore not realizable. It is still a realistic model, though, for many practical channels over cables when the frequencies are not extremely high. With the vector model the noise becomes a Gaussian stochastic vector with mean 0 and standard deviation sigma = sqrt(N_0/2) in every dimension.
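The behaviour of the BSC can be sketched in a few lines of Python. The function below is an illustrative simulation, not part of the thesis framework; the name `bsc` and the seeded generator are assumptions for reproducibility:

```python
import random

def bsc(bits, p, rng=None):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with cross-over probability p."""
    rng = rng or random.Random(0)
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 1, 0, 1, 0, 0, 1]
received = bsc(sent, p=0.1)        # each bit flipped with probability 0.1
errors = sum(s != r for s, r in zip(sent, received))
```

With p = 0 the channel is noiseless and with p = 1 every bit is flipped, which makes the two degenerate cases easy to check.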
If we use BPSK modulation the input alphabet will be A_X = {-1, +1}. The channel is realized by the relation Y = X + N,

where N ~ N(0, sigma), so the output alphabet is the set of real values, A_Y = R. The transition function can be given as p_{Y|X}(y | X = x) = f_{x,sigma}(y), where f_{x,sigma} is the probability density function for a Gaussian stochastic variable with mean x and standard deviation sigma.

The simulations in this work have been done over the AWGN channel exclusively. Unlike the BSC, the AWGN channel conveys reliability information about the transmitted symbols. Much better performance can be achieved if the decoder is able to use this information. As we will see later, the LDPC decoder is perfectly suited for this task.

2.1.3 Coding

Recall that the purpose of the receiver system was to decide the most probable sequence of symbols. Obviously, with the transmission schemes discussed so far, there is nothing to gain by looking at sequences of symbols, as the transmitted symbols are independent. By inserting redundant symbols in the transmitted sequence, dependencies can be introduced, and the set of possible sequences can be restricted. This can lead to remarkably better performance, even if the signal energies have to be decreased to compensate for the higher transmission rate.

Figure 2.2 shows a digital communication system that employs channel coding. Two elements have been added, the channel encoder and the channel decoder. Their functions are quite obvious: the encoder adds redundancy to the data according to the code being used, and the decoder uses the code properties to correct transmission errors. As said earlier, there are decoders that are able to use the likelihood values from the demodulator. These are called soft-decision decoders, and the decoders with which we are to concern ourselves belong to this category. The other kind are hard-decision decoders, which have to choose a symbol for each received signal before it is decoded.
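The BPSK-over-AWGN model above can be sketched as follows. The helper names `awgn_bpsk` and `hard_decision` are illustrative assumptions; the code follows the conventions used here (0 maps to +1, 1 maps to -1, sigma = sqrt(N_0/2)):

```python
import math
import random

def awgn_bpsk(bits, n0, rng=None):
    """Map bits to BPSK symbols (0 -> +1, 1 -> -1) and add white
    Gaussian noise with standard deviation sigma = sqrt(N0 / 2)."""
    rng = rng or random.Random(1)
    sigma = math.sqrt(n0 / 2)
    return [(1.0 if b == 0 else -1.0) + rng.gauss(0.0, sigma) for b in bits]

def hard_decision(samples):
    """A hard-decision receiver: decide 0 for positive samples, 1 otherwise."""
    return [0 if r > 0 else 1 for r in samples]

r = awgn_bpsk([0, 1, 0, 1], n0=0.5)
decided = hard_decision(r)
```

Note that `hard_decision` discards exactly the reliability information a soft-decision decoder would exploit: the samples +2 and +0.5 both decide to 0.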
Some information is lost this way: for example, if the value +2 is received over an AWGN channel it is more likely that the symbol 0 was sent than if the value +0.5 was received. The hard-decision decoder removes this information.

Before we continue on to the topic of practical code construction, we will delve a bit deeper into the properties of channels and codes. In 1948, Claude E. Shannon published an article [5] in the Bell System Technical Journal proving some quite intricate theorems about the transmission capabilities of channels. The most important is the noisy channel coding theorem.

Theorem 2.1 The noisy channel coding theorem states that for every channel a quantity called the channel capacity can be defined, such that for information rates below this limit arbitrarily small error probabilities can be achieved. Moreover, for information rates above the channel capacity the error probability must necessarily be bounded away from zero.

It should be emphasised that the formal channel models not only the interfering noise, but also the transmission scheme with waveforms and signal energies. Thus

[Figure 2.2. Basic elements of a digital communication system employing channel coding: data source, channel encoder, modulator, transmitter, channel (noise), receiver, demodulator, channel decoder, data sink.]

an alternate interpretation of Shannon's theorem is that above a certain transmission power arbitrarily small error probabilities can be achieved, if the information rate is kept constant. The information rate has not been formally defined here, and is a bit out of scope for this work, but for independent and identically distributed source symbols, it equals the ratio between useful data (information) and total data transmitted. This quantity will also be referred to as the code rate. The code rate for a block code is defined formally in the section on block codes below.

The noisy channel coding theorem is non-constructive in its nature, and serves merely as a theoretical limit to strive for. Moreover, the theorem says nothing about the practicality of the codes promised, and, not surprisingly, the more powerful the codes are, the more far-reaching the dependencies between the symbols need to be, and the more difficult the code will be to encode and decode.

2.1.4 Two simple codes

Here we will define two simple codes, which we will use as a base for defining performance metrics. The first code is the repetition code, which simply repeats each symbol a number of times. We will use 3 as the repetition factor, so each symbol is repeated 3 times. For example, the symbols 101 become 111000111 when encoded. The decoder can then just look at the received symbols in groups of three and select the symbol that occurred at least two times.

The second code we define is the (7,4,3) Hamming code, where the parameters are (N, K, d) = (7, 4, 3). This is a block code that takes the source message in blocks of K bits and encodes them to blocks of N bits. The d parameter is the minimum distance of the code, and is the minimum number of symbols that two encoded words may differ in.
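The 3-repetition encoder and its majority-vote decoder are easy to state in code; the following is an illustrative sketch with assumed function names:

```python
def rep3_encode(bits):
    """Repeat every bit three times, e.g. 101 -> 111000111."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority vote over each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

code = rep3_encode([1, 0, 1])    # [1,1,1, 0,0,0, 1,1,1]
noisy = code[:]
noisy[4] = 1                     # one transmission error in the middle block
decoded = rep3_decode(noisy)     # the single error is voted away
```

A single error within any group of three is corrected; two errors in the same group fool the vote, which is what the performance calculation below quantifies.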
The (7,4,3) Hamming code can be defined by a number of parity check equations, where the computations are done modulo 2. We call the source message bits

m_0, m_1, m_2 and m_3, and calculate the parity bits p_4, p_5 and p_6 according to the equations

    p_4 = m_0 + m_2 + m_3
    p_5 = m_0 + m_1 + m_3
    p_6 = m_0 + m_1 + m_2

Then the code word (m_0, m_1, m_2, m_3, p_4, p_5, p_6) is sent over the channel, and as long as at most one error has occurred, the receiver can find the transmission error and correct it. This is possible because the minimum distance of the code is 3, so as long as the transmission error is in just one symbol, the transmitted word will always be the closest one.

2.1.5 Performance metrics

Next we will look at performance measurements of transmission schemes, and the gain to be had with coding. First we will consider pure BPSK modulation over the AWGN channel. We can calculate the bit error probability (or bit error rate) as a function of the signal-to-noise ratio (SNR). The SNR is a measurement of the signal strength relative to the noise, and is usually given in dB as

    SNR = 10 log10(E_s / N_0)

where E_s is the symbol energy and N_0 is twice the power spectral density of the noise.

The symbol error probability, denoted P_e, is the probability that a transmitted symbol 0 is received as a 1 or vice versa. The transmission scheme is symmetric, so it is sufficient to look at one of the cases. Assume therefore that the symbol 0 was transmitted, corresponding to signal level +sqrt(E_s). The channel adds zero-mean Gaussian noise with standard deviation sigma = sqrt(N_0/2). The received signal r is then Gaussian with r ~ N(sqrt(E_s), sigma), and the error probability P_e is the probability that this signal is less than zero:

    P_e = P(r < 0) = Q(sqrt(E_s) / sigma) = Q(sqrt(2 E_s / N_0))

This function is plotted in figure 2.3. In the figure we have the quantity E_b/N_0 on the x-axis, whereas the SNR was defined as E_s/N_0. E_b is a new concept that we will have to define when we employ coding. As different codes may have different code rates, it would not be fair to compare codes with the same signal energies.
Therefore we define the quantity E_b to be the energy per information bit transmitted. The symbol energy E_s will then be the energy of the information bits averaged over all the bits transmitted, so if the code rate is R the relation E_s = R E_b will hold.

[Figure 2.3. Bit error probabilities for some simple transmission schemes: plain binary phase shift key (BPSK) transmission, the 3-repetition coding scheme, and the (7,4,3) Hamming code. The Hamming code is able to correct one arbitrary error in each transmitted block. As can be seen, the Hamming code provides an insignificant coding gain at high enough signal strengths, while the repetition code performs worse than if no coding is used at all.]

In the transmission scheme just discussed, where no coding was used, all the bits are information bits, so E_s = E_b. When comparing codes, we will keep E_b constant, while the relation E_s = R E_b will allow us to calculate the symbol energy in order to determine the error probability of each transmitted symbol.

We will also need to define the bit error probability, or bit error rate, denoted P_b. This is the probability that a message bit, after decoding, will be in error, and is a bit harder to calculate in the case of coding. In the no-coding case, however, it equals the symbol error probability P_e.

The 3-repetition code discussed sends each information bit three times, so the code rate is here R = 1/3, and the symbol energy is E_s = (1/3) E_b. Therefore, the symbol error probability will be

    P_e = Q(sqrt(E_s) / sigma) = Q(sqrt(2 E_b / (3 N_0)))
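The error probabilities above are easy to evaluate numerically, using the identity Q(x) = 0.5 erfc(x / sqrt(2)). The sketch below assumes the formulas just derived; the function names are illustrative:

```python
import math

def Q(x):
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_bpsk(eb_n0_db):
    """Uncoded BPSK: P_e = Q(sqrt(2 * Eb/N0))."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return Q(math.sqrt(2 * eb_n0))

def pe_rep3(eb_n0_db):
    """3-repetition code, E_s = E_b / 3: P_e = Q(sqrt(2 * Eb / (3 * N0)))."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return Q(math.sqrt(2 * eb_n0 / 3))
```

At a fixed E_b/N_0 the repetition code's symbols carry less energy (E_s = E_b/3), so `pe_rep3` always exceeds `pe_bpsk`; whether the code still wins depends on how many of these symbol errors it can correct.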

The code is, however, able to correct single transmission errors, as each information bit is sent three times. So the cases where the information is incorrectly decoded are when all three bits are in error, or when just one bit is correctly received. We can write this as

    P_b = P_e^3 + 3 P_e^2 (1 - P_e)

which is of course also a function of E_b/N_0. In figure 2.3 it can be seen that the code actually performs worse than if no coding is used at all. The code is simply too weak to compensate for the overhead of sending three times as many symbols. It is therefore unusable as an error-correcting code, but serves as a simple example of performance calculation.

The Hamming code is slightly more complex. The code rate is R = 4/7, so the symbol error probability is

    P_e = Q(sqrt(E_s) / sigma) = Q(sqrt(8 E_b / (7 N_0)))

The code is able to correct a single error in an arbitrary position, so the probability of a correctly transmitted message is (1 - P_e)^7 + 7 P_e (1 - P_e)^6. Thus, the probability of a code word error is 1 - (1 - P_e)^7 - 7 P_e (1 - P_e)^6. When a word error occurs, essentially half the bits are garbled, so the bit error probability is about half the word error probability:

    P_b ~ (1/2) (1 - (1 - P_e)^7 - 7 P_e (1 - P_e)^6)

In the figure we see that the Hamming code surpasses the plain BPSK scheme when the signal is strong enough. Assume for example that the BPSK scheme at E_b/N_0 = 11 dB is used. Then we can employ the (7,4,3) Hamming code and either reduce the transmitter power by 0.45 dB, or enjoy a 4.5-fold reduction of transmission errors. However, there are more advanced codes that offer far greater benefits; see the section on LDPC performance for a survey.

2.2 Block codes

2.2.1 Definition of block codes

There are two main ways to define a linear block code: either through a generator matrix G or a parity check matrix H. The relation x = G^T m (mod 2) holds for a code defined by a generator matrix.
Thus the rows of G (the columns of G^T) form a basis for the code, and the message m gives the coordinates of the code word x. In this work, however, we will define codes through parity check matrices. Then the set of code words is given by the relation Hx = 0 (mod 2). The rows of H thus define a set of checks on the code word x. The relation implies that the bits involved in each check must contain an even number of ones for the word to be in the code. This definition of a code does not include a mapping between code words and

messages, but often a code is constructed such that the message bits are mapped to certain locations in the code word. These bits are then called message bits, and the other bits are called parity bits. For example, the relations defining the (7,4,3) Hamming code in section 2.1.4 can be put in a matrix to form a parity check matrix H for the code (with bit order m_0 m_1 m_2 m_3 p_4 p_5 p_6):

    H = [ 1 0 1 1 1 0 0 ]
        [ 1 1 0 1 0 1 0 ]
        [ 1 1 1 0 0 0 1 ]

If x = (m_0 m_1 m_2 m_3 p_4 p_5 p_6)^T, where m_0...m_3 are the message bits and p_4...p_6 are the parity bits, the previously given equations result from the relation Hx = 0 (mod 2).

2.2.2 Systematic form

When put on the form H = [P I], where I is the identity matrix, H is said to be on systematic form. On this form the parity check matrix is particularly easy to convert to a generator matrix. By recognizing which parity bits are changed by changing one message bit and keeping the other message bits constant, we can determine the rows of the generator matrix. In the above case, changing bit m_1 requires changing bits p_5 and p_6 for the parity check equations to remain valid. This leads to a generator matrix G = [I P^T]. For example, the parity check matrix for the (7,4,3) Hamming code can be converted to the following generator matrix on systematic form:

    G = [ 1 0 0 0 1 1 1 ]
        [ 0 1 0 0 0 1 1 ]
        [ 0 0 1 0 1 0 1 ]
        [ 0 0 0 1 1 1 0 ]

Observe that the leading identity matrix ensures that the message bits are copied into the first locations of the code word.

2.2.3 Code rate

Usually the number of message bits in a code word is denoted K, and the number of parity bits is denoted M. Thus the relation K + M = N holds, where N is the total number of bits in a code word. Assuming that the rows of the parity check matrix are linearly independent (as is always the case with systematic parity check matrices), each row defines a parity bit. Therefore there are M rows and N columns in the parity check matrix. Similarly, the generator matrix has dimensions K x N. The rate of a code, denoted R, is the ratio between the number of message bits and the total number of bits: R = K/N.
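The Hamming matrices can be checked mechanically. The sketch below (with illustrative function names) encodes via x = G^T m (mod 2) and verifies that Hx = 0 (mod 2) for every code word:

```python
# (7,4,3) Hamming parity check and generator matrices, bit order
# (m0 m1 m2 m3 p4 p5 p6), as derived from the parity equations.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 0, 1]]
G = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 1, 0]]

def encode(m):
    """x = G^T m (mod 2): sum the rows of G selected by the message bits."""
    return [sum(G[k][n] * m[k] for k in range(4)) % 2 for n in range(7)]

def syndrome(x):
    """Hx (mod 2); the all-zero vector for every code word."""
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

x = encode([1, 1, 0, 1])
assert syndrome(x) == [0, 0, 0]   # every code word satisfies Hx = 0
```

Since there are only 2^4 = 16 messages, all code words can be enumerated and checked exhaustively.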
Later we will also use the concept of design rate for systematically constructed (i.e. non-random) codes. The design rate denotes the rate that the code was designed to have according to the parity check matrix. However, systematically constructed parity check matrices often have a small number of dependencies, so the actual code rate is a bit higher.

2.2.4 Encoding

Encoding is done by multiplication of the generator matrix with the message m. The operation yields the code word x:

    x = G^T m (mod 2)

Alternatively, the parity bits can be computed as sums of message bits, using the parity check matrix on systematic form. This technique was demonstrated in section 2.1.4. An arbitrary parity check matrix can be put in systematic form using Gaussian elimination. This may require that the code word bits be reordered, though, so the message bits may become scattered. The procedure may be formalized in the following way, given without proof:

Theorem 2.2 Assume that H is a full-rank parity check matrix with dimensions M x N, i.e. it consists of M independent rows. Let x denote a code word. Then there exists a column permutation p such that H' = p(H) = (B A), where A is square and non-singular. Applying this permutation to the code word x yields the equation

    Hx = 0 (mod 2)  <=>  H'x' = 0 (mod 2),  where x' = p(x)

If we define the last M bits of x' to be parity bits p and the rest to be message bits m, we can split the equation to obtain the relation

    H'x' = (B A) (m; p)^T = Bm + Ap = 0 (mod 2)
    <=>  Ap = Bm (mod 2)
    <=>  p = A^{-1} B m (mod 2)

with which the parity bits can be efficiently computed. This encoding technique has been employed in the experiments in this work. However, when the block length grows it becomes impractical to store the random and dense generator matrix for direct encoding (several hundred megabytes would be required for the codes proposed for the next generation of satellite TV), so other methods must be sought. There are many algorithms for efficient encoding of various codes in hardware, but we will not discuss the topic further. The reader is instead referred to [3] for the general encoding problem, or to [6] for LDPC-specific encoding algorithms.
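The encoding step of theorem 2.2 can be sketched as follows. The names `gf2_solve` and `encode_from_H` are illustrative, and the example assumes H already has its invertible part A in the last M columns, so no column permutation is needed:

```python
def gf2_solve(A, b):
    """Solve A p = b over GF(2) by Gaussian elimination (A square, invertible)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]             # bring a pivot into place
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ c for a, c in zip(M[r], M[col])]  # row-reduce (XOR)
    return [M[r][n] for r in range(n)]

def encode_from_H(H, m):
    """Split H = (B A) with A the last M columns; p = A^{-1} B m (mod 2)."""
    k = len(H[0]) - len(H)
    B = [row[:k] for row in H]
    A = [row[k:] for row in H]
    Bm = [sum(hb * mb for hb, mb in zip(row, m)) % 2 for row in B]
    return m + gf2_solve(A, Bm)

# the (7,4,3) Hamming matrix is already on the form (B A), with A = I
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 0, 1]]
x = encode_from_H(H, [1, 0, 1, 1])
```

For a systematic H the solve is trivial (A = I, so p = Bm), but the same routine handles any invertible A produced by the permutation of theorem 2.2.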

2.3 Low-density parity-check codes

2.3.1 Definition of LDPC codes

LDPC codes are defined from parity check matrices. Originally, Gallager defined an LDPC matrix as a randomly created matrix with small constant column weights and row weights [1]. An (N, j, k) LDPC code denotes an LDPC code with code word length N, column weight j, and row weight k. These codes were later renamed regular LDPC codes, as opposed to irregular LDPC codes, which are still low-density but lack the property of constant column and/or row weights.

It was shown already by Gallager that regular LDPC codes are not able to approach channel capacity with vanishing error probability on binary symmetric channels. MacKay, however, later showed [2] that irregular LDPC codes may achieve any desired error probability arbitrarily close to the channel capacity for any channel with symmetric stationary ergodic noise (including the BSC and the AWGN channel defined in section 2.1.2).

In practice, though, these results say very little about the usability of a code. What is more important is the existence of a practical decoding algorithm, and herein lies the importance of LDPC codes. The structure of the codes naturally suggests a simple iterative decoding algorithm that is inherently scalable and lends itself well to hardware implementation. This topic is explored in chapter 3, but before we get there we will look a bit more into the theoretical properties of LDPC codes.

2.3.2 Tanner graphs

We will define a simple example code to simplify the description of the Tanner graph and girth concepts. Let H be the parity check matrix

    H = (2.1)

The code has design rate 1/3, but it actually has one redundant check, so an arbitrary check can be removed for a slightly higher rate code. (This always happens when the column weight is even, as the sum of all rows is then the all-zero vector.) The Tanner graph is needed later in chapter 3, when we describe the decoding algorithm.
The graph is a visualization of the code's structure, and the decoding algorithm operates directly on it. As the name implies, the graph representation was investigated by Tanner, but he used it primarily for designing codes using a kind of divide-and-conquer technique [7].

The nodes of the graph are the variables and the checks of the code; thus there is a node for every column and row in the parity check matrix. There is an edge between two nodes if and only if there is a one in the intersection between the corresponding column and row in the matrix. There are no intersections between two columns or two rows, so the graph is bipartite between the variable nodes and the check nodes. Figure 2.4 shows the Tanner graph corresponding to H.

2.3.3 Girth

The girth of a graph is the length of the shortest cycle in it. In this work we will also talk about girths of matrices, meaning the lengths of the shortest cycles in their associated Tanner graphs. As the Tanner graphs are bipartite, every cycle has even length, and thus the girth is also even. Note that we cannot formally talk about the girth of a code, as there are several parity check matrices that define the same code, and these need not have the same girth. Equation 2.2 and figure 2.5 show the relationship between parity check matrices and Tanner graphs, using the example code. The length of the cycle shown is 8, and this is also the girth of the matrix, as there are no shorter cycles.

    H = (2.2)

The existence of cycles in the parity check matrix impacts negatively on the LDPC decoder's performance. Thus it is desirable to obtain matrices with high girth values. 8 is in fact a quite high girth value. LDPC codes are often randomly constructed, and these matrices almost always have girth 4, which is the lowest possible. Elements in the matrix can be moved around to eliminate the short cycles, but it is difficult to achieve girths higher than 6 this way. Many try to find good algorithms for systematic construction of LDPC codes. The matrices can then be constructed with girths of 8 and higher, while they get the structure necessary for practical implementations of hardware encoders and decoders.
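The Tanner graph and girth definitions translate directly into code. The sketch below (an illustrative implementation, not the thesis framework) builds the bipartite adjacency list from H and finds the girth by breadth-first search from every node:

```python
from collections import deque

def tanner_adjacency(H):
    """Adjacency list of the Tanner graph: variable nodes are 0..N-1,
    check nodes are N..N+M-1, with one edge per one in the matrix."""
    M, N = len(H), len(H[0])
    adj = [[] for _ in range(N + M)]
    for m, row in enumerate(H):
        for n, entry in enumerate(row):
            if entry:
                adj[n].append(N + m)
                adj[N + m].append(n)
    return adj

def girth(adj):
    """Length of the shortest cycle, found by BFS from every node
    (returns infinity for a cycle-free graph)."""
    best = float("inf")
    for s in range(len(adj)):
        dist, parent = {s: 0}, {s: -1}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif v != parent[u]:
                    # a non-tree edge closes a cycle through the BFS root
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# two variables checked by the same two checks form a length-4 cycle
H4 = [[1, 1],
      [1, 1]]
assert girth(tanner_adjacency(H4)) == 4
```

As the text notes, girth 4 is the smallest possible in a bipartite graph, and it arises exactly when two columns share ones in two common rows, as in `H4`.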
It is difficult, however, to find suitable algorithms that fulfil these demands and still make the matrix random enough for the code to perform well. For LDPC decoding purposes it would be optimal to use codes with cycle-free parity check matrices, as in that case the decoding algorithm is optimal. Unfortunately such codes are useless: it was shown by Etzion et al. [8] that cycle-free codes have minimum distance at most two when the code rate R ≥ 1/2. Such codes are not able to correct even a single error, and therefore we are forced to use codes with cycles. Very little research has been done on the influence of cycles on the decoder's performance, but it is believed that increasing the girth is a good way to

increase both the code's and the decoder's performance. Other results [9] suggest, though, that high girth might not be that important.

Figure 2.4. The Tanner graph corresponding to the code given by the parity check matrix in equation 2.1. The nodes are numbered as their corresponding columns and rows in the matrix.

Figure 2.5. A cycle of length 8 in the Tanner graph of the parity check matrix H given in equation 2.2. The elements corresponding to the cycle's edges are also shown in the matrix. There are no cycles of length less than 8; therefore the girth of H is 8.

We will now continue with an introduction of one type of systematically constructed codes that we will use later for experimental performance measurements of the LDPC decoders.

Integer lattice codes

Integer lattice codes (proposed by Vasic et al. [10]) belong to a category of codes called quasi-cyclic (or quasi-circulant) codes, which is an extension of the cyclic codes. The cyclic property implies that every cyclic shift of a code word is also a code word. It enables a field-algebraic description of codes, and ensures that encoding can be done efficiently using shift registers [3]. The Hamming codes are all cyclic, as are the BCH and Reed-Solomon codes with which the reader may be familiar. A quasi-cyclic code can be divided into q equally-sized blocks, where a simultaneous cyclic shift of all blocks preserves code words. Quasi-cyclic codes can also be efficiently encoded using shift registers, although that is not necessarily the best encoding method for LDPC codes. A quasi-cyclic code with q = 1 is, of course, cyclic.

Example 1 We consider as an example a simple code of length 9 with three parity checks, given by a quasi-cyclic parity check matrix H. Assume that x = (x_0 x_1 x_2 x_3 x_4 x_5 x_6 x_7 x_8)^T is a code word. Then x' = (x_2 x_0 x_1 x_5 x_3 x_4 x_8 x_6 x_7)^T is the vector where each block of q = 3 bits has been circularly shifted. Check 1 is satisfied for x' if and only if check 2 is satisfied for x, and similarly for checks 2 and 3, so x' is a code word whenever x is.

The parity check matrices of lattice codes are constructed by concatenating cyclically shifted identity matrices. However, many other algorithms yield the same structure, so it is not characteristic of lattice codes. As with most systematically constructed LDPC codes, the girth is ensured to be at least 6, and girth 8 is easily achieved with carefully chosen design parameters. To define the lattice codes we first need some geometrical constructions.

Definition 2.2 A lattice is a rectangular grid of points, with height q and width j. Here j can be any positive integer, whereas q is required to be prime. The lattice is periodic in the vertical dimension.

Definition 2.3 A line with slope s starting at point (0, a) is the set of j points {(x, y) : y = (a + sx) mod q, x ∈ {0, ..., j − 1}}.
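Definition 2.3 translates directly into code; a small sketch in Python, using the notation above:

```python
def line_points(s, a, q, j):
    """The j points of the line with slope s starting at (0, a), as in
    definition 2.3: y = (a + s*x) mod q for x = 0, ..., j-1."""
    return [(x, (a + s * x) % q) for x in range(j)]
```

With q = 5 and j = 3 as in the lattice of example 2, `line_points(2, 2, 5, 3)` gives the points (0, 2), (1, 4) and (2, 1) of the slope-2 line starting at (0, 2), the same line that reappears in example 3.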

Figure 2.6. An integer lattice used for construction of LDPC codes. Some of the lines with slope 0 and 2 are shown.

Thus a line includes a point from every column of the lattice, and vertical lines are not allowed. For every slope, we require that either none or all of the lines with that slope exist.

Example 2 We let q = 5 and j = 3, and define the lattice with 15 points shown in figure 2.6. Some of the lines with slope 0 and 2 are shown.

One intuitive way of constructing a code from this geometry is to let each point correspond to a code word bit, and let the lines determine which bits are included in each check. However, we will do it the other way around, and let the lines correspond to bits and the points correspond to checks. We let S = {s_0, ..., s_{k-1}} denote the set of slopes we decide to use, and let k = |S| be the number of slopes. There will then be j points intersecting each line, and k lines intersecting each point, so j and k are the column and row weights of the matrix, respectively. To construct the parity check matrix we need to define labelings for the points and lines. For example, we can use the labeling l_p(x, y) for the point at coordinate (x, y), and l_l(s_i, a) for the line with slope s_i starting at point (0, a), given by:

l_p(x, y) = qx + y + 1
l_l(s_i, a) = qi + a + 1

The labeling of the points is illustrated in figure 2.6.

Definition 2.4 Let q and j be parameters of a lattice, and let S = {s_0, ..., s_{k-1}} be a set of slopes. Then a lattice code is defined by the parity check matrix H

with dimensions qj × qk, where the element at (l_p(x, y), l_l(s_i, a)) is 1 if and only if the line with slope s_i and starting point (0, a) intersects the point at coordinate (x, y).

Example 3 We extend the slope set of example 2 to obtain the set S = {0, 1, 2, 3}. Then q = 5, j = 3 and k = 4, and we obtain a code with design rate 1 − j/k = 1/4. The code is given by the parity check matrix H, where each column corresponds to a line over the lattice. The first five columns correspond to the lines with slope 0, the next five to those with slope 1, and so on. Similarly, the first five rows of the matrix correspond to the first column of points in the lattice, the next five rows to the second column, and so on. For example, column 13 corresponds to the line through the points (0, 2), (1, 4) and (2, 1), with labels 3, 10 and 12.

Theorem 2.3 A lattice with j = 3 and the slope set S = {s_0, ..., s_{k-1}}, where S does not contain any three-term arithmetic progression and every s_i < q/2, defines an LDPC code with girth 8.

By a three-term arithmetic progression we mean three integers a, b and c such that b − a = c − b. The integers need not be consecutive in S. For the proof of theorem 2.3, see [10].

LDPC performance

Randomly constructed LDPC codes are often used as a reference when evaluating the performance of systematically constructed LDPC codes. Random LDPC codes are most likely

good, although they are not very suitable for practical usage. Systematically constructed codes that surpass random codes exist, but are relatively rare. This section compares lattice codes to random LDPC codes, as well as to conventional coding schemes. We also provide an example with images to give the reader an intuitive sense of the benefits of coding.

Figure 2.7. Performance comparisons of some short-length codes: BPSK, Hamming (255,247,3), Reed-Solomon (31,25,7), lattice (203,116) and random LDPC (203,116), together with the Shannon limit for R = 4/7; bit error probability versus E_b/N_0 (dB). The Reed-Solomon and Hamming codes' performance curves were calculated, while the LDPC curves were determined by simulation. As can be seen, the random LDPC code outperforms the Reed-Solomon code by about 3 dB.

Figure 2.7 compares short LDPC codes to simple coding schemes. The Hamming code can correct one error, and as the block length increases the probability of more than one error increases; therefore we need codes able to correct multiple errors. The Reed-Solomon codes are one widely-used type, and the figure shows one with minimum distance 7. Its block length is 31, but the symbols consist of 5 bits each and are not binary, so the length in bits is comparable to that of the LDPC codes. As can be seen, the random LDPC code provides about a 3 dB increase in performance at a bit error rate of 10^−6. The comparison is not altogether fair, though, as Reed-Solomon codes have an inherent capability to correct burst errors, which are often present in reality but not modeled by the AWGN channel. Also shown is the capacity (Shannon limit) for rate-4/7 codes

over the AWGN channel.

Figure 2.8. Performance comparisons of some long codes: BPSK, lattice (32008,16004) and irregular LDPC codes of lengths N = 10^6 and N = 10^7, together with the Shannon limit for R = 1/2; bit error probability versus E_b/N_0 (dB). With increased lengths, LDPC codes can achieve performance very close to the channel capacity. There is also a class of codes known as turbo codes that achieves comparable performance.

Figure 2.8 shows the performance of two irregular LDPC codes [11] created with a technique called density evolution. A code with a length of ten million bits performs only a small fraction of a decibel from the Shannon limit. This is about as good as coding can ever get, if only we can find practical and fast decoding algorithms. A code of length one million bits performs just below one dB worse. While a block length of one or ten million bits may seem too long to be practical, many real transmission channels use rates of 10 Mbit/s or beyond. One example is digital TV at 15 Mbit/s, where a latency of one second would be acceptable. Inter-computer connections are another example, where speeds grow ever faster; above 1 Gbit/s, codes of comparable length might well be considered. The main reason that LDPC codes are better than conventional codes is the existence of a practical soft-decision decoder. It is also the case that low-rate codes perform better than high-rate codes (in terms of E_b/N_0), and while the LDPC decoding algorithm does not suffer significantly from reduced rate, the Reed-Solomon decoding algorithm becomes increasingly complex with increased

amounts of parity bits. (It is still the case, though, that a decreased code rate causes an increased transmission rate, which increases the demands on transmission filters and synchronization circuits.)

Figure 2.9. Original and encoded image of a decoding example: (a) original image, (b) encoded image.

Figures 2.9 and 2.10 show an example of the decoding process. To the left in figure 2.9 is the image we wish to transmit. The image is first encoded with a rate-1/2 code into the image to the right. After transmission, the top-left image in figure 2.10 is received. Using a soft-decision sum-product decoder, the received signal is then decoded in 14 iterations into the error-free image shown in figure 2.10. The reader should note that the inherent structure of the image is not used by the decoder. In practice, the image would first be compressed, and the decoder would achieve equally good performance with the compressed data.

Figure 2.10. Decoding process of a received image: (a) received image, (b)–(f) iterations 1–5, (g) iteration 13, (h) iteration 14. The intensity of the pixels in each iteration corresponds to the current likelihood of the bit.


Chapter 3

Decoding

The decoding problem is to find the most likely transmitted message, given knowledge about the channel, the code used, and the a priori probabilities of the transmitted messages. Most often the a priori probabilities are assumed to be uniformly distributed over the message set, i.e. the messages are assumed to be equally likely, and this is indeed often a valid assumption. Usually the AWGN channel is assumed, but when the signal is airborne some fading channel might be more appropriate. It is often difficult to adapt the decoder to such conditions, but one way is to use an interleaver which spreads the bits over several frames. This makes the error locations more random, so that the AWGN channel can be assumed. Formally, the decoding problem over the BSC with equally likely messages can be stated as follows:

Theorem 3.1 Assume that a parity check matrix H of size M × N and a received vector r of length N are given. Find the minimum-weight vector n such that Hn = Hr. Then x = r + n is the most likely sent code word.

Proof. We know that r = x + n, where x is the transmitted code word. Thus Hr = H(x + n) = Hx + Hn = Hn. The result is then easily deduced.

It has been known [12] for over 25 years that this problem is NP-hard. It is also believed that maximum-likelihood decoding of any capacity-approaching code is NP-hard, although this has not been shown as far as the author knows.

3.1 Iterative decoding

The sum-product (SP) decoder (or min-sum decoder, or belief propagation decoder) is a type of iterative decoder, the latest hype in the coding theory world. The algorithm works by passing messages representing bit and check probabilities over the Tanner graph of the code. In each iteration, more of the received data is used for calculating the likelihood of each sent bit, until the set of bits forms a valid code word or a maximum number of iterations is reached.
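Theorem 3.1 can be made concrete with a brute-force decoder that tries error patterns in order of increasing weight; a purely illustrative sketch, exponential in the block length, which is exactly why practical decoders are iterative:

```python
from itertools import combinations

def ml_decode_bsc(H, r):
    """Brute-force illustration of theorem 3.1 (not a practical decoder):
    find a minimum-weight error vector n with H*n = H*r over GF(2) and
    return x = r + n, the most likely sent code word on the BSC."""
    N = len(H[0])

    def syndrome(v):
        return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

    target = syndrome(r)
    for w in range(N + 1):                     # error weights 0, 1, 2, ...
        for positions in combinations(range(N), w):
            n = [1 if i in positions else 0 for i in range(N)]
            if syndrome(n) == target:          # Hn = Hr, minimum weight
                return [(ri + ni) % 2 for ri, ni in zip(r, n)]
```

For the length-3 repetition code with checks x_0 + x_1 = 0 and x_1 + x_2 = 0, a single flipped bit is corrected back to the nearest code word, matching the theorem's statement.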

The sum-product algorithm is a general algorithm that may be used for a wide range of calculations, including turbo code decoding, Kalman filtering and certain fast Fourier transforms, as well as LDPC decoding [13]. The main strength of the SP decoder is its simplicity and inherent scalability. Every node in the graph can be considered a separate simple processing entity, receiving and sending messages along its edges. Thus, the calculations can be made in parallel by one element for every node, in serial by a single processor or DSP doing all the calculations, or by any combination. This ability to parallelise the computations makes it possible to reach very high throughput rates of 1 Gb/s or more. The weaknesses, on the other hand, are the very high memory requirements for storing interim messages, and the high wire routing complexity caused by the random nature of the graph. The large amounts of memory, in turn, cause large power dissipation, and the routing complexity makes it very difficult to build fully parallel implementations of codes longer than about 1000 bits. In short, there are many implementation difficulties regarding LDPC codes. Still, there are large amounts of structure that should be possible to exploit, and in chapter 4 we analyze some ideas for reducing the decoding complexity.

Compared to LDPC codes, turbo codes have almost orthogonal features. Their definition is very different, and their iterative decoding algorithm is difficult to implement in parallel; on the other hand, it does not have the memory requirements of the LDPC decoder.

In the following sections, we will first consider the decoder on cycle-free graphs, where the algorithm is easier to understand. We will then describe how the algorithm can be used on graphs with cycles.

3.2 Decoding on cycle-free graphs

The decoding algorithm works by passing messages between the nodes of the Tanner graph of the code.
As described in section 2.3.2, the graph consists of variable nodes and check nodes. Each variable of the code word corresponds to a variable node, and each parity check corresponds to a check node. We will use the following notation when we describe the algorithm:

- x is the sent code word, and x_i denotes bit i of the sent code word.
- r is the received vector of correlation values from the demodulator.
- x̂ is the decoder's guess of the sent code word.
- p_n^a is the prior likelihood of bit x_n being a. Prior here means before decoding, i.e. p_n^a is determined solely from the value of r_n. For example, p_7^0 is the prior likelihood that bit 7 of the sent code word was a 0. Of course, for binary-input channels, a takes the value 0 or 1, and p_n^0 + p_n^1 = 1.
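The priors are not computed in the text above, but for the common case of BPSK over the AWGN channel they follow from the correlation values in a standard way; a sketch, where the mapping 0 → +1, 1 → −1 and the noise variance sigma2 are assumptions rather than something the thesis specifies here:

```python
import math

def priors(r, sigma2):
    """Sketch of computing (p_n^0, p_n^1) from correlation values r,
    assuming BPSK mapping 0 -> +1, 1 -> -1 over an AWGN channel with
    noise variance sigma2. For each r_n, p0 = P(x_n = 0 | r_n),
    and p0 + p1 = 1 as required for a binary-input channel."""
    out = []
    for rn in r:
        p0 = 1.0 / (1.0 + math.exp(-2.0 * rn / sigma2))
        out.append((p0, 1.0 - p0))
    return out
```

A strongly positive correlation value gives p_n^0 close to one, a value near zero gives p_n^0 close to 1/2, matching the intuition that the prior is determined solely from r_n.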


Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Agenda NAND ECC History Soft Information What is soft information How do we obtain

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Channel Coding and Interleaving

Channel Coding and Interleaving Lecture 6 Channel Coding and Interleaving 1 LORA: Future by Lund www.futurebylund.se The network will be free for those who want to try their products, services and solutions in a precommercial stage.

More information

Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras

Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras Message-Passing Decoding for Low-Density Parity-Check Codes Harish Jethanandani and R. Aravind, IIT Madras e-mail: hari_jethanandani@yahoo.com Abstract Low-density parity-check (LDPC) codes are discussed

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Osamu Watanabe, Takeshi Sawai, and Hayato Takahashi Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology

More information

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei COMPSCI 650 Applied Information Theory Apr 5, 2016 Lecture 18 Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei 1 Correcting Errors in Linear Codes Suppose someone is to send

More information

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Communications II Lecture 9: Error Correction Coding Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Outline Introduction Linear block codes Decoding Hamming

More information

LDPC codes based on Steiner quadruple systems and permutation matrices

LDPC codes based on Steiner quadruple systems and permutation matrices Fourteenth International Workshop on Algebraic and Combinatorial Coding Theory September 7 13, 2014, Svetlogorsk (Kaliningrad region), Russia pp. 175 180 LDPC codes based on Steiner quadruple systems and

More information

State-of-the-Art Channel Coding

State-of-the-Art Channel Coding Institut für State-of-the-Art Channel Coding Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

Constructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011

Constructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011 Constructing Polar Codes Using Iterative Bit-Channel Upgrading by Arash Ghayoori B.Sc., Isfahan University of Technology, 011 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree

More information

On the minimum distance of LDPC codes based on repetition codes and permutation matrices

On the minimum distance of LDPC codes based on repetition codes and permutation matrices On the minimum distance of LDPC codes based on repetition codes and permutation matrices Fedor Ivanov Email: fii@iitp.ru Institute for Information Transmission Problems, Russian Academy of Science XV International

More information

Channel Coding: Zero-error case

Channel Coding: Zero-error case Channel Coding: Zero-error case Information & Communication Sander Bet & Ismani Nieuweboer February 05 Preface We would like to thank Christian Schaffner for guiding us in the right direction with our

More information

Low-density parity-check (LDPC) codes

Low-density parity-check (LDPC) codes Low-density parity-check (LDPC) codes Performance similar to turbo codes Do not require long interleaver to achieve good performance Better block error performance Error floor occurs at lower BER Decoding

More information

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes

Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Lower Bounds on the Graphical Complexity of Finite-Length LDPC Codes Igal Sason Department of Electrical Engineering Technion - Israel Institute of Technology Haifa 32000, Israel 2009 IEEE International

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

System analysis of a diesel engine with VGT and EGR

System analysis of a diesel engine with VGT and EGR System analysis of a diesel engine with VGT and EGR Master s thesis performed in Vehicular Systems by Thomas Johansson Reg nr: LiTH-ISY-EX- -5/3714- -SE 9th October 25 System analysis of a diesel engine

More information

ECE Information theory Final (Fall 2008)

ECE Information theory Final (Fall 2008) ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1

More information

Belief propagation decoding of quantum channels by passing quantum messages

Belief propagation decoding of quantum channels by passing quantum messages Belief propagation decoding of quantum channels by passing quantum messages arxiv:67.4833 QIP 27 Joseph M. Renes lempelziv@flickr To do research in quantum information theory, pick a favorite text on classical

More information

5. Density evolution. Density evolution 5-1

5. Density evolution. Density evolution 5-1 5. Density evolution Density evolution 5-1 Probabilistic analysis of message passing algorithms variable nodes factor nodes x1 a x i x2 a(x i ; x j ; x k ) x3 b x4 consider factor graph model G = (V ;

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

An algorithm to improve the error rate performance of Accumulate-Repeat-Accumulate codes Tae-Ui Kim

An algorithm to improve the error rate performance of Accumulate-Repeat-Accumulate codes Tae-Ui Kim An algorithm to improve the error rate performance of Accumulate-Repeat-Accumulate codes Tae-Ui Kim The Graduate School Yonsei University Department of Electrical and Electronic Engineering An algorithm

More information

Mapper & De-Mapper System Document

Mapper & De-Mapper System Document Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation

More information

The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the Binary LDPC Codes With Better Cycle-Structures

The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the Binary LDPC Codes With Better Cycle-Structures HE et al.: THE MM-PEGA/MM-QC-PEGA DESIGN THE LDPC CODES WITH BETTER CYCLE-STRUCTURES 1 arxiv:1605.05123v1 [cs.it] 17 May 2016 The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the

More information

Error Correction and Trellis Coding

Error Correction and Trellis Coding Advanced Signal Processing Winter Term 2001/2002 Digital Subscriber Lines (xdsl): Broadband Communication over Twisted Wire Pairs Error Correction and Trellis Coding Thomas Brandtner brandt@sbox.tugraz.at

More information

ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes

ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes Kevin Buckley - 2010 109 ECE8771 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set 2 Block Codes m GF(2 ) adder m GF(2 ) multiplier

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes

Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes Institute of Electronic Systems Signal and Information Processing in Communications Nana Traore Shashi Kant Tobias

More information

Revision of Lecture 5

Revision of Lecture 5 Revision of Lecture 5 Information transferring across channels Channel characteristics and binary symmetric channel Average mutual information Average mutual information tells us what happens to information

More information

Coding theory: Applications

Coding theory: Applications INF 244 a) Textbook: Lin and Costello b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1,9-12, and 14-18 c) Weekly exercises: For your convenience d) Mandatory problem: Programming project (counts

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Convolutional Coding LECTURE Overview

Convolutional Coding LECTURE Overview MIT 6.02 DRAFT Lecture Notes Spring 2010 (Last update: March 6, 2010) Comments, questions or bug reports? Please contact 6.02-staff@mit.edu LECTURE 8 Convolutional Coding This lecture introduces a powerful

More information

Polar Codes: Graph Representation and Duality

Polar Codes: Graph Representation and Duality Polar Codes: Graph Representation and Duality arxiv:1312.0372v1 [cs.it] 2 Dec 2013 M. Fossorier ETIS ENSEA/UCP/CNRS UMR-8051 6, avenue du Ponceau, 95014, Cergy Pontoise, France Email: mfossorier@ieee.org

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

Introduction to Convolutional Codes, Part 1

Introduction to Convolutional Codes, Part 1 Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes

More information

A relaxation of the strangeness index

A relaxation of the strangeness index echnical report from Automatic Control at Linköpings universitet A relaxation of the strangeness index Henrik idefelt, orkel Glad Division of Automatic Control E-mail: tidefelt@isy.liu.se, torkel@isy.liu.se

More information

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013 Coding Theory and Applications Linear Codes Enes Pasalic University of Primorska Koper, 2013 2 Contents 1 Preface 5 2 Shannon theory and coding 7 3 Coding theory 31 4 Decoding of linear codes and MacWilliams

More information

ECC for NAND Flash. Osso Vahabzadeh. TexasLDPC Inc. Flash Memory Summit 2017 Santa Clara, CA 1

ECC for NAND Flash. Osso Vahabzadeh. TexasLDPC Inc. Flash Memory Summit 2017 Santa Clara, CA 1 ECC for NAND Flash Osso Vahabzadeh TexasLDPC Inc. 1 Overview Why Is Error Correction Needed in Flash Memories? Error Correction Codes Fundamentals Low-Density Parity-Check (LDPC) Codes LDPC Encoding and

More information

Appendix B Information theory from first principles

Appendix B Information theory from first principles Appendix B Information theory from first principles This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes

More information

An efficient implementation of gradient and Hessian calculations of the coefficients of the characteristic polynomial of I XY

An efficient implementation of gradient and Hessian calculations of the coefficients of the characteristic polynomial of I XY Technical report from Automatic Control at Linköpings universitet An efficient implementation of gradient and Hessian calculations of the coefficients of the characteristic polynomial of I XY Daniel Ankelhed

More information

Code design: Computer search

Code design: Computer search Code design: Computer search Low rate codes Represent the code by its generator matrix Find one representative for each equivalence class of codes Permutation equivalences? Do NOT try several generator

More information

1 1 0, g Exercise 1. Generator polynomials of a convolutional code, given in binary form, are g

1 1 0, g Exercise 1. Generator polynomials of a convolutional code, given in binary form, are g Exercise Generator polynomials of a convolutional code, given in binary form, are g 0, g 2 0 ja g 3. a) Sketch the encoding circuit. b) Sketch the state diagram. c) Find the transfer function TD. d) What

More information

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes.

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes. 5 Binary Codes You have already seen how check digits for bar codes (in Unit 3) and ISBN numbers (Unit 4) are used to detect errors. Here you will look at codes relevant for data transmission, for example,

More information

Analytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes

Analytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes Analytical Performance of One-Step Majority Logic Decoding of Regular LDPC Codes Rathnakumar Radhakrishnan, Sundararajan Sankaranarayanan, and Bane Vasić Department of Electrical and Computer Engineering

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

A Systematic Description of Source Significance Information

A Systematic Description of Source Significance Information A Systematic Description of Source Significance Information Norbert Goertz Institute for Digital Communications School of Engineering and Electronics The University of Edinburgh Mayfield Rd., Edinburgh

More information

An Introduction to (Network) Coding Theory

An Introduction to (Network) Coding Theory An to (Network) Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland April 24th, 2018 Outline 1 Reed-Solomon Codes 2 Network Gabidulin Codes 3 Summary and Outlook A little bit of history

More information