
Chapter 16 Turbo Coding

As noted in Chapter 1, Shannon's noisy channel coding theorem implies that arbitrarily low decoding error probabilities can be achieved at any transmission rate R less than the channel capacity C by using sufficiently long block (or constraint) lengths. In particular, Shannon showed that randomly chosen codes, along with maximum likelihood decoding, can provide capacity-achieving performance. He did this by proving that the average performance of a randomly chosen ensemble of codes results in an exponentially decreasing decoding error probability with increasing block (or constraint) length. However, he gave no guidance about how to actually construct good codes, i.e., codes that perform at least as well as the average, or how to implement maximum likelihood decoding for such codes. In the years following the publication of Shannon's paper in 1948, a large amount of research was conducted into the construction of specific codes with good error correcting capabilities and the development of efficient decoding algorithms for these codes. Much of this research has been described in the previous chapters of this book. Typically, the best code designs contain a large amount of structure, either algebraic, as is the case with most block codes, such as RM and BCH codes, or topological, as is the case with most convolutional codes, which can be represented using trellis or tree diagrams. The structure is a key component of the code design: it can be used to guarantee good minimum distance properties for the code, as in the BCH bound, and the particular decoding method, such as the Berlekamp-Massey algorithm for BCH codes or the Viterbi algorithm for convolutional codes, is based on this structure. In fact, one can say generally that the more structure a code contains, the easier

it is to decode. However, structure does not always result in the best distance properties for a code, and most highly structured code designs usually fall far short of achieving the performance promised by Shannon. Primarily because of the need to provide structure in order to develop easily implementable decoding algorithms, little attention was paid to the design of codes with random-like properties, as originally envisioned by Shannon. Random code designs, because they lacked structure, were thought to be too difficult to decode. In this chapter, we discuss a relatively new coding technique, dubbed turbo coding by its inventors, that succeeds in achieving a random-like code design with just enough structure to allow for an efficient iterative decoding method. Because of this feature, these codes have exceptionally good performance, particularly at moderate BERs and for large block lengths. In fact, for essentially any code rate and information block lengths greater than about 10^4 bits, turbo codes with iterative decoding can achieve BERs as low as 10^-5 at SNRs within 1 dB of the Shannon limit, i.e., the value of E_b/N_0 for which the code rate equals channel capacity. This typically exceeds the performance of previously known codes of comparable length and decoding complexity by several dB, although the fact that decoding is iterative results in large decoding delays. Another important feature of turbo codes is that they are composed of two or more simple constituent codes, arranged in a variation of the concatenation scheme introduced in Chapter 15, along with a pseudorandom interleaver. Since the interleaver is part of the code design, a complete maximum likelihood decoder for the entire code would be prohibitively complex. However, because more than one code is used, it is possible to employ simple soft-in soft-out (SISO) decoders for each constituent code in an iterative fashion, in which the soft output values of one decoder are passed to the other, and vice versa, until the final decoding estimate is obtained. Extensive experience with turbo decoders has shown that this simple, suboptimum iterative decoding approach almost always converges to the maximum likelihood solution. In summary, turbo coding consists of two fundamental ideas: a code design which produces a code with random-like properties, and a decoder design which makes use of soft output values and iterative

decoding. The basic features of turbo code design are developed in Sections 16.1 to 16.4, and the principles of iterative turbo decoding are presented in Section 16.5.

16.1 Introduction to Turbo Coding

A block diagram of the basic turbo encoding structure is illustrated in Figure 16.1(a), and a specific example is shown in Figure 16.1(b). The basic system consists of an input information sequence, two systematic feedback (recursive) convolutional encoders, and an interleaver. We will assume that the input sequence includes m termination bits to return the first encoder to the all-zero state S_0 = 0, where m is the memory order of the first encoder. The information sequence (including termination bits) is considered to be a block of length N and is represented by the vector

    u = [u_0, u_1, ..., u_{N-1}].    (16.1)

Since encoding is systematic, the information sequence u is the first transmitted sequence, i.e.,

    u = v^{(0)} = [v^{(0)}_0, v^{(0)}_1, ..., v^{(0)}_{N-1}].    (16.2a)

The first encoder generates the parity sequence

    v^{(1)} = [v^{(1)}_0, v^{(1)}_1, ..., v^{(1)}_{N-1}].    (16.2b)

The interleaver reorders or permutes the N bits in the information block so that the second encoder receives a permuted information sequence u' different from the first. (Note that the second encoder may not be terminated.) The parity sequence generated by the second encoder is represented as

    v^{(2)} = [v^{(2)}_0, v^{(2)}_1, ..., v^{(2)}_{N-1}],    (16.2c)

and the final transmitted sequence is given by the vector

    v = [v^{(0)}_0, v^{(1)}_0, v^{(2)}_0,  v^{(0)}_1, v^{(1)}_1, v^{(2)}_1,  ...,  v^{(0)}_{N-1}, v^{(1)}_{N-1}, v^{(2)}_{N-1}],    (16.3)

so that the overall code rate is R = (N - m)/3N ≈ 1/3 for large N.

[Figure 16.1: The Basic Turbo Encoding Structure. (a) The general structure: the information sequence u = [u_0, u_1, ..., u_{N-1}] is transmitted as v^{(0)} = [v^{(0)}_0, v^{(0)}_1, ..., v^{(0)}_{N-1}], encoded by Encoder 1 to produce v^{(1)}, and passed through the interleaver π to Encoder 2 to produce v^{(2)}. (b) A specific example.]
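To make the structure of Figure 16.1 concrete, the following Python sketch encodes a block with two copies of the example encoder of (16.4), defined below. It is a minimal illustration, not the book's implementation: the helper names are invented, a random permutation stands in for a designed interleaver, and termination of the first encoder is omitted for brevity.

    import random

    # Generator polynomials of the (2,1,4) systematic feedback encoder of (16.4),
    # G(D) = [1  (1+D^4)/(1+D+D^2+D^3+D^4)], as coefficient lists (low order first).
    FEEDFORWARD = [1, 0, 0, 0, 1]   # 1 + D^4
    FEEDBACK    = [1, 1, 1, 1, 1]   # 1 + D + D^2 + D^3 + D^4

    def rsc_parity(u, ff=FEEDFORWARD, fb=FEEDBACK):
        """Parity sequence of a recursive systematic convolutional encoder."""
        m = len(fb) - 1
        w = [0] * m                       # shift register of past feedback values
        parity = []
        for bit in u:
            a = bit
            for i in range(1, m + 1):     # feedback sum: a_t = u_t + sum fb_i a_{t-i}
                a ^= fb[i] & w[i - 1]
            v = ff[0] & a
            for i in range(1, m + 1):     # feedforward sum over the register
                v ^= ff[i] & w[i - 1]
            w = [a] + w[:-1]
            parity.append(v)
        return parity

    def turbo_encode(u, pi):
        """Rate ~1/3 parallel concatenation: (v0, v1, v2) as in (16.1)-(16.3)."""
        v0 = list(u)                                 # systematic sequence
        v1 = rsc_parity(u)                           # parity from Encoder 1
        u_perm = [u[pi[k]] for k in range(len(u))]   # interleaved input
        v2 = rsc_parity(u_perm)                      # parity from Encoder 2
        return v0, v1, v2

    N = 16
    pi = list(range(N)); random.shuffle(pi)          # a pseudorandom interleaver
    u = [random.randint(0, 1) for _ in range(N)]
    v0, v1, v2 = turbo_encode(u, pi)
    v = [b for triple in zip(v0, v1, v2) for b in triple]   # multiplexed as in (16.3)
    print(len(v), "transmitted bits for", N, "input bits")

The parity function realizes the feedback polynomial in controller canonical form; replacing FEEDFORWARD and FEEDBACK changes the constituent code.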

The information sequence u, along with the first parity sequence v^{(1)}, is referred to as the first constituent code, while the permuted information sequence u' (which is not transmitted), along with the second parity sequence v^{(2)}, is referred to as the second constituent code. In Figure 16.1(b), both constituent codes are generated by the same (2,1,4) systematic feedback encoder whose generator matrix is given by

    G(D) = [1   (1 + D^4)/(1 + D + D^2 + D^3 + D^4)].    (16.4)

We now make a number of remarks related to the typical operation of turbo codes. Explanations will be given in later sections of this chapter.

Remarks:

• In order to achieve performance close to the Shannon limit, the information block length (interleaver size) N is chosen to be very large, usually at least several thousand bits.

• The best performance at moderate BERs down to about 10^-5 is achieved with short memory order constituent encoders, typically m = 4 or less.

• The constituent codes are normally generated by the same encoder, as in Figure 16.1(b), but this is not necessary for good performance. In fact, some asymmetric code designs have been shown to give very good performance [1].

• Recursive constituent codes, generated by systematic feedback encoders, give much better performance than nonrecursive constituent codes.

• Bits can be punctured from the parity sequences in order to produce higher code rates. For example, puncturing alternate bits from v^{(1)} and v^{(2)} produces a rate 1/2 code.

• Additional constituent codes and interleavers can be used to produce lower rate codes. For example, rate 1/4 can be achieved with three constituent codes and two interleavers, as shown in Figure 16.2. This is called a multiple turbo code.

• The best interleavers reorder the bits in a pseudorandom manner. Conventional block interleavers do not perform well in turbo codes, except at relatively short block lengths.

• Since it is only the ordering of the bits that is changed by the interleaver, the sequence u' that enters the second encoder has the same weight as the sequence u that enters the first encoder.

• The interleaver is an integral part of the overall encoder, and thus the state complexity of turbo codes is extremely large, making trellis-based maximum likelihood decoding impossible.

• Suboptimum decoding, which employs individual SISO decoders for each of the constituent codes in an iterative manner, achieves performance typically within a few tenths of a dB of maximum likelihood decoding. The best performance is obtained when the BCJR, or MAP, algorithm is used.

• Since the MAP decoder uses a forward-backward algorithm, the information is arranged in blocks. Thus, the first constituent encoder is terminated by appending m bits to return it to the 0 state. Because the interleaver reorders the input sequence, the second encoder will not normally return to the 0 state, but this has little effect on performance for large block lengths. If desired, though, modifications can be made to ensure termination of both encoders.

• Block codes can also be used as constituent codes in turbo encoders.

• Decoding can be stopped, and a final decoding estimate declared, after some fixed number of iterations (usually on the order of 10-20) or based on a stopping criterion which is designed to detect when the estimate is reliable with very high probability.

[Figure 16.2: A Rate 1/4 Multiple Turbo Code. The information sequence u is transmitted as v^{(0)} and drives Encoder 1 directly (parity v^{(1)}), Encoder 2 through interleaver π_1 (parity v^{(2)}), and Encoder 3 through interleaver π_2 (parity v^{(3)}).]

The basic encoding scheme illustrated in Figures 16.1 and 16.2 is referred to as parallel concatenation, because of its similarity to Forney's original concatenation scheme.

Compared to conventional concatenation, in parallel concatenation it is the input to the encoder, rather than its output, that enters the second encoder, although it is first permuted by the interleaver. In other words, the two encoders operate in parallel on different versions of the information sequence. The example code shown in Figure 16.1(b), punctured to rate 1/2, is capable of achieving a 10^-5 BER at an SNR of E_b/N_0 = 0.7 dB with an information block length N = 2^16 = 65536 bits after 18 iterations of a SISO MAP decoder. By comparison, the NASA standard (2,1,6) convolutional code with ML decoding presented in Chapter 12 requires E_b/N_0 = 4.2 dB to achieve the same BER. The performance comparison of these two codes is shown in Figure 16.3. Thus the rate 1/2 turbo code achieves a 3.5 dB coding gain compared to the (2,1,6) convolutional code at a 10^-5 BER. The decoding complexity of the two codes is roughly equivalent, since a 16-state MAP decoder has about the same complexity as a 64-state Viterbi decoder. This advantage of turbo codes over conventional methods of coding is fairly typical over the entire range of possible code rates, i.e., several dB of coding gain can be achieved at moderate BERs with long turbo codes of the same rate and roughly the same decoding complexity as conventional codes. In addition, the performance of turbo codes at moderate BERs is within 1 dB of capacity. For the example shown in Figure 16.3, the BER performance is only 0.7 dB away from capacity and only 0.5 dB away from the capacity for binary input channels. Turbo codes suffer from two disadvantages: a large decoding delay, due to the large block lengths and many iterations of decoding required for near-capacity performance, and significantly weakened performance at BERs below 10^-5 (see Figure 16.3), due to the fact that the codes have a relatively poor minimum distance, which manifests itself at very low BERs. The large delay appears to make turbo codes unsuitable for real-time applications such as voice transmission and packet communication in high speed networks. It is possible, though, to trade delay for performance in such a way that turbo codes may be useful in some real-time applications involving block lengths on the order of a few thousand bits, or even a few hundred bits. The fact that turbo codes typically do not have large minimum distances causes the performance curve to flatten out at BERs below 10^-5, as shown in Figure 16.3. This phenomenon, sometimes called an error floor, is due to the unusual weight distribution of turbo

[Figure 16.3: Performance Comparison of Convolutional Codes and Turbo Codes. Bit error rate P_b(E) versus E_b/N_0 (dB) for the (2,1,6) NASA standard code and the rate 1/2 turbo code, with the capacity and BPSK capacity limits marked.]

codes, which will be discussed in the next section. Because of the error floor, turbo codes may not be suitable for applications requiring extremely low BERs, such as some scientific or command and control applications. However, there are measures that can be taken to mitigate this problem. Interleavers can be designed to improve the minimum distance of the code, thus lowering the error floor. Also, an outer code, or a second layer of concatenation, can be used with a turbo code to correct many of the errors caused by the small minimum distance, at a cost of a small decrease in overall rate. Both of these techniques will be discussed later in this chapter. The fundamental property of turbo codes that gives them such excellent performance at moderate BERs is the random-like weight spectrum of the codewords produced by the pseudorandom interleaver when systematic feedback encoders are used. To see this, we consider a series of examples.

Example 16.1 Consider the conventional (2,1,4) convolutional code with nonsystematic feedforward generator matrix

    G_ff(D) = [1 + D + D^2 + D^3 + D^4   1 + D^4].    (16.5)

The minimum free distance of this code is 6, obtained from the information sequence u = [1 1 0 ...]. If the encoder is converted to systematic feedback form, the generator matrix is then given by

    G_fb(D) = [1   (1 + D^4)/(1 + D + D^2 + D^3 + D^4)].    (16.6)

Since the code is exactly the same, the free distance is still 6, but in this case the minimum weight codeword is obtained from the information sequence u = [1 0 0 0 0 1 0 ...], i.e., u(D) = 1 + D^5. The two different encoders result in identical codes, but with different mappings between information blocks and codewords. Now consider that each encoder is terminated after an information block of length N - 4 by appending four bits to return the encoders to the 0 state. (Note that, for the feedback encoder, the termination bits depend on the information block and are in general nonzero.) In this case we obtain a (2N, N - 4) block code with rate R = (N - 4)/2N ≈ 1/2 for large N. Each of these block codes contains exactly N - 5 weight 6 codewords, because the information sequence that produces the weight 6 codeword can begin at any of the first N - 5 information positions and generate the same codeword.
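This count can be checked by brute force for N = 16. The sketch below is a hedged reconstruction using the encoder of (16.6): it enumerates all 2^12 information blocks of the terminated (32,12) code, appends the state-dependent termination bits, and tallies codeword weights. The weight 6 count should be N - 5 = 11, and the full histogram should match Table 16.1(a).

    from itertools import product
    from collections import Counter

    def encode_terminated(info):
        """Systematic feedback encoding of (16.6) with 4 termination bits appended."""
        m = 4
        w = [0] * m                                   # feedback shift register
        out = []
        for bit in info:                              # information phase
            a = bit ^ w[0] ^ w[1] ^ w[2] ^ w[3]       # feedback sum
            p = a ^ w[3]                              # parity tap: 1 + D^4
            out += [bit, p]
            w = [a] + w[:-1]
        for _ in range(m):                            # termination phase
            bit = w[0] ^ w[1] ^ w[2] ^ w[3]           # forces the feedback sum to 0
            p = 0 ^ w[3]
            out += [bit, p]
            w = [0] + w[:-1]
        return out

    N = 16                                            # input length incl. termination
    spectrum = Counter()
    for info in product([0, 1], repeat=N - 4):        # all 2^12 information blocks
        spectrum[sum(encode_terminated(info))] += 1
    print(sorted(spectrum.items()))                   # weight 6 appears N - 5 = 11 times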

A similar analysis reveals that for weight 7 and other low weights, the number of codewords is also large, on the order of N or larger. In Table 16.1(a), we give the complete weight spectrum of the (32,12) code that results from choosing N = 16. Observe that the number of codewords at each weight grows rapidly until it reaches a peak at weight 16, half the block length. In other words, the weight spectrum of the code is dense at the low end, and this results in a relatively high probability of error at low SNRs even with ML decoding.

In general, if an unterminated convolutional code has A_d codewords of weight d caused by a set of information sequences {u(D)} whose first one occurs at time 0, then it also has A_d codewords of weight d caused by the set of information sequences {Du(D)}, and so on. Terminated convolutional codes have essentially the same property for large block lengths. In other words, convolutional encoders are time-invariant, and it is this property that accounts for the relatively large numbers of low weight codewords. Next, we look at an example where a pseudorandom interleaver is used to produce a parallel concatenation of two identical systematic feedback constituent encoders.

Example 16.2 Consider the systematic feedback encoder of (16.6), a length N = 16 input sequence, and the size 16 interleaving pattern given by the permutation

    Π_16 = [0, 8, 15, 9, 4, 7, 11, 5, 1, 3, 14, 6, 13, 12, 10, 2].    (16.7)

The input sequence is first encoded by the parity generator (1 + D^4)/(1 + D + D^2 + D^3 + D^4), producing the parity sequence v^{(1)}(D). Then the interleaver takes the 12 information bits plus the 4 termination bits and reorders them such that

    u'_0 = u_0, u'_1 = u_8, u'_2 = u_15, ..., u'_15 = u_2.    (16.8)

This permuted input sequence is then re-encoded using the same parity generator (1 + D^4)/(1 + D + D^2 + D^3 + D^4), thus producing another version of the parity sequence. In order to compare with the code of Example 16.1, we now puncture alternate bits from the two versions of the parity sequence using the period T = 2 puncturing matrix

    P = [1 0]
        [0 1].    (16.9)

The result is a parallel concatenated code with the same dimensions as the code in Example 16.1, i.e., a (32,12) code. The weight spectrum for this parallel concatenated code is given in Table 16.1(b). We see that there is a noticeable difference between this weight spectrum and the one for the terminated convolutional code shown in Table 16.1(a), even though the code generators are the same. This altered weight spectrum is a direct result of the interleaver, which permutes the bits for re-encoding. Note that the free distance has decreased, from 6 to 5, but that there is only one weight 5 codeword. More importantly, the multiplicities of the weight 6 through 9 codewords are less for the parallel concatenated code than for the terminated convolutional code. In other words, there has been a shift from lower weight codewords to higher weight codewords relative to the convolutional code in the parallel concatenated case. This shifting of low weight codewords towards higher weights in the parallel concatenation of feedback convolutional encoders has been termed spectral thinning. The effect is due to the fact that interleaving causes almost all the low weight parity sequences in the first constituent code to be matched with high weight parity sequences in the second constituent code. For example, consider the weight 2 input sequence u = [1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0] that causes the low weight parity sequence v^{(1)} = [1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0]. Thus, without the interleaver, the convolutional code produces a codeword of weight 6. The interleaved input sequence is given by u' = [1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0] and produces the parity sequence v^{(2)} = [1 1 0 0 1 0 1 1 1 1 0 0 0 1 1 0]. Combining v^{(1)} and v^{(2)} and then puncturing alternate bits using the period T = 2 puncturing matrix given in (16.9) produces the parity sequence [1 1 0 0 1 0 0 1 0 1 0 0 0 1 0 0]. Thus the same weight 2 input sequence produces a codeword of weight 8 in the parallel concatenated code. This is typical of parallel concatenation: when feedback constituent encoders are used, most low weight codewords are shifted to higher weights.
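The weight 6 to weight 8 shift just described is easy to reproduce. A minimal sketch (function and variable names are illustrative):

    # The weight 2 input 1 + D^5 gives a weight 6 convolutional codeword,
    # but a weight 8 parallel concatenated codeword.
    def rsc_parity(u):
        """Parity of (16.6): feedforward 1+D^4, feedback 1+D+D^2+D^3+D^4."""
        w, out = [0, 0, 0, 0], []
        for bit in u:
            a = bit ^ w[0] ^ w[1] ^ w[2] ^ w[3]
            out.append(a ^ w[3])
            w = [a] + w[:3]
        return out

    PI16 = [0, 8, 15, 9, 4, 7, 11, 5, 1, 3, 14, 6, 13, 12, 10, 2]   # (16.7)

    u = [0] * 16
    u[0] = u[5] = 1                      # u(D) = 1 + D^5
    v1 = rsc_parity(u)
    u_perm = [u[PI16[k]] for k in range(16)]
    v2 = rsc_parity(u_perm)

    # Puncturing with P of (16.9): keep v1 at even times, v2 at odd times.
    punctured = [v1[k] if k % 2 == 0 else v2[k] for k in range(16)]

    print(sum(u) + sum(v1))         # 6: convolutional codeword weight
    print(sum(u) + sum(punctured))  # 8: parallel concatenated codeword weight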

In the next example, we see that this spectral thinning becomes more dramatic for larger block lengths.

Example 16.3 Consider the same systematic feedback convolutional encoder as in Examples 16.1 and 16.2, but with a block length of N = 32, including the m = 4 termination bits. The weight spectrum of the terminated (64,28) convolutional code is shown in Table 16.2(a). Now consider the same encoder in a parallel concatenation scheme, using the size 32 interleaving pattern given by

    Π_32 = [0, 16, 7, 17, 12, 28, 19, 2, 8, 11, 22, 9, 4, 20, 18, 26, 1, 3, 14, 6, 13, 31, 10, 29, 25, 24, 15, 30, 5, 23, 27, 21]    (16.10)

and the same period T = 2 puncturing matrix as in Example 16.2. This results in the (64,28) parallel concatenated code whose weight spectrum is given in Table 16.2(b). We note that there is now a more pronounced difference between this weight spectrum and the one for the terminated convolutional code given in Table 16.2(a) than in the N = 16 case. The free distance of both codes equals 6, i.e., there is no change in d_free, but the multiplicities of the low weight codewords are greatly reduced in the parallel concatenated code. This can be seen more clearly from the graphs of the two weight spectra shown in Figure 16.4. In other words, the effect of spectral thinning becomes more dramatic as the block length (interleaver size) increases. In fact, for even larger values of N, the codeword and bit multiplicities of the low weight codewords in the turbo code weight spectrum are reduced by roughly a factor of N, the interleaver size, compared to the terminated convolutional code. This reduction by a factor of N in the low weight multiplicities is referred to as the interleaver gain and will be verified analytically in the next section.

Remarks:

• Different interleavers and puncturing matrices would produce different results in the above

[Table 16.1: Weight Spectra of Two (32,12) Codes — (a) Terminated Convolutional, (b) Parallel Concatenated; the weight/multiplicity entries were not recovered in this transcription.]

[Table 16.2: Weight Spectra of Two (64,28) Codes — (a) Terminated Convolutional, (b) Parallel Concatenated; the weight/multiplicity entries were not recovered in this transcription.]

examples, but the behavior observed is typical in most cases.

• Spectral thinning has little effect on the minimum free distance, but it greatly reduces the multiplicities of the low weight codewords.

• As the block length and corresponding interleaver size increase, the weight spectrum of parallel concatenated convolutional codes begins to approximate a random-like distribution, i.e., the distribution that would result if each bit in every codeword were selected randomly from an i.i.d. probability distribution.

• As will be seen in the next section, there is only a small spectral thinning effect if feedforward constituent encoders are used.

• One can explain the superiority of feedback encoders in parallel concatenation as a consequence of the fact that they are IIR, rather than FIR, filters, i.e., their response to a single input "1" is not localized to the constraint length of the code but extends over the entire block length. This property of feedback encoders is exploited by a pseudorandom interleaver to produce the spectral thinning effect.

[Figure 16.4: An Illustration of Spectral Thinning. Codeword multiplicity (log scale, 1 to 10^8) versus codeword weight for the terminated convolutional and parallel concatenated (64,28) codes.]

It is also worth noting that parallel concatenated codes are no longer time-invariant. This can easily be seen by considering the effect when the input sequence in Example 16.2 is delayed by one

time unit, i.e., consider the input sequence u = [0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0]. The first parity sequence v^{(1)} = [0 1 1 0 0 1 1 0 0 0 0 0 0 0 0 0] is also delayed by one time unit, but the interleaved input sequence is now u' = [0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0], which produces the second parity sequence v^{(2)} = [0 0 0 0 0 0 0 0 1 1 0 1 0 0 1 1]. This is clearly not a delayed version of the v^{(2)} in Example 16.2. In other words, the interleaver has broken the time-invariance property of the code, resulting in a time-varying code. To summarize, in order to achieve the spectral thinning effect of parallel concatenation, it is necessary both to generate a time-varying code (via interleaving) and to employ feedback, i.e., IIR, encoders. It is clear from the above examples that the interleaver plays a key role in turbo coding. As we shall now briefly discuss, it is important that the interleaver has pseudorandom properties. Traditional block or convolutional interleavers do not work well in turbo coding, particularly when the block length is large. What is important is that the low weight parity sequences from the first encoder get matched with high weight parity sequences from the second encoder almost all the time. This requires that the interleaver break the patterns in the input sequences which produce low weight parity sequences. Interleavers with structure, such as block or convolutional interleavers, tend to preserve too many of these "bad" input patterns, resulting in poor matching properties and limited spectral thinning. Pseudorandom interleavers, on the other hand, break up almost all the "bad" patterns and thus achieve the full effect of spectral thinning. In Example 16.2, the 11 input sequences

    u(D) = D^i (1 + D^5),  i = 0, 1, ..., 10    (16.11)

are "bad", because they generate a low weight (4 in this case) parity sequence. As can be seen from a careful examination of Figure 16.5, if a 4 x 4 block (row-column) interleaver is employed, 9 of these 11 sequences will maintain the same "bad" pattern after interleaving, resulting in a large multiplicity of low weight codewords. The weight spectrum of the code in Example 16.2 with this block interleaver, shown in Table 16.3, is clearly inferior to the weight spectrum shown in Table 16.1(b) obtained using the interleaver of (16.7). Pseudorandom interleavers, such as those in (16.7) and (16.10), generate a weight spectrum that has many of the same characteristics as the binomial distribution, which is equivalent to the weight spectrum assumed by Shannon in his random coding proof of the noisy channel coding theorem.
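The claim that 9 of the 11 sequences survive can be verified directly. The sketch below assumes the usual row-column convention (write by rows, read by columns, so index i = 4r + c maps to 4c + r); in this convention the two non-surviving patterns end up with their two 1's separated by 10 positions instead of 5.

    def block_interleave_index(i, rows=4, cols=4):
        """Row-column interleaver: write by rows, read by columns (assumed map)."""
        r, c = divmod(i, cols)
        return c * rows + r

    survivors = []
    for i in range(11):                       # the 11 bad inputs u(D) = D^i (1 + D^5)
        p, q = block_interleave_index(i), block_interleave_index(i + 5)
        if abs(p - q) == 5:                   # the pair of 1's is again 5 apart
            survivors.append(i)
    print(len(survivors), survivors)          # 9 of the 11 patterns survive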

In other words, codes with random (binomial) weight distributions can achieve the performance guaranteed by Shannon's bound. Turbo coding with pseudorandom interleaving thus gives us a way of constructing codes with weight spectra similar to a binomial distribution for which a simple, near-optimal iterative decoding method exists.

[Figure 16.5: A (4 x 4) Block Interleaver for a (32,12) Turbo Code. Bits are written into the array by rows and read out by columns.]

Pseudorandom interleaving patterns can be generated in many ways, e.g., by using a primitive polynomial to generate a maximum length shift register sequence whose cycle structure determines the permutation. Another method uses a computationally simple algorithm based on the quadratic

congruence

    c_m ≡ k m(m + 1)/2 (mod N),  0 ≤ m < N,    (16.12)

to generate an index mapping function c_m → c_{m+1 (mod N)}, where N is the interleaver size and k is an odd integer. For example, for N = 16 and k = 1, we obtain

    (c_0, c_1, ..., c_15) = (0, 1, 3, 6, 10, 15, 5, 12, 4, 13, 7, 2, 14, 11, 9, 8).    (16.13)

This implies that index 0 (input bit u'_0) in the interleaved sequence u' is mapped into index 1 in the original sequence u (i.e., u'_0 = u_1), index 1 in u' is mapped into index 3 in u (u'_1 = u_3), and so on, resulting in the permutation

    Π_16 = [1, 3, 14, 6, 13, 12, 10, 2, 0, 8, 15, 9, 4, 7, 11, 5].    (16.14)

If this interleaving pattern is shifted cyclically to the right r = 8 times, we obtain the interleaving pattern of (16.7). For N a power of 2, it can be shown that these quadratic interleavers have statistical properties similar to randomly chosen interleavers, and thus they give good performance when used in turbo coding [2]. Other good interleaving patterns can be generated by varying k and r, and the special case r = N/2 (used above to obtain (16.7)) results in an interleaver which simply interchanges pairs of indices (see Problem 16.2). This special case is particularly interesting for implementation purposes, since the interleaving and deinterleaving functions (both used in decoding) are identical. Finally, when N is not a power of 2, the above algorithm can be modified to generate similar permutations with good statistical properties [2].
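The construction of (16.12)-(16.14) is short enough to verify in a few lines. A sketch, assuming integer index arithmetic as described above:

    # Quadratic congruence interleaver of (16.12)-(16.14),
    # reproducing the interleaver of (16.7) for N = 16, k = 1, and r = 8.
    N, k = 16, 1

    c = [(k * m * (m + 1) // 2) % N for m in range(N)]   # (16.13)

    # Index mapping c_m -> c_{m+1 (mod N)} defines the permutation (16.14):
    perm = [0] * N
    for m in range(N):
        perm[c[m]] = c[(m + 1) % N]
    print(perm)      # [1, 3, 14, 6, 13, 12, 10, 2, 0, 8, 15, 9, 4, 7, 11, 5]

    # Cyclic right shift by r = 8 yields the pattern of (16.7):
    r = 8
    shifted = perm[-r:] + perm[:-r]
    print(shifted)   # [0, 8, 15, 9, 4, 7, 11, 5, 1, 3, 14, 6, 13, 12, 10, 2]

    # With r = N/2 the result interchanges pairs of indices (an involution):
    assert all(shifted[shifted[i]] == i for i in range(N))

The final assertion confirms the self-inverse property of the r = N/2 case: applying the interleaver twice restores the original order, so interleaver and deinterleaver coincide.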

[Table 16.3: The Weight Spectrum of a Block Interleaved (32,12) Turbo Code; the weight/multiplicity entries were not recovered in this transcription.]

The basic structure of an iterative turbo decoder is shown in Figure 16.6. (We assume here a rate 1/3 parallel concatenated code without puncturing.) It employs two SISO decoders using the MAP algorithm presented earlier in Chapters 12 and 14. At each time unit k, three output values are received from the channel, one for the information bit u_k = v^{(0)}_k, denoted r^{(0)}_k, and two for the parity bits v^{(1)}_k and v^{(2)}_k, denoted r^{(1)}_k and r^{(2)}_k, and the 3N-dimensional received vector is denoted by

    r = [r^{(0)}_0, r^{(1)}_0, r^{(2)}_0,  r^{(0)}_1, r^{(1)}_1, r^{(2)}_1,  ...,  r^{(0)}_{N-1}, r^{(1)}_{N-1}, r^{(2)}_{N-1}].    (16.15)

Now let each transmitted bit be represented using the binary values 0 → +1 and 1 → -1, so that mod-2 addition is equivalent to real multiplication. Then, for an AWGN channel with unquantized (soft) outputs, we define the log-likelihood ratio (L-value) L(v^{(0)}_k | r^{(0)}_k) = L(u_k | r^{(0)}_k) (before decoding) of a transmitted information bit u_k given the received value r^{(0)}_k as

    L(u_k | r^{(0)}_k) = ln [P(u_k = +1 | r^{(0)}_k) / P(u_k = -1 | r^{(0)}_k)]
                       = ln [P(r^{(0)}_k | u_k = +1) P(u_k = +1) / (P(r^{(0)}_k | u_k = -1) P(u_k = -1))]
                       = ln [P(r^{(0)}_k | u_k = +1) / P(r^{(0)}_k | u_k = -1)] + ln [P(u_k = +1) / P(u_k = -1)]
                       = ln [e^{-(E_s/N_0)(r^{(0)}_k - 1)^2} / e^{-(E_s/N_0)(r^{(0)}_k + 1)^2}] + ln [P(u_k = +1) / P(u_k = -1)],    (16.16)

where E_s/N_0 is the channel SNR and u_k and r^{(0)}_k have both been normalized by a factor of sqrt(E_s). This simplifies to

    L(u_k | r^{(0)}_k) = (E_s/N_0) [(r^{(0)}_k + 1)^2 - (r^{(0)}_k - 1)^2] + ln [P(u_k = +1) / P(u_k = -1)]
                       = 4 (E_s/N_0) r^{(0)}_k + ln [P(u_k = +1) / P(u_k = -1)]
                       = L_c r^{(0)}_k + L_a(u_k),    (16.17)

where L_c = 4(E_s/N_0) is called the channel reliability factor and L_a(u_k) is the a priori L-value of the bit u_k. In the case of a transmitted parity bit v^{(i)}_k, given the received value r^{(i)}_k, i = 1, 2, the L-value (before decoding) is given by

    L(v^{(i)}_k | r^{(i)}_k) = L_c r^{(i)}_k + L_a(v^{(i)}_k) = L_c r^{(i)}_k,  i = 1, 2,    (16.18)

since in a linear code with equally likely information bits, the parity bits are also equally likely to be 0 or 1, and thus the a priori L-values of the parity bits are 0, i.e.,

    L_a(v^{(i)}_k) = ln [P(v^{(i)}_k = +1) / P(v^{(i)}_k = -1)] = 0,  i = 1, 2.    (16.19)
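As a quick illustration of (16.17) and (16.18), the following hedged sketch simulates BPSK over an AWGN channel and forms the channel L-values L_c r_k; the sign of each L-value is the hard decision and its magnitude the reliability. The variable names and the 0 dB operating point are illustrative choices.

    import math, random

    def channel_l_values(r, es_n0):
        """Soft channel L-values per (16.17)/(16.18): L_c * r with L_c = 4 Es/N0."""
        Lc = 4.0 * es_n0
        return [Lc * x for x in r]

    es_n0 = 10 ** (0.0 / 10)                   # 0 dB channel SNR, Es/N0
    sigma = math.sqrt(1.0 / (2.0 * es_n0))     # noise std dev for normalized signals
    bits = [random.randint(0, 1) for _ in range(10)]
    tx = [1.0 if b == 0 else -1.0 for b in bits]      # 0 -> +1, 1 -> -1
    rx = [x + random.gauss(0.0, sigma) for x in tx]
    L = channel_l_values(rx, es_n0)
    decisions = [0 if l > 0 else 1 for l in L]        # hard decision: sign of L
    print(sum(b != d for b, d in zip(bits, decisions)), "errors out of", len(bits))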

(We note here that L_a(u_k) also equals 0 for the first iteration of Decoder 1, but that thereafter the a priori L-values of the information bits are replaced by extrinsic L-values from the other decoder, as explained below.) The received soft channel L-values L_c r^{(0)}_k for u_k and L_c r^{(1)}_k for v^{(1)}_k enter Decoder 1, while the (properly interleaved) received soft channel L-values L_c r^{(0)}_k for u_k and the received soft channel L-values L_c r^{(2)}_k for v^{(2)}_k enter Decoder 2. The output of Decoder 1 contains two terms:

(1) L^{(1)}(u_k) = ln [P(u_k = +1 | r_1, L^{(1)}_a) / P(u_k = -1 | r_1, L^{(1)}_a)], the a posteriori L-value (after decoding) of each information bit produced by Decoder 1 given the (partial) received vector r_1 = [r^{(0)}_0, r^{(1)}_0, r^{(0)}_1, r^{(1)}_1, ..., r^{(0)}_{N-1}, r^{(1)}_{N-1}] and the a priori input vector L^{(1)}_a = [L^{(1)}_a(u_0), L^{(1)}_a(u_1), ..., L^{(1)}_a(u_{N-1})] for Decoder 1, and

(2) L^{(1)}_e(u_k) = L^{(1)}(u_k) - [L_c r^{(0)}_k + L^{(2)}_e(u_k)], the extrinsic a posteriori L-value (after decoding) associated with each information bit produced by Decoder 1, which, after interleaving, is passed to the input of Decoder 2 as the a priori value L^{(2)}_a(u_k).

Subtracting the term in brackets, viz., L_c r^{(0)}_k + L^{(2)}_e(u_k), removes the effect of the current information bit u_k from L^{(1)}(u_k), leaving only the effect of the parity constraints, thus providing an independent estimate of the information bit u_k to Decoder 2 in addition to the received soft channel L-values at time k. Similarly, the output of Decoder 2 contains two terms,

(1) L^{(2)}(u_k) = ln [P(u_k = +1 | r_2, L^{(2)}_a) / P(u_k = -1 | r_2, L^{(2)}_a)], where r_2 is the (partial) received vector and L^{(2)}_a the a priori input vector for Decoder 2, and

(2) L^{(2)}_e(u_k) = L^{(2)}(u_k) - [L_c r^{(0)}_k + L^{(1)}_e(u_k)],

and the extrinsic a posteriori L-values L^{(2)}_e(u_k) produced by Decoder 2, after deinterleaving, are passed back to the input of Decoder 1 as the a priori values L^{(1)}_a(u_k). Thus, the input to each decoder contains three terms: the soft channel L-values L_c r^{(0)}_k and L_c r^{(1)}_k (or L_c r^{(2)}_k), and the extrinsic a posteriori L-values L^{(2)}_e(u_k) = L^{(1)}_a(u_k) (or L^{(1)}_e(u_k) = L^{(2)}_a(u_k)) passed from the other decoder. (In the initial iteration of Decoder 1, the extrinsic a posteriori L-values L^{(2)}_e(u_k) = L^{(1)}_a(u_k) are just the original a priori L-values L_a(u_k), which, as noted above, are all equal to 0 for equally likely information bits. Thus the extrinsic L-values passed from one decoder to the other during the iterative decoding process are treated like new sets of a priori probabilities by the MAP algorithm.) Decoding then proceeds iteratively, with each decoder passing its respective extrinsic L-values back to the other decoder.
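The decoder schedule just described can be summarized in code. The sketch below is schematic only: siso_decode stands in for a full BCJR/MAP constituent decoder, which is not implemented here, and its interface is an assumption for illustration.

    def turbo_decode(Lc_r0, Lc_r1, Lc_r2, pi, siso_decode, n_iters=10):
        """Schematic iterative turbo decoding loop (Figure 16.6).

        Lc_r0, Lc_r1, Lc_r2 : channel L-values for v(0), v(1), v(2)
        pi                  : interleaver, u'_k = u_{pi[k]}
        siso_decode(Lc_sys, Lc_par, La) -> a posteriori L-values [assumed BCJR/MAP]
        """
        N = len(Lc_r0)
        La1 = [0.0] * N                                 # zero a priori values at start
        for _ in range(n_iters):
            # Decoder 1: a posteriori L-values, then the extrinsic part only
            L1 = siso_decode(Lc_r0, Lc_r1, La1)
            Le1 = [L1[k] - Lc_r0[k] - La1[k] for k in range(N)]
            # Interleave systematic channel and extrinsic values for Decoder 2
            La2 = [Le1[pi[k]] for k in range(N)]
            L2 = siso_decode([Lc_r0[pi[k]] for k in range(N)], Lc_r2, La2)
            Le2 = [L2[k] - Lc_r0[pi[k]] - La2[k] for k in range(N)]
            # Deinterleave Decoder 2's extrinsic values back to Decoder 1's order
            La1 = [0.0] * N
            for k in range(N):
                La1[pi[k]] = Le2[k]
        # Final decision from Decoder 2's a posteriori values, deinterleaved
        decisions = [0] * N
        for k in range(N):
            decisions[pi[k]] = 0 if L2[k] > 0 else 1
        return decisions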

This results in a turbo, or bootstrapping, effect in which each estimate becomes successively more reliable. After a sufficient number of iterations, the decoded output is determined from the a posteriori estimate L^{(2)}(u_k) at the output of Decoder 2. Since the decoded output is taken only after the final iteration, it is more accurate to refer to the SISO constituent decoders as a posteriori probability (APP) estimators rather than MAP decoders, since their outputs are extrinsic a posteriori L-values that are passed to their companion decoder for more processing. A more complete discussion of iterative turbo decoding is given in Section 16.5. While it is true, as stated above, that the extrinsic a posteriori L-values L_e(u_k) passed between decoders during the first iteration of decoding are independent of u_k, this is not so for subsequent iterations. Thus the extrinsic information becomes less helpful in obtaining successively more reliable estimates of the information bits as the iterations continue. Eventually, a point is reached where no further improvement is possible, the iterations are stopped, and the final decoding estimate is produced. Methods for determining when to stop the iterations, known as stopping rules, are discussed in Section 16.5. It is worth pointing out here that the name turbo in turbo coding has more to do with decoding than encoding. Indeed, it is the successive feedback of extrinsic information from the SISO decoders in the iterative decoding process that mimics the feedback of exhaust gases in a turbo engine. Finally, before moving on to a more detailed discussion of turbo coding, we remark that many of its features are similar to those observed for low-density parity-check codes (LDPCCs), to be discussed in Chapter 17. Both encoding schemes produce codes with random-like weight distributions and a thin weight spectrum. Both decoding methods make use of APP likelihoods in an iterative process, and both employ the concept of extrinsic information. In fact, it was the discovery of turbo coding in 1993 that led to a rediscovery of the merits of LDPCCs, which had been largely neglected by the research community for more than 30 years.

[Figure 16.6: Basic Structure of an Iterative Turbo Decoder. Decoder 1 receives L_c r^{(0)} and L_c r^{(1)} plus the deinterleaved extrinsic values L^{(2)}_e(u_k); its output yields L^{(1)}_e(u_k) = L^{(1)}(u_k) - L_c r^{(0)}_k - L^{(2)}_e(u_k), which is interleaved and passed to Decoder 2 along with L_c r^{(2)} and the interleaved L_c r^{(0)}; Decoder 2's output, deinterleaved, feeds back to Decoder 1 and produces the final decision.]

16.2 Distance Properties of Turbo Codes

As illustrated in Examples 16.1 to 16.3, the fundamental property of turbo codes that allows them to achieve such excellent performance is the random-like weight spectrum, or spectral thinning, produced by the pseudorandom interleaver. In this section, we will examine the weight spectrum of turbo codes in more detail. In particular, we consider a series of examples for Parallel Concatenated Codes (PCCs), including both Parallel Concatenated Block Codes (PCBCs) and Parallel Concatenated Convolutional Codes (PCCCs). As noted in the remark following Example 16.3, the exact weight spectrum of a turbo code depends on the particular interleaver chosen. Thus, in order to avoid exhaustively searching all possible interleavers for the best weight spectrum for a specific PCC, we introduce the concept of a uniform interleaver.

Definition 16.1 [3] A uniform interleaver of length N is a probabilistic device which maps a given input block of weight w into all its C(N, w) distinct permutations with equal probability 1/C(N, w).

Using the notion of a uniform interleaver allows us to calculate the average weight spectrum of a specific PCC over all possible interleavers. This average weight spectrum is typical of the weight spectrum obtained for a randomly chosen interleaver.

Example 16.4 Consider the (7,4,3) Hamming code in systematic form. The Weight Enumerating Function (WEF) for this code is given by

    A(X) = 7X^3 + 7X^4 + X^7,    (16.20)

i.e., in addition to the all-zero codeword, the code contains seven codewords of weight 3, seven codewords of weight 4, and the all-one codeword. The complete list of 16 codewords is shown in Table 16.4. Splitting the contributions of the information and parity bits gives the Input Redundancy Weight Enumerating

Function (IRWEF)

    A(W, Z) = W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3.    (16.21)

In other words, there are three codewords with information weight 1 and parity weight 2, one codeword with information weight 1 and parity weight 3, three codewords with information weight 2 and parity weight 1, and so on. Finally, the Conditional WEF (CWEF) for each input weight is given by

    A_1(Z) = 3Z^2 + Z^3,
    A_2(Z) = 3Z + 3Z^2,
    A_3(Z) = 1 + 3Z,
    A_4(Z) = Z^3.    (16.22)

[Table 16.4: Codeword List for the (7,4,3) Hamming Code, giving the information and parity bits of all 16 codewords; entries not recovered in this transcription.]
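The enumerators of Example 16.4 can be generated by listing all 16 codewords, as Table 16.4 does. A sketch, assuming one systematic generator matrix consistent with (16.21) (any parity matrix with three weight 2 rows and one weight 3 row gives the same counts):

    from itertools import product
    from collections import Counter

    # Parity part of a systematic (7,4) Hamming generator matrix G = [I | P].
    P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

    irwef = Counter()                     # coefficients A_{w,z} of A(W, Z)
    for info in product([0, 1], repeat=4):
        parity = [0, 0, 0]
        for i, bit in enumerate(info):    # parity = info * P over GF(2)
            if bit:
                parity = [p ^ q for p, q in zip(parity, P[i])]
        w, z = sum(info), sum(parity)
        if w + z > 0:                     # skip the all-zero codeword, as in (16.21)
            irwef[(w, z)] += 1

    # A(X): group by total weight d = w + z; reproduces (16.20)
    wef = Counter()
    for (w, z), a in irwef.items():
        wef[w + z] += a
    print(sorted(wef.items()))            # [(3, 7), (4, 7), (7, 1)]
    print(sorted(irwef.items()))          # coefficients of (16.21)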

Now we examine how the uniform interleaver concept can be used to compute the average IRWEF for PCCs.

[Figure 16.7: A Parallel Concatenated Block Code. The length-k information block is transmitted, encoded by systematic code C_1 (n, k), and passed through a length-k interleaver to systematic code C_2 (n, k); the n - k parity bits of each encoder complete the transmitted sequence.]

First consider the general case, shown in Figure 16.7, of a PCBC with two different (n, k) systematic block constituent codes, C_1 and C_2, with CWEFs given by A^{C_1}_w(Z) and A^{C_2}_w(Z), w = 1, ..., k, connected by a uniform interleaver of size N = k. The original information block and both parity blocks are transmitted, resulting in a (2n - k, k) PCBC. Assume that a particular input block of weight w enters the first encoder, thereby generating one of the parity weights in A^{C_1}_w(Z). From the definition of the uniform interleaver, it follows that there is an equal probability that the second encoder will match that parity weight with any of the parity weights in A^{C_2}_w(Z). Thus the average CWEF of the PCBC is given by

    A^{PC}_w(Z) = A^{C_1}_w(Z) A^{C_2}_w(Z) / C(N, w),    (16.23)

and the average IRWEF is given by

    A^{PC}(W, Z) = Σ_{(w,z)} A_{w,z} W^w Z^z = Σ_{1 ≤ w ≤ N} W^w A^{PC}_w(Z).    (16.24)

Also, we can express the bit CWEF and the bit IRWEF as follows:

    B^{PC}_w(Z) = (w/N) A^{PC}_w(Z)    (16.25a)

and

    B^{PC}(W, Z) = Σ_{(w,z)} B_{w,z} W^w Z^z = Σ_{1 ≤ w ≤ N} W^w B^{PC}_w(Z),    (16.25b)

where B_{w,z} = (w/N) A_{w,z}. Finally, the average codeword and bit WEFs are given by

    A^{PC}(X) = Σ_{d ≥ d_min} A_d X^d = A^{PC}(W, Z) |_{W=Z=X} = Σ_{1 ≤ w ≤ N} W^w A^{PC}_w(Z) |_{W=Z=X}    (16.26a)

and

    B^{PC}(X) = Σ_{d ≥ d_min} B_d X^d = B^{PC}(W, Z) |_{W=Z=X} = Σ_{1 ≤ w ≤ N} W^w B^{PC}_w(Z) |_{W=Z=X}.    (16.26b)

Remarks:

• The codeword CWEF A^{PC}_w(Z), IRWEF A^{PC}(W, Z), and WEF A^{PC}(X) of the PCBC are all average quantities over the entire class of uniform interleavers, and thus their coefficients may not be integers, unlike the case when these quantities are computed for the constituent codes individually.

• (16.26) represents two different ways of expressing the codeword and bit WEFs of a PCBC, one as a sum over the codeword weights d that is valid for both systematic and nonsystematic codes, and the other as a sum over input weights w that applies only to systematic codes.

• The codeword and bit WEF expressions in (16.26a) and (16.26b) apply to all systematic codes. Similar expressions as sums over input weights w that are valid for nonsystematic codes are given by

    A(X) = A(W, X) |_{W=1} = Σ_{1 ≤ w ≤ N} W^w A_w(X) |_{W=1}    (16.26c)

and

    B(X) = B(W, X) |_{W=1} = Σ_{1 ≤ w ≤ N} W^w B_w(X) |_{W=1},    (16.26d)

where A(W, X), A_w(X), B(W, X), and B_w(X) are the Input Output Weight Enumerating Functions (IOWEFs) of the code.

• A more general class of PCBCs results if the two constituent codes, C_1 and C_2, have different block lengths, n_1 and n_2.
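Equation (16.23) is straightforward to apply numerically. In the sketch below, a CWEF is a dictionary mapping parity weight z to its (possibly fractional) multiplicity; the printed results for the (10,4) PCBC can be checked against the continued example that follows.

    from math import comb

    def multiply_cwef(a, b):
        """Polynomial product of two CWEFs in Z, stored as {z: coefficient}."""
        out = {}
        for za, ca in a.items():
            for zb, cb in b.items():
                out[za + zb] = out.get(za + zb, 0) + ca * cb
        return out

    def pcbc_cwef(a1, a2, N, w):
        """Average CWEF of a PCBC per (16.23): A_w^C1 * A_w^C2 / C(N, w)."""
        prod = multiply_cwef(a1, a2)
        return {z: c / comb(N, w) for z, c in prod.items()}

    # CWEFs (16.22) of the (7,4,3) Hamming code, A_w(Z) as {z: coefficient}
    hamming = {1: {2: 3, 3: 1}, 2: {1: 3, 2: 3}, 3: {0: 1, 1: 3}, 4: {3: 1}}

    N = 4
    for w in range(1, N + 1):
        print(w, pcbc_cwef(hamming[w], hamming[w], N, w))
    # w=1: {4: 2.25, 5: 1.5, 6: 0.25}, etc., matching (16.27)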

Example 16.4 (Continued) For the (10,4) PCBC with two identical (7,4,3) Hamming codes as constituent codes, the CWEFs are

    A^{PC}_1(Z) = (3Z^2 + Z^3)^2 / 4 = 2.25Z^4 + 1.5Z^5 + 0.25Z^6,
    A^{PC}_2(Z) = (3Z + 3Z^2)^2 / 6 = 1.5Z^2 + 3Z^3 + 1.5Z^4,
    A^{PC}_3(Z) = (1 + 3Z)^2 / 4 = 0.25 + 1.5Z + 2.25Z^2,
    A^{PC}_4(Z) = (Z^3)^2 / 1 = Z^6,    (16.27)

the IRWEFs are

    A^{PC}(W, Z) = W(2.25Z^4 + 1.5Z^5 + 0.25Z^6) + W^2(1.5Z^2 + 3Z^3 + 1.5Z^4) + W^3(0.25 + 1.5Z + 2.25Z^2) + W^4 Z^6    (16.28a)

and

    B^{PC}(W, Z) = W(0.56Z^4 + 0.37Z^5 + 0.06Z^6) + W^2(0.75Z^2 + 1.5Z^3 + 0.75Z^4) + W^3(0.19 + 1.12Z + 1.69Z^2) + W^4 Z^6,    (16.28b)

and the WEFs are

    A^{PC}(X) = 0.25X^3 + 3X^4 + 7.5X^5 + 3X^6 + 0.25X^7 + X^10    (16.28c)

and

    B^{PC}(X) = 0.19X^3 + 1.87X^4 + 3.75X^5 + 1.12X^6 + 0.06X^7 + X^10.    (16.28d)

Remarks:

• The coefficients of the codeword WEFs for the PCBC are fractional, due to the effect of averaging.

• The sum of all the coefficients in A^{PC}(X) equals 15, the total number of nonzero codewords in the PCBC.

• The minimum distance of the (10,4) PCBC is almost certainly either 3 or 4, depending on the particular interleaver chosen.

• In this example, a detailed analysis of the 4! = 24 possible interleaving patterns reveals that exactly 6 result in a minimum distance of 3, while the other 18 result in a minimum distance of 4 [3].

• The average codeword multiplicities of the low weight codewords are given by A_3 = 0.25 and A_4 = 3.0.

• The average bit multiplicities of the low weight codewords are given by B_3 = 0.19 and B_4 = 1.87.

Now we examine the more general case, illustrated in Figure 16.8, of forming the constituent codes in a PCBC by concatenating h codewords of a basic (n, k) systematic code C to form an (hn, hk) h-repeated code C^h and using an interleaver of size N = hk. Again, the original information block and both parity blocks are transmitted, resulting in a (2hn - hk, hk) PCBC. In this case, the IRWEF of the code C^h is given by

    A^{C^h}(W, Z) = [1 + A^C(W, Z)]^h - 1,    (16.29)

where we have included the "1's" in (16.29) to account for the fact that the all-zero codeword and a nonzero codeword in the basic code C can be combined to form a nonzero codeword in the h-repeated code C^h. If A^{C^h_1}_w(Z) and A^{C^h_2}_w(Z) represent the CWEFs of two h-repeated constituent codes C^h_1 and C^h_2, then the average CWEF of the PCBC is given by

    A^{PC}_w(Z) = A^{C^h_1}_w(Z) A^{C^h_2}_w(Z) / C(N, w),    (16.30)

the average IRWEFs are given by

    A^{PC}(W, Z) = Σ_{1 ≤ w ≤ hk} W^w A^{PC}_w(Z)    (16.31a)

and

    B^{PC}(W, Z) = Σ_{1 ≤ w ≤ hk} (w/hk) W^w A^{PC}_w(Z),    (16.31b)

and the average WEFs can be computed using (16.26).

[Figure 16.8: A PCBC with Repeated Block Constituent Codes. Each constituent code concatenates h codewords of the basic (n, k) systematic code, and the interleaver has length N = hk.]

Example 16.5 Consider the (20,8) PCBC formed by using h = 2 codewords from the (7,4,3) Hamming code, i.e., a (14,8) doubly-repeated Hamming code, as both constituent codes, along with a uniform interleaver of size N = hk = 8. The IRWEF of the (14,8) constituent code is

    A(W, Z) = [1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3]^2 - 1
            = W(6Z^2 + 2Z^3) + W^2(6Z + 6Z^2 + 9Z^4 + 6Z^5 + Z^6) + W^3(2 + 6Z + 18Z^3 + 24Z^4 + 6Z^5)
              + W^4(15Z^2 + 40Z^3 + 15Z^4) + W^5(6Z + 24Z^2 + 18Z^3 + 6Z^5 + 2Z^6)
              + W^6(1 + 6Z + 9Z^2 + 6Z^4 + 6Z^5) + W^7(2Z^3 + 6Z^4) + W^8 Z^6,    (16.32)

and its CWEFs for each input weight are given by

    A_1(Z) = 6Z^2 + 2Z^3,
    A_2(Z) = 6Z + 6Z^2 + 9Z^4 + 6Z^5 + Z^6,
    A_3(Z) = 2 + 6Z + 18Z^3 + 24Z^4 + 6Z^5,
    A_4(Z) = 15Z^2 + 40Z^3 + 15Z^4,
    A_5(Z) = 6Z + 24Z^2 + 18Z^3 + 6Z^5 + 2Z^6,
    A_6(Z) = 1 + 6Z + 9Z^2 + 6Z^4 + 6Z^5,
    A_7(Z) = 2Z^3 + 6Z^4,
    A_8(Z) = Z^6.    (16.33)
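The squaring in (16.29) is mechanical polynomial arithmetic and can be checked with a few lines. A sketch, storing an IRWEF as a dictionary from (w, z) to its coefficient:

    def poly_mul(a, b):
        """Product of two IRWEFs stored as {(w, z): coefficient}."""
        out = {}
        for (w1, z1), c1 in a.items():
            for (w2, z2), c2 in b.items():
                key = (w1 + w2, z1 + z2)
                out[key] = out.get(key, 0) + c1 * c2
        return out

    # IRWEF (16.21) of the (7,4,3) Hamming code, including the all-zero term "1"
    one_plus_A = {(0, 0): 1, (1, 2): 3, (1, 3): 1, (2, 1): 3, (2, 2): 3,
                  (3, 0): 1, (3, 1): 3, (4, 3): 1}

    # (16.29) with h = 2: A^{C^2}(W, Z) = [1 + A(W, Z)]^2 - 1
    squared = poly_mul(one_plus_A, one_plus_A)
    del squared[(0, 0)]                       # remove the all-zero codeword again

    for w in range(1, 9):                     # print the CWEFs of (16.33)
        terms = sorted((z, c) for (ww, z), c in squared.items() if ww == w)
        print(w, terms)                       # w=1: [(2, 6), (3, 2)], etc.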

Note that this code still has minimum distance 3, i.e., it is a (14,8,3) code, since each of the seven weight 3 codewords in one code can be paired with the all-zero codeword in the other code, resulting in a total of fourteen weight 3 codewords in the doubly-repeated code. Thus, by itself, this (14,8,3) code would not be interesting, since it is longer (more complex) than the (7,4,3) code but does not have better distance properties. However, when it is used as a constituent code for a (20,8) PCBC, the resulting code has better distance properties than the (10,4) PCBC formed from a single Hamming code.

Example 16.5 (Continued) Using (16.30), we can compute the CWEFs of the (20,8) PCBC with two identical (14,8,3) doubly-repeated Hamming codes as constituent codes as follows:

    A^{PC}_1(Z) = (6Z^2 + 2Z^3)^2 / 8 = 4.5Z^4 + 3Z^5 + 0.5Z^6,

    A^{PC}_2(Z) = (6Z + 6Z^2 + 9Z^4 + 6Z^5 + Z^6)^2 / 28
                = 1.29Z^2 + 2.57Z^3 + 1.29Z^4 + 3.86Z^5 + 6.43Z^6 + 3Z^7 + 3.32Z^8 + 3.86Z^9 + 1.93Z^10 + 0.43Z^11 + 0.04Z^12,

    A^{PC}_3(Z) = (2 + 6Z + 18Z^3 + 24Z^4 + 6Z^5)^2 / 56
                = 0.07 + 0.43Z + 0.64Z^2 + 1.29Z^3 + 5.57Z^4 + 5.57Z^5 + 7.07Z^6 + 15.43Z^7 + 14.14Z^8 + 5.14Z^9 + 0.64Z^10,

    A^{PC}_4(Z) = (15Z^2 + 40Z^3 + 15Z^4)^2 / 70
                = 3.21Z^4 + 17.14Z^5 + 29.29Z^6 + 17.14Z^7 + 3.21Z^8,

    A^{PC}_5(Z) = (6Z + 24Z^2 + 18Z^3 + 6Z^5 + 2Z^6)^2 / 56
                = 0.64Z^2 + 5.14Z^3 + 14.14Z^4 + 15.43Z^5 + 7.07Z^6 + 5.57Z^7 + 5.57Z^8 + 1.29Z^9 + 0.64Z^10 + 0.43Z^11 + 0.07Z^12,

    A^{PC}_6(Z) = (1 + 6Z + 9Z^2 + 6Z^4 + 6Z^5)^2 / 28
                = 0.04 + 0.43Z + 1.93Z^2 + 3.86Z^3 + 3.32Z^4 + 3Z^5 + 6.43Z^6 + 3.86Z^7 + 1.29Z^8 + 2.57Z^9 + 1.29Z^10,

    A^{PC}_7(Z) = (2Z^3 + 6Z^4)^2 / 8 = 0.5Z^6 + 3Z^7 + 4.5Z^8,

    A^{PC}_8(Z) = (Z^6)^2 / 1 = Z^12.    (16.34)

Then the IRWEFs A^{PC}(W, Z) and B^{PC}(W, Z) and the WEFs A^{PC}(X) and B^{PC}(X) can be computed using (16.31) and (16.26) (see Problem 16.4).

Remarks:

• The minimum distance of the (20,8) PCBC is almost certainly either 3 or 4, depending on the particular interleaver chosen, the same as for the (10,4) PCBC.

• However, the average codeword multiplicities of the low weight codewords have decreased from A_3 = 0.25 to A_3 = 0.07 and from A_4 = 3.0 to A_4 = 1.72, respectively, despite the fact that the (20,8) PCBC contains 16 times as many codewords as the (10,4) PCBC.

• Also, the average bit multiplicities of the low weight codewords have decreased from B_3 = 0.19 to B_3 = 0.03 and from B_4 = 1.87 to B_4 = 0.48, respectively.

• This is an example of spectral thinning, i.e., the multiplicities of the low weight codewords in a PCBC are decreased by increasing the length of the constituent code and the interleaver size.

• Increasing the code length and interleaver size by further increasing the repeat factor h leads to additional spectral thinning, which results in improved performance at low SNRs, but for block constituent codes the beneficial effect of increasing h diminishes for large h.

• A better approach would be to increase the interleaver size by using longer block constituent codes, but efficient SISO decoding is more difficult for large block sizes.

Now we consider the case of PCCCs, where the constituent codes are generated by convolutional encoders, as illustrated in Figure 16.1. An exact analysis, similar to the above examples for PCBCs, is possible but is complicated by issues involving termination of the constituent encoders. Hence we make the simplifying assumption that both constituent encoders are terminated to the all-zero state.

(As noted previously, this is normally the case for the first encoder but not for the second encoder, because the required termination bits are generated by the interleaver only with probability 1/2^m, where m is the encoder memory order.) The resulting analysis is nearly exact whenever the interleaver size N is at least an order of magnitude larger than the memory order m of the constituent encoders. Since turbo codes are most effective for short memory orders and large interleaver sizes, this is an assumption which always holds in practice. We begin by illustrating the procedure for computing the CWEFs A_w(Z) of the equivalent block code produced by a convolutional encoder which starts in the all-zero state S_0 = 0 and returns to the all-zero state after an input sequence of length N, including termination bits. For an (n, 1, m) convolutional encoder, the equivalent block code has dimensions (nN, N - m). The situation here is somewhat different from that presented in Chapter 11, where we were interested in computing the WEF of all codewords which diverged from the all-zero state at a particular time and remerged only once. This WEF was the appropriate one to consider for evaluating the event and bit error probabilities per unit time of an encoder driven by semi-infinite (unterminated) input sequences. In order to evaluate the block and bit error probabilities of PCCCs, however, the WEF must include the effect of multiple error events, i.e., error events which diverge from and remerge with the all-zero state more than once. This is because the encoders are driven by finite length (terminated) input sequences, resulting in an equivalent block code, and the error probability analysis must consider the entire block, rather than just a particular time unit. Thus we will modify the single error event WEF from Chapter 11 to obtain a multiple error event WEF appropriate for PCCCs. From Figure 16.9, we can see that any multiple error event in a codeword belonging to a terminated convolutional code can be viewed as a succession of single error events separated by sequences of 0's. We begin by considering all codewords that can be constructed from a single error event of length λ ≤ N, i.e., the error event is surrounded by N - λ 0's. Since the N - λ 0's are divided into two groups, one preceding and one following the error event, the number of single error events is the number of ways of summing two non-negative integers that add to N - λ, namely N - λ + 1. Thus, the multiplicity of block codewords for


Recent Results on Capacity-Achieving Codes for the Erasure Channel with Bounded Complexity 26 IEEE 24th Convention of Electrical and Electronics Engineers in Israel Recent Results on Capacity-Achieving Codes for the Erasure Channel with Bounded Complexity Igal Sason Technion Israel Institute

More information

Binary Convolutional Codes

Binary Convolutional Codes Binary Convolutional Codes A convolutional code has memory over a short block length. This memory results in encoded output symbols that depend not only on the present input, but also on past inputs. An

More information

Interleaver Design for Turbo Codes

Interleaver Design for Turbo Codes 1 Interleaver Design for Turbo Codes H. R. Sadjadpour, N. J. A. Sloane, M. Salehi, and G. Nebe H. Sadjadpour and N. J. A. Sloane are with AT&T Shannon Labs, Florham Park, NJ. E-mail: sadjadpour@att.com

More information

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus Turbo Compression Andrej Rikovsky, Advisor: Pavol Hanus Abstract Turbo codes which performs very close to channel capacity in channel coding can be also used to obtain very efficient source coding schemes.

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel

Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Exact Probability of Erasure and a Decoding Algorithm for Convolutional Codes on the Binary Erasure Channel Brian M. Kurkoski, Paul H. Siegel, and Jack K. Wolf Department of Electrical and Computer Engineering

More information

Low-density parity-check (LDPC) codes

Low-density parity-check (LDPC) codes Low-density parity-check (LDPC) codes Performance similar to turbo codes Do not require long interleaver to achieve good performance Better block error performance Error floor occurs at lower BER Decoding

More information

ECEN 655: Advanced Channel Coding

ECEN 655: Advanced Channel Coding ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History

More information

Lecture 3 : Introduction to Binary Convolutional Codes

Lecture 3 : Introduction to Binary Convolutional Codes Lecture 3 : Introduction to Binary Convolutional Codes Binary Convolutional Codes 1. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes. In contrast with a block

More information

Introduction to Binary Convolutional Codes [1]

Introduction to Binary Convolutional Codes [1] Introduction to Binary Convolutional Codes [1] Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw Y. S. Han Introduction

More information

A Study on Simulating Convolutional Codes and Turbo Codes

A Study on Simulating Convolutional Codes and Turbo Codes A Study on Simulating Convolutional Code and Turbo Code Final Report By Daniel Chang July 27, 2001 Advior: Dr. P. Kinman Executive Summary Thi project include the deign of imulation of everal convolutional

More information

EE 229B ERROR CONTROL CODING Spring 2005

EE 229B ERROR CONTROL CODING Spring 2005 EE 229B ERROR CONTROL CODING Spring 2005 Solutions for Homework 1 1. Is there room? Prove or disprove : There is a (12,7) binary linear code with d min = 5. If there were a (12,7) binary linear code with

More information

Coding on a Trellis: Convolutional Codes

Coding on a Trellis: Convolutional Codes .... Coding on a Trellis: Convolutional Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 6th, 2008 Telecommunications Laboratory (TUC) Coding on a Trellis:

More information

Introduction to Wireless & Mobile Systems. Chapter 4. Channel Coding and Error Control Cengage Learning Engineering. All Rights Reserved.

Introduction to Wireless & Mobile Systems. Chapter 4. Channel Coding and Error Control Cengage Learning Engineering. All Rights Reserved. Introduction to Wireless & Mobile Systems Chapter 4 Channel Coding and Error Control 1 Outline Introduction Block Codes Cyclic Codes CRC (Cyclic Redundancy Check) Convolutional Codes Interleaving Information

More information

A Mathematical Approach to Channel Codes with a Diagonal Matrix Structure

A Mathematical Approach to Channel Codes with a Diagonal Matrix Structure A Mathematical Approach to Channel Codes with a Diagonal Matrix Structure David G. M. Mitchell E H U N I V E R S I T Y T O H F R G E D I N B U A thesis submitted for the degree of Doctor of Philosophy.

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

Turbo Codes for xdsl modems

Turbo Codes for xdsl modems Turbo Codes for xdsl modems Juan Alberto Torres, Ph. D. VOCAL Technologies, Ltd. (http://www.vocal.com) John James Audubon Parkway Buffalo, NY 14228, USA Phone: +1 716 688 4675 Fax: +1 716 639 0713 Email:

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Chec Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, IIT, Chicago sashann@hawiitedu, salim@iitedu Abstract We consider the problem of constructing

More information

NAME... Soc. Sec. #... Remote Location... (if on campus write campus) FINAL EXAM EE568 KUMAR. Sp ' 00

NAME... Soc. Sec. #... Remote Location... (if on campus write campus) FINAL EXAM EE568 KUMAR. Sp ' 00 NAME... Soc. Sec. #... Remote Location... (if on campus write campus) FINAL EXAM EE568 KUMAR Sp ' 00 May 3 OPEN BOOK exam (students are permitted to bring in textbooks, handwritten notes, lecture notes

More information

Punctured Convolutional Codes Revisited: the Exact State Diagram and Its Implications

Punctured Convolutional Codes Revisited: the Exact State Diagram and Its Implications Punctured Convolutional Codes Revisited: the Exact State iagram and Its Implications Jing Li Tiffany) Erozan Kurtas epartment of Electrical and Computer Engineering Seagate Research Lehigh University Bethlehem

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

Convolutional Codes. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 6th, 2008

Convolutional Codes. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 6th, 2008 Convolutional Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 6th, 2008 Telecommunications Laboratory (TUC) Convolutional Codes November 6th, 2008 1

More information

ELEC 405/511 Error Control Coding. Binary Convolutional Codes

ELEC 405/511 Error Control Coding. Binary Convolutional Codes ELEC 405/511 Error Control Coding Binary Convolutional Codes Peter Elias (1923-2001) Coding for Noisy Channels, 1955 2 With block codes, the input data is divided into blocks of length k and the codewords

More information

Symbol Interleaved Parallel Concatenated Trellis Coded Modulation

Symbol Interleaved Parallel Concatenated Trellis Coded Modulation Symbol Interleaved Parallel Concatenated Trellis Coded Modulation Christine Fragouli and Richard D. Wesel Electrical Engineering Department University of California, Los Angeles christin@ee.ucla. edu,

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 4, No. 1, June 007, 95-108 Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System A. Moulay Lakhdar 1, R. Méliani,

More information

Bounds on the Error Probability of ML Decoding for Block and Turbo-Block Codes

Bounds on the Error Probability of ML Decoding for Block and Turbo-Block Codes Bounds on the Error Probability of ML Decoding for Block and Turbo-Block Codes Igal Sason and Shlomo Shamai (Shitz) Department of Electrical Engineering Technion Israel Institute of Technology Haifa 3000,

More information

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE

Run-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE General e Image Coder Structure Motion Video x(s 1,s 2,t) or x(s 1,s 2 ) Natural Image Sampling A form of data compression; usually lossless, but can be lossy Redundancy Removal Lossless compression: predictive

More information

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Trellis-based Detection Techniques

Trellis-based Detection Techniques Chapter 2 Trellis-based Detection Techniques 2.1 Introduction In this chapter, we provide the reader with a brief introduction to the main detection techniques which will be relevant for the low-density

More information

1.6: Solutions 17. Solution to exercise 1.6 (p.13).

1.6: Solutions 17. Solution to exercise 1.6 (p.13). 1.6: Solutions 17 A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for ( N K) to the next order, we find: ( N N/2 ) 2 N 1 2πN/4. (1.40) This approximation

More information

A Turbo Code Tutorial

A Turbo Code Tutorial A Turbo Code Tutorial William E. Ryan ew Mexico State University ox 3000 Dept. 3-O, Las Cruces, M 88003 wryan@nmsu.edu Abstract We give a tutorial exposition of turbo codes and the associated algorithms.

More information

The Turbo Principle in Wireless Communications

The Turbo Principle in Wireless Communications The Turbo Principle in Wireless Communications Joachim Hagenauer Institute for Communications Engineering () Munich University of Technology (TUM) D-80290 München, Germany Nordic Radio Symposium, Oulu,

More information

Capacity-approaching codes

Capacity-approaching codes Chapter 13 Capacity-approaching codes We have previously discussed codes on graphs and the sum-product decoding algorithm in general terms. In this chapter we will give a brief overview of some particular

More information

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005 Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each

More information

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps 2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical

More information

An Introduction to Low Density Parity Check (LDPC) Codes

An Introduction to Low Density Parity Check (LDPC) Codes An Introduction to Low Density Parity Check (LDPC) Codes Jian Sun jian@csee.wvu.edu Wireless Communication Research Laboratory Lane Dept. of Comp. Sci. and Elec. Engr. West Virginia University June 3,

More information

FEEDBACK does not increase the capacity of a discrete

FEEDBACK does not increase the capacity of a discrete 1 Sequential Differential Optimization of Incremental Redundancy Transmission Lengths: An Example with Tail-Biting Convolutional Codes Nathan Wong, Kasra Vailinia, Haobo Wang, Sudarsan V. S. Ranganathan,

More information

Turbo Codes Can Be Asymptotically Information-Theoretically Secure

Turbo Codes Can Be Asymptotically Information-Theoretically Secure Turbo Codes Can Be Asymptotically Information-Theoretically Secure Xiao Ma Department of ECE, Sun Yat-sen University Guangzhou, GD 50275, China E-mail: maxiao@mailsysueducn Abstract This paper shows that

More information

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Communications II Lecture 9: Error Correction Coding Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Outline Introduction Linear block codes Decoding Hamming

More information

Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles

Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles Christian Koller,Jörg Kliewer, Kamil S. Zigangirov,DanielJ.Costello,Jr. ISIT 28, Toronto, Canada, July 6 -, 28 Department of Electrical

More information

Lecture 19 IIR Filters

Lecture 19 IIR Filters Lecture 19 IIR Filters Fundamentals of Digital Signal Processing Spring, 2012 Wei-Ta Chu 2012/5/10 1 General IIR Difference Equation IIR system: infinite-impulse response system The most general class

More information

Lecture 4: Linear Codes. Copyright G. Caire 88

Lecture 4: Linear Codes. Copyright G. Caire 88 Lecture 4: Linear Codes Copyright G. Caire 88 Linear codes over F q We let X = F q for some prime power q. Most important case: q =2(binary codes). Without loss of generality, we may represent the information

More information

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes S-723410 BCH and Reed-Solomon Codes 1 S-723410 BCH and Reed-Solomon Codes 3 Background The algebraic structure of linear codes and, in particular, cyclic linear codes, enables efficient encoding and decoding

More information

Modular numbers and Error Correcting Codes. Introduction. Modular Arithmetic.

Modular numbers and Error Correcting Codes. Introduction. Modular Arithmetic. Modular numbers and Error Correcting Codes Introduction Modular Arithmetic Finite fields n-space over a finite field Error correcting codes Exercises Introduction. Data transmission is not normally perfect;

More information

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION EE229B PROJECT REPORT STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION Zhengya Zhang SID: 16827455 zyzhang@eecs.berkeley.edu 1 MOTIVATION Permutation matrices refer to the square matrices with

More information

The Sorted-QR Chase Detector for Multiple-Input Multiple-Output Channels

The Sorted-QR Chase Detector for Multiple-Input Multiple-Output Channels The Sorted-QR Chase Detector for Multiple-Input Multiple-Output Channels Deric W. Waters and John R. Barry School of ECE Georgia Institute of Technology Atlanta, GA 30332-0250 USA {deric, barry}@ece.gatech.edu

More information

Polar Code Construction for List Decoding

Polar Code Construction for List Decoding 1 Polar Code Construction for List Decoding Peihong Yuan, Tobias Prinz, Georg Böcherer arxiv:1707.09753v1 [cs.it] 31 Jul 2017 Abstract A heuristic construction of polar codes for successive cancellation

More information

LDPC Codes. Intracom Telecom, Peania

LDPC Codes. Intracom Telecom, Peania LDPC Codes Alexios Balatsoukas-Stimming and Athanasios P. Liavas Technical University of Crete Dept. of Electronic and Computer Engineering Telecommunications Laboratory December 16, 2011 Intracom Telecom,

More information

Chapter 6 Reed-Solomon Codes. 6.1 Finite Field Algebra 6.2 Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding

Chapter 6 Reed-Solomon Codes. 6.1 Finite Field Algebra 6.2 Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding Chapter 6 Reed-Solomon Codes 6. Finite Field Algebra 6. Reed-Solomon Codes 6.3 Syndrome Based Decoding 6.4 Curve-Fitting Based Decoding 6. Finite Field Algebra Nonbinary codes: message and codeword symbols

More information

Frequency-domain representation of discrete-time signals

Frequency-domain representation of discrete-time signals 4 Frequency-domain representation of discrete-time signals So far we have been looing at signals as a function of time or an index in time. Just lie continuous-time signals, we can view a time signal as

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

2 Generating Functions

2 Generating Functions 2 Generating Functions In this part of the course, we re going to introduce algebraic methods for counting and proving combinatorial identities. This is often greatly advantageous over the method of finding

More information

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels

Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Sub-Gaussian Model Based LDPC Decoder for SαS Noise Channels Iulian Topor Acoustic Research Laboratory, Tropical Marine Science Institute, National University of Singapore, Singapore 119227. iulian@arl.nus.edu.sg

More information

PUNCTURED 8-PSK TURBO-TCM TRANSMISSIONS USING RECURSIVE SYSTEMATIC CONVOLUTIONAL GF ( 2 N ) ENCODERS

PUNCTURED 8-PSK TURBO-TCM TRANSMISSIONS USING RECURSIVE SYSTEMATIC CONVOLUTIONAL GF ( 2 N ) ENCODERS 19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011 PUCTURED 8-PSK TURBO-TCM TRASMISSIOS USIG RECURSIVE SYSTEMATIC COVOLUTIOAL GF ( 2 ) ECODERS Calin

More information

Accumulate-Repeat-Accumulate Codes: Capacity-Achieving Ensembles of Systematic Codes for the Erasure Channel with Bounded Complexity

Accumulate-Repeat-Accumulate Codes: Capacity-Achieving Ensembles of Systematic Codes for the Erasure Channel with Bounded Complexity Accumulate-Repeat-Accumulate Codes: Capacity-Achieving Ensembles of Systematic Codes for the Erasure Channel with Bounded Complexity Henry D. Pfister, Member, Igal Sason, Member Abstract This paper introduces

More information

The Concept of Soft Channel Encoding and its Applications in Wireless Relay Networks

The Concept of Soft Channel Encoding and its Applications in Wireless Relay Networks The Concept of Soft Channel Encoding and its Applications in Wireless Relay Networks Gerald Matz Institute of Telecommunications Vienna University of Technology institute of telecommunications Acknowledgements

More information

Hybrid Concatenated Codes with Asymptotically Good Distance Growth

Hybrid Concatenated Codes with Asymptotically Good Distance Growth Hybrid Concatenated Codes with Asymptotically Good Distance Growth Christian Koller, Alexandre Graell i Amat,Jörg Kliewer, Francesca Vatta, and Daniel J. Costello, Jr. Department of Electrical Engineering,

More information

QPP Interleaver Based Turbo-code For DVB-RCS Standard

QPP Interleaver Based Turbo-code For DVB-RCS Standard 212 4th International Conference on Computer Modeling and Simulation (ICCMS 212) IPCSIT vol.22 (212) (212) IACSIT Press, Singapore QPP Interleaver Based Turbo-code For DVB-RCS Standard Horia Balta, Radu

More information

New attacks on Keccak-224 and Keccak-256

New attacks on Keccak-224 and Keccak-256 New attacks on Keccak-224 and Keccak-256 Itai Dinur 1, Orr Dunkelman 1,2 and Adi Shamir 1 1 Computer Science department, The Weizmann Institute, Rehovot, Israel 2 Computer Science Department, University

More information

1 Basic Combinatorics

1 Basic Combinatorics 1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set

More information

Graph-based Codes and Iterative Decoding

Graph-based Codes and Iterative Decoding Graph-based Codes and Iterative Decoding Thesis by Aamod Khandekar In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology Pasadena, California

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008 Codes on Graphs Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 27th, 2008 Telecommunications Laboratory (TUC) Codes on Graphs November 27th, 2008 1 / 31

More information

On Weight Enumerators and MacWilliams Identity for Convolutional Codes

On Weight Enumerators and MacWilliams Identity for Convolutional Codes On Weight Enumerators and MacWilliams Identity for Convolutional Codes Irina E Bocharova 1, Florian Hug, Rolf Johannesson, and Boris D Kudryashov 1 1 Dept of Information Systems St Petersburg Univ of Information

More information

Chapter 3 Linear Block Codes

Chapter 3 Linear Block Codes Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and

More information

Making Error Correcting Codes Work for Flash Memory

Making Error Correcting Codes Work for Flash Memory Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging

More information

Symmetric Product Codes

Symmetric Product Codes Symmetric Product Codes Henry D. Pfister 1, Santosh Emmadi 2, and Krishna Narayanan 2 1 Department of Electrical and Computer Engineering Duke University 2 Department of Electrical and Computer Engineering

More information

1e

1e B. J. Frey and D. J. C. MacKay (1998) In Proceedings of the 35 th Allerton Conference on Communication, Control and Computing 1997, Champaign-Urbana, Illinois. Trellis-Constrained Codes Brendan J. Frey

More information