A Turbo Code Tutorial
William E. Ryan
New Mexico State University, Box 30001, Dept. 3-O, Las Cruces, NM
wryan@nmsu.edu

Abstract

We give a tutorial exposition of turbo codes and the associated algorithms. Included are a simple derivation for the performance of turbo codes and a straightforward presentation of the iterative decoding algorithm. The derivations of both the performance estimate and the modified BCJR decoding algorithm are novel. The treatment is intended to be a launching point for further study in the field and, significantly, to provide sufficient information for the design of computer simulations.

I. Introduction

Turbo codes, first presented to the coding community in 1993 [1], represent the most important breakthrough in coding since Ungerboeck introduced trellis codes in 1982 [2]. Whereas Ungerboeck's work eventually led to coded modulation schemes capable of operation near capacity on bandlimited channels [3], the original turbo codes offer near-capacity performance for deep-space and satellite channels. The invention of turbo codes involved reviving some dormant concepts and algorithms, and combining them with some clever new ideas. Because the principles surrounding turbo codes are both uncommon and novel, it has been difficult for the initiate to enter into the study of these codes. Complicating matters further is the fact that there now exist numerous papers on the topic, so that there is no clear place to begin study of these codes. In this paper, we hope to address this problem by providing in one paper an introduction to the study of turbo codes. We give a detailed description of the encoder and present a simple derivation of its performance in additive white Gaussian noise (AWGN). Particularly difficult for the novice has been the understanding and simulation of the iterative decoding algorithm, and so we give a thorough description of the algorithm here.
This paper borrows from some of the most prominent publications in the field [4]-[9], sometimes adding details that were omitted in those works. However, the general presentation and some of the derivations are novel. Our goal is a self-contained, simple introduction to turbo codes for those already knowledgeable in the fields of algebraic and trellis codes.

The paper is organized as follows. In the next section we present the structure of the encoder, which leads to an estimate of its performance. The subsequent section then describes the iterative algorithm used to decode these codes. The treatment in each of these sections is meant to be sufficiently detailed so that one may with reasonable ease design a computer simulation of the encoder and decoder.

II. The Encoder and Its Performance

Fig. 1 depicts a standard turbo encoder. As seen in the figure, a turbo encoder consists of two binary rate-1/2 convolutional encoders separated by an N-bit interleaver or permuter, together with an optional puncturing mechanism. Clearly, without the puncturer, the encoder is rate 1/3, mapping N data bits to 3N code bits. We observe that the encoders are configured in a manner reminiscent of classical concatenated codes. However, instead of cascading the encoders in the usual serial fashion, the encoders are arranged in a so-called parallel concatenation. Observe also that the constituent convolutional encoders are of the recursive systematic variety. Because any non-recursive (i.e., feedforward) non-catastrophic convolutional encoder is equivalent to a recursive systematic encoder, in that they possess the same set of code sequences, there was no compelling reason in the past for favoring recursive encoders. However, as will be argued below, recursive encoders are necessary to attain the exceptional performance provided by turbo codes. Without any essential loss of generality, we assume that the constituent codes are identical.
Before describing further details of the turbo encoder in its entirety, we shall first discuss its individual components.

A. The Recursive Systematic Encoders

Whereas the generator matrix for a rate-1/2 non-recursive convolutional code has the form

G_NR(D) = [g1(D)  g2(D)],

the equivalent recursive systematic encoder has the generator matrix

G_R(D) = [1  g2(D)/g1(D)].

Observe that the code sequence corresponding to the encoder input u(D) for the former code is u(D)G_NR(D) = [u(D)g1(D)  u(D)g2(D)], and that the identical code sequence is produced in the recursive code by the input sequence u'(D) = u(D)g1(D), since in this case the code sequence is u(D)g1(D)G_R(D) = u(D)G_NR(D). Here, we loosely call the pair of polynomials u(D)G_NR(D) a code sequence, although the actual code sequence is derived from this polynomial pair in the usual way. Observe that, for the recursive encoder, the code sequence will be of finite weight if and only if the input sequence is divisible by g1(D). We have the following immediate corollaries of this fact, which we shall use later.
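This divisibility condition is easy to experiment with numerically. The sketch below is not from the paper: the helper names are ours, and GF(2) polynomials are stored as Python integers with bit i holding the coefficient of D^i. It checks that the weight-two input 1 + D^15 is divisible by g1(D) = 1 + D + D^4 (hence a finite-weight recursive-encoder output), while a weight-one input never is.

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials stored as integer bitmasks."""
    result = 0
    while b:
        if b & 1:
            result ^= a   # add (XOR) a shifted copy for each set bit of b
        a <<= 1
        b >>= 1
    return result

def gf2_divmod(a, b):
    """Divide GF(2) polynomial a by b; return (quotient, remainder)."""
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift    # subtract (XOR) the aligned divisor
    return q, a

g1 = 0b10011                      # g1(D) = 1 + D + D^4, primitive of degree 4

# Weight-two input 1 + D^15 (note q = 2^4 - 1 = 15): divisible by g1(D).
quotient, remainder = gf2_divmod((1 << 15) | 1, g1)
assert remainder == 0
assert gf2_mul(quotient, g1) == (1 << 15) | 1

# Weight-one input D^7: never divisible, hence an infinite-weight output.
assert gf2_divmod(1 << 7, g1)[1] != 0
```

The nonzero remainder for the weight-one input and the exact division of 1 + D^15 are numerical instances of the two corollaries stated next.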
Corollary 1. A weight-one input will produce an infinite-weight output (for such an input is never divisible by a polynomial g1(D)).

Corollary 2. For any non-trivial g1(D), there exists a family of weight-two inputs of the form D^j (1 + D^q), j >= 0, which produce finite-weight outputs, i.e., which are divisible by g1(D). When g1(D) is a primitive polynomial of degree m, then q = 2^m - 1; more generally, q is the length of the pseudorandom sequence generated by g1(D).

In the context of the code's trellis, Corollary 1 says that a weight-one input will create a path that diverges from the all-zeros path, but never remerges. Corollary 2 says that there will always exist a trellis path that diverges and remerges later which corresponds to a weight-two data sequence.

Example 1. Consider the code with generator matrix

G_R(D) = [1  (1 + D^2 + D^3 + D^4)/(1 + D + D^4)].

Thus, g1(D) = 1 + D + D^4 and g2(D) = 1 + D^2 + D^3 + D^4 or, in octal form, (g1, g2) = (31, 27). Observe that g1(D) is primitive so that, for example, u(D) = 1 + D^15 produces the finite-length code sequence (1 + D^15, 1 + D + D^2 + D^3 + D^5 + D^7 + D^8 + D^11). Of course, any delayed version of this input, say, D^7(1 + D^15), will simply produce a delayed version of this code sequence. Fig. 2 gives one encoder realization for this code. We remark that, in addition to elaborating on Corollary 2, this example serves to demonstrate the conventions generally used in the literature for specifying such encoders.

B. The Permuter

As the name implies, the function of the permuter is to take each incoming block of N data bits and rearrange them in a pseudo-random fashion prior to encoding by the second encoder. Unlike the classical interleaver (e.g., a block or convolutional interleaver), which rearranges the bits in some systematic fashion, it is important that the permuter sort the bits in a manner that lacks any apparent order, although it might be tailored in a certain way for weight-two and weight-three inputs, as explained in Example 2 below.
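A pseudo-random permuter and its de-permuter can be generated once and stored as index arrays. The sketch below is ours (0-indexed; the decoder pseudo-code later in the paper calls these arrays P[.] and Pinv[.]); the fixed seed stands in for whatever agreed-upon permutation the encoder and decoder share.

```python
import random

def make_permuter(N, seed=1):
    """Return a pseudo-random permutation P and its inverse Pinv."""
    rng = random.Random(seed)      # fixed seed: both ends must use the same P
    P = list(range(N))
    rng.shuffle(P)
    Pinv = [0] * N
    for k in range(N):
        Pinv[P[k]] = k             # Pinv undoes P
    return P, Pinv

def permute(x, P):
    """Rearrange x so that output position k holds x[P[k]]."""
    return [x[P[k]] for k in range(len(P))]
```

Note that permute(permute(u, P), Pinv) recovers u, which is exactly what the decoder's de-permuter must do to extrinsic information.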
Also important is that N be selected quite large, and we shall assume N >= 1000 hereafter. The importance of these two requirements will be illuminated below. We point out also that one pseudo-random permuter will perform about as well as any other, provided N is large.

C. The Puncturer

While for deep-space applications low-rate codes are appropriate, in other situations, such as satellite communications, a rate of 1/2 or higher is preferred. The role of the turbo code puncturer is identical to that of its convolutional code counterpart: to periodically delete selected bits to reduce coding overhead. For the case of iterative decoding to be discussed below, it is preferable to delete only parity bits as indicated in Fig. 1, but there is no guarantee that this will maximize the minimum codeword distance. For example, to achieve a rate of 1/2, one might delete all even parity bits from the top encoder and all odd parity bits from the bottom one.

D. The Turbo Encoder and Its Performance

As will be elaborated upon in the next section, a maximum-likelihood (ML) sequence decoder would be far too complex for a turbo code due to the presence of the permuter. However, the suboptimum iterative decoding algorithm to be described there offers near-ML performance. Hence, we shall now estimate the performance of an ML decoder (analysis of the iterative decoder is much more difficult). Armed with the above descriptions of the components of the turbo encoder of Fig. 1, it is easy to conclude that it is linear since its components are linear. The constituent codes are certainly linear, and the permuter is linear since it may be modeled by a permutation matrix. Further, the puncturer does not affect linearity since all codewords share the same puncture locations. As usual, the importance of linearity is that, in considering the performance of a code, one may choose the all-zeros sequence as a reference. Thus, hereafter we shall assume that the all-zeros codeword was transmitted.
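The alternating parity-deletion rule of the puncturer described above amounts to a single indexing choice per data bit. The sketch below is ours and 0-indexed, so which encoder is sampled at "even" versus "odd" positions is a convention, not something fixed by the text.

```python
def puncture_to_rate_half(xs, xp1, xp2):
    """Rate 1/3 -> rate 1/2: keep every systematic bit and exactly one of the
    two parity bits per data bit, alternating between the two encoders."""
    out = []
    for k in range(len(xs)):
        out.append(xs[k])                          # systematic bits survive
        out.append(xp1[k] if k % 2 else xp2[k])    # alternate parity streams
    return out
```

The output carries two channel bits per data bit, i.e., rate 1/2; the decoder can later treat the deleted positions as zero-valued received samples.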
Now consider the all-zeros codeword (the 0th codeword) and the kth codeword, for some k in {1, 2, ..., 2^N - 1}. The ML decoder will choose the kth codeword over the 0th codeword with probability Q(sqrt(2 d_k r E_b/N_0)), where r is the code rate and d_k is the weight of the kth codeword. The bit error rate for this two-codeword situation would then be

P_b(k | 0) = w_k (bit errors/cw error) x (1/N) (cw/data bits) x Q(sqrt(2 r d_k E_b/N_0)) (cw errors/cw)
           = (w_k/N) Q(sqrt(2 r d_k E_b/N_0)) (bit errors/data bit),

where w_k is the weight of the kth data word. Now including all of the codewords and invoking the usual union bounding argument, we may write

P_b = P_b(choose any k in {1, 2, ..., 2^N - 1} | 0)
    <= sum_{k=1}^{2^N - 1} P_b(k | 0)
    = sum_{k=1}^{2^N - 1} (w_k/N) Q(sqrt(2 r d_k E_b/N_0)).

Note that every non-zero codeword is included in the above summation. Let us now reorganize the summation as

P_b <= sum_{w=1}^{N} sum_{v=1}^{(N choose w)} (w/N) Q(sqrt(2 r d_wv E_b/N_0)),   (1)
where the first sum is over the weight-w inputs, the second sum is over the (N choose w) different weight-w inputs, and d_wv is the weight of the codeword produced by the vth weight-w input. Consider now the first few terms in the outer summation of (1).

w = 1: From Corollary 1 and the associated discussion above, weight-one inputs will produce only large-weight codewords at both constituent encoder outputs, since the trellis paths created never remerge with the all-zeros path. Thus, each d_1v is significantly greater than the minimum codeword weight, so that the w = 1 terms in (1) will be negligible.

w = 2: Of the (N choose 2) weight-two encoder inputs, only a fraction will be divisible by g1(D) (i.e., yield remergent paths) and, of these, only certain ones will yield the smallest weight, d^CC_2,min, at a constituent encoder output (here, CC denotes "constituent code"). Further, with the permuter present, if an input u(D) of weight two yields a weight-d^CC_2,min codeword at the first encoder's output, it is unlikely that the permuted input, u'(D), seen by the second encoder will also correspond to a weight-d^CC_2,min codeword (much less, be divisible by g1(D)). We can be sure, however, that there will be some minimum-weight turbo codeword produced by a w = 2 input, and that this minimum weight can be bounded as

d^TC_2,min >= 2 d^CC_2,min - 2,

with equality when both of the constituent encoders produce weight-d^CC_2,min codewords (the minus 2 arises because the bottom encoder does not transmit its systematic bits). The exact value of d^TC_2,min is permuter dependent. We will denote the number of weight-two inputs which produce weight-d^TC_2,min turbo codewords by n_2, so that, for w = 2, the inner sum in (1) may be approximated as

sum_{v=1}^{(N choose 2)} (2/N) Q(sqrt(2 r d_2v E_b/N_0)) ~ (2 n_2/N) Q(sqrt(2 r d^TC_2,min E_b/N_0)).   (2)

w = 3: Following an argument similar to the w = 2 case, we can approximate the inner sum in (1) for w = 3 as

sum_{v=1}^{(N choose 3)} (3/N) Q(sqrt(2 r d_3v E_b/N_0)) ~ (3 n_3/N) Q(sqrt(2 r d^TC_3,min E_b/N_0)),   (3)

where n_3 and d^TC_3,min are defined analogously.
While n_3 is clearly dependent on the interleaver, we can make some comments on its size relative to n_2 for a "randomly generated" interleaver. Although there are (N - 2)/3 times as many w = 3 terms in the inner summation of (1) as there are w = 2 terms, we can expect the number of weight-three terms divisible by g1(D) to be of the order of the number of weight-two terms divisible by g1(D). Thus, most of the (N choose 3) terms in (1) can be removed from consideration for this reason. Moreover, given a weight-three encoder input u(D) divisible by g1(D) (e.g., g1(D) itself in the above example), it becomes very unlikely that the permuted input u'(D) seen by the second encoder will also be divisible by g1(D). For example, suppose u(D) = g1(D) = 1 + D + D^4. Then the permuter output will be a multiple of g1(D) if the three input 1's become the jth, (j+1)th, and (j+4)th bits out of the permuter, for some j. If we imagine that the permuter acts in a purely random fashion, so that the probability that one of the 1's lands in a given position is 1/N, then the permuter output will be D^j g1(D) = D^j (1 + D + D^4) with probability 3!/N^3 (see footnote 1). For comparison, for w = 2 inputs, a given permuter output pattern occurs with probability 2!/N^2. Thus, we would expect the number of weight-three inputs, n_3, resulting in remergent paths in both encoders to be much less than n_2, that is, n_3 << n_2, with the result being that the inner sum in (1) for w = 3 is negligible relative to that for w = 2 (see footnote 2).

w = 4: Again we can approximate the inner sum in (1) for w = 4 in the same manner as in (2) and (3). Still, we would like to make some comments on its size for the "random" interleaver. A weight-four input might appear to the first encoder as a weight-three input concatenated some time later with a weight-one input, leading to a non-remergent path in the trellis and, hence, a negligible term in the inner sum in (1).
It might also appear as a concatenation of two weight-two inputs, in which case the turbo codeword weight is at least 2 d^TC_2,min, again leading to a negligible term in (1). Finally, if it happens to be some other pattern divisible by g1(D) at the first encoder, then with probability on the order of 1/N^3 it will be simultaneously divisible by g1(D) at the second encoder (see footnote 3). Thus, we may expect n_4 << n_2, so that the w = 4 terms are negligible in (1). The cases for w > 4 are argued similarly.

To summarize, the bound in (1) can be approximated as

P_b ~ max_w (w n_w / N) Q(sqrt(2 r d^TC_w,min E_b/N_0)),   (4)

where n_w and d^TC_w,min are functions of the particular interleaver employed. From our discussion above, we might expect that the w = 2 term dominates for a randomly generated interleaver, although it is easy to find interleavers with n_2 = 0, as seen in the example to follow.

Footnote 1: This is not the only weight-three pattern divisible by g1(D): g1(D)^2 = 1 + D^2 + D^8 is another one, but this too has probability 3!/N^3 of occurring.

Footnote 2: Because our argument assumes a "purely random" permuter, the inequality n_3 << n_2 has to be interpreted probabilistically. Thus, it is more accurate to write E{n_3} << E{n_2}, where the expectation is over all interleavers. Alternatively, for the average interleaver, we would expect n_3 << n_2; thus, if n_2 = 5, say, we would expect n_3 = 0.

Footnote 3: The value of 1/N^3 derives from the fact that ideally a particular divisible output pattern occurs with probability 4!/N^4, but there will be approximately N shifted versions of that pattern, each divisible by g1(D).
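The dominant-term estimate (4) is straightforward to evaluate. The sketch below (function names ours) uses Q(x) = erfc(x/sqrt(2))/2 and makes the interleaver gain visible: doubling N exactly halves the estimate.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pb_estimate(w, n_w, d_min, N, r, ebno_db):
    """Estimate (4): P_b ~ (w * n_w / N) * Q(sqrt(2 r d Eb/N0)), with the
    dominant input weight w, multiplicity n_w, and codeword weight d supplied."""
    ebno = 10.0 ** (ebno_db / 10.0)              # convert dB to linear
    return (w * n_w / N) * q_func(math.sqrt(2.0 * r * d_min * ebno))
```

For hypothetical values w = 2, n_2 = 1, d = 12, r = 1/2, the estimate at N = 2000 is exactly half of the estimate at N = 1000: this factor-of-N reduction is the interleaver gain discussed next.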
In any case, we observe that P_b decreases with N, so that the error rate can be reduced simply by increasing the interleaver length. This effect is called interleaver gain (or permuter gain) and demonstrates the necessity of large permuters. Finally, we note that recursive encoders are crucial elements of a turbo code since, for non-recursive encoders, division by g1(D) (non-remergent trellis paths) would not be an issue, and (4) would not hold (although (1) still would).

Example 2. We consider the performance of a rate-1/2, (31, 33) turbo code for two different interleavers of size N = 1000. We start first with an interleaver that was randomly generated. We found for this particular interleaver n_2 = 0 and n_3 = 1, with d^TC_3,min = 9, so that the w = 3 term dominates in (4). Interestingly, the interleaver input corresponding to this dominant error event was D^68(1 + D^5 + D^10), which produces the interleaver output D^88(1 + D^5 + D^848), where of course both polynomials are divisible by g1(D) = 1 + D + D^4. Fig. 3 gives the simulated performance of this code for 5 iterations of the iterative decoding algorithm detailed in the next section. Also included in Fig. 3 is the estimate of (4) for the same interleaver, which is observed to be very close to the simulated values. The interleaver was then modified by hand to improve the weight spectrum of the code. It was a simple matter to attain n_2 = 1 with d^TC_2,min = 12 and n_3 = 4 with d^TC_3,min = 15 for this second interleaver, so that the w = 2 term now dominates in (4). The simulated and estimated performance curves for this second interleaver are also included in Fig. 3. In addition to illustrating the use of the estimate (4), this example helps explain the unusual shape of the error rate curve: it may be interpreted as the usual Q-function shape for a signaling scheme with a modest d_min, "pushed down" by the interleaver gain w* n_w* / N, where w* is the maximizing value of w in (4).

III.
The Decoder

Consider first an ML decoder for a rate-1/2 convolutional code (recursive or not), and assume a data word of length N, say 1000. Ignoring the structure of the code, a naive ML decoder would have to compare (correlate) 2^N code sequences to the noisy received sequence, choosing in favor of the codeword with the best correlation metric. Clearly, the complexity of such an algorithm is exorbitant. Fortunately, as we know, such a brute-force approach is simplified greatly by Viterbi's algorithm, which permits a systematic elimination of candidate code sequences (in the first step, 2^(N-1) are eliminated, then another 2^(N-2) are eliminated on the second step, and so on). Unfortunately, we have no such luck with turbo codes, for the presence of the permuter immensely complicates the structure of a turbo code trellis, making these codes look more like block codes.

Just prior to the discovery of turbo codes, there was much interest in the coding community in suboptimal decoding strategies for concatenated codes, involving multiple (usually two) decoders operating cooperatively and iteratively. Most of the focus was on a type of Viterbi decoder which provides soft-output (or reliability) information to a companion soft-output Viterbi decoder for use in a subsequent decoding [10]. Also receiving some attention was the symbol-by-symbol maximum a posteriori (MAP) algorithm of Bahl et al. [11], published over 20 years ago. It was this latter algorithm, often called the BCJR algorithm, that Berrou et al. [1] utilized in the iterative decoding of turbo codes. We will discuss in this section the BCJR algorithm employed by each constituent decoder, but we refer the reader to [11] for the derivations of some of the results. We first discuss a modified version of the BCJR algorithm for performing symbol-by-symbol MAP decoding. We then show how this algorithm is incorporated into an iterative decoder employing two BCJR-MAP decoders.
We shall require the following definitions:
- E1 and E2 denote encoders 1 and 2
- D1 and D2 denote decoders 1 and 2
- m is the constituent encoder memory
- S is the set of all 2^m constituent encoder states
- x^s = (x^s_1, x^s_2, ..., x^s_N) = (u_1, u_2, ..., u_N) is the encoder input word
- x^p = (x^p_1, x^p_2, ..., x^p_N) is the parity word generated by a constituent encoder
- y_k = (y^s_k, y^p_k) is a noisy (AWGN) version of (x^s_k, x^p_k)
- y_a^b = (y_a, y_{a+1}, ..., y_b)
- y = y_1^N = (y_1, y_2, ..., y_N) is the noisy received codeword

A. The Modified BCJR Algorithm

In the symbol-by-symbol MAP decoder, the decoder decides u_k = +1 if P(u_k = +1 | y) > P(u_k = -1 | y), and it decides u_k = -1 otherwise. More succinctly, the decision u^_k is given by u^_k = sign[L(u_k)], where L(u_k) is the log a posteriori probability (LAPP) ratio defined as

L(u_k) = log ( P(u_k = +1 | y) / P(u_k = -1 | y) ).

Incorporating the code's trellis, this may be written as

L(u_k) = log ( sum_{S+} p(s_{k-1} = s', s_k = s, y)/p(y) / sum_{S-} p(s_{k-1} = s', s_k = s, y)/p(y) ),   (5)
where s_k in S is the state of the encoder at time k, S+ is the set of ordered pairs (s', s) corresponding to all state transitions (s_{k-1} = s') -> (s_k = s) caused by the data input u_k = +1, and S- is similarly defined for u_k = -1. Observe that we may cancel p(y) in (5), which means we require only an algorithm for computing p(s', s, y) = p(s_{k-1} = s', s_k = s, y). The BCJR algorithm [11] for doing this is

p(s', s, y) = alpha_{k-1}(s') gamma_k(s', s) beta_k(s),   (6)

where alpha_k(s) = p(s_k = s, y_1^k) is computed recursively as

alpha_k(s) = sum_{s' in S} alpha_{k-1}(s') gamma_k(s', s),   (7)

with initial conditions

alpha_0(0) = 1 and alpha_0(s != 0) = 0.   (8)

(These conditions state that the encoder is expected to start in state 0.) The probability gamma_k(s', s) in (7) is defined as

gamma_k(s', s) = p(s_k = s, y_k | s_{k-1} = s'),   (9)

and will be discussed further below. The probabilities beta_k(s) = p(y_{k+1}^N | s_k = s) in (6) are computed in a "backward" recursion as

beta_{k-1}(s') = sum_{s in S} beta_k(s) gamma_k(s', s),   (10)

with boundary conditions

beta_N(0) = 1 and beta_N(s != 0) = 0.   (11)

(The encoder is expected to end in state 0 after N input bits, implying that the last m input bits, called termination bits, are so selected.)

Unfortunately, cancelling the divisor p(y) in (5) leads to a numerically unstable algorithm. We can include division by p(y) in the BCJR algorithm (see footnote 4) by defining the modified probabilities

alpha~_k(s) = alpha_k(s)/p(y_1^k) and beta~_k(s) = beta_k(s)/p(y_{k+1}^N | y_1^k).

Dividing both sides of (6) by p(y_1^{k-1}) p(y_{k+1}^N | y_1^k), we obtain

p(s', s | y) p(y_k | y_1^{k-1}) = alpha~_{k-1}(s') gamma_k(s', s) beta~_k(s).   (12)

Note that since p(y_1^k) = sum_{s in S} alpha_k(s), the values alpha~_k(s) may be computed from {alpha_k(s)} via

alpha~_k(s) = alpha_k(s) / sum_{s in S} alpha_k(s).   (13)

Footnote 4: Unfortunately, dividing simply by p(y) to obtain p(s', s | y) also leads to an unstable algorithm. Obtaining p(s', s | y) p(y_k | y_1^{k-1}) instead of the APP p(s', s | y) presents no problem, since a LAPP ratio is computed, so that the unwanted factor p(y_k | y_1^{k-1}) cancels; see equation (16) below.
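In software, the normalization of (13) is simply applied after every step of the forward recursion (7), and the backward recursion (10) can be rescaled per step in the same way: any positive per-step scaling of the beta's multiplies the numerator and denominator of the LAPP ratio equally and so cancels. The sketch below is ours, with gamma[k] assumed to be a dict mapping each allowed transition (s', s) to the branch probability for trellis step k+1.

```python
def forward_pass(gamma, num_states):
    """Forward recursion (7) with the normalization of (13):
    each returned vector sums to 1 (the tilde-alpha of the text)."""
    alpha = [[1.0 if s == 0 else 0.0 for s in range(num_states)]]  # condition (8)
    for g in gamma:
        nxt = [0.0] * num_states
        for (sp, s), val in g.items():
            nxt[s] += alpha[-1][sp] * val
        total = sum(nxt)
        alpha.append([a / total for a in nxt])
    return alpha

def backward_pass(gamma, num_states):
    """Backward recursion (10), rescaled by each vector's own sum; the
    scaling constant cancels when the LAPP ratio is formed."""
    beta = [[1.0 if s == 0 else 0.0 for s in range(num_states)]]   # condition (11)
    for g in reversed(gamma):
        prev = [0.0] * num_states
        for (sp, s), val in g.items():
            prev[sp] += beta[0][s] * val
        total = sum(prev)
        beta.insert(0, [b / total for b in prev])
    return beta
```

With every stored vector summing to one, the underflow that plagues the raw recursions (7) and (10) for large N cannot occur.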
But since we would like to avoid storing both {alpha_k(s)} and {alpha~_k(s)}, we can use (7) in (13) to obtain a recursion involving only {alpha~_k(s)}:

alpha~_k(s) = sum_{s'} alpha_{k-1}(s') gamma_k(s', s) / sum_s sum_{s'} alpha_{k-1}(s') gamma_k(s', s)
            = sum_{s'} alpha~_{k-1}(s') gamma_k(s', s) / sum_s sum_{s'} alpha~_{k-1}(s') gamma_k(s', s),   (14)

where the second equality follows by dividing the numerator and the denominator by p(y_1^{k-1}). The recursion for beta~_k(s) can be obtained by noticing that

p(y_k | y_1^{k-1}) = p(y_1^k)/p(y_1^{k-1}) = sum_s sum_{s'} alpha_{k-1}(s') gamma_k(s', s) / p(y_1^{k-1}) = sum_s sum_{s'} alpha~_{k-1}(s') gamma_k(s', s),

so that dividing (10) by this equation yields

beta~_{k-1}(s') = sum_s beta~_k(s) gamma_k(s', s) / sum_s sum_{s'} alpha~_{k-1}(s') gamma_k(s', s).   (15)

In summary, the modified BCJR-MAP algorithm computes the LAPP ratio L(u_k) by combining (5) and (12) to obtain

L(u_k) = log ( sum_{S+} alpha~_{k-1}(s') gamma_k(s', s) beta~_k(s) / sum_{S-} alpha~_{k-1}(s') gamma_k(s', s) beta~_k(s) ),   (16)

where the alpha~'s and beta~'s are computed recursively via (14) and (15), respectively. Clearly, the {alpha~_k(s)} and {beta~_k(s)} share the same boundary conditions as their counterparts, as given in (8) and (11). Computation of the probabilities gamma_k(s', s) will be discussed shortly. On the topic of stability, we should point out also that the algorithm given here works in software, but a hardware implementation would employ the "log-MAP" algorithm [12], [14]. In fact, most software implementations these days use the log-MAP, although they can be slower than the algorithm presented here if not done carefully. The algorithm presented here is close to the earlier turbo decoding algorithms [1], [13], [4].

B. Iterative MAP Decoding

From Bayes' rule, the LAPP ratio for an arbitrary MAP decoder can be written as

L(u_k) = log ( p(y | u_k = +1) / p(y | u_k = -1) ) + log ( P(u_k = +1) / P(u_k = -1) ),

with the second term representing a priori information. Since P(u_k = +1) = P(u_k = -1) typically, the a priori term is usually zero for conventional decoders. However, for iterative decoders, D1 receives extrinsic or soft
information for each u_k from D2, which serves as a priori information. Similarly, D2 receives extrinsic information from D1, and the decoding iteration proceeds as D1 -> D2 -> D1 -> D2 -> ..., with the previous decoder passing soft information along to the next decoder at each half-iteration except for the first. The idea behind extrinsic information is that D2 provides soft information to D1 for each u_k, using only information not available to D1 (i.e., E2 parity); D1 does likewise for D2. An iterative decoder using component BCJR-MAP decoders is shown in Fig. 4. Observe how permuters and de-permuters are involved in arranging systematic, parity, and extrinsic information in the proper sequence for each decoder.

We now show how extrinsic information is extracted from the modified-BCJR version of the LAPP ratio embodied in (16). We first observe that gamma_k(s', s) may be written as (cf. equation (9))

gamma_k(s', s) = P(s | s') p(y_k | s', s) = P(u_k) p(y_k | u_k),

where the event u_k corresponds to the event s' -> s. Defining

L^e(u_k) = log ( P(u_k = +1) / P(u_k = -1) ),

observe that we may write

P(u_k) = ( exp[-L^e(u_k)/2] / (1 + exp[-L^e(u_k)]) ) exp[u_k L^e(u_k)/2] = A_k exp[u_k L^e(u_k)/2],   (17)

where the first equality follows since the middle expression equals

( sqrt(P-/P+) / (1 + P-/P+) ) sqrt(P+/P-) = P+ when u_k = +1

and

( sqrt(P-/P+) / (1 + P-/P+) ) sqrt(P-/P+) = P- when u_k = -1,

where we have defined P+ = P(u_k = +1) and P- = P(u_k = -1) for convenience. As for p(y_k | u_k), we may write (recall y_k = (y^s_k, y^p_k) and x_k = (x^s_k, x^p_k) = (u_k, x^p_k))

p(y_k | u_k) proportional to exp[ -(y^s_k - u_k)^2/(2 sigma^2) - (y^p_k - x^p_k)^2/(2 sigma^2) ]
           = exp[ -((y^s_k)^2 + u_k^2 + (y^p_k)^2 + (x^p_k)^2)/(2 sigma^2) ] exp[ (u_k y^s_k + x^p_k y^p_k)/sigma^2 ]
           proportional to exp[ (u_k y^s_k + x^p_k y^p_k)/sigma^2 ],

since u_k^2 = (x^p_k)^2 = 1, so that

gamma_k(s', s) proportional to A_k exp[u_k L^e(u_k)/2] exp[ (u_k y^s_k + x^p_k y^p_k)/sigma^2 ].   (18)

Now since gamma_k(s', s) appears in the numerator (where u_k = +1) and the denominator (where u_k = -1) of (16), the factor A_k will cancel, as it is independent of u_k. Also, we assume transmission of the symbols +1 and -1 over an AWGN channel with E_c/N_0 = 1/(2 sigma^2), so that sigma^2 = N_0/2E_c, where E_c = rE_b is the energy per channel bit.
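Equation (18) translates into a one-line branch metric. The sketch below (function name ours) drops the factor A_k, which is independent of u_k and cancels in the LAPP ratio.

```python
import math

def gamma_branch(u, x_p, y_s, y_p, Le, sigma2):
    """Branch metric of (18) without the factor A_k.
    u and x_p are the +/-1 labels of the transition s' -> s; Le is the
    a priori log-likelihood ratio for u_k; sigma2 is the noise variance."""
    return math.exp(0.5 * u * Le) * math.exp((u * y_s + x_p * y_p) / sigma2)
```

For two branches that differ only in u, the metric ratio is exp(Le + 2 y_s/sigma^2) = exp(Le + L_c y_s), which previews how the channel value and the a priori term factor out of the LAPP ratio.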
From (18), we then have

gamma_k(s', s) ~ exp[ (1/2) u_k (L^e(u_k) + L_c y^s_k) + (1/2) L_c y^p_k x^p_k ]
              = exp[ (1/2) u_k (L^e(u_k) + L_c y^s_k) ] gamma^e_k(s', s),   (19)

where L_c = 4E_c/N_0 and gamma^e_k(s', s) = exp[ (1/2) L_c y^p_k x^p_k ]. Combining (19) with (16), we obtain

L(u_k) = log ( sum_{S+} C_k(+1) alpha~_{k-1}(s') gamma^e_k(s', s) beta~_k(s) / sum_{S-} C_k(-1) alpha~_{k-1}(s') gamma^e_k(s', s) beta~_k(s) )
       = L_c y^s_k + L^e(u_k) + log ( sum_{S+} alpha~_{k-1}(s') gamma^e_k(s', s) beta~_k(s) / sum_{S-} alpha~_{k-1}(s') gamma^e_k(s', s) beta~_k(s) ),   (20)

where C_k(u_k) = exp[ (1/2) u_k (L^e(u_k) + L_c y^s_k) ]. The second equality follows since C_k(+1) and C_k(-1) can be factored out of the summations in the numerator and denominator, respectively. The first term in (20) is sometimes called the channel value, the second term represents any a priori information about u_k provided by a previous decoder, and the third term represents extrinsic information that can be passed on to a subsequent decoder. Thus, for example, on any given iteration, D1 computes

L_1(u_k) = L_c y^s_k + L^e_21(u_k) + L^e_12(u_k),

where L^e_21(u_k) is extrinsic information passed from D2 to D1, and L^e_12(u_k) is the third term in (20), which is to be used as extrinsic information from D1 to D2.

C. Pseudo-Code for the Iterative Decoder

We do not give pseudo-code for the encoder here, since it is much more straightforward. However, it must be emphasized that at least E1 must be terminated correctly to avoid serious degradation. That is, the last m bits of the N-bit information word to be encoded must force E1 to the zero state by the Nth bit. The pseudo-code given below for iterative decoding of a turbo code follows directly from the development above. Implicit is the fact that each decoder must have full knowledge of the trellis of the constituent encoders. For example, each decoder must have a table (array) containing the input bits and parity bits for all possible state transitions s' -> s. Also required are permutation and de-permutation
functions (arrays), since D1 and D2 will be sharing reliability information about each u_k, but D2's information is permuted relative to D1's. We denote these arrays by P[.] and Pinv[.], respectively. For example, the permuted word u' is obtained from the original word u via the pseudo-code statement: for k = 1, ..., N: u'_k = u_P[k]. We next point out that, due to the presence of L_c in L(u_k), knowledge of the noise variance N_0/2 by each MAP decoder is necessary. Finally, we mention that a simple way to simulate puncturing is, in the computation of gamma_k(s', s), to set to zero the received parity samples, y^1p_k or y^2p_k, corresponding to the punctured parity bits, x^1p_k or x^2p_k. Thus, puncturing need not be performed at the encoder.

===== Initialization =====
D1:
- alpha~(1)_0(s) = 1 for s = 0, and = 0 for s != 0
- beta~(1)_N(s) = 1 for s = 0, and = 0 for s != 0
- L^e_21(u_k) = 0 for k = 1, 2, ..., N
D2:
- alpha~(2)_0(s) = 1 for s = 0, and = 0 for s != 0
- beta~(2)_N(s) = alpha~(2)_N(s) for all s (set after the computation of {alpha~(2)_N(s)} in the first iteration; see footnote 5)
- L^e_12(u_k) is to be determined from D1 after the first half-iteration and so need not be initialized

Footnote 5: Note that encoder 2 cannot be simply terminated due to the presence of the interleaver. The strategy implied here leads to a negligible loss in performance. Other termination strategies can be found in the literature (e.g., [15]).

===== The nth iteration =====
D1:
for k = 1, ..., N:
- get y_k = (y^s_k, y^1p_k), where y^1p_k is a noisy version of the E1 parity
- compute gamma_k(s', s) from (19) for all allowable state transitions s' -> s (u_k in (19) is set to the value of the encoder input which caused the transition s' -> s; L^e(u_k) is in this case L^e_21(u_Pinv[k]), the de-permuted extrinsic information from the previous D2 half-iteration)
- compute alpha~(1)_k(s) for all s using (14)
for k = N, ..., 2:
- compute beta~(1)_{k-1}(s) for all s using (15)
for k = 1, ..., N:
- compute L^e_12(u_k) using
  L^e_12(u_k) = log ( sum_{S+} alpha~(1)_{k-1}(s') gamma^e_k(s', s) beta~(1)_k(s) / sum_{S-} alpha~(1)_{k-1}(s') gamma^e_k(s', s) beta~(1)_k(s) )
D2:
for k = 1, ..., N:
- get y_k = (y^s_P[k], y^2p_k)
- compute gamma_k(s', s) from (19) for all allowable state transitions s' ->
s (u_k in (19) is set to the value of the encoder input which caused the transition s' -> s; L^e(u_k) is in this case L^e_12(u_P[k]), the permuted extrinsic information from the previous D1 half-iteration; the systematic value used is the permuted one, y^s_P[k])
- compute alpha~(2)_k(s) for all s using (14)
for k = N, ..., 2:
- compute beta~(2)_{k-1}(s) for all s using (15)
for k = 1, ..., N:
- compute L^e_21(u_k) using
  L^e_21(u_k) = log ( sum_{S+} alpha~(2)_{k-1}(s') gamma^e_k(s', s) beta~(2)_k(s) / sum_{S-} alpha~(2)_{k-1}(s') gamma^e_k(s', s) beta~(2)_k(s) )

===== After the last iteration =====
for k = 1, ..., N:
- compute L_1(u_k) = L_c y^s_k + L^e_21(u_Pinv[k]) + L^e_12(u_k)
- if L_1(u_k) > 0, decide u_k = +1;
  else decide u_k = -1
==========================

Acknowledgment

The author would like to thank Omer Acikel of New Mexico State University for help with Example 2, and Esko Nieminen of Nokia and Prof. Steve Wilson of the University of Virginia for helpful suggestions.

References

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo codes," Proc. 1993 Int. Conf. on Communications.
[2] G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inf. Theory, Jan. 1982.
[3] M. Eyuboglu, G. D. Forney, P. Dong, and G. Long, "Advanced modulation techniques for V.Fast," Eur. Trans. on Telecomm., May 1993.
[4] P. Robertson, "Illuminating the structure of code and decoder of parallel concatenated recursive systematic (turbo) codes," Proc. GlobeCom 1994.
[5] S. Benedetto and G. Montorsi, "Unveiling turbo codes: Some results on parallel concatenated coding schemes," IEEE Trans. Inf. Theory, Mar. 1996.
[6] S. Benedetto and G. Montorsi, "Design of parallel concatenated codes," IEEE Trans. Comm., May 1996.
[7] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inf. Theory, Mar. 1996.
[8] D. Arnold and G. Meyerhans, "The realization of the turbo-coding system," Semester Project Report, Swiss Fed. Inst. of Tech., Zurich, Switzerland, July 1995.
[9] L. Perez, J. Seghers, and D. Costello, "A distance spectrum interpretation of turbo codes," IEEE Trans. Inf. Theory, Nov. 1996.
[10] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," Proc. GlobeCom 1989.
[11] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, Mar. 1974.
[12] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and suboptimal MAP decoding algorithms operating in the log domain," Proc. 1995 Int. Conf. on Communications.
[13] C. Berrou and A.
Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Trans. Comm., Oct. 1996.
[14] A. Viterbi, "An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes," IEEE JSAC, Feb. 1998.
[15] D. Divsalar and F. Pollara, "Turbo codes for PCS applications," Proc. 1995 Int. Conf. on Communications.

Fig. 1. Diagram of a standard turbo encoder with two identical recursive systematic encoders (RSCs).
Fig. 2. Recursive systematic encoder for the code generators (g1, g2) = (31, 27).
Fig. 3. Simulated performance of the rate-1/2 (31, 33) turbo code (even parity punctured) for two different interleavers (N = 1000), together with the asymptotic performance of each predicted by (4).
Fig. 4. Diagram of the iterative (turbo) decoder, which uses two MAP decoders operating cooperatively. L^e_12 is "soft" or extrinsic information from D1 to D2, and L^e_21 is defined similarly. The final decisions may come from either D1 or D2.
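The shift-register realization of Fig. 2 can be checked directly in software. The sketch below is ours, with the taps hard-wired for (g1, g2) = (31, 27), i.e., g1(D) = 1 + D + D^4 as feedback and g2(D) = 1 + D^2 + D^3 + D^4 as feedforward; it confirms that the weight-two input 1 + D^15 of Example 1 drives the encoder back to the all-zero state, while a weight-one input never does.

```python
def rsc_encode(u_bits):
    """Recursive systematic encoder with feedback g1 = 1 + D + D^4 and
    feedforward g2 = 1 + D^2 + D^3 + D^4. Returns (parity_bits, final_state)."""
    s = [0, 0, 0, 0]                    # s[i] holds the register value a_{k-1-i}
    parity = []
    for u in u_bits:
        a = u ^ s[0] ^ s[3]             # feedback taps of g1: D and D^4
        p = a ^ s[1] ^ s[2] ^ s[3]      # feedforward taps of g2: 1, D^2, D^3, D^4
        parity.append(p)
        s = [a] + s[:3]                 # shift the register
    return parity, s

# Weight-two input 1 + D^15: the register returns to the all-zero state,
# so the parity (and hence the codeword) has finite weight.
parity, state = rsc_encode([1] + [0] * 14 + [1])
assert state == [0, 0, 0, 0]

# Weight-one input: the register never returns to the all-zero state.
assert rsc_encode([1] + [0] * 49)[1] != [0, 0, 0, 0]
```

This is the trellis-path picture of Corollaries 1 and 2: the weight-one input diverges and never remerges, while the weight-two input D^j(1 + D^15) remerges after 15 steps.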