Turbo Codes for Deep-Space Communications


TDA Progress Report 42-120, February 15, 1995

D. Divsalar and F. Pollara
Communications Systems Research Section

Turbo codes were recently proposed by Berrou, Glavieux, and Thitimajshima [2], and it has been claimed that these codes achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. A required E_b/N_0 of 0.7 dB was reported for a bit error rate of 10^-5, using a rate 1/2 turbo code [2]. However, some important details that are necessary to reproduce these results were omitted. This article confirms the accuracy of these claims, and presents a complete description of an encoder/decoder pair that could be suitable for deep-space applications, where lower rate codes can be used. We describe a new, simple method for trellis termination, analyze the effect of interleaver choice on the weight distribution of the code, and introduce the use of unequal-rate component codes, which yields better performance.

I. Introduction

Turbo codes were recently proposed by Berrou, Glavieux, and Thitimajshima [2] as a remarkable step forward in high-gain, low-complexity coding. It has been claimed that these codes achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. A required E_b/N_0 of 0.7 dB was reported for a bit error rate (BER) of 10^-5, using a rate 1/2 turbo code [2]. However, some important details that are necessary to reproduce these results were omitted. The purpose of this article is to shed some light on the accuracy of these claims and to present a complete description of an encoder/decoder pair that could be suitable for deep-space applications, where lower rate codes can be used. Two new contributions are reported in this article: a new, simple method for trellis termination and the use of unequal component codes, which results in better performance.

II. Parallel Concatenation of Convolutional Codes

The codes considered in this article consist of the parallel concatenation of two convolutional codes with a random interleaver between the encoders. Figure 1 illustrates a particular example that will be used in this article to verify the performance of these codes. The encoder contains two recursive binary convolutional encoders, with M_1 and M_2 memory cells, respectively. In general, the two component encoders may not be identical. The first component encoder operates directly on the information bit sequence u = (u_1, ..., u_N) of length N, producing the two output sequences x_1i and x_1p. The second component encoder operates on a reordered sequence of information bits, u', produced by an interleaver of length N, and outputs the two sequences x_2i and x_2p. The interleaver is a pseudorandom block scrambler defined by a permutation of N elements with no repetitions: a complete block is read into the interleaver and read out in a specified permuted order. Figure 1 shows an example where a rate r = 1/n = 1/4 code is generated by two component codes with M_1 = M_2 = M = 4, producing the outputs x_1i = u, x_1p = u · g_a/g_b, x_2i = u', and x_2p = u' · g_a/g_b, where the generator polynomials g_a and g_b have octal representations 21 and 37, respectively. Note that various code rates can be obtained by puncturing the outputs.

[Fig. 1. Example of an encoder: two recursive systematic component encoders with four delay cells each, the second fed through the interleaver.]

A. Trellis Termination

We use the encoder in Fig. 1 to generate an (n(N + M), N) block code. Since the component encoders are recursive, it is not sufficient to set the last M information bits to zero in order to drive the encoder to the all-zero state, i.e., to terminate the trellis. The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate both component encoders with the same M bits. Fortunately, the simple stratagem illustrated in Fig. 2 is sufficient to terminate the trellis. Here the switch is in position A for the first N clock cycles and is in position B for M additional cycles, which will flush the encoders with zeros. The decoder does not assume knowledge of the M tail bits. The same termination method can be used for unequal rate and memory encoders.

[Fig. 2. Trellis termination: a switch at the encoder input is in position A (data) for the first N cycles and in position B for M further cycles.]
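The encoder of Fig. 1 and the termination rule of Fig. 2 are easy to simulate. The sketch below is our own minimal Python rendering (the function names, bit-ordering conventions, and per-encoder tail handling are assumptions, not from the article): each component code implements x_p = u · g_a/g_b with g_a = 21 and g_b = 37 octal, and termination feeds the register's feedback value back as input for M extra cycles, which drives the register to the all-zero state.

```python
M = 4        # memory cells per component encoder (Fig. 1)
GA = 0o21    # g_a = 1 + D^4                  (octal 21)
GB = 0o37    # g_b = 1 + D + D^2 + D^3 + D^4  (octal 37)

def _taps(g, m=M):
    """Generator coefficients [g_0, ..., g_m]; g_0 multiplies D^0."""
    return [(g >> (m - j)) & 1 for j in range(m + 1)]

def rsc_encode(u, terminate=True):
    """Rate-1/2 recursive systematic encoding of the bit list u.
    Returns (systematic, parity); with terminate=True the M tail bits
    of Fig. 2 (switch in position B) are appended, ending in state 0."""
    ga, gb = _taps(GA), _taps(GB)
    s = [0] * M                       # register [w_{k-1}, ..., w_{k-M}]
    sys_out, par_out = [], []
    for uk in list(u) + ([None] * M if terminate else []):
        fb = sum(gb[j] * s[j - 1] for j in range(1, M + 1)) % 2
        if uk is None:                # tail: input = feedback, so the
            uk = fb                   # register input w becomes zero
        w = uk ^ fb                   # w = u / g_b  (recursive part)
        p = (ga[0] * w + sum(ga[j] * s[j - 1] for j in range(1, M + 1))) % 2
        sys_out.append(uk)            # x_i
        par_out.append(p)             # x_p = u * g_a / g_b
        s = [w] + s[:-1]
    return sys_out, par_out

def turbo_encode(u, perm):
    """Rate-1/4 parallel concatenation of Fig. 1; each component encoder
    is terminated separately with its own M tail bits."""
    x1i, x1p = rsc_encode(list(u))
    x2i, x2p = rsc_encode([u[p] for p in perm])
    return x1i, x1p, x2i, x2p
```

With N = 16 and a length-16 permutation, this produces the (80,16) block code analyzed in the next subsection.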

B. Weight Distribution

In order to estimate the performance of a code, it is necessary to have information about its minimum distance, d, its weight distribution, or its actual code geometry, depending on the accuracy required for the bounds or approximations. The example of a turbo code shown in Fig. 1 produces two sets of codewords, x_1 = (x_1i, x_1p) and x_2 = (x_2i, x_2p), whose weights can be easily computed. The challenge is in finding the pairing of codewords from each set, induced by a particular interleaver. Intuitively, we would like to avoid pairing low-weight codewords from one encoder with low-weight words from the other encoder. Many such pairings can be avoided by proper design of the interleaver. However, if the encoders are not recursive, the low-weight codeword generated by an input sequence u = (0, ..., 0, 1, 0, ..., 0) with a single 1 will always appear again in the second encoder, for any choice of interleaver. This motivates the use of recursive encoders, where the key ingredient is the recursiveness and not the fact that the encoders are systematic. For our example using a recursive encoder, the input sequence u = (0, ..., 0, 1, 0, 0, 0, 0, 1, 0, ..., 0), with two 1s separated by four zeros, generates the minimum-weight codeword (weight = 6). If the interleaver does not properly break this input pattern, the resulting minimum distance will be 12. However, the minimum distance is not the most important quantity of the code, except for its asymptotic performance at very high E_b/N_0. At moderate signal-to-noise ratios (SNRs), the weight distribution at the first several possible weights is necessary to compute the code performance. Estimating the complete weight distribution for large N is still an open problem for these codes.

We have investigated the effect of the interleaver on the weight distribution in a small-scale example with N = 16. This yields an (80,16) code whose weight distribution can be found by exhaustive enumeration (a sketch is given below). Some of our results are shown in Fig. 3(a), where it is apparent that a good choice of the interleaver can increase the minimum distance from 12 to 14 and, more importantly, can reduce the number of codewords at low weights. Figure 3(a) shows the weight distributions obtained by using no interleaver, a reverse permutation, and a 4 × 4 block interleaver, all with d = 12. Better weight distributions are obtained by the random permutation {2, 13, 0, 3, 11, 15, 6, 14, 8, 9, 10, 4, 12, 1, 7, 5}, with d = 12, and by the best-found permutation {12, 3, 14, 15, 13, 11, 1, 5, 6, 0, 9, 7, 4, 2, 10, 8}, with d = 14. For comparison, the binomial distribution is also shown. The best known (80,16) linear block code has a minimum distance of 28.

For an interleaver length of N = 24, we were only able to enumerate all codewords produced by input sequences with weights 1, 2, and 3. This again confirmed the importance of the interleaver choice for reducing the number of low-weight codewords. Better weight distributions were obtained by using random permutations than by using structured permutations such as block or reverse permutations.

[Fig. 3. The (80,16) code: (a) cumulative number of codewords (10^1 to 10^5) vs. weight (12 to 40) for the no-interleaving, reverse, block, random, and best-found permutations, compared with the binomial distribution; (b) BER vs. E_b/N_0 (0 to 7 dB) for turbo decoding, maximum-likelihood decoding, and the union bound.]
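For N = 16, the exhaustive enumeration mentioned above is inexpensive (2^16 encodings). A possible sketch, reusing the hypothetical turbo_encode from the encoder sketch above and the two permutations listed in the text:

```python
from collections import Counter

def weight_spectrum(perm, n_info=16):
    """Tally the Hamming weights of all 2^n_info codewords of the turbo
    code induced by interleaver `perm` (the (80,16) code for n_info = 16)."""
    spectrum = Counter()
    for msg in range(1 << n_info):
        u = [(msg >> i) & 1 for i in range(n_info)]
        streams = turbo_encode(u, perm)         # x1i, x1p, x2i, x2p
        spectrum[sum(map(sum, streams))] += 1
    return spectrum

# The two permutations reported in the text:
random_perm = [2, 13, 0, 3, 11, 15, 6, 14, 8, 9, 10, 4, 12, 1, 7, 5]
best_found  = [12, 3, 14, 15, 13, 11, 1, 5, 6, 0, 9, 7, 4, 2, 10, 8]

spec = weight_spectrum(best_found)
d_min = min(w for w in spec if w > 0)   # the article reports d = 14 here
print(d_min, sorted(spec.items())[:6])
```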

For the (80,16) code using the best-found permutation, we have compared the performance of a maximum-likelihood decoder (obtained by simulation) to that of a turbo decoder with 10 iterations, as described in Section III, and to the union bound computed from the weight distribution, as shown in Fig. 3(b). As expected, the performance of the turbo decoder is slightly suboptimum.

III. Turbo Decoding

Let u_k be a binary random variable taking values in {+1, -1}, representing the sequence of information bits. The maximum a posteriori (MAP) algorithm, summarized in the Appendix, provides the log-likelihood ratio L(k), given the received symbols y:

$$L(k) = \log \frac{P(u_k = +1 \mid \mathbf{y})}{P(u_k = -1 \mid \mathbf{y})} \tag{1}$$

The sign of L(k) is an estimate, û_k, of u_k, and the magnitude |L(k)| is the reliability of this estimate, as suggested in [3]. The channel model is shown in Fig. 4, where the n_1ik's and the n_1pk's are independent, identically distributed (i.i.d.) zero-mean Gaussian random variables with unit variance, and ρ = √(2E_s/N_0) = √(2rE_b/N_0), so that ρ² is the SNR. A similar model applies for encoder 2.

[Fig. 4. The channel model: y_1i = ρu + n_1i and y_1p = ρx_1p + n_1p.]

Given the turbo code structure in Fig. 1, the optimum decoding rule maximizes either P(u_k | y_1, y_2) (minimum bit-error probability rule) or P(u | y_1, y_2) (maximum-likelihood sequence rule). Since this rule is obviously too complex to compute, we resort to a suboptimum decoding rule [2,3] that separately uses the two observations y_1 and y_2, as shown in Fig. 5. Each decoder in Fig. 5 computes the a posteriori probabilities P(u_k | y_i, ũ_i), i = 1, 2 [see Fig. 6(a)], or, equivalently, the log-likelihood ratios L_i(k) = log (P(u_k = +1 | y_i, ũ_i)) / (P(u_k = -1 | y_i, ũ_i)), where ũ_1 is provided by decoder 2 and ũ_2 is provided by decoder 1 [see Fig. 6(b)]. The quantities ũ_i correspond to new data estimates, innovations, or extrinsic information provided by decoders 1 and 2, which can be used to generate a priori probabilities on the information sequence u for branch metric computation in each decoder. The question is how to generate the probabilities P(ũ_{i,k} | u_k) that should be used for the computation of the branch transition probabilities in MAP decoding. It can be shown that the probabilities P(u_k | ũ_{i,k}) or, equivalently, log (P(u_k = +1 | ũ_{i,k})) / (P(u_k = -1 | ũ_{i,k})), i = 1, 2, can be used instead of P(ũ_{i,k} | u_k) for branch metric computations in the decoders. When decoder 1 generates P(u_k | ũ_{2,k}) or log (P(u_k = +1 | ũ_{2,k})) / (P(u_k = -1 | ũ_{2,k})) for decoder 2, this quantity should not include the contribution due to ũ_{1,k}, which has already been generated by decoder 2. Thus, we should have

$$\log \frac{P(u_k = +1 \mid \tilde u_{2,k})}{P(u_k = -1 \mid \tilde u_{2,k})} = \log \frac{P(u_k = +1 \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})}{P(u_k = -1 \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})} \tag{2}$$
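Before continuing the derivation, here is a small illustration (our own, with assumed parameter names) of the channel model of Fig. 4 and of reading Eq. (1) as a sign/magnitude pair; the 2ρy form of the channel LLR reappears in the observations below and in Eq. (A-3):

```python
import numpy as np

rng = np.random.default_rng(0)

r, EbNo_dB, N = 1.0 / 4.0, 0.3, 4096         # rate, Eb/N0, block length (assumed values)
rho = np.sqrt(2 * r * 10 ** (EbNo_dB / 10))  # rho = sqrt(2 r Eb/N0)

u = rng.choice([-1.0, 1.0], size=N)          # information symbols u_k in {+1, -1}
y_1i = rho * u + rng.standard_normal(N)      # Fig. 4: y_1i = rho*u + n_1i, unit-variance noise

L = 2 * rho * y_1i                           # channel-only LLR of a single observation
u_hat = np.sign(L)                           # sign(L): hard estimate of u_k   (Eq. (1))
reliability = np.abs(L)                      # |L|:     reliability of that estimate
```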

[Fig. 5. The turbo decoder: decoder 1 operates on y_1 = (y_1i, y_1p) and decoder 2 on y_2 = (y_2i, y_2p); the extrinsic outputs ũ are fed back through the interleaver/deinterleaver, and the decoded bits are taken after deinterleaving.]

[Fig. 6. Input/output of the MAP decoder: (a) a posteriori probability P(u_k | y_1, ũ_1), split into its extrinsic-information and a posteriori parts, and (b) log-likelihood ratio.]

To compute log (P(u_k = +1 | ũ_{2,k})) / (P(u_k = -1 | ũ_{2,k})), we note [see Fig. 6(a)] that

$$P(u_k \mid \mathbf{y}_1, \tilde{\mathbf u}_1) = \frac{P(u_k \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})\, P(\tilde u_{1,k} \mid u_k, \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})}{P(\tilde u_{1,k} \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})} \tag{3}$$

Since ũ_{1,k} was generated by decoder 2 and deinterleaving is used, this quantity depends only weakly on y_1 and on ũ_{1,j}, j ≠ k. Thus, we can use the following approximation:

$$P(\tilde u_{1,k} \mid u_k, \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N}) \approx P(\tilde u_{1,k} \mid u_k) = 2 P(u_k \mid \tilde u_{1,k})\, P(\tilde u_{1,k}) \tag{4}$$

Using Eq. (4) in Eq. (3), we obtain

$$P(u_k \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N}) = \frac{P(u_k \mid \mathbf{y}_1, \tilde{\mathbf u}_1)\, P(\tilde u_{1,k} \mid \mathbf{y}_1, \tilde u_{1,1}, \ldots, \tilde u_{1,k-1}, \tilde u_{1,k+1}, \ldots, \tilde u_{1,N})}{2 P(u_k \mid \tilde u_{1,k})\, P(\tilde u_{1,k})} \tag{5}$$

It is preferable to work with likelihood ratios, to avoid computing probabilities not involving u_k [see Fig. 6(b)]. Define

$$\tilde L_i(k) = \log \frac{P(u_k = +1 \mid \tilde u_{i,k})}{P(u_k = -1 \mid \tilde u_{i,k})}, \quad i = 1, 2 \tag{6}$$

From Eqs. (2) and (5), we obtain L̃_2^(m)(k) = L_1^(m)(k) − L̃_1^(m−1)(k) at the output of decoder 1, before interleaving, for the mth iteration. Similarly, we can obtain L̃_1^(m)(k) = L_2^(m)(k) − L̃_2^(m)(k) at the output of decoder 2, after deinterleaving. Using the above definitions, the a priori probabilities can be computed as

$$P(u_k = +1 \mid \tilde u_{i,k}) = \frac{e^{\tilde L_i(k)}}{1 + e^{\tilde L_i(k)}} = 1 - P(u_k = -1 \mid \tilde u_{i,k}), \quad i = 1, 2 \tag{7}$$

Then the update equation for the mth iteration of the decoder in Fig. 5 becomes

$$\tilde L_1^{(m)}(k) = \tilde L_1^{(m-1)}(k) + \alpha_m \left[ L_2^{(m)}(k) - L_1^{(m)}(k) \right], \quad \alpha_m = 1 \tag{8}$$

This looks like the update equation of a steepest-descent method, where [L_2^(m)(k) − L_1^(m)(k)] represents the rate of change of L(k) for a given u_k, and α_m is the step size.

Figure 7 shows the probability density function of L̃_1(k) at the output of the second decoder in Fig. 5, after deinterleaving and given u_k = +1. As shown in Fig. 7, this density function shifts to the right as the number of iterations, m, increases. The area under each density function to the left of the origin represents the BER if decoding stops after m iterations.

At this point, certain observations can be made. Note that L̃_2(k) at the input of decoder 2 includes an additive component 2ρy_1ik, which enters the branch metric computations of decoder 2 together with the observation y_2ik. This improves by 3 dB the SNR of the noisy information symbols at the input of decoder 2. Similar arguments hold for L̃_1(k).

An apparently more powerful decoding structure can be considered, as shown in Fig. 8. However, the performances of the decoding structures in Figs. 8 and 5 are equivalent for a large number of iterations (the actual difference is one-half iteration). If the structure in Fig. 8 is used, then the log-likelihood ratio L̃_2(k) fed to decoder 2 should not depend on ũ_{1,k} and y_1ik and, similarly, L̃_1(k) should not depend on ũ_{2,k} and y_2ik. Using analogous derivations based on Eqs. (2) through (5), we obtain

$$\tilde L_2(k) = L_1(k) - \tilde L_1(k) - 2\rho y_{1ik}$$
$$\tilde L_1(k) = L_2(k) - \tilde L_2(k) - 2\rho y_{2ik}$$

where y′_1i is the sum of y_1i with the deinterleaved version of y_2i, and y′_2i is the sum of y_2i with the interleaved version of y_1i. Thus, the net effect of the decoding structure in Fig. 8 is to explicitly pass to decoder 2 the information contained in y_1i (and vice versa), but to remove the identical term from the input log-likelihood ratio. A compact sketch of the extrinsic-information bookkeeping of Fig. 5 is given below.
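In code, the bookkeeping of Eqs. (6) through (8) for the structure of Fig. 5 reduces to a few lines. This is a minimal sketch under our own assumptions: map_decode(y, La) is a MAP routine returning the a posteriori LLRs L_i(k) and applying Eq. (7) internally to its a priori input (e.g., the routine sketched in the Appendix), and tail bits are ignored.

```python
import numpy as np

def turbo_decode(y1, y2, perm, map_decode, n_iters=10):
    """Iterative decoding per Fig. 5 and Eqs. (6)-(8), with alpha_m = 1."""
    inv = np.argsort(perm)               # deinterleaver
    Lt1 = np.zeros(len(perm))            # extrinsic ~L_1(k), zero at start
    for m in range(n_iters):
        L1 = map_decode(y1, Lt1)         # decoder 1
        Lt2 = L1 - Lt1                   # ~L_2 = L_1 - ~L_1 (before interleaving)
        L2 = map_decode(y2, Lt2[perm])   # decoder 2
        Lt1 = (L2 - Lt2[perm])[inv]      # ~L_1 = L_2 - ~L_2 (after deinterleaving)
    L_final = L2[inv]                    # decoded LLRs, deinterleaved (Fig. 5)
    return np.sign(L_final), L_final     # hard decisions (+/-1) and reliabilities
```

As in Fig. 5, the decoded bits are read from the deinterleaved a posteriori output of decoder 2.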

[Fig. 7. The reliability function: probability density of the reliability value L̃_1(k), given u_k = +1, for E_b/N_0 = 0.3 dB, r = 1/4, N = 4096, and m = 1, 5, 10, 20, and 40 iterations; values left of the origin correspond to decoding errors, values to the right to correct decoding.]

[Fig. 8. An equivalent turbo decoder, in which the combined systematic observations y′_1i and y′_2i are passed directly to the two decoders.]

IV. Performance

The performance obtained by turbo decoding the code in Fig. 1 with random permutations of lengths N = 4096 and N = 16384 is compared in Fig. 9 to the capacity of a binary-input Gaussian channel for rate r = 1/4 and to the performance of a (15,1/4) convolutional code originally developed at JPL for the Galileo mission. At BER = 5 × 10^-3, the turbo code is better than the (15,1/4) code by 0.25 dB for N = 4096 and by 0.4 dB for N = 16384.

[Fig. 9. Turbo code performance, r = 1/4: BER (10^-1 to 10^-5) vs. E_b/N_0 for N = 4096 (m = 10) and N = 16,384 (m = 20), with equal-rate and two-rate component codes, compared with the binary-input capacity limit for rate 1/4 and the (15,1/4) Galileo code.]

So far we have considered only component codes with identical rates, as shown in Fig. 1. Now we propose to extend the results to encoders with unequal rates, as shown in Fig. 10. This structure improves the performance of the overall rate 1/4 code, as shown in Fig. 9. The gains at BER = 5 × 10^-3 relative to the (15,1/4) code are 0.55 dB for N = 4096 and 0.7 dB for N = 16384. For both cases, the performance is within 1 dB of the Shannon limit at BER = 5 × 10^-3, and the gap narrows to 0.7 dB for N = 16384 at low BER.

V. Conclusions

We have shown how turbo codes and decoders can be used to improve the coding gain for deep-space communications while decreasing the decoding complexity with respect to the large constraint-length convolutional codes currently in use. These are just preliminary results that require extensive further analysis. In particular, we need to improve our understanding of the influence of the interleaver choice on the code performance, to explore the sensitivity of the decoder performance to the precision with which we can estimate E_b/N_0, and to establish whether there might be a flattening of the performance curves at higher E_b/N_0, as appears in one of the curves in Fig. 9. An interesting theoretical question is to determine how random these codes can be, so as to draw conclusions on their performance based on comparison with random-coding bounds.

In this article, we have explored turbo codes using only two encoders, but similar constructions can be used to build multiple-encoder turbo codes and to generalize the turbo decoding concept to a truly distributed decoding system, where each subdecoder works on a piece of the total observation and tentative estimates are shared among decoders until an acceptable degree of consensus is reached.

[Fig. 10. The two-rate encoder: component encoder 1 contributes the systematic stream x_1i and two parity streams, while encoder 2, operating on the interleaved bits, contributes the single parity stream x_2p.]

Acknowledgments

The authors are grateful to Sam Dolinar and Robert McEliece for their helpful comments.

References

[1] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. 20, pp. 284-287, March 1974.

[2] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proceedings of ICC '93, Geneva, Switzerland, pp. 1064-1070, May 1993.

[3] J. Hagenauer and P. Robertson, "Iterative (Turbo) Decoding of Systematic Convolutional Codes With the MAP and SOVA Algorithms," Proceedings of the ITG Conference on Source and Channel Coding, Frankfurt, Germany, pp. 1-9, October 1994.

[4] P. L. McAdam, L. R. Welch, and C. L. Weber, "M.A.P. Bit Decoding of Convolutional Codes (Abstract)," 1972 International Symposium on Information Theory, Asilomar, California, p. 91, May 1972.

Appendix

The MAP Algorithm

Let u_k be the information bit associated with the transition from time k − 1 to time k, and use s as an index for the states. The MAP algorithm [1,4] provides the log-likelihood ratio L(k), given the received symbols y, as shown in Fig. A-1:

$$L(k) = \log \frac{P(u_k = +1 \mid \mathbf{y})}{P(u_k = -1 \mid \mathbf{y})} = \log \frac{\sum_s \sum_{s'} \gamma_{+1}(\mathbf{y}_k, s', s)\, \alpha_{k-1}(s')\, \beta_k(s)}{\sum_s \sum_{s'} \gamma_{-1}(\mathbf{y}_k, s', s)\, \alpha_{k-1}(s')\, \beta_k(s)} \tag{A-1}$$

[Fig. A-1. The MAP algorithm: the received sequence y enters; the LLRs L(u_k), k = 1, ..., N, come out.]

The estimate of the transmitted bits is then given by sign[L(k)], and their reliability by |L(k)|. In order to compute Eq. (A-1), we need the forward and backward recursions

$$\alpha_k(s) = \frac{\sum_{s'} \sum_{i=\pm 1} \gamma_i(\mathbf{y}_k, s', s)\, \alpha_{k-1}(s')}{\sum_s \sum_{s'} \sum_{j=\pm 1} \gamma_j(\mathbf{y}_k, s', s)\, \alpha_{k-1}(s')} \qquad \beta_k(s) = \frac{\sum_{s'} \sum_{i=\pm 1} \gamma_i(\mathbf{y}_{k+1}, s, s')\, \beta_{k+1}(s')}{\sum_s \sum_{s'} \sum_{j=\pm 1} \gamma_j(\mathbf{y}_{k+1}, s', s)\, \alpha_k(s')} \tag{A-2}$$

where

$$\gamma_i(\mathbf{y}_k, s', s) = \begin{cases} \eta_k \exp\left( \rho \sum_{\nu=1}^{n} y_{k,\nu}\, x_{k,\nu}(s', i) \right) & \text{if the transition } s' \to s \text{ is allowed for } u_k = i \\ 0 & \text{otherwise} \end{cases} \tag{A-3}$$

Here ρ = √(2E_s/N_0), η_k = P(u_k = ±1 | ũ_k), except for the first iteration of the first decoder, where η_k = 1/2, and the x_{k,ν} are the code symbols. The operation of these recursions is shown in Fig. A-2. The evaluation of Eq. (A-1) can be organized as follows:

Step 0: α_0(0) = 1 and α_0(s) = 0 for s ≠ 0; β_N(0) = 1 and β_N(s) = 0 for s ≠ 0.
Step 1: Compute the γ_k's using Eq. (A-3) for each received set of symbols y_k.
Step 2: Compute the α_k's using Eq. (A-2) for k = 1, ..., N.
Step 3: Use Eq. (A-2) and the results of Steps 1 and 2 to compute the β_k's for k = N − 1, ..., 1.
Step 4: Compute L(k) using Eq. (A-1) for k = 1, ..., N.

[Fig. A-2. Forward and backward recursions on the trellis: α_k(s) combines α_{k-1}(s_0) and α_{k-1}(s_1) through the branch values γ, and β_k(s) combines β_{k+1}(s_0) and β_{k+1}(s_1).]
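Steps 0 through 4 translate almost line by line into code. The following sketch of the recursions (A-1) through (A-3) is our own illustration, not the authors' implementation; the trellis data layout (next_state, symbols), the per-step normalization, and the bipolar symbol convention are assumptions.

```python
import numpy as np

def bcjr_llr(y, trellis, rho, La=None):
    """MAP decoding per Eqs. (A-1)-(A-3).
    y       : (N, n) array of received symbols y_{k,nu}
    trellis : (next_state, symbols); next_state[s][i] is the successor of
              state s for u_k = i (i = 0 -> u = -1, i = 1 -> u = +1), and
              symbols[s][i] is the length-n bipolar branch word x_k(s, i)
    rho     : sqrt(2 Es/N0)
    La      : a priori LLRs; None means eta_k = 1/2 (first pass, decoder 1)"""
    next_state, symbols = trellis
    S, N = len(next_state), y.shape[0]
    p1 = np.full(N, 0.5) if La is None else 1.0 / (1.0 + np.exp(-La))  # Eq. (7)
    eta = np.stack([1.0 - p1, p1], axis=1)            # columns: u = -1, u = +1
    gamma = np.zeros((N, S, 2))                       # Eq. (A-3), allowed branches only
    for s in range(S):
        for i in (0, 1):
            gamma[:, s, i] = eta[:, i] * np.exp(rho * y @ np.asarray(symbols[s][i], float))
    alpha = np.zeros((N + 1, S)); alpha[0, 0] = 1.0   # Step 0: start in state 0
    beta = np.zeros((N + 1, S)); beta[N, 0] = 1.0     # Step 0: terminated in state 0
    for k in range(1, N + 1):                         # Step 2: forward recursion
        for s in range(S):
            for i in (0, 1):
                alpha[k, next_state[s][i]] += gamma[k - 1, s, i] * alpha[k - 1, s]
        alpha[k] /= alpha[k].sum()                    # normalization as in Eq. (A-2)
    for k in range(N - 1, -1, -1):                    # Step 3: backward recursion
        for s in range(S):
            beta[k, s] = sum(gamma[k, s, i] * beta[k + 1, next_state[s][i]] for i in (0, 1))
        beta[k] /= beta[k].sum()                      # any positive scaling cancels in (A-1)
    num, den = np.zeros(N), np.zeros(N)               # Step 4: Eq. (A-1)
    for s in range(S):
        num += gamma[:, s, 1] * alpha[:-1, s] * beta[1:, next_state[s][1]]
        den += gamma[:, s, 0] * alpha[:-1, s] * beta[1:, next_state[s][0]]
    return np.log(num / den)
```

Because L(k) is a ratio, the per-step normalizations cancel and serve only to keep α and β in numerical range.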