Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University


Turbo codes 1 Chapter 12: Turbo Codes
1. Introduction
2. Turbo code encoder
3. Design of interleaver
4. Iterative decoding of turbo codes

Turbo codes 2 Reference
1. Lin, Error Control Coding, chapter 16
2. Moon, Error Correction Coding, chapter 14
3. C. Heegard and S. B. Wicker, Turbo Coding
4. B. Vucetic and J. Yuan, Turbo Codes

Turbo codes 3 Introduction

Turbo codes 4 What are turbo codes? Turbo codes, which belong to a class of Shannon-capacity-approaching error-correcting codes, were introduced by Berrou and Glavieux at ICC '93. Turbo codes can come closer to the Shannon limit than any other class of error-correcting codes, and they achieve this remarkable performance with relatively low-complexity encoding and decoding algorithms.

Turbo codes 5 We can talk about block turbo codes or (convolutional) turbo codes. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes. Two fundamental ideas of turbo codes: Encoder: it produces a codeword with random-like properties. Decoder: it makes use of soft-output values and iterative decoding. These are achieved by introducing an interleaver at the transmitter and iterative decoding at the receiver. The MAP, Log-MAP, or Max-Log-MAP (SOVA) algorithm is applied for decoding of the component codes.

Turbo codes 6 Power efficiency of existing standards

Turbo codes 7 (37, 21, 65536) turbo codes with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]

Turbo codes 8 Turbo code encoder

Turbo codes 9 Turbo code encoder The fundamental turbo code encoder: two identical recursive systematic convolutional (RSC) codes with parallel concatenation. An RSC encoder^a is termed a component encoder. The two component encoders are separated by an interleaver. Figure A: Fundamental turbo code encoder (r = 1/3). ^a In general, an RSC encoder is a (2, 1) convolutional code.

Turbo codes 10 To achieve performance close to the Shannon limit, the information block length (interleaver size) is chosen to be very large, usually at least several thousand bits. RSC codes, generated by systematic feedback encoders, give much better performance than nonrecursive systematic convolutional codes, that is, feedforward encoders. Because only the ordering of the bits is changed by the interleaver, the sequence that enters the second RSC encoder has the same weight as the sequence x that enters the first encoder.

Turbo codes 11 Turbo codes suffer from two disadvantages: 1. A large decoding delay, owing to the large block lengths and the many iterations of decoding required for near-capacity performance. 2. Good performance at low signal-to-noise ratios (the waterfall region) but only modest performance at high signal-to-noise ratios (the error-floor region): performance is significantly weakened at BERs below 10^-5, owing to the fact that the codes have a relatively poor minimum distance, which manifests itself at very low BERs.

Turbo codes 12 Figure A-1: Performance comparison of convolutional codes and turbo codes.

Turbo codes 13 Recursive systematic convolutional (RSC) encoder The RSC encoder is obtained from the conventional convolutional encoder by feeding back one of its encoded outputs to its input. Example. Consider the conventional convolutional encoder with generator sequences g_1 = [111], g_2 = [101], written in compact form as G = [g_1, g_2].

Turbo codes 14 Figure B: Conventional convolutional encoder with r = 1/2 and K = 3.

Turbo codes 15 The RSC encoder of the conventional convolutional encoder: G = [1, g_2/g_1], where the first output is fed back to the input. In this representation, 1 denotes the systematic output, g_2 the feedforward output, and g_1 the feedback to the input of the RSC encoder. Figure C shows the resulting RSC encoder.

Turbo codes 16 Figure C: The RSC encoder obtained from Figure B with r = 1/2 and m = 2.
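The RSC encoder of Figure C can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides: the register update a_l = u_l XOR a_{l-1} XOR a_{l-2} follows from the feedback polynomial g_1 = [111], the parity taps from g_2 = [101], and the function name and interface are our own.

```python
def rsc_encode(u, feedback=(1, 1, 1), feedforward=(1, 0, 1)):
    """Rate-1/2 RSC encoder G = [1, g2/g1] with g1 = [111], g2 = [101].

    Returns (systematic, parity), both lists of 0/1 bits.
    """
    m = len(feedback) - 1          # encoder memory (m = 2 here)
    state = [0] * m                # shift-register contents a_{l-1}, a_{l-2}
    systematic, parity = [], []
    for bit in u:
        # feedback bit a_l = u_l XOR (g1 taps past the leading 1 applied to the state)
        a = bit
        for tap, s in zip(feedback[1:], state):
            a ^= tap & s
        # parity v_l = g2 taps applied to (a_l, a_{l-1}, a_{l-2})
        reg = [a] + state
        p = 0
        for tap, s in zip(feedforward, reg):
            p ^= tap & s
        systematic.append(bit)
        parity.append(p)
        state = reg[:m]            # shift the register
    return systematic, parity
```

Feeding in a single 1 followed by zeros shows the recursive (infinite-impulse-response) nature of the encoder: the parity stream does not die out after the memory is flushed, which is why simply appending zeros cannot terminate the trellis.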

Turbo codes 17 Trellis termination For the conventional encoder, the trellis is terminated by inserting m = K - 1 additional zero bits after the input sequence. These additional bits drive the conventional convolutional encoder to the all-zero state (trellis termination). However, this strategy is not possible for the RSC encoder, due to the feedback. Convolutional encoders are time-invariant, and it is this property that accounts for the relatively large number of low-weight codewords in terminated convolutional codes. Figure D shows a simple strategy that overcomes this problem.^a ^a Divsalar, D. and Pollara, F., "Turbo Codes for Deep-Space Communications," JPL TDA Progress Report 42-120, Feb. 15, 1995.

Turbo codes 18 For encoding the input sequence, the switch is turned to position A; for terminating the trellis, the switch is turned to position B. Figure D: Trellis termination strategy for the RSC encoder.

Turbo codes 19 Recursive and nonrecursive convolutional encoders Example. Figure E shows a simple nonrecursive convolutional encoder with generator sequences g_1 = [11] and g_2 = [10]. Figure E: Nonrecursive r = 1/2, K = 2 convolutional encoder with input and output sequences.

Turbo codes 20 Example. Figure F shows the equivalent recursive convolutional encoder of Figure E with G = [1, g_2/g_1]. Figure F: Recursive r = 1/2, K = 2 convolutional encoder of Figure E with input and output sequences.

Turbo codes 21 Compare Figure E with Figure F: the nonrecursive encoder outputs a codeword with weight 3, whereas the recursive encoder outputs a codeword with weight 5. State diagram: Figure G-1: State diagram of the nonrecursive encoder in Figure E.

Turbo codes 22 Figure G-2: State diagram of the recursive encoder in Figure F. A recursive convolutional encoder tends to produce codewords with increased weight relative to a nonrecursive encoder. This results in fewer codewords with lower weights, which leads to better error performance. For turbo codes, the main purpose of implementing RSC encoders as component encoders is to exploit the recursive nature of the encoders, not the fact that the encoders are systematic.

Turbo codes 23 Clearly, the state diagrams of the two encoders are very similar. The transfer function of Figure G-1 and Figure G-2 is T(D) = D^3/(1 - D), where N and J are neglected.

Turbo codes 24 The two codes have the same minimum free distance and can be described by the same trellis structure. Nevertheless, the two codes have different bit error rates, because the BER depends on the input-output correspondence of the encoders.^b It has been shown that the BER of a recursive convolutional code is lower than that of the corresponding nonrecursive convolutional code at low signal-to-noise ratios E_b/N_0.^c ^b Benedetto, S., and Montorsi, G., "Unveiling Turbo Codes: Some Results on Parallel Concatenated Coding Schemes," IEEE Transactions on Information Theory, Vol. 42, No. 2, pp. 409-428, March 1996. ^c Berrou, C., and Glavieux, A., "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Transactions on Communications, Vol. 44, No. 10, pp. 1261-1271, Oct. 1996.

Turbo codes 25 Concatenation of codes A concatenated code is composed of two separate codes that are combined to form a larger code. There are two types of concatenation: serial concatenation and parallel concatenation.

Turbo codes 26 The total code rate for serial concatenation is r_tot = (k_1 k_2)/(n_1 n_2), which is equal to the product of the two code rates. Figure H: Serial concatenated code.

Turbo codes 27 The total code rate for parallel concatenation is r_tot = k/(n_1 + n_2). Figure I: Parallel concatenated code.

Turbo codes 28 For both serial and parallel concatenation schemes, an interleaver is often placed between the encoders to improve burst-error correction capacity or to increase the randomness of the code. Turbo codes use the parallel concatenated encoding scheme, but the turbo decoder is based on the serial concatenated decoding scheme. Serial concatenated decoders are used because they perform better than parallel concatenated decoding: the serial scheme can share information between the concatenated decoders, whereas the decoders in the parallel scheme primarily decode independently.

Turbo codes 29 Interleaver design The interleaver is used to provide randomness to the input sequences. It is also used to increase the weights of the codewords, as shown in Figure J. Figure J: The interleaver increases the code weight for RSC encoder 2 as compared with RSC encoder 1.

Turbo codes 30 From Figure K, the input sequences x_i produce the output sequences c_1i and c_2i, respectively. The input sequences x_1 and x_2 are different permuted versions of x_0. Figure K: An illustrative example of an interleaver's capability.

Turbo codes 31 Table: Input and output sequences for the encoder in Figure K.

  Input sequence x_i | Output sequence c_1i | Output sequence c_2i | Codeword weight
  i = 0: 1100        | 1100                 | 1000                 | 3
  i = 1: 1010        | 1010                 | 1100                 | 4
  i = 2: 1001        | 1001                 | 1110                 | 5

The interleaver affects the performance of turbo codes because it directly affects the distance properties of the code.

Turbo codes 32 Block interleaver The block interleaver is the most commonly used interleaver in communication systems. It is written column-wise, from top to bottom and left to right, and read out row-wise, from left to right and top to bottom. Figure L: Block interleaver.
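The write-column-wise / read-row-wise rule above can be sketched directly (an illustrative Python sketch; the function name and interface are ours, not from the slides):

```python
def block_interleave(seq, rows, cols):
    """Block interleaver: write column-wise (top to bottom, left to right),
    read out row-wise (left to right, top to bottom)."""
    assert len(seq) == rows * cols
    # fill the rows x cols array column by column
    array = [[None] * cols for _ in range(rows)]
    k = 0
    for c in range(cols):
        for r in range(rows):
            array[r][c] = seq[k]
            k += 1
    # read it out row by row
    return [array[r][c] for r in range(rows) for c in range(cols)]
```

For a 2 x 2 array this maps (u_0, u_1, u_2, u_3) to (u_0, u_2, u_1, u_3), which is exactly the row-column permutation used in the Log-MAP example later in these slides.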

Turbo codes 33 Random (pseudo-random) interleaver The random interleaver uses a fixed random permutation and maps the input sequence according to the permutation order. The length of the input sequence is assumed to be L. The best interleavers reorder the bits in a pseudo-random manner. Conventional block (row-column) interleavers do not perform well in turbo codes, except at relatively short block lengths.

Turbo codes 34 Figure M: A random (pseudo-random) interleaver with L = 8.

Turbo codes 35 Circular-shifting interleaver The permutation p of the circular-shifting interleaver is defined by p(i) = (a i + s) mod L, with a < L, a relatively prime to L, and s < L, where i is the index, a is the step size, and s is the offset.

Turbo codes 36 Figure N: A circular-shifting interleaver with L = 8, a = 3, s = 0.
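The circular-shifting permutation is easy to generate straight from its definition. A minimal sketch (the function name is ours), enforcing the condition that a be relatively prime to L so that p is one-to-one:

```python
from math import gcd

def circular_shift_permutation(L, a, s):
    """Permutation p(i) = (a*i + s) mod L of the circular-shifting interleaver.
    Requires gcd(a, L) == 1 so that p is a bijection on {0, ..., L-1}."""
    assert a < L and s < L and gcd(a, L) == 1
    return [(a * i + s) % L for i in range(L)]
```

With L = 8, a = 3, s = 0 as in Figure N, the permutation is (0, 3, 6, 1, 4, 7, 2, 5).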

Turbo codes 37 Iterative decoding of turbo codes

Turbo codes 38 The notation of the turbo code encoder The information sequence (including the termination bits) is considered to be a block of length K = K* + v, where K* is the number of information bits and v the number of termination bits, and is represented by the vector u = (u_0, u_1, ..., u_{K-1}).

Turbo codes 39 Because encoding is systematic, the information sequence u is the first transmitted sequence; that is, u = v^(0) = (v^(0)_0, v^(0)_1, ..., v^(0)_{K-1}). The first encoder generates the parity sequence v^(1) = (v^(1)_0, v^(1)_1, ..., v^(1)_{K-1}). The parity sequence generated by the second encoder is represented as v^(2) = (v^(2)_0, v^(2)_1, ..., v^(2)_{K-1}). The final transmitted sequence (codeword) is given by the vector v = (v^(0)_0 v^(1)_0 v^(2)_0, v^(0)_1 v^(1)_1 v^(2)_1, ..., v^(0)_{K-1} v^(1)_{K-1} v^(2)_{K-1}).

Turbo codes 40 The basic structure of an iterative turbo decoder The basic structure of an iterative turbo decoder is shown in Figure O. (We assume here a rate R = 1/3 parallel concatenated code without puncturing.) Figure O: Basic structure of an iterative turbo decoder.

Turbo codes 41 Figure 1: Another view of Turbo iterative decoder

Turbo codes 42 At each time unit l, three output values are received from the channel: one for the information bit u_l = v^(0)_l, denoted by r^(0)_l, and two for the parity bits v^(1)_l and v^(2)_l, denoted by r^(1)_l and r^(2)_l. The 3K-dimensional received vector is denoted by r = (r^(0)_0 r^(1)_0 r^(2)_0, r^(0)_1 r^(1)_1 r^(2)_1, ..., r^(0)_{K-1} r^(1)_{K-1} r^(2)_{K-1}). Let each transmitted bit be represented using the mapping 0 -> -1 and 1 -> +1.

Turbo codes 43 The general operation of turbo iterative decoding proceeds as shown in the two figures above. The received data (r^(0), r^(1)) and the prior probabilities (assumed equal for the first iteration) are fed to decoder I, and decoder I produces the extrinsic probabilities. After the interleaver, these are used as the prior probabilities of decoder II, along with the received data (r^(0), r^(2)). The extrinsic output probabilities of decoder II are then deinterleaved and passed back to become the prior probabilities of decoder I. Compare this structure with sum-product decoding on a factor graph.

Turbo codes 44 The process of passing probabilities back and forth continues until the decoder determines that the process has converged, or until some maximum number of iterations is reached. A trellis is constructed for each component code: regular for convolutional codes and irregular for block codes. The decoding algorithms commonly used for each component code are the MAP, Log-MAP, and Max-Log-MAP (SOVA).

Turbo codes 45 For an AWGN channel with unquantized (soft) outputs, we define the log-likelihood ratio (L-value) L(v^(0)_l | r^(0)_l) = L(u_l | r^(0)_l) (before decoding) of a transmitted information bit u_l, given the received value r^(0)_l, as

L(u_l | r^(0)_l) = ln [P(u_l = +1 | r^(0)_l) / P(u_l = -1 | r^(0)_l)]
= ln [P(r^(0)_l | u_l = +1) P(u_l = +1) / (P(r^(0)_l | u_l = -1) P(u_l = -1))]
= ln [P(r^(0)_l | u_l = +1) / P(r^(0)_l | u_l = -1)] + ln [P(u_l = +1) / P(u_l = -1)]
= ln [e^{-(E_s/N_0)(r^(0)_l - 1)^2} / e^{-(E_s/N_0)(r^(0)_l + 1)^2}] + ln [P(u_l = +1) / P(u_l = -1)],

where E_s/N_0 is the channel SNR, and u_l and r^(0)_l have both been normalized by a factor of sqrt(E_s).

Turbo codes 46 This equation simplifies to

L(u_l | r^(0)_l) = (E_s/N_0) [(r^(0)_l + 1)^2 - (r^(0)_l - 1)^2] + ln [P(u_l = +1) / P(u_l = -1)]
= 4 (E_s/N_0) r^(0)_l + ln [P(u_l = +1) / P(u_l = -1)]
= L_c r^(0)_l + L_a(u_l),

where L_c = 4(E_s/N_0) is the channel reliability factor and L_a(u_l) is the a priori L-value of the bit u_l. In the case of a transmitted parity bit v^(j)_l, given the received value r^(j)_l, j = 1, 2, the L-value (before decoding) is given by L(v^(j)_l | r^(j)_l) = L_c r^(j)_l + L_a(v^(j)_l) = L_c r^(j)_l, j = 1, 2.
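The simplification can be checked numerically by comparing the Gaussian-likelihood form with L_c r + L_a directly. An illustrative Python sketch (function names are ours, not from the slides):

```python
import math

def llr_exact(r, es_n0, p_plus=0.5):
    """L(u | r) computed directly from the two Gaussian likelihoods and the prior."""
    num = math.exp(-es_n0 * (r - 1) ** 2) * p_plus
    den = math.exp(-es_n0 * (r + 1) ** 2) * (1 - p_plus)
    return math.log(num / den)

def llr_simplified(r, es_n0, p_plus=0.5):
    """L(u | r) = Lc * r + La, with Lc = 4 * Es/N0 and La the a priori L-value."""
    lc = 4 * es_n0
    la = math.log(p_plus / (1 - p_plus))
    return lc * r + la
```

With E_s/N_0 = 1/4 and equal priors, L_c = 1 and the L-value reduces to the received value itself, which is the situation assumed in the worked example later in these slides.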

Turbo codes 47 In a linear block code with equally likely information bits, the parity bits are also equally likely to be +1 or -1, and thus the a priori L-values of the parity bits are 0; that is, L_a(v^(j)_l) = ln [P(v^(j)_l = +1) / P(v^(j)_l = -1)] = 0, j = 1, 2. Remark: the a priori L-values of the information bits L_a(u_l) are also equal to 0 for the first iteration of decoder I, but thereafter the a priori L-values are replaced by the extrinsic L-values from the other decoder.

Turbo codes 48 Iterative decoding of decoder I The output of decoder 1 contains two terms:
1. L^(1)(u_l) = ln [P(u_l = +1 | r_1, L_a^(1)) / P(u_l = -1 | r_1, L_a^(1))], the a posteriori L-value (after decoding) of each information bit produced by decoder 1, given the (partial) received vector r_1 = [r^(0)_0 r^(1)_0, r^(0)_1 r^(1)_1, ..., r^(0)_{K-1} r^(1)_{K-1}] and the a priori input vector L_a^(1) = [L_a^(1)(u_0), L_a^(1)(u_1), ..., L_a^(1)(u_{K-1})] for decoder 1.
2. L_e^(1)(u_l) = L^(1)(u_l) - [L_c r^(0)_l + L_e^(2)(u_l)], the extrinsic a posteriori L-value (after decoding) associated with each information bit produced by decoder 1, which, after interleaving, is passed to the input of decoder 2 as the a priori value L_a^(2)(u_l).

Turbo codes 49 The received soft channel L-values L_c r^(0)_l for u_l and L_c r^(1)_l for v^(1)_l enter decoder 1, along with the a priori L-values of the information bits L_a^(1)(u_l) = L_e^(2)(u_l). Subtracting the term in brackets, namely L_c r^(0)_l + L_e^(2)(u_l), removes the effect of the current information bit u_l from L^(1)(u_l), leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit u_l to decoder 2 in addition to the received soft channel L-values at time l.

Turbo codes 50 Iterative decoding of decoder II The output of decoder 2 contains two terms:
1. L^(2)(u_l) = ln [P(u_l = +1 | r_2, L_a^(2)) / P(u_l = -1 | r_2, L_a^(2))], the a posteriori L-value (after decoding) of each information bit produced by decoder 2, given the (partial) received vector r_2 = [r^(0)_0 r^(2)_0, r^(0)_1 r^(2)_1, ..., r^(0)_{K-1} r^(2)_{K-1}] and the a priori input vector L_a^(2) = [L_a^(2)(u_0), L_a^(2)(u_1), ..., L_a^(2)(u_{K-1})] for decoder 2.
2. L_e^(2)(u_l) = L^(2)(u_l) - [L_c r^(0)_l + L_e^(1)(u_l)], the extrinsic a posteriori L-values produced by decoder 2, which, after deinterleaving, are passed back to the input of decoder 1 as the a priori values L_a^(1)(u_l).

Turbo codes 51 The (properly interleaved) received soft channel L-values L_c r^(0)_l for u_l and the soft channel L-values L_c r^(2)_l for v^(2)_l enter decoder 2, along with the (properly interleaved) a priori L-values of the information bits L_a^(2)(u_l) = L_e^(1)(u_l). Subtracting the term in brackets, namely L_c r^(0)_l + L_e^(1)(u_l), removes the effect of the current information bit u_l from L^(2)(u_l), leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit u_l to decoder 1 in addition to the received soft channel L-values at time l.

Turbo codes 52 Figure 2: The factor graph of turbo codes

Turbo codes 53 In summary, the input to each decoder contains three terms: 1. the soft channel L-values L_c r^(0)_l; 2. the soft channel L-values L_c r^(1)_l or L_c r^(2)_l; 3. the extrinsic a posteriori L-values used as new a priori L-values, L_a^(1)(u_l) = L_e^(2)(u_l) or L_a^(2)(u_l) = L_e^(1)(u_l). The term turbo in turbo coding is related to decoding, not encoding. The feedback of extrinsic information from the SISO decoders in iterative decoding mimics the feedback of exhaust gases in a turbo engine.

Turbo codes 54 Please review the SISO decoding presented for convolutional codes or for the trellises of block codes, such as BCJR, Log-BCJR, and Max-Log-BCJR (SOVA). We present two examples: one using Log-BCJR and the other using Max-Log-BCJR.

Turbo codes 55 Iterative decoding using the Log-MAP algorithm Example. Consider the parallel concatenated convolutional code (PCCC) formed by using the 2-state (2, 1, 1) systematic recursive convolutional code (SRCC) with generator matrix G(D) = [1, 1/(1 + D)] as the constituent code. A block diagram of the encoder is shown in Figure P(a). Also consider an input sequence of length K = 4, including one termination bit, along with a 2 x 2 block (row-column) interleaver, resulting in a (12, 3) PCCC with overall rate R = 1/4.

Turbo codes 56 Figure P: (a) A 2-state turbo encoder and (b) the decoding trellis for the (2, 1, 1) constituent code with K = 4.

Turbo codes 57 The length K = 4 decoding trellis for the component code is shown in Figure P(b), where the branches are labeled using the mapping 0 -> -1 and 1 -> +1. The input block is given by the vector u = [u_0, u_1, u_2, u_3]; the interleaved input block is u' = [u'_0, u'_1, u'_2, u'_3] = [u_0, u_2, u_1, u_3]; the parity vector for the first component code is given by p^(1) = [p^(1)_0, p^(1)_1, p^(1)_2, p^(1)_3]; and the parity vector for the second component code is p^(2) = [p^(2)_0, p^(2)_1, p^(2)_2, p^(2)_3]. We can represent the 12 transmitted bits in a rectangular array, as shown in Figure R(a), where the input vector u determines the parity vector p^(1) in the first two rows, and the interleaved input vector u' determines the parity vector p^(2) in the first two columns.

Turbo codes 58 Figure R: Iterative decoding example for a (12, 3) PCCC.

Turbo codes 59 For purposes of illustration, we assume the particular bit values shown in Figure R(b). We also assume a channel SNR of E_s/N_0 = 1/4 (-6.02 dB), so that the received channel L-values corresponding to the received vector r = [r^(0)_0 r^(1)_0 r^(2)_0, r^(0)_1 r^(1)_1 r^(2)_1, r^(0)_2 r^(1)_2 r^(2)_2, r^(0)_3 r^(1)_3 r^(2)_3] are given by L_c r^(j)_l = 4(E_s/N_0) r^(j)_l = r^(j)_l, l = 0, 1, 2, 3, j = 0, 1, 2. Again for purposes of illustration, a set of particular received channel L-values is given in Figure R(c).

Turbo codes 60 In the first iteration of decoder 1 (the row decoder), the Log-MAP algorithm is applied to the trellis of the 2-state (2, 1, 1) code shown in Figure P(b) to compute the a posteriori L-values L^(1)(u_l) for each of the four input bits and the corresponding extrinsic a posteriori L-values L_e^(1)(u_l) to pass to decoder 2 (the column decoder). Similarly, in the first iteration of decoder 2, the Log-MAP algorithm uses the extrinsic a posteriori L-values L_e^(1)(u_l) received from decoder 1 as the a priori L-values L_a^(2)(u_l) to compute the a posteriori L-values L^(2)(u_l) for each of the four input bits and the corresponding extrinsic a posteriori L-values L_e^(2)(u_l) to pass back to decoder 1. Further decoding proceeds iteratively in this fashion.

Turbo codes 61 To simplify notation, we denote the transmitted vector as v = (v_0, v_1, v_2, v_3), where v_l = (u_l, p_l), l = 0, 1, 2, 3, u_l is an input bit, and p_l is a parity bit. Similarly, the received vector is denoted as r = (r_0, r_1, r_2, r_3), where r_l = (r_{u_l}, r_{p_l}), l = 0, 1, 2, 3, r_{u_l} is the received symbol corresponding to the transmitted input bit u_l, and r_{p_l} is the received symbol corresponding to the transmitted parity bit p_l.

Turbo codes 62 An input bit a posteriori L-value is given by

L(u_l) = ln [P(u_l = +1 | r) / P(u_l = -1 | r)] = ln [ Sum_{(s',s): u_l = +1} p(s', s, r) / Sum_{(s',s): u_l = -1} p(s', s, r) ],

where s' represents a state at time l (denoted by s' in sigma_l), s represents a state at time l + 1 (denoted by s in sigma_{l+1}), and the sums are over all state pairs (s', s) for which u_l = +1 or -1, respectively.

Turbo codes 63 We can write the joint probabilities p(s', s, r) as p(s', s, r) = e^{alpha*_l(s') + gamma*_l(s', s) + beta*_{l+1}(s)}, where alpha*_l(s'), gamma*_l(s', s), and beta*_{l+1}(s) are the familiar log-domain alpha's, gamma's, and beta's of the MAP algorithm.

Turbo codes 64 For a continuous-output AWGN channel with an SNR of E_s/N_0, we can write the MAP decoding equations as

Branch metric: gamma*_l(s', s) = u_l L_a(u_l)/2 + (L_c/2) r_l . v_l, l = 0, 1, 2, 3
Forward metric: alpha*_{l+1}(s) = max*_{s' in sigma_l} [gamma*_l(s', s) + alpha*_l(s')], l = 0, 1, 2, 3
Backward metric: beta*_l(s') = max*_{s in sigma_{l+1}} [gamma*_l(s', s) + beta*_{l+1}(s)], l = 0, 1, 2, 3

where the max* function is defined by max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x - y|}), and the initial conditions are alpha*_0(S_0) = beta*_4(S_0) = 0 and alpha*_0(S_1) = beta*_4(S_1) = -infinity.
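The max* function can be implemented directly from the correction-term form above, which is numerically stable even for large arguments. A minimal sketch (the name max_star is ours):

```python
import math

def max_star(x, y):
    """max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x - y|})."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))
```

Using log1p and the -|x - y| exponent avoids the overflow that evaluating ln(e^x + e^y) literally would cause; dropping the correction term entirely gives the plain max used by the Max-Log-MAP algorithm.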

Turbo codes 65 Further simplifying the branch metric, we obtain gamma*_l(s', s) = u_l L_a(u_l)/2 + (L_c/2)(u_l r_{u_l} + p_l r_{p_l}) = (u_l/2)[L_a(u_l) + L_c r_{u_l}] + (p_l/2) L_c r_{p_l}, l = 0, 1, 2, 3.

Turbo codes 66 Figure 3: Another view of Turbo iterative decoder

Turbo codes 67 Computation of L(u_0) We can express the a posteriori L-value of u_0 as

L(u_0) = ln p(s' = S_0, s = S_1, r) - ln p(s' = S_0, s = S_0, r)
= [alpha*_0(S_0) + gamma*_0(s' = S_0, s = S_1) + beta*_1(S_1)] - [alpha*_0(S_0) + gamma*_0(s' = S_0, s = S_0) + beta*_1(S_0)]
= {+(1/2)[L_a(u_0) + L_c r_{u_0}] + (1/2) L_c r_{p_0} + beta*_1(S_1)} - {-(1/2)[L_a(u_0) + L_c r_{u_0}] - (1/2) L_c r_{p_0} + beta*_1(S_0)}
= L_c r_{u_0} + L_a(u_0) + L_e(u_0),

where L_e(u_0) = L_c r_{p_0} + beta*_1(S_1) - beta*_1(S_0) represents the extrinsic a posteriori (output) L-value of u_0.

Turbo codes 68 The final form of the above equation clearly illustrates the three components of the a posteriori L-value of u_0 computed at the output of a Log-MAP decoder: L_c r_{u_0}: the received channel L-value corresponding to bit u_0, which was part of the decoder input. L_a(u_0): the a priori L-value of u_0, which was also part of the decoder input. Except for the first iteration of decoder 1, this term equals the extrinsic a posteriori L-value of u_0 received from the output of the other decoder. L_e(u_0): the extrinsic part of the a posteriori L-value of u_0, which does not depend on L_c r_{u_0} or L_a(u_0). This term is then sent to the other decoder as its a priori input.

Turbo codes 69 Computation of L(u_1) We now proceed in a similar manner to compute the a posteriori L-value of bit u_1. We see from Figure P(b) that in this case there are two terms in each of the sums in L(u_l) = ln [Sum_{(s',s): u_l = +1} p(s', s, r) / Sum_{(s',s): u_l = -1} p(s', s, r)], because at this time there are two +1 and two -1 transitions in the trellis diagram.

Turbo codes 70

L(u_1) = ln [p(s' = S_0, s = S_1, r) + p(s' = S_1, s = S_0, r)] - ln [p(s' = S_0, s = S_0, r) + p(s' = S_1, s = S_1, r)]
= max* {[alpha*_1(S_0) + gamma*_1(s' = S_0, s = S_1) + beta*_2(S_1)], [alpha*_1(S_1) + gamma*_1(s' = S_1, s = S_0) + beta*_2(S_0)]}
  - max* {[alpha*_1(S_0) + gamma*_1(s' = S_0, s = S_0) + beta*_2(S_0)], [alpha*_1(S_1) + gamma*_1(s' = S_1, s = S_1) + beta*_2(S_1)]}
= max* {(+(1/2)[L_a(u_1) + L_c r_{u_1}] + (1/2) L_c r_{p_1} + alpha*_1(S_0) + beta*_2(S_1)), (+(1/2)[L_a(u_1) + L_c r_{u_1}] - (1/2) L_c r_{p_1} + alpha*_1(S_1) + beta*_2(S_0))}
  - max* {(-(1/2)[L_a(u_1) + L_c r_{u_1}] - (1/2) L_c r_{p_1} + alpha*_1(S_0) + beta*_2(S_0)), (-(1/2)[L_a(u_1) + L_c r_{u_1}] + (1/2) L_c r_{p_1} + alpha*_1(S_1) + beta*_2(S_1))}
= {+(1/2)[L_a(u_1) + L_c r_{u_1}]} - {-(1/2)[L_a(u_1) + L_c r_{u_1}]}
  + max* {[+(1/2) L_c r_{p_1} + alpha*_1(S_0) + beta*_2(S_1)], [-(1/2) L_c r_{p_1} + alpha*_1(S_1) + beta*_2(S_0)]}
  - max* {[-(1/2) L_c r_{p_1} + alpha*_1(S_0) + beta*_2(S_0)], [+(1/2) L_c r_{p_1} + alpha*_1(S_1) + beta*_2(S_1)]}
= L_c r_{u_1} + L_a(u_1) + L_e(u_1),

using the identity max*(w + x, w + y) = w + max*(x, y).

Turbo codes 71 Computation of L(u_2) and L(u_3) Continuing, we can use the same procedure to compute the a posteriori L-value of bit u_2 as L(u_2) = L_c r_{u_2} + L_a(u_2) + L_e(u_2), where

L_e(u_2) = max* {[+(1/2) L_c r_{p_2} + alpha*_2(S_0) + beta*_3(S_1)], [-(1/2) L_c r_{p_2} + alpha*_2(S_1) + beta*_3(S_0)]}
  - max* {[-(1/2) L_c r_{p_2} + alpha*_2(S_0) + beta*_3(S_0)], [+(1/2) L_c r_{p_2} + alpha*_2(S_1) + beta*_3(S_1)]}

Turbo codes 72 and L(u_3) = L_c r_{u_3} + L_a(u_3) + L_e(u_3), where

L_e(u_3) = [-(1/2) L_c r_{p_3} + alpha*_3(S_1) + beta*_4(S_0)] - [-(1/2) L_c r_{p_3} + alpha*_3(S_0) + beta*_4(S_0)] = alpha*_3(S_1) - alpha*_3(S_0).

Turbo codes 73 We now need expressions for the terms α*_1(S_0), α*_1(S_1), α*_2(S_0), α*_2(S_1), α*_3(S_0), α*_3(S_1), β*_1(S_0), β*_1(S_1), β*_2(S_0), β*_2(S_1), β*_3(S_0), and β*_3(S_1) that are used to calculate the extrinsic a posteriori L-values L_e(u_l), l = 0, 1, 2, 3. We use the shorthand notation L_ul ≡ L_c r_ul + L_a(u_l) and L_pl ≡ L_c r_pl, l = 0, 1, 2, 3, for the intrinsic information-bit L-values and parity-bit L-values, respectively.

Turbo codes 74 We can obtain the following:

α*_1(S_0) = −(1/2)(L_u0 + L_p0)
α*_1(S_1) = +(1/2)(L_u0 + L_p0)
α*_2(S_0) = max*{ [−(1/2)(L_u1 + L_p1) + α*_1(S_0)], [+(1/2)(L_u1 − L_p1) + α*_1(S_1)] }
α*_2(S_1) = max*{ [+(1/2)(L_u1 + L_p1) + α*_1(S_0)], [−(1/2)(L_u1 − L_p1) + α*_1(S_1)] }
α*_3(S_0) = max*{ [−(1/2)(L_u2 + L_p2) + α*_2(S_0)], [+(1/2)(L_u2 − L_p2) + α*_2(S_1)] }
α*_3(S_1) = max*{ [+(1/2)(L_u2 + L_p2) + α*_2(S_0)], [−(1/2)(L_u2 − L_p2) + α*_2(S_1)] }

Turbo codes 75

β*_3(S_0) = −(1/2)(L_u3 + L_p3)
β*_3(S_1) = +(1/2)(L_u3 − L_p3)
β*_2(S_0) = max*{ [−(1/2)(L_u2 + L_p2) + β*_3(S_0)], [+(1/2)(L_u2 + L_p2) + β*_3(S_1)] }
β*_2(S_1) = max*{ [+(1/2)(L_u2 − L_p2) + β*_3(S_0)], [−(1/2)(L_u2 − L_p2) + β*_3(S_1)] }
β*_1(S_0) = max*{ [−(1/2)(L_u1 + L_p1) + β*_2(S_0)], [+(1/2)(L_u1 + L_p1) + β*_2(S_1)] }
β*_1(S_1) = max*{ [+(1/2)(L_u1 − L_p1) + β*_2(S_0)], [−(1/2)(L_u1 − L_p1) + β*_2(S_1)] }

We note here that the a priori L-value of a parity bit is L_a(p_l) = 0 for all l, since for a linear code with equally likely information bits, the parity bits are also equally likely to be +1 or −1.

Turbo codes 76 We can write the extrinsic a posteriori L-values in terms of the α*'s, β*'s, and parity L-values as

L_e(u_0) = L_p0 + β*_1(S_1) − β*_1(S_0),

L_e(u_1) = max*{ [+(1/2) L_p1 + α*_1(S_0) + β*_2(S_1)], [−(1/2) L_p1 + α*_1(S_1) + β*_2(S_0)] }
− max*{ [−(1/2) L_p1 + α*_1(S_0) + β*_2(S_0)], [+(1/2) L_p1 + α*_1(S_1) + β*_2(S_1)] },

L_e(u_2) = max*{ [+(1/2) L_p2 + α*_2(S_0) + β*_3(S_1)], [−(1/2) L_p2 + α*_2(S_1) + β*_3(S_0)] }
− max*{ [−(1/2) L_p2 + α*_2(S_0) + β*_3(S_0)], [+(1/2) L_p2 + α*_2(S_1) + β*_3(S_1)] },

and

L_e(u_3) = α*_3(S_1) − α*_3(S_0).

The extrinsic L-value of bit u_l does not depend directly on either the received or the a priori L-value of u_l.

Turbo codes 77 Please see the detailed computation in Example 16.14 in Lin's book at pages 833–836.

Turbo codes 84 Iterative decoding using the Max-log-MAP algorithm

Example. When the approximation max*(x, y) ≈ max(x, y) is applied to the forward and backward recursions, we obtain for the first iteration of decoder 1:

α*_2(S_0) ≈ max{−0.70, 1.20} = 1.20
α*_2(S_1) ≈ max{−0.20, −0.30} = −0.20
α*_3(S_0) ≈ max{ [−(1/2)(−1.8 + 1.1) + 1.20], [+(1/2)(−1.8 − 1.1) − 0.20] } = max{1.55, −1.65} = 1.55
α*_3(S_1) ≈ max{ [+(1/2)(−1.8 + 1.1) + 1.20], [−(1/2)(−1.8 − 1.1) − 0.20] } = max{0.85, 1.25} = 1.25

Turbo codes 85

β*_2(S_0) ≈ max{0.35, 1.25} = 1.25
β*_2(S_1) ≈ max{−1.45, 3.05} = 3.05
β*_1(S_0) ≈ max{ [−(1/2)(1.0 − 0.5) + 1.25], [+(1/2)(1.0 − 0.5) + 3.05] } = max{1.00, 3.30} = 3.30
β*_1(S_1) ≈ max{ [+(1/2)(1.0 + 0.5) + 1.25], [−(1/2)(1.0 + 0.5) + 3.05] } = max{2.00, 2.30} = 2.30

L_e^(1)(u_0) ≈ 0.1 + 2.30 − 3.30 = −0.90
L_e^(1)(u_1) ≈ max{ [−0.25 − 0.45 + 3.05], [0.25 + 0.45 + 1.25] } − max{ [0.25 − 0.45 + 1.25], [−0.25 + 0.45 + 3.05] } = max{2.35, 1.95} − max{1.05, 3.25} = 2.35 − 3.25 = −0.90,

and, using similar calculations, we have L_e^(1)(u_2) ≈ +1.4 and L_e^(1)(u_3) ≈ −0.3.
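The first-iteration computation for decoder 1 shown above can be reproduced with a short script. The channel L-values below are inferred from the numbers in the example (all a priori values are zero in the first iteration), and the two-state trellis follows the earlier derivation; treat this as an illustrative sketch rather than the book's own code.

```python
# Max-log-MAP forward/backward recursion for the two-state trellis of the
# example: the state toggles when u = +1, and the parity bit equals the
# new state. The channel L-values are inferred from the preceding slides.

L_u = [0.8, 1.0, -1.8, 1.6]    # L_ul = Lc*r_ul + La(ul), with La = 0
L_p = [0.1, -0.5, 1.1, -1.6]   # L_pl = Lc*r_pl
K = len(L_u)
NEG = float("-inf")

def gamma(l, s_from, s_to):
    """Branch metric (1/2)*u*L_ul + (1/2)*p*L_pl for one trellis branch."""
    u = +1 if s_from != s_to else -1   # input +1 toggles the state
    p = +1 if s_to == 1 else -1        # parity bit equals the new state
    return 0.5 * u * L_u[l] + 0.5 * p * L_p[l]

# Forward recursion; the encoder starts in state S0.
alpha = [[0.0, NEG]] + [[NEG, NEG] for _ in range(K)]
for l in range(K):
    for s_to in (0, 1):
        alpha[l + 1][s_to] = max(alpha[l][s] + gamma(l, s, s_to) for s in (0, 1))

# Backward recursion; the trellis is terminated in state S0.
beta = [[NEG, NEG] for _ in range(K)] + [[0.0, NEG]]
for l in range(K - 1, -1, -1):
    for s in (0, 1):
        beta[l][s] = max(beta[l + 1][s_to] + gamma(l, s, s_to) for s_to in (0, 1))

def Le(l):
    """Extrinsic L-value: the branch metric keeps only the parity term."""
    plus, minus = [], []
    for s in (0, 1):
        for s_to in (0, 1):
            p = +1 if s_to == 1 else -1
            m = alpha[l][s] + 0.5 * p * L_p[l] + beta[l + 1][s_to]
            (plus if s != s_to else minus).append(m)  # u = +1 iff state toggles
    return max(plus) - max(minus)

Le_vals = [round(Le(l), 2) for l in range(K)]
print(Le_vals)   # [-0.9, -0.9, 1.4, -0.3], matching the slides
```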

Turbo codes 86 Using these approximate extrinsic a posteriori L-values as a priori L-values for decoder 2, and recalling that the roles of u_1 and u_2 are reversed for decoder 2, we obtain

α*_1(S_0) = −(1/2)(0.8 − 0.9 − 1.2) = 0.65
α*_1(S_1) = +(1/2)(0.8 − 0.9 − 1.2) = −0.65
α*_2(S_0) ≈ max{ [−(1/2)(−1.8 + 1.4 + 1.2) + 0.65], [+(1/2)(−1.8 + 1.4 − 1.2) − 0.65] } = max{0.25, −1.45} = 0.25
α*_2(S_1) ≈ max{ [+(1/2)(−1.8 + 1.4 + 1.2) + 0.65], [−(1/2)(−1.8 + 1.4 − 1.2) − 0.65] } = max{1.05, 0.15} = 1.05
α*_3(S_0) ≈ max{ [−(1/2)(1.0 − 0.9 + 0.2) + 0.25], [+(1/2)(1.0 − 0.9 − 0.2) + 1.05] } = max{0.10, 1.00} = 1.00
α*_3(S_1) ≈ max{ [+(1/2)(1.0 − 0.9 + 0.2) + 0.25], [−(1/2)(1.0 − 0.9 − 0.2) + 1.05] } = max{0.40, 1.10} = 1.10

Turbo codes 87

β*_3(S_0) = −(1/2)(1.6 − 0.3 − 1.1) = −0.10
β*_3(S_1) = +(1/2)(1.6 − 0.3 + 1.1) = 1.20
β*_2(S_0) ≈ max{ [−(1/2)(1.0 − 0.9 + 0.2) − 0.10], [+(1/2)(1.0 − 0.9 + 0.2) + 1.20] } = max{−0.25, 1.35} = 1.35
β*_2(S_1) ≈ max{ [+(1/2)(1.0 − 0.9 − 0.2) − 0.10], [−(1/2)(1.0 − 0.9 − 0.2) + 1.20] } = max{−0.15, 1.25} = 1.25
β*_1(S_0) ≈ max{ [−(1/2)(−1.8 + 1.4 + 1.2) + 1.35], [+(1/2)(−1.8 + 1.4 + 1.2) + 1.25] } = max{0.95, 1.65} = 1.65
β*_1(S_1) ≈ max{ [+(1/2)(−1.8 + 1.4 − 1.2) + 1.35], [−(1/2)(−1.8 + 1.4 − 1.2) + 1.25] } = max{0.55, 2.05} = 2.05

L_e^(2)(u_0) ≈ −1.2 + 2.05 − 1.65 = −0.80
L_e^(2)(u_2) ≈ max{ [0.6 + 0.65 + 1.25], [−0.6 − 0.65 + 1.35] } − max{ [−0.6 + 0.65 + 1.35], [0.6 − 0.65 + 1.25] } = max{2.5, 0.1} − max{1.4, 1.2} = 2.5 − 1.4 = +1.10,

and, using similar calculations, we have L_e^(2)(u_1) ≈ −0.8 and L_e^(2)(u_3) ≈ +0.1.

Turbo codes 88 We calculate the approximate a posteriori L-value of information bit u_0 after the first complete iteration of decoding as

L^(2)(u_0) = L_c r_u0 + L_a^(2)(u_0) + L_e^(2)(u_0) ≈ 0.8 − 0.9 − 0.8 = −0.9,

and we similarly obtain the remaining approximate a posteriori L-values as L^(2)(u_1) ≈ −0.7, L^(2)(u_2) ≈ +0.7, and L^(2)(u_3) ≈ +1.4.

Turbo codes 89 Please see the detailed computation in Example 16.15 in Lin's book at pages 837–838.

Turbo codes 90 Fundamental principle of turbo decoding

We now summarize our discussion of iterative decoding using the log-MAP and Max-log-MAP algorithms:
- The extrinsic a posteriori L-values are no longer strictly independent of the other terms after the first iteration of decoding, which causes the performance improvement from successive iterations to diminish over time.
- The concept of iterative decoding is similar to negative feedback in control theory, in the sense that the extrinsic information from the output that is fed back to the input has the effect of amplifying the SNR at the input, leading to a stable system output.

Turbo codes 91 Decoding speed can be improved by a factor of 2 by allowing the two decoders to work in parallel. In this case, the a priori L-values for the first iteration of decoder 2 will be the same as for decoder 1 (normally equal to 0), and the extrinsic a posteriori L-values will then be exchanged at the same time prior to each succeeding iteration. After a sufficient number of iterations, the final decoding decision can be taken from the a posteriori L-values of either decoder, or from the average of these values, without noticeably affecting performance.

Turbo codes 92 As noted earlier, the L-values of the parity bits remain constant throughout decoding. In serially concatenated iterative decoding systems, however, parity bits from the outer decoder enter the inner decoder, and thus the L-values of these parity bits must be updated during the iterations. The foregoing approach to iterative decoding is ineffective for nonsystematic constituent codes, since channel L-values for the information bits are not available as inputs to decoder 2; however, the iterative decoder of Figure O can be modified to decode PCCCs with nonsystematic component codes.

Turbo codes 93 As noted previously, better performance is normally achieved with pseudorandom interleavers, particularly for large block lengths, and the iterative decoding procedure remains the same. It is possible, however, particularly on very noisy channels, for the decoder to converge to the correct decision and then diverge again, or even to oscillate between correct and incorrect decisions.

Turbo codes 94 Iterations can be stopped after some fixed number, typically in the range 10–20 for most turbo codes, or stopping rules based on reliability statistics can be used to halt decoding. The Max-log-MAP algorithm is simpler to implement than the log-MAP algorithm; however, it typically suffers a performance degradation of about 0.5 dB. It can be shown that the Max-log-MAP algorithm is equivalent to the SOVA algorithm.

Turbo codes 95 Stopping rules for iterative decoding

1. One method is based on the cross-entropy (CE) of the APP distributions at the outputs of the two decoders.
- The cross-entropy D(P‖Q) of two joint probability distributions P(u) and Q(u), assuming statistical independence of the bits in the vector u = [u_0, u_1, ..., u_{K−1}], is defined as

D(P‖Q) = E_P{ log [P(u)/Q(u)] } = Σ_{l=0}^{K−1} E_P{ log [P(u_l)/Q(u_l)] },

where E_P{·} denotes expectation with respect to the probability distribution P(u_l).

Turbo codes 96
- D(P‖Q) is a measure of the closeness of two distributions, and D(P‖Q) = 0 iff P(u_l) = Q(u_l), u_l = ±1, l = 0, 1, ..., K−1.
- The CE stopping rule is based on the difference between the a posteriori L-values after successive iterations at the outputs of the two decoders. For example, let

L^(1)_(i)(u_l) = L_c r_ul + L^(1)_a(i)(u_l) + L^(1)_e(i)(u_l)

represent the a posteriori L-value at the output of decoder 1 after iteration i, and let

L^(2)_(i)(u_l) = L_c r_ul + L^(2)_a(i)(u_l) + L^(2)_e(i)(u_l)

represent the a posteriori L-value at the output of decoder 2 after iteration i.

Turbo codes 97
- Now, using the facts that L^(1)_a(i)(u_l) = L^(2)_e(i−1)(u_l) and L^(2)_a(i)(u_l) = L^(1)_e(i)(u_l), and letting Q(u_l) and P(u_l) represent the a posteriori probability distributions at the outputs of decoders 1 and 2, respectively, we can write

L^(Q)_(i)(u_l) = L_c r_ul + L^(P)_e(i−1)(u_l) + L^(Q)_e(i)(u_l)

and

L^(P)_(i)(u_l) = L_c r_ul + L^(Q)_e(i)(u_l) + L^(P)_e(i)(u_l).

- We can write the difference in the two soft outputs as

L^(P)_(i)(u_l) − L^(Q)_(i)(u_l) = L^(P)_e(i)(u_l) − L^(P)_e(i−1)(u_l) ≜ ΔL^(P)_e(i)(u_l);

that is, ΔL^(P)_e(i)(u_l) represents the difference in the extrinsic a posteriori L-values of decoder 2 in two successive iterations.

Turbo codes 98
- We now compute the CE of the a posteriori probability distributions P(u_l) and Q(u_l) as follows:

E_P{ log [P(u_l)/Q(u_l)] } = P(u_l = +1) log [ P(u_l = +1)/Q(u_l = +1) ] + P(u_l = −1) log [ P(u_l = −1)/Q(u_l = −1) ]

= [ e^{L^(P)_(i)(u_l)} / (1 + e^{L^(P)_(i)(u_l)}) ] log { [ e^{L^(P)_(i)(u_l)} (1 + e^{L^(Q)_(i)(u_l)}) ] / [ e^{L^(Q)_(i)(u_l)} (1 + e^{L^(P)_(i)(u_l)}) ] }
+ [ e^{−L^(P)_(i)(u_l)} / (1 + e^{−L^(P)_(i)(u_l)}) ] log { [ e^{−L^(P)_(i)(u_l)} (1 + e^{−L^(Q)_(i)(u_l)}) ] / [ e^{−L^(Q)_(i)(u_l)} (1 + e^{−L^(P)_(i)(u_l)}) ] },

where we have used expressions for the a posteriori distributions P(u_l = ±1) and Q(u_l = ±1) analogous to P(u_l = ±1) = e^{±L_a(u_l)} / (1 + e^{±L_a(u_l)}).

Turbo codes 99
- The above equation can be simplified to

E_P{ log [P(u_l)/Q(u_l)] } = ΔL^(P)_e(i)(u_l) / (1 + e^{−L^(P)_(i)(u_l)}) + log [ (1 + e^{L^(Q)_(i)(u_l)}) / (1 + e^{L^(P)_(i)(u_l)}) ].

- The hard decisions after iteration i, û_l^(i), satisfy

û_l^(i) = sgn [ L^(P)_(i)(u_l) ] = sgn [ L^(Q)_(i)(u_l) ].

Turbo codes 100
- Using the above equation and noting that

|L^(P)_(i)(u_l)| = sgn [ L^(P)_(i)(u_l) ] L^(P)_(i)(u_l) = û_l^(i) L^(P)_(i)(u_l)

and

|L^(Q)_(i)(u_l)| = sgn [ L^(Q)_(i)(u_l) ] L^(Q)_(i)(u_l) = û_l^(i) L^(Q)_(i)(u_l),

we can show that the expression for E_P{ log [P(u_l)/Q(u_l)] } simplifies further to

E_P{ log [P(u_l)/Q(u_l)] } = −û_l^(i) ΔL^(P)_e(i)(u_l) e^{−|L^(P)_(i)(u_l)|} / (1 + e^{−|L^(P)_(i)(u_l)|}) + log [ (1 + e^{−|L^(Q)_(i)(u_l)|}) / (1 + e^{−|L^(P)_(i)(u_l)|}) ].

Turbo codes 101
- We now use the facts that once decoding has converged, the magnitudes of the a posteriori L-values are large, that is, |L^(P)_(i)(u_l)| >> 0 and |L^(Q)_(i)(u_l)| >> 0, and that when x is large, e^{−x} is small, so that 1 + e^{−x} ≈ 1 and log(1 + e^{−x}) ≈ e^{−x}.
- Applying these approximations to E_P{ log [P(u_l)/Q(u_l)] }, we can show that

E_P{ log [P(u_l)/Q(u_l)] } ≈ e^{−|L^(Q)_(i)(u_l)|} [ 1 − e^{−û_l^(i) ΔL^(P)_e(i)(u_l)} (1 + û_l^(i) ΔL^(P)_e(i)(u_l)) ].

Turbo codes 102
- Noting that the magnitude of ΔL^(P)_e(i)(u_l) will be smaller than 1 when decoding converges, we can approximate the term e^{−û_l^(i) ΔL^(P)_e(i)(u_l)} using the first two terms of its series expansion:

e^{−û_l^(i) ΔL^(P)_e(i)(u_l)} ≈ 1 − û_l^(i) ΔL^(P)_e(i)(u_l),

which leads to the simplified expression

E_P{ log [P(u_l)/Q(u_l)] } ≈ e^{−|L^(Q)_(i)(u_l)|} [ 1 − (1 − û_l^(i) ΔL^(P)_e(i)(u_l))(1 + û_l^(i) ΔL^(P)_e(i)(u_l)) ] = e^{−|L^(Q)_(i)(u_l)|} [ û_l^(i) ΔL^(P)_e(i)(u_l) ]² = |ΔL^(P)_e(i)(u_l)|² e^{−|L^(Q)_(i)(u_l)|}.

Turbo codes 103 We can write the CE of the probability distributions P(u) and Q(u) at iteration i as

D_(i)(P‖Q) = E_P{ log [P(u)/Q(u)] } ≈ Σ_{l=0}^{K−1} |ΔL^(P)_e(i)(u_l)|² / e^{|L^(Q)_(i)(u_l)|},

where we note that the statistical independence assumption does not hold exactly as the iterations proceed. We next define

T(i) = Σ_{l=0}^{K−1} |ΔL^(P)_e(i)(u_l)|² / e^{|L^(Q)_(i)(u_l)|}

as the approximate value of the CE at iteration i. T(i) can be computed after each iteration.
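The quality of the per-bit approximation behind T(i) can be checked numerically. The L-values below are made up; the point is only that, for large and agreeing a posteriori L-values, the exact CE term and the approximation derived above are close.

```python
import math

# Exact per-bit cross-entropy term vs. the approximation
#   E_P{log P(ul)/Q(ul)} ~= e^{-|L(Q)|} * (1 - e^{-u*dLe} * (1 + u*dLe)).
# The a posteriori L-values below are made-up illustrative numbers.

L_P, L_Q = 5.0, 4.8          # decoder 2 and decoder 1 a posteriori L-values
d_Le = L_P - L_Q             # change in decoder 2's extrinsic L-value
u_hat = 1.0 if L_P > 0 else -1.0

def p_plus(L):               # P(u = +1) = e^L / (1 + e^L)
    return 1.0 / (1.0 + math.exp(-L))

Pp, Qp = p_plus(L_P), p_plus(L_Q)
exact = Pp * math.log(Pp / Qp) + (1 - Pp) * math.log((1 - Pp) / (1 - Qp))
approx = math.exp(-abs(L_Q)) * (1 - math.exp(-u_hat * d_Le) * (1 + u_hat * d_Le))

print(abs(exact - approx) / exact < 0.05)   # True: within a few percent
```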

Turbo codes 104 Experience with computer simulations has shown that once convergence is achieved, T(i) drops by a factor of 10² to 10⁴ compared with its initial value, and thus it is reasonable to use

T(i) < 10⁻³ T(1)

as a stopping rule for iterative decoding.
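The resulting stopping test can be sketched as follows; every number below is synthetic, chosen only to illustrate the before/after-convergence behavior.

```python
import math

# T(i) = sum_l |dLe(ul)|^2 / exp(|L(ul)|), computed after each iteration;
# decoding halts once T(i) < 1e-3 * T(1). All values below are synthetic.

def T(d_Le, L_Q):
    return sum(d * d / math.exp(abs(L)) for d, L in zip(d_Le, L_Q))

# Iteration 1: extrinsic values still changing, |L| still small.
T1 = T([1.1, -0.8, 0.9, -1.2], [0.9, -0.7, 1.4, -1.1])

# A later iteration: tiny changes, large confident |L|, so T(i) collapses.
Ti = T([0.02, -0.01, 0.03, -0.02], [9.5, -8.7, 10.2, -9.9])

stop = Ti < 1e-3 * T1
print(stop)   # True: the stopping rule would halt the iterations here
```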

Turbo codes 105 2. Another approach to stopping the iterations in turbo decoding is to concatenate a high-rate outer cyclic code with an inner turbo code.

Figure: A concatenation of an outer cyclic code with an inner turbo code.

Turbo codes 106 After each iteration, the hard-decision output of the turbo decoder is used to check the syndrome of the cyclic code. If no errors are detected, decoding is assumed to be correct and the iterations are stopped. It is important to choose an outer code with a low undetected-error probability, so that iterative decoding is not stopped prematurely. For this reason it is usually advisable not to check the syndrome of the outer code during the first few iterations, when the probability of undetected error may be larger than the probability that the turbo decoder is error free.
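A minimal sketch of this stopping rule is below. The (7,4) Hamming code stands in for the high-rate cyclic outer code of the text, and the decision vectors are hypothetical; only the stop/continue logic is the point.

```python
# Outer-code stopping rule: after each iteration, check the syndrome of the
# hard decisions and stop when no errors are detected. A (7,4) Hamming code
# is used here as a stand-in for the high-rate cyclic outer code.

H = [  # parity-check matrix; column j is the binary representation of j+1
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    """Syndrome of a hard-decision word; all-zero means no detected errors."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def should_stop(hard_decisions, iteration, min_iters=3):
    # Skip the syndrome test during the first few iterations, when an
    # undetected-error pattern is more likely than an error-free decision.
    return iteration >= min_iters and not any(syndrome(hard_decisions))

codeword = [1, 0, 1, 1, 0, 1, 0]   # a valid Hamming codeword (hypothetical)
early = [1, 0, 1, 1, 0, 1, 1]      # one bit still wrong -> nonzero syndrome

print(should_stop(early, iteration=4))      # False: error detected, keep going
print(should_stop(codeword, iteration=4))   # True: syndrome is zero, stop
print(should_stop(codeword, iteration=2))   # False: too early to trust it
```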

Turbo codes 107 This method of stopping the iterations is particularly effective for large block lengths, since in this case the rate of the outer code can be made very high, thus resulting in a negligible overall rate loss. For large block lengths, the foregoing idea can be extended to include outer codes, such as BCH codes, that can correct a small number of errors and still maintain a low undetected-error probability. In this case, the iterations are stopped once the number of hard-decision errors at the output of the turbo decoder is within the error-correcting capability of the outer code.

Turbo codes 108 This method also provides a low word-error probability for the complete system; that is, the probability that the entire information block contains one or more decoding errors can be made very small.

Turbo codes 109 Q&A