Convolutional Codes. Houshou Chen. May 28, 2012


Convolutional Codes
Houshou Chen
Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan
May 28, 2012

Outline
1. Representation I and II: shift register representation; scalar G matrix representation
2. Representation III and IV: impulse response representation; polynomial G(D) matrix representation; examples
3. Trellis of convolutional codes: state diagram; tree diagram; regular trellis diagram
4. Viterbi decoding: the algorithm; Viterbi for BSC; Viterbi for AWGN
5. Turbo codes: properties; performance

(Figure: a general convolutional encoder: k information bits enter (m+1)k register stages holding u_i, u_{i-1}, ..., u_{i-m}; the combining blocks G_0, G_1, ..., G_m form the n-bit encoded block v_i sent to the modulator.)

For an [n, k, m] convolutional code, the encoder output v_i depends not only on the input u_i but also on some number of previous inputs:

    v_i = f(u_i, u_{i-1}, ..., u_{i-m}),   where v_i ∈ F_2^n and u_i ∈ F_2^k.

A rate-k/n convolutional encoder with memory order m can be realized as a k-input, n-output linear sequential circuit with input memory m.

(Figure: encoder of an [n = 2, k = 1, m = 2] convolutional code: the input u_i and two delay elements holding u_{i-1} and u_{i-2} feed two modulo-2 adders producing v_i^(1) and v_i^(2).)

An [n = 2, k = 1, m = 2] convolutional code:

    v_i^(1) = u_i + u_{i-1} + u_{i-2}
    v_i^(2) = u_i + u_{i-2}
    v_i = (v_i^(1), v_i^(2))
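These two equations are easy to check in software. Below is a minimal Python sketch (the function name and the zero-tail convention are ours, not from the slides), run on the input that reappears in Example 1 later in the lecture:

    def encode_212(u):
        # [2,1,2] encoder: v1 = u_i + u_{i-1} + u_{i-2}, v2 = u_i + u_{i-2} over GF(2)
        u = list(u) + [0, 0]              # append m = 2 zeros to return to state 00
        s1 = s2 = 0                       # register contents u_{i-1}, u_{i-2}
        v = []
        for b in u:
            v.append((b ^ s1 ^ s2, b ^ s2))
            s1, s2 = b, s1
        return v

    print(encode_212([1, 1, 1, 0, 1]))
    # [(1, 1), (0, 1), (1, 0), (0, 1), (0, 0), (1, 0), (1, 1)]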

Convolutional codes were first introduced by Elias in 1955. There are several methods to describe a convolutional code:
1. sequential circuit: shift register representation
2. algebraic description: scalar G matrix
3. LTI system: impulse response g(l) in the time domain
4. LTI system: polynomial G(D) matrix in the D domain
5. combinatorial description: state diagram, tree, and trellis

(Figure: a convolutional encoder with blocks G_2, G_1, G_0 mapping the message sequence — the past message blocks ..., u_{i-2}, u_{i-1} and the current message block u_i (k bits) — to the code block v_i (n bits).)

    u_i = (u_i^(1), ..., u_i^(k)),  v_i = (v_i^(1), ..., v_i^(n))      (time domain)
    u(D) = Σ_i u_i D^i,  v(D) = Σ_i v_i D^i,  v(D) = u(D) G(D)         (frequency domain)

There are two types of codes in general:
Block codes: G(D) = G  ⟹  v_i = u_i G
Convolutional codes: G(D) = G_0 + G_1 D + ... + G_m D^m  ⟹  v_i = u_i G_0 + u_{i-1} G_1 + ... + u_{i-m} G_m

Shift register representation

(Figure: encoder of a [7, 4, 3] Hamming code.)

Block codes use combinational logic: an information block u_i of length k at time i is mapped to a codeword v_i of length n at time i by a k × n generator matrix G for each i, i.e., there is no memory. Usually n and k are large.

    v_i = u_i G

(Figure: encoder of a convolutional code.)

Convolutional codes use sequential logic: we need m + 1 matrices G_0, G_1, ..., G_m, each of size k × n. Usually n and k are small.

    v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m

Scalar G matrix representation

Scalar G matrix of linear block codes: since v_i = u_i G, we have

    [u_0, u_1, u_2, ...] · diag(G, G, G, ...) = [v_0, v_1, v_2, ...]

(Figure: a time-varying trellis of a block code.)

Scalar G matrix of convolutional codes: since v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m, we have

    [u_0, u_1, u_2, ...] ·
    [ G_0 G_1 ... G_m
          G_0 G_1 ... G_m
              G_0 G_1 ... G_m
                  ...         ]  =  [v_0, v_1, v_2, ...]

(Figure: a time-invariant trellis of a convolutional code, with states a = (00) = 0, b = (10) = 1, c = (01) = 2, d = (11) = 3 and branch labels u/v such as 0/00 and 1/11.)
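This banded matrix is easy to generate. A short sketch of the matrix encoding, truncated to h input blocks with a zero tail (the helper scalar_G is our own):

    import numpy as np

    def scalar_G(G_blocks, h):
        # kh x n(h+m) Toeplitz matrix: block row [G_0 G_1 ... G_m]
        # shifted right by n columns for each successive input block
        m = len(G_blocks) - 1
        k, n = G_blocks[0].shape
        G = np.zeros((k * h, n * (h + m)), dtype=int)
        for i in range(h):
            for l, Gl in enumerate(G_blocks):
                G[i*k:(i+1)*k, (i+l)*n:(i+l+1)*n] = Gl
        return G

    G0, G1, G2 = np.array([[1, 1]]), np.array([[1, 0]]), np.array([[1, 1]])
    u = np.array([1, 1, 1, 0, 1])
    v = u @ scalar_G([G0, G1, G2], h=5) % 2
    print(v.reshape(-1, 2))     # blocks 11 01 10 01 00 10 11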

Impulse response representation

We assume that the initial values in the m memory elements are all zero (i.e., the conventional trellis with the zero state as the starting state). The output v_i then depends on the current input u_i and the state u_{i-1}, ..., u_{i-m}. For example, the scalar matrix shows that

    v_0 = u_0 G_0   (since u_{-1} = u_{-2} = ... = u_{-m} = 0)
    v_1 = u_1 G_0 + u_0 G_1

From the scalar G matrix representation, in general,

    v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m = Σ_{l=0}^{m} u_{i-l} G_l

As an LTI system with k inputs and n outputs, a convolutional code can be described by k·n impulse responses g_i^(j) = g_i^(j)(l), 0 ≤ l ≤ m (1 ≤ i ≤ k and 1 ≤ j ≤ n), which can be obtained from the m + 1 matrices {G_0, ..., G_m}.

Define the input sequence of the jth stream, 1 ≤ j ≤ k, as

    u^(j) = (u_0^(j), u_1^(j), u_2^(j), u_3^(j), ...)

Stacking the k streams as rows gives the array

    u^(1) = u_0^(1) u_1^(1) ... u_i^(1) ...
    u^(2) = u_0^(2) u_1^(2) ... u_i^(2) ...
    ...
    u^(k) = u_0^(k) u_1^(k) ... u_i^(k) ...

whose ith column is the input block u_i.

Define the output sequence of the jth stream, 1 ≤ j ≤ n, as

    v^(j) = (v_0^(j), v_1^(j), v_2^(j), v_3^(j), ...)

Stacking the n streams as rows gives the array

    v^(1) = v_0^(1) v_1^(1) ... v_i^(1) ...
    v^(2) = v_0^(2) v_1^(2) ... v_i^(2) ...
    ...
    v^(n) = v_0^(n) v_1^(n) ... v_i^(n) ...

whose ith column is the output block v_i.

An [n, k, m] convolutional code can be represented as a MIMO LTI system with k input streams (u^(1), u^(2), ..., u^(k)), n output streams (v^(1), v^(2), ..., v^(n)), and a k × n impulse response matrix g(l) = {g_i^(j)(l)}:

    v^(j) = u^(1) * g_1^(j) + u^(2) * g_2^(j) + ... + u^(k) * g_k^(j) = Σ_{i=1}^{k} u^(i) * g_i^(j)

where * denotes convolution. This is the origin of the name convolutional code.

The impulse response g_i^(j) from the ith input to the jth output is found by stimulating the encoder with the discrete impulse (1 0 0 0 ...) at the ith input and observing the jth output while all other inputs are fed (0 0 0 0 ...).
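The sum-of-convolutions formula can be checked directly with numpy (a sketch; conv_encode and the stream/impulse-response layout are our own conventions):

    import numpy as np

    def conv_encode(streams, g):
        # streams: k input streams; g[i][j]: impulse response from input i to output j
        k, n = len(g), len(g[0])
        L = len(streams[0]) + len(g[0][0]) - 1
        return [sum(np.convolve(streams[i], g[i][j])[:L] for i in range(k)) % 2
                for j in range(n)]

    g = [[[1, 1, 1], [1, 0, 1]]]            # k=1, n=2: the (7, 5) code
    v1, v2 = conv_encode([[1, 1, 1, 0, 1]], g)
    print(v1, v2)                           # [1 0 1 0 0 1 1] [1 1 0 1 0 0 1]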

Polynomial G(D) matrix representation

The k × n matrix g(l) consisting of the impulse responses g_i^(j)(l) and the k × n matrix G(D) consisting of the entries G_i^(j)(D) form a Fourier pair. Equivalently, we can form the k × n matrix

    G(D) = [G_i^(j)(D)] = G_0 + G_1 D + G_2 D^2 + ... + G_m D^m

This is the polynomial matrix of a convolutional code. Let U(D) = (U_1(D), U_2(D), ..., U_k(D)) be the D-transform of (u^(1), u^(2), ..., u^(k)), and let V(D) = (V_1(D), V_2(D), ..., V_n(D)) be the D-transform of (v^(1), v^(2), ..., v^(n)). Then, in the D domain,

    V(D) = U(D) G(D)

Example 1

Shift register: (Figure: (2,1,2) convolutional code encoder with input u = (1 1 1 0 1) and outputs v^(1) = (1 0 1 0 0 1 1), v^(2) = (1 1 0 1 0 0 1).)

Scalar G matrix: G_0 = (1, 1), G_1 = (1, 0), G_2 = (1, 1)

Impulse response: g(l) = (111, 101) = (7, 5) in octal

Polynomial matrix: G(D) = G_0 + G_1 D + G_2 D^2 = (1 + D + D^2, 1 + D^2)

Input u = (1, 1, 1, 0, 1) in the time domain; v can be obtained from the scalar G matrix:

    (11, 01, 10, 01, 00, 10, 11) = (1, 1, 1, 0, 1) ·
    [ 11 10 11
         11 10 11
            11 10 11
               11 10 11
                  11 10 11 ]

Alternatively, v can be obtained in the D domain:

    U(D) = 1 + D + D^2 + D^4
    V(D) = U(D) · (1 + D + D^2, 1 + D^2)
    V_1(D) = 1 + D^2 + D^5 + D^6
    V_2(D) = 1 + D + D^3 + D^6
    v^(1) = (1, 0, 1, 0, 0, 1, 1)
    v^(2) = (1, 1, 0, 1, 0, 0, 1)

In the time domain: v = (11, 01, 10, 01, 00, 10, 11)

Example 2

Shift register: (Figure: (3,2,3) convolutional code encoder.)

Scalar G matrix:

    G_0 = [ 1 0 1     G_1 = [ 1 1 0     G_2 = [ 0 0 0
            0 1 1 ]           0 0 1 ]           1 0 1 ]

Impulse responses:

    g(l) = [ (110) (010) (100)
             (001) (100) (111) ]

Polynomial matrix:

    G(D) = [ 1+D   D   1
             D^2   1   1+D+D^2 ]

Let the input be u = (u^(1), u^(2)) with u^(1) = (1, 0) and u^(2) = (1, 1), so U_1(D) = 1 and U_2(D) = 1 + D. Then

    (V_1(D), V_2(D), V_3(D)) = (U_1(D), U_2(D)) · [ 1+D   D   1
                                                    D^2   1   1+D+D^2 ]
                             = (1 + D + D^2 + D^3, 1, D^3)

    v^(1) = (1, 1, 1, 1), v^(2) = (1, 0, 0, 0), v^(3) = (0, 0, 0, 1)
    v_0 = (1, 1, 0), v_1 = (1, 0, 0), v_2 = (1, 0, 0), v_3 = (1, 0, 1)
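The same arithmetic can be done numerically with polynomial coefficient lists (ascending powers of D, zero-padded to equal length; a sketch under our own conventions):

    import numpy as np

    mul = lambda a, b: np.convolve(a, b) % 2            # GF(2) polynomial product
    add = lambda a, b: (np.array(a) + np.array(b)) % 2  # GF(2) polynomial sum

    U1, U2 = [1, 0, 0, 0], [1, 1, 0, 0]                 # U1(D) = 1, U2(D) = 1 + D
    G = [[[1, 1, 0], [0, 1, 0], [1, 0, 0]],             # 1+D,  D,  1
         [[0, 0, 1], [1, 0, 0], [1, 1, 1]]]             # D^2,  1,  1+D+D^2

    for j in range(3):
        print(add(mul(U1, G[0][j])[:6], mul(U2, G[1][j])[:6]))
    # [1 1 1 1 0 0]  -> V1(D) = 1 + D + D^2 + D^3
    # [1 0 0 0 0 0]  -> V2(D) = 1
    # [0 0 0 1 0 0]  -> V3(D) = D^3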

State diagram

State diagram from the encoder:

    G_0 = (1, 1), G_1 = (1, 0), G_2 = (1, 1)
    g(l) = (111, 101) = (7, 5)
    G(D) = [1 + D + D^2, 1 + D^2] = [1, 1] + [1, 0] D + [1, 1] D^2

In the state diagram, we label each transition from state s_i to state s_{i+1} with the input/output pair u_i / v_i.

(Figures: state diagram of the (2,1,2) encoder.)

State diagram from the scalar G matrix: reading off the window [G_0 G_1 G_2] of the scalar G matrix, with current input u_i and current state s_i = (u_{i-1}, u_{i-2}), the next state s_{i+1} and the output v_i are functions of the current input u_i and state s_i:

    next state: s_{i+1} = (u_i, u_{i-1}) = f(s_i, u_i)
    output: v_i = u_i (11) + u_{i-1} (10) + u_{i-2} (11) = g(s_i, u_i)

The encoder of a convolutional code can therefore be regarded as a time-invariant finite state machine.
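The finite-state-machine view translates directly into code. A sketch for the (2,1,2) encoder (fsm_step is our own name) that prints every branch of the state diagram:

    # State s_i = (u_{i-1}, u_{i-2}); each branch of the state diagram is u/v.
    def fsm_step(state, u):
        s1, s2 = state
        v = (u ^ s1 ^ s2, u ^ s2)       # output v_i = g(s_i, u_i)
        return (u, s1), v               # next state s_{i+1} = f(s_i, u_i)

    for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for u in (0, 1):
            nxt, v = fsm_step(state, u)
            print(f"{state} --{u}/{v[0]}{v[1]}--> {nxt}")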

Tree diagram

Any codeword of a convolutional code can be represented as a path in a code tree. Assume the h input blocks u_1, u_2, ..., u_h are followed by u_{h+1} = u_{h+2} = ... = u_{h+m} = 0 (zero-tail termination). The code tree then has h + m + 1 levels, with the leftmost level 0 and the rightmost level h + m. In the first h levels each node has 2^k outgoing branches; after level h each node has only one. Each path from the root node to a terminal node forms a codeword of the convolutional code.

Regular trellis diagram

(Figure: trellis of a convolutional code with states a = (00) = 0, b = (10) = 1, c = (01) = 2, d = (11) = 3 and branch labels u/v.)

We can expand the state diagram of a convolutional code to get a trellis of infinite length. Since a convolutional code has a regular (banded, repeating) G matrix, its trellis is time-invariant. In practice, we have to truncate a convolutional code into a block code. There are in general three methods: zero-tail termination, truncation, and tail-biting.

Zero-tail: h information blocks followed by m zero blocks; an [n, k] convolutional code becomes an [n(h + m), kh] block code. Here h = 4 and m = 2:

    [ G_0 G_1 G_2
          G_0 G_1 G_2
              G_0 G_1 G_2
                  G_0 G_1 G_2 ]

The trellis runs from the zero state at time 0 to the zero state at time h + m, and the rate is R = (k/n) · h/(h + m).

Truncation: h information blocks; an [n, k] convolutional code becomes an [nh, kh] block code. Here h = 4 and m = 2:

    [ G_0 G_1 G_2  0
       0  G_0 G_1 G_2
       0   0  G_0 G_1
       0   0   0  G_0 ]

The trellis runs from the zero state at time 0 to any state at time h; R = k/n.

Tail-biting: h information blocks; an [n, k] convolutional code becomes an [nh, kh] block code. Here h = 4 and m = 2:

    [ G_0 G_1 G_2  0
       0  G_0 G_1 G_2
      G_2  0  G_0 G_1
      G_1 G_2  0  G_0 ]

The trellis runs from any state at time 0 to the same state at time h; R = k/n.
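All three truncations differ only in where the blocks G_l land in the scalar matrix, which makes them easy to generate programmatically (a sketch; terminated_G and the mode names are our own):

    import numpy as np

    def terminated_G(G_blocks, h, mode):
        # 'zero-tail': kh x n(h+m); 'truncate' and 'tail-bite': kh x nh
        m = len(G_blocks) - 1
        k, n = G_blocks[0].shape
        cols = h + m if mode == "zero-tail" else h
        G = np.zeros((k * h, n * cols), dtype=int)
        for i in range(h):
            for l, Gl in enumerate(G_blocks):
                j = i + l
                if mode == "tail-bite":
                    j %= h                  # wrap around: same state at times 0 and h
                elif j >= cols:
                    continue                # truncation simply drops the tail blocks
                G[i*k:(i+1)*k, j*n:(j+1)*n] = Gl
        return G

    blocks = [np.array([[1, 1]]), np.array([[1, 0]]), np.array([[1, 1]])]
    for mode in ("zero-tail", "truncate", "tail-bite"):
        print(mode, terminated_G(blocks, h=4, mode=mode).shape)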

Trellis-based decoding

Three decoding algorithms:
1. Viterbi decoding: 1967, by Viterbi
2. SOVA decoding: 1989, by Hagenauer
3. BCJR decoding: 1974, by Bahl et al.

These algorithms operate on the trellis of the code, and their complexity depends on the number of states in the trellis. We can apply them to both block codes and convolutional codes; the former have irregular trellises while the latter have regular trellises. Viterbi and SOVA decoding minimize the codeword error probability, while BCJR minimizes the information bit error probability.

The Viterbi algorithm

We use the (zero-tail) trellis of a convolutional code for illustration. Assume that an information sequence u = (u_0, u_1, ..., u_{h-1}) of length K = kh is encoded. Then a codeword v = (v_0, v_1, ..., v_{h-1}, v_h, ..., v_{h+m-1}) of length N = n(h + m) is generated by the convolutional encoder. Thus, with zero-tail termination by mk zero bits, we have an [n(h + m), kh] linear block code. After the discrete memoryless channel (DMC), a sequence r = (r_0, r_1, ..., r_{h+m-1}) is received. We focus on BSC and AWGN channels.

A maximum likelihood (ML) decoder for a DMC chooses v̂ as the codeword v that maximizes the log-likelihood function log p(r|v). Since the channel is memoryless, we have

    p(r|v) = Π_{l=0}^{h+m-1} p(r_l | v_l) = Π_{l=0}^{N-1} p(r_l | v_l)

    M(r|v) = log p(r|v) = Σ_{l=0}^{h+m-1} log p(r_l | v_l) = Σ_{l=0}^{N-1} log p(r_l | v_l)

where M(r|v) is the path metric, M(r_l|v_l) = log p(r_l|v_l) summed over n-bit blocks is the branch metric, and M(r_l|v_l) = log p(r_l|v_l) summed over single bits is the bit metric. Similarly, a partial metric for the first t branches of a path can be written as

    M([r|v]_t) = Σ_{l=0}^{t-1} M(r_l | v_l) = Σ_{l=0}^{nt-1} M(r_l | v_l)

The bit metric log p(r_l|v_l) can be replaced by c_1 log p(r_l|v_l) + c_2 with c_1 > 0 without changing the decision.

Summary of the Viterbi algorithm

The operations of the Viterbi algorithm are addition, comparison, and selection (ACS). At time t + 1, for each state s_{t+1} and each of the 2^k previous states s_t connecting to s_{t+1}, we compute a candidate partial metric for s_{t+1} by adding the branch metric between s_t and s_{t+1} to the partial metric of s_t, and we select the largest. We store the optimum path (the survivor) together with its partial metric for each state s_t at time t. The final survivor v̂ in the Viterbi algorithm is the maximum likelihood path; that is,

    M(r|v̂) ≥ M(r|v)   for all v ≠ v̂
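A compact hard-decision version of this ACS recursion for rate-1/n codes on the zero-tail trellis, minimizing the Hamming metric used in the BSC subsection below (a sketch; viterbi_decode and its interface are ours):

    def viterbi_decode(r_blocks, g, m):
        # r_blocks: received n-bit tuples; g: n impulse responses; m: memory order
        def branch(state, u):               # state = (u_{i-1}, ..., u_{i-m})
            bits = (u,) + state
            v = tuple(sum(gj[l] & bits[l] for l in range(m + 1)) % 2 for gj in g)
            return (u,) + state[:-1], v
        h = len(r_blocks) - m
        surv = {(0,) * m: (0, [])}          # state -> (partial metric, survivor bits)
        for t, r in enumerate(r_blocks):
            nxt = {}
            for state, (metric, path) in surv.items():
                for u in (0, 1) if t < h else (0,):   # the zero tail forces u = 0
                    s2, v = branch(state, u)
                    d = metric + sum(a ^ b for a, b in zip(r, v))
                    if s2 not in nxt or d < nxt[s2][0]:   # compare and select
                        nxt[s2] = (d, path + [u])
            surv = nxt
        metric, path = surv[(0,) * m]       # terminate in the zero state
        return path[:h], metric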

Assume the Viterbi algorithm has finished at time k. (Figure: the survivor paths and their partial metrics on the trellis at time k.)

Viterbi for a binary-input, quaternary-output DMC

A (3, 1, 2) convolutional code with h = 5 and G(D) = [1 + D, 1 + D^2, 1 + D + D^2].

The bit metrics are:

    M(r_l | v_l)   0_1   0_2   1_2   1_1
    v_l = 0         10     8     5     0
    v_l = 1          0     5     8    10

The quaternary received sequence is

    r = (1_1 1_2 0_1, 1_1 1_1 0_2, 1_1 1_1 0_1, 1_1 1_1 1_1, 0_1 1_2 0_1, 1_2 0_2 1_1, 1_2 0_1 1_1)

The final survivor is

    v̂ = (111, 010, 110, 011, 000, 000, 000)

and the decoded information is û = (1, 1, 0, 0, 0). For example, the branch metric of the first branch for v_0 = 000 is

    M(1_1 1_2 0_1 | 000) = M(1_1|0) + M(1_2|0) + M(0_1|0) = 0 + 5 + 10 = 15

Viterbi decoding for the BSC

On a BSC with transition probability p < 1/2, the received sequence r is binary and the log-likelihood function becomes

    log p(r|v) = log[ p^{d(r,v)} (1 − p)^{N − d(r,v)} ] = d(r, v) log(p / (1 − p)) + N log(1 − p)

Because log(p/(1 − p)) < 0 and N log(1 − p) is a constant for all v, an ML decoder for the BSC chooses v̂ as the codeword that minimizes the Hamming distance

    d(r, v) = Σ_{l=0}^{h+m-1} d(r_l, v_l) = Σ_{l=0}^{N-1} d(r_l, v_l)

Hence the Hamming distance d(r_l, v_l) can be used as a branch metric, and the algorithm finds the path through the trellis with the smallest Hamming distance to r.

A (2, 1, 2) convolutional code with h = 4 and G(D) = [1 + D + D^2, 1 + D^2].

The received sequence is r = (01, 10, 00, 00, 01, 11). The final survivor is

    v̂ = (11, 10, 00, 01, 01, 11)

and the decoded information sequence is û = (1, 0, 1, 1). This final survivor has a Hamming metric of 2.
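Running the viterbi_decode sketch from earlier on this example should reproduce the survivor and its metric (ties, if any, may be broken differently by the sketch):

    r = [(0, 1), (1, 0), (0, 0), (0, 0), (0, 1), (1, 1)]
    u_hat, metric = viterbi_decode(r, g=[(1, 1, 1), (1, 0, 1)], m=2)
    print(u_hat, metric)        # expected: [1, 0, 1, 1] 2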

(Figures: the Viterbi recursion on the trellis for r = (01, 10, 00, 00, 01, 11), one trellis section per step; at each time the partial Hamming metric of every state is updated by add-compare-select, with the metrics of discarded paths shown in parentheses. The final survivor v̂ = (11, 10, 00, 01, 01, 11) is traced back to û = (1, 0, 1, 1).)

Viterbi decoding for AWGN

Consider the AWGN channel with binary input, i.e., v = (v_0, v_1, ..., v_{N-1}) with v_l ∈ {−1, +1}. The path metric can be simplified as

    M(r|v) = log p(r|v) = Σ_{l=0}^{N-1} log( (1/√(πN_0)) e^{−(r_l − v_l)^2 / N_0} )
           = −(1/N_0) Σ_{l=0}^{N-1} (r_l − v_l)^2 − (N/2) log(πN_0)

A codeword v that minimizes the Euclidean distance Σ_{l=0}^{N-1} (r_l − v_l)^2 also maximizes the log-likelihood function log p(r|v). The Viterbi algorithm thus finds the optimal path v at minimum Euclidean distance from r.

The path metric can also be simplified as

    M(r|v) = log p(r|v) = −(1/N_0) Σ_{l=0}^{N-1} (r_l^2 − 2 r_l v_l + v_l^2) − (N/2) log(πN_0)
           = (2/N_0) Σ_{l=0}^{N-1} r_l v_l − (1/N_0)(‖r‖^2 + N) − (N/2) log(πN_0)

since v_l^2 = 1. A codeword v that maximizes the correlation r · v = Σ_{l=0}^{N-1} r_l v_l also maximizes the log-likelihood function log p(r|v).
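A quick numeric illustration of this equivalence (pure simulation, not from the slides):

    import numpy as np

    # With v_l in {-1, +1}, ||r - v||^2 = ||r||^2 - 2 r.v + N, so ranking by
    # minimum Euclidean distance equals ranking by maximum correlation r.v.
    rng = np.random.default_rng(1)
    r = rng.normal(size=8)
    cands = [rng.choice([-1, 1], size=8) for _ in range(4)]
    by_dist = min(range(4), key=lambda i: np.sum((r - cands[i]) ** 2))
    by_corr = max(range(4), key=lambda i: np.dot(r, cands[i]))
    print(by_dist == by_corr)   # True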

What are turbo codes?

Turbo codes, introduced by Berrou and Glavieux at ICC '93, are Shannon-capacity-approaching codes. They come closer to the Shannon limit than any previously known class of error-correcting codes, and they achieve this remarkable performance with relatively low-complexity encoding and decoding algorithms.

Properties

There are four basic concatenated structures:
1. Parallel concatenated convolutional codes (PCCC, turbo codes)
2. Parallel concatenated block codes (PCBC, block turbo codes)
3. Serial concatenated convolutional codes
4. Serial concatenated block codes

There is one decoder (D) for every encoder (C), and decoding is iterative: D1 → D2 → D1 → D2 → ..., which improves the bit-error-rate performance.

(Figure: turbo encoder: the information U enters C1 directly and C2 through the interleaver Π; turbo decoder: D1 and D2 exchange information through Π and Π^{-1} to produce Û.)

Encoder: produces a code with random-like properties (via the interleaver).
Decoder: uses soft-in soft-out (SISO) iterative decoding (the turbo principle).
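A minimal sketch of this parallel (PCCC) encoder structure, assuming for illustration a rate-1/3 systematic arrangement with two identical recursive systematic convolutional (RSC) constituent encoders G(D) = [1, (1+D^2)/(1+D+D^2)] and a random interleaver (the slides' figure does not fix these choices):

    import random

    def rsc_parity(u):
        # RSC constituent encoder: feedback taps 1+D+D^2, feedforward taps 1+D^2
        s1 = s2 = 0
        out = []
        for bit in u:
            a = bit ^ s1 ^ s2       # feedback bit
            out.append(a ^ s2)      # parity bit
            s1, s2 = a, s1
        return out

    def pccc_encode(u, pi):
        p1 = rsc_parity(u)                      # constituent encoder C1
        p2 = rsc_parity([u[i] for i in pi])     # C2 sees the interleaved bits
        return list(zip(u, p1, p2))             # systematic rate-1/3 output

    pi = random.Random(7).sample(range(8), 8)   # a random interleaver Pi
    print(pccc_encode([1, 0, 1, 1, 0, 0, 1, 0], pi))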

Turbo codes get their name because the decoder uses feedback, like a turbo engine. The term "turbo" in turbo coding thus refers to the decoding, not the encoding: the feedback of extrinsic information between the SISO decoders in iterative decoding mimics the feedback of exhaust gases in a turbocharged engine.

Performance

(Figure: power efficiency of standard binary channel codes, plotting spectral efficiency versus Eb/N0 at an arbitrarily low BER P_b = 10^-5: the Shannon capacity bound and the BPSK capacity bound, uncoded BPSK, Mariner 1969, Pioneer 1968-72, Voyager 1977, the Odenwalder convolutional codes 1976, IS-95 1991, Galileo BVD 1992 and LGA 1996, Iridium 1998, the turbo code 1993, and the LDPC code 2001 of Chung, Forney, Richardson, and Urbanke.)

The (37, 21, 65536) turbo code, with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]. (Figure: its BER performance.)

References

S. Lin and D. J. Costello, Jr., Error Control Coding, 2nd ed., Prentice Hall, 2004, Chapter 11.
Todd K. Moon, Error Correction Coding, John Wiley, 2005, Chapters 11.1 and 11.2.
Ezio Biglieri, Coding for Wireless Channels, Springer, 2005, Chapters 6.1-6.3.
J. G. Proakis, Digital Communications, 4th ed., McGraw-Hill, 2008, Chapter 8.