ELEC 405/511 Error Control Coding. Binary Convolutional Codes


Peter Elias (1923-2001), Coding for Noisy Channels, 1955

With block codes, the input data is divided into blocks of length k and the codewords have blocklength n. With convolutional codes, the input data is encoded in a continuous manner:
- the output is a single codeword
- encoding is done using only shift registers and XOR gates (binary arithmetic)
- the encoder output is a function of the current input bits and the shift register contents

Rate 1/2 Linear Convolutional Encoder (Figure 11.1)

Encoding
Input data stream: x = (x_0, x_1, x_2, ...)
Output data streams: y^(0) = (y_0^(0), y_1^(0), y_2^(0), ...) and y^(1) = (y_0^(1), y_1^(1), y_2^(1), ...)
Multiplex into a single data stream: y = (y_0^(0), y_0^(1), y_1^(0), y_1^(1), y_2^(0), y_2^(1), ...)

ETSI TS 136 212 V8.4.0 (2008-11), Technical Specification, LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding: Rate 1/3 Convolutional Encoder


Rate 2/3 Linear Convolutional Encoder

Rate 1/2 Linear Convolutional Encoder

Example 11-1
Input data stream (first input bit on the left): x = (10110)
Output data streams: y^(0) = (10001010), y^(1) = (11111110)
Multiplex into a single data stream: y = (11,01,01,01,11,01,11,00)
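
To make the encoding concrete, here is a minimal Python sketch (ours, not from the slides; gf2_convolve is a hypothetical helper name) that reproduces Example 11-1 by convolving the input with each generator sequence over GF(2):

```python
def gf2_convolve(x, g):
    """Convolve bit sequences x and g over GF(2) (XOR accumulation)."""
    out = [0] * (len(x) + len(g) - 1)
    for i, xi in enumerate(x):
        if xi:
            for j, gj in enumerate(g):
                out[i + j] ^= gj
    return out

x  = [1, 0, 1, 1, 0]      # input stream from Example 11-1
g0 = [1, 0, 1, 1]         # generator sequence g^(0)
g1 = [1, 1, 0, 1]         # generator sequence g^(1)

y0 = gf2_convolve(x, g0)  # [1,0,0,0,1,0,1,0] = (10001010)
y1 = gf2_convolve(x, g1)  # [1,1,1,1,1,1,1,0] = (11111110)

# Multiplex: emit y^(0)_t then y^(1)_t for each time step t.
y = [b for pair in zip(y0, y1) for b in pair]
print(y)  # 11,01,01,01,11,01,11,00 flattened into one stream
```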

Fractional Rate Loss
Message length: L
Codeword length: L(n/k) + nm
Effective code rate: R_effective = L / (L(n/k) + nm)
Fractional rate loss: γ = (R - R_effective)/R = km/(L + km)

Example 11-2
From Example 11-1, the message length is L = 5
Codeword length: L(n/k) + nm = 5(2/1) + 2·3 = 16
Effective code rate: R_effective = L / (L(n/k) + nm) = 5/16
Fractional rate loss: γ = (R - R_effective)/R = km/(L + km) = 3/8
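
These numbers are easy to verify (a quick sketch; the variable names are ours):

```python
from fractions import Fraction

k, n, m, L = 1, 2, 3, 5
N = L * n // k + n * m                 # codeword length: 10 + 6 = 16
R_eff = Fraction(L, N)                 # effective rate 5/16
gamma = (Fraction(k, n) - R_eff) / Fraction(k, n)
print(N, R_eff, gamma)                 # 16 5/16 3/8  (= km/(L+km))
```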

Fractional Rate Loss
Typically k = 1 and L is large (L >> m). In this case
R_effective = L / (n(L + m)) → 1/n as L → ∞
and the fractional rate loss becomes
γ = m/(L + m) → 0

Memory order m: length of the longest shift register, m = max m_i
Constraint length: K = 1 + max m_i
Total memory (overall constraint length): sum of all memory elements, M = Σ m_i
Number of input streams: k
Number of output streams: n
Code rate: k/n
State: contents of the shift registers
Initial state: 00...0
End state: 00...0 (input zeros at the end)

Punctured Convolutional Codes
Complexity is a function of the total memory or overall constraint length, so codes with k > 1 are rarely used. Puncturing allows for higher code rates from a rate 1/2 code. For rate 2/3, delete one of every 4 output bits:
input x_1 x_2 → output y_1 y_2 y_3 y_4, remove y_3
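
A sketch of the puncturing step (our own helper; it assumes, per the pattern above, that y_3 of every group of four rate-1/2 output bits is deleted):

```python
def puncture_to_rate_two_thirds(bits):
    """Delete the 3rd bit (index 2) of every group of 4 rate-1/2 output
    bits, leaving 3 transmitted bits per 2 input bits, i.e. rate 2/3."""
    return [b for i, b in enumerate(bits) if i % 4 != 2]

encoded = [1, 1, 0, 1, 1, 0, 0, 0]           # y1 y2 y3 y4 y1 y2 y3 y4
print(puncture_to_rate_two_thirds(encoded))  # [1, 1, 1, 1, 0, 0]
```

At the receiver, erasures are reinserted at the punctured positions before running the usual rate-1/2 decoder.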

Rate 1/2 Linear Convolutional Encoder

Rate 2/3 Linear Convolutional Encoder

Generator Sequences
g_j^(i) represents the ith output of an encoder when the jth input is a single 1 followed by a string of zeros: x^(j) = δ = (1000...)
Rate 1/2 example (K = 4):
g^(0) = (1011)
g^(1) = (1101)
Rate 2/3 example:
g_0^(0) = (1001), g_1^(0) = (0110)
g_0^(1) = (0111), g_1^(1) = (1010)
g_0^(2) = (1100), g_1^(2) = (0100)

Why Convolutional Code?
For the rate 1/2 (2,1,4) code with impulse responses
g^(0) = 1011
g^(1) = 1101
the output of the encoder is the convolution of the input with each impulse response, hence the name:
y^(0) = x * g^(0)
y^(1) = x * g^(1)
If x = (10110), the output is
y^(0) = (10110)*(1011) = 10001010
y^(1) = (10110)*(1101) = 11111110

Rate 1/2 Linear Convolutional Encoder
(2,1,3) code: g^(0) = 111, g^(1) = 101

A convolutional code is in fact a linear block code for finite length L. For a rate 1/2 code, the generator matrix can be formed by interleaving g^(0) and g^(1):

G = | g_0^(0) g_0^(1)   g_1^(0) g_1^(1)   ...   g_m^(0) g_m^(1)                           |
    |          g_0^(0) g_0^(1)   g_1^(0) g_1^(1)   ...   g_m^(0) g_m^(1)                  |
    |                   ...                                                               |

with each row shifted one output pair (two columns) to the right and zeros elsewhere; G has dimension L x 2(m+L).

For the rate 1/2 (2,1,4) code:

G = | 11 01 10 11          |
    |    11 01 10 11       |
    |        ...           |

For the rate 2/3 (3,2,4) code:

G = | 101 011 010 110              |
    | 010 101 110 000              |
    |     101 011 010 110          |
    |     010 101 110 000          |
    |          ...                 |

Example 11-3
(2,1,4) code, input x = (1011), L = 4, m = 3
G has dimension 4 x 14:

G = | 11 01 10 11 00 00 00 |
    | 00 11 01 10 11 00 00 |
    | 00 00 11 01 10 11 00 |
    | 00 00 00 11 01 10 11 |

y = xG = (11,01,01,01,11,01,11)
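
A sketch of this construction (our own code) that builds the interleaved G and checks y = xG over GF(2):

```python
def build_G(g0, g1, L):
    """Interleave g^(0), g^(1); shift each row by one output pair."""
    n_cols = 2 * (len(g0) + L - 1)            # 2(m + L), m = len(g0) - 1
    rows = []
    for r in range(L):
        row = [0] * n_cols
        for j in range(len(g0)):
            row[2 * (r + j)]     = g0[j]
            row[2 * (r + j) + 1] = g1[j]
        rows.append(row)
    return rows

def encode(x, G):
    """y = xG over GF(2): XOR together the rows selected by the 1s of x."""
    y = [0] * len(G[0])
    for xi, row in zip(x, G):
        if xi:
            y = [a ^ b for a, b in zip(y, row)]
    return y

G = build_G([1, 0, 1, 1], [1, 1, 0, 1], L=4)  # 4 x 14, (2,1,4) code
print(encode([1, 0, 1, 1], G))
# [1,1, 0,1, 0,1, 0,1, 1,1, 0,1, 1,1] = (11,01,01,01,11,01,11)
```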

Polynomial Representation
Introduce the delay operator D:
x = (x_0, x_1, x_2, ...) ↔ X(D) = x_0 + x_1 D + x_2 D^2 + ...
y^(0) = (y_0^(0), y_1^(0), y_2^(0), ...) ↔ Y^(0)(D) = y_0^(0) + y_1^(0) D + y_2^(0) D^2 + ...
y^(1) = (y_0^(1), y_1^(1), y_2^(1), ...) ↔ Y^(1)(D) = y_0^(1) + y_1^(1) D + y_2^(1) D^2 + ...
g^(i) ↔ G^(i)(D)

(2,1,4) Convolutional Encoder

Polynomial Representation
For the (2,1,4) code example:
G^(0)(D) = 1 + D^2 + D^3
G^(1)(D) = 1 + D + D^3
X(D) = 1 + D^2 + D^3
Use polynomial multiplication to obtain the output:
Y^(0)(D) = X(D) G^(0)(D) = 1 + D^4 + D^6
Y^(1)(D) = X(D) G^(1)(D) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
Y(D) = Y^(0)(D^2) + D Y^(1)(D^2) = 1 + D + D^3 + D^5 + D^7 + D^8 + D^9 + D^11 + D^12 + D^13
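
One convenient way to experiment with this (a sketch, not the course's software) is to store GF(2) polynomials as Python integers, with bit i holding the coefficient of D^i, so polynomial multiplication becomes a carry-less product:

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)[D]) product of polynomials stored as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def interleave(p, n, offset):
    """Map D -> D^n and multiply by D^offset, for multiplexing."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (n * i + offset)
        p >>= 1
        i += 1
    return r

X  = 0b1101   # X(D)     = 1 + D^2 + D^3
G0 = 0b1101   # G^(0)(D) = 1 + D^2 + D^3
G1 = 0b1011   # G^(1)(D) = 1 + D + D^3

Y0 = gf2_mul(X, G0)  # 1 + D^4 + D^6
Y1 = gf2_mul(X, G1)  # 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
Y  = interleave(Y0, 2, 0) ^ interleave(Y1, 2, 1)
print(bin(Y))  # 0b11101110101011: 1+D+D^3+D^5+D^7+D^8+D^9+D^11+D^12+D^13
```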

Rate 1/2 (2,1,3) Code
G^(0)(D) = 1 + D + D^2
G^(1)(D) = 1 + D^2
X(D) = 1 + D^3 + D^4
Y^(0)(D) = X(D) G^(0)(D) = 1 + D + D^2 + D^3 + D^6
Y^(1)(D) = X(D) G^(1)(D) = 1 + D^2 + D^3 + D^4 + D^5 + D^6
Y(D) = Y^(0)(D^2) + D Y^(1)(D^2) = 1 + D + D^2 + D^4 + D^5 + D^6 + D^7 + D^9 + D^11 + D^12 + D^13

For 3 outputs:
Y(D) = Y^(0)(D^3) + D Y^(1)(D^3) + D^2 Y^(2)(D^3)
For a multiple input, multiple output system:
Y^(i)(D) = Σ_{j=0}^{k-1} X^(j)(D) G_j^(i)(D)
         = [X^(0)(D)  X^(1)(D)  ...  X^(k-1)(D)] [G_0^(i)(D), G_1^(i)(D), ..., G_{k-1}^(i)(D)]^T

Y(D) = (Y^(0)(D), Y^(1)(D), ..., Y^(n-1)(D)) = X(D) G(D)

     = [X^(0)(D)  X^(1)(D)  ...  X^(k-1)(D)] | G_0^(0)(D)      G_0^(1)(D)      ...  G_0^(n-1)(D)     |
                                             | G_1^(0)(D)      G_1^(1)(D)      ...  G_1^(n-1)(D)     |
                                             | ...                                                   |
                                             | G_{k-1}^(0)(D)  G_{k-1}^(1)(D)  ...  G_{k-1}^(n-1)(D) |

Y(D) = Σ_{i=0}^{n-1} D^i Y^(i)(D^n)
G(D) is called the transfer-function matrix for the encoder.

For the (2,1,4) code:
G(D) = [1+D^2+D^3   1+D+D^3]
For the (3,2,4) code:
G_0^(0) = 1 + D^3,        G_1^(0) = D + D^2
G_0^(1) = D + D^2 + D^3,  G_1^(1) = 1 + D^2
G_0^(2) = 1 + D,          G_1^(2) = D

G(D) = | 1+D^3   D+D^2+D^3   1+D |
       | D+D^2   1+D^2       D   |

Example 11-4: Rate 2/3 Code
Let x = (11,10,11), so
X^(0)(D) = 1 + D + D^2
X^(1)(D) = 1 + D^2

Y(D) = X(D)G(D) = [1+D+D^2   1+D^2] | 1+D^3   D+D^2+D^3   1+D |
                                    | D+D^2   1+D^2       D   |
     = [1+D^5   1+D+D^3+D^4+D^5   1+D]

To convert from matrix form to the output string:
Y(D) = Σ_{i=0}^{2} D^i Y^(i)(D^3)
     = 1 + D^15 + D(1 + D^3 + D^9 + D^12 + D^15) + D^2(1 + D^3)
     = 1 + D + D^2 + D^4 + D^5 + D^10 + D^13 + D^15 + D^16
The output codeword is then y = (111,011,000,010,010,110)
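
The same integer-as-polynomial trick verifies Example 11-4 (a sketch; the matrix layout follows G(D) above, rows indexed by input stream and columns by output stream):

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)[D]) product of polynomials stored as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# Transfer-function matrix of the (3,2,4) code, bit i = coeff of D^i:
# [[1+D^3, D+D^2+D^3, 1+D],
#  [D+D^2, 1+D^2,     D  ]]
G = [[0b1001, 0b1110, 0b011],
     [0b0110, 0b0101, 0b010]]
X = [0b111, 0b101]            # X^(0) = 1+D+D^2, X^(1) = 1+D^2

Y = [0, 0, 0]
for i in range(3):            # Y^(i)(D) = sum_j X^(j)(D) G_j^(i)(D)
    for j in range(2):
        Y[i] ^= gf2_mul(X[j], G[j][i])
print([bin(v) for v in Y])
# ['0b100001', '0b111011', '0b11'] = 1+D^5, 1+D+D^3+D^4+D^5, 1+D
```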

Systematic Convolutional Codes

Systematic Convolutional Codes
The input bits are reproduced unaltered in the codeword.

G = | I P_0   0 P_1   0 P_2   ...   0 P_m                             |
    |         I P_0   0 P_1   ...   0 P_{m-1}   0 P_m                 |
    |                 I P_0   ...   0 P_{m-2}   0 P_{m-1}   0 P_m     |
    |                         ...                                     |

P_l = | g_{0,l}^(k)      g_{0,l}^(k+1)      ...   g_{0,l}^(n-1)      |
      | g_{1,l}^(k)      g_{1,l}^(k+1)      ...   g_{1,l}^(n-1)      |
      | ...                                                          |
      | g_{k-1,l}^(k)    g_{k-1,l}^(k+1)    ...   g_{k-1,l}^(n-1)    |

I: k x k identity matrix
0: k x k all-zero matrix
P_l: k x (n-k) matrix

Systematic Encoder Example
(2,1,4) systematic encoder:
g^(0) = (1000)
g^(1) = (1101)

Systematic Encoder
The generator matrix is

G = | 11 01 00 01          |
    |    11 01 00 01       |
    |        ...           |

G(D) = [1   1+D+D^3]
For X(D) = 1 + D^2 + D^3:
Y^(0)(D) = X(D) G^(0)(D) = X(D) = 1 + D^2 + D^3
Y^(1)(D) = X(D) G^(1)(D) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6

Y(D) = Y^(0)(D^2) + D Y^(1)(D^2)
     = 1 + D^4 + D^6 + D(1 + D^2 + D^4 + D^6 + D^8 + D^10 + D^12)
     = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7 + D^9 + D^11 + D^13
The output codeword is then y = (11,01,11,11,01,01,01)
Equivalently, with x = (1011) and y = xG:

G = | 11 01 00 01 00 00 00 |
    | 00 11 01 00 01 00 00 |
    | 00 00 11 01 00 01 00 |
    | 00 00 00 11 01 00 01 |
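
A quick check (our sketch, reusing the integer-polynomial representation) that the systematic branch simply copies the input:

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)[D]) product of polynomials stored as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

X  = 0b1101   # X(D) = 1 + D^2 + D^3, i.e. x = (1011)
G0 = 0b1      # G^(0)(D) = 1: the systematic output
G1 = 0b1011   # G^(1)(D) = 1 + D + D^3

print(bin(gf2_mul(X, G0)))  # 0b1101: Y^(0) = X, input reproduced
print(bin(gf2_mul(X, G1)))  # 0b1111111: 1 + D + D^2 + ... + D^6
```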

LTE Turbo Code
G(D) = [1   g_1(D)/g_0(D)]
g_0(D) = 1 + D^2 + D^3
g_1(D) = 1 + D + D^3

State Diagrams
The state of an encoder is defined as the shift register contents. For an (n,1,K) code there are 2^(K-1) = 2^m states.
Each branch in the state diagram has a label of the form X/Y, where X is the input bit that causes the state transition and Y is the corresponding output bits.
Path: a sequence of nodes connected by branches
Circuit: a path that starts and stops at the same node
Loop: a circuit that does not enter any node more than once

Rate 1/2 Linear Convolutional Encoder

State Diagram for the (2,1,4) Encoder

Example 11-5: Consider the input x = (1011)
Begin at state S_0:

input  output  state
-      -       S_0
1      11      S_1
0      01      S_2
1      01      S_5
1      01      S_3
0      11      S_6
0      01      S_4
0      11      S_0
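
The trace is easy to reproduce with a small state-machine sketch (our code; we read the slides' state labels as the register contents with the newest bit as the least significant bit, so S_1 = 100):

```python
g0 = (1, 0, 1, 1)   # taps of g^(0) on (input, s1, s2, s3)
g1 = (1, 1, 0, 1)   # taps of g^(1)

def step(state, bit):
    """Shift one input bit in; return (output pair, next state)."""
    s1, s2, s3 = state
    regs = (bit, s1, s2, s3)
    y0 = sum(r * g for r, g in zip(regs, g0)) % 2
    y1 = sum(r * g for r, g in zip(regs, g1)) % 2
    return (y0, y1), (bit, s1, s2)

state = (0, 0, 0)                     # S_0
for bit in [1, 0, 1, 1, 0, 0, 0]:     # x = (1011) plus three flush zeros
    out, state = step(state, bit)
    idx = state[0] + 2 * state[1] + 4 * state[2]
    print(bit, out, f"S{idx}")
# Reproduces the table: 11->S1, 01->S2, 01->S5, 01->S3, 11->S6, 01->S4, 11->S0
```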

Catastrophic Codes
Catastrophic code: a code for which an infinite weight input causes a finite weight output.
Result: a finite number of channel errors can cause an infinite number of decoding errors.
In terms of the state diagram: a loop corresponding to a nonzero input for which all the output bits are zero denotes a catastrophic code.

Catastrophic Encoder

g^(0) = (1111)
g^(1) = (1111)
x = (111111...)
y = (11,00,11,00,00,00,...)
Now suppose r = (10,00,00,00,00,00,...). A maximum likelihood decoder will choose c' = (00,00,00,00,00,00,...), which corresponds to x' = (000000...) and an infinite number of errors.

State Diagram for a Catastrophic Code

Another Catastrophic Encoder
g^(0) = (0111)
g^(1) = (1110)
x = (1,1,0,1,1,0,1,1,...)
y = (01,10,00,00,00,00,...)

The Corresponding State Diagram

Theorem 11-1: Catastrophic Codes
Let C be a rate 1/n convolutional code with transfer-function matrix G(D) whose generator sequences have the transforms {G^(0)(D), G^(1)(D), ..., G^(n-1)(D)}. C is not catastrophic if and only if
GCD(G^(0)(D), G^(1)(D), ..., G^(n-1)(D)) = D^l for some l ≥ 0.

Catastrophic Code Examples
Example 11-6: for the previous code
g^(0) = (0111), g^(1) = (1110)
G^(0)(D) = D + D^2 + D^3
G^(1)(D) = 1 + D + D^2
GCD(G^(0)(D), G^(1)(D)) = 1 + D + D^2 ≠ D^l
For the other code, g^(0) = (1111), g^(1) = (1111):
GCD(G^(0)(D), G^(1)(D)) = 1 + D + D^2 + D^3 ≠ D^l
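
The Theorem 11-1 test is mechanical: a Euclidean GCD over GF(2)[D]. A sketch (our code, polynomials again stored as ints with bit i = coefficient of D^i):

```python
def gf2_divmod(a, b):
    """Polynomial division over GF(2); returns (quotient, remainder)."""
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_divmod(a, b)[1]
    return a

def is_catastrophic(*gens):
    from functools import reduce
    g = reduce(gf2_gcd, gens)
    return bin(g).count('1') != 1   # catastrophic iff GCD is not D^l

print(is_catastrophic(0b1110, 0b0111))  # True:  GCD = 1+D+D^2
print(is_catastrophic(0b1111, 0b1111))  # True:  GCD = 1+D+D^2+D^3
print(is_catastrophic(0b1101, 0b1011))  # False: the (2,1,4) code is safe
```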

The Weight Enumerator
Label each branch in the state diagram with X^i Y^j, where i is the weight of the output sequence and j is the weight of the input sequence. The transfer function
T(X,Y) = Σ_{i,j} a_{i,j} X^i Y^j
can be obtained using Mason's formula.

The Weight Enumerator
Each branch can also be labeled with X^i L^k, where i is the weight of the output sequence and k is the branch length. The transfer function
T(X,L) = Σ_{i,k} b_{i,k} X^i L^k
can be obtained using Mason's formula.

(2,1,3) Convolutional Encoder

State Diagram for the (2,1,3) Encoder
Solid lines represent input 0, dashed lines represent input 1.

state  SR contents
a      00
b      10
c      01
d      11

Example
Consider the input sequence x = (10011). Begin at state a:

input      1   0   0   1   1
output     11  10  11  11  01
new state  b   c   a   b   d

To return to the all-zero state:

input      0   0
output     01  11
new state  c   a

Modified State Diagram for the (2,1,3) Code
[Figure: branches labeled with X^i Y^j, where X = output weight, Y = input weight.]

Modified State Diagram for the (2,1,3) Code
[Figure: branches labeled with X^i L^k, where X = output weight, L = branch length.]

Mason's Formula
T(X,Y) = (Σ_i F_i Δ_i) / Δ
F_i: gain of the ith forward path
Δ: graph determinant,
Δ = 1 - Σ_l C_l + Σ_{(l,m)} C_l C_m - Σ_{(l,m,n)} C_l C_m C_n + ...
where the sums run over all sets (pairs, triples, ...) of non-touching loops.
Δ_i: cofactor of path i,
Δ_i = 1 - Σ_l C_l + Σ_{(l,m)} C_l C_m - ...
computed over only those loops that do not touch path i.

(2,1,3) Code
Two forward paths:
a_0 b c a_1:   F_1 = X^2Y · X · X^2 = X^5 Y
a_0 b d c a_1: F_2 = X^2Y · XY · X · X^2 = X^6 Y^2
Three loops:
d d:     C_1 = XY
b d c b: C_2 = XY · X · Y = X^2 Y^2
b c b:   C_3 = X · Y = XY
Δ = 1 - (XY + X^2Y^2 + XY) + XY·XY = 1 - 2XY
Δ_1 = 1 - XY
Δ_2 = 1

(2,1,3) Code
T(X,Y) = Σ_i F_i Δ_i / Δ = [X^5 Y(1 - XY) + X^6 Y^2] / (1 - 2XY) = X^5 Y / (1 - 2XY)
T(X) = T(X,1) = X^5 / (1 - 2X) = X^5 + 2X^6 + 4X^7 + ...
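
The series can also be checked by brute force: enumerate all paths through the modified state diagram up to a given output weight (our own enumeration, not Mason's rule; the branch weights are read off the diagram above):

```python
from collections import Counter

# Branches of the modified state diagram: (from, to, output weight).
# a_0 is the split source state, a_1 the sink.
branches = [('a0', 'b', 2), ('b', 'c', 1), ('b', 'd', 1),
            ('d', 'd', 1), ('d', 'c', 1), ('c', 'a1', 2), ('c', 'b', 0)]

def codewords_by_weight(max_weight):
    """Count a_0 -> a_1 paths by accumulated output weight."""
    counts, stack = Counter(), [('a0', 0)]
    while stack:
        state, w = stack.pop()
        if state == 'a1':
            counts[w] += 1
            continue
        for u, v, bw in branches:
            if u == state and w + bw <= max_weight:
                stack.append((v, w + bw))
    return counts

print(sorted(codewords_by_weight(9).items()))
# [(5, 1), (6, 2), (7, 4), (8, 8), (9, 16)] -- matches X^5/(1-2X)
```

Every cycle in the diagram has output weight at least 1, so the weight cap guarantees termination.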

(2,1,3) Code
Two forward paths:
a_0 b c a_1:   F_1 = X^2L · XL · X^2L = X^5 L^3
a_0 b d c a_1: F_2 = X^2L · XL · XL · X^2L = X^6 L^4
Three loops:
d d:     C_1 = XL
b d c b: C_2 = XL · XL · L = X^2 L^3
b c b:   C_3 = XL · L = XL^2
Δ = 1 - (XL + X^2L^3 + XL^2) + XL·XL^2 = 1 - XL - XL^2
Δ_1 = 1 - XL
Δ_2 = 1

(2,1,3) Code
T(X,L) = [X^5 L^3 (1 - XL) + X^6 L^4] / (1 - XL(1 + L)) = X^5 L^3 / (1 - XL(1 + L))
Setting L = 1: T(X) = X^5 / (1 - 2X) = X^5 + 2X^6 + 4X^7 + ...

(2,1,4) Code
T(X,Y) = X^6(Y + Y^3) + X^8(3Y^2 + 5Y^4 + 2Y^6) + ...
T(X) = 2X^6 + 10X^8 + 49X^10 + ...

Minimum Free Distance
For the (2,1,4) code, T(X) = 2X^6 + 10X^8 + 49X^10 + ...
The minimum free distance d_free is the minimum Hamming distance between all pairs of convolutional codewords. It is the minimum weight of an output sequence starting from the all-zero state and ending in the all-zero state:
d_free = min{ d(y', y'') : y' ≠ y'' } = min{ w(y) : y ≠ 0 }
For the (2,1,4) code, there are two codewords of weight 6, thus d_free = 6.

Maximum Free Distance Codes
Nonsystematic convolutional codes typically provide a higher minimum free distance than systematic codes with the same constraint length and rate.

Best Known Convolutional Codes

Peter Elias (1923-2001), Coding for Noisy Channels