Lecture 3 : Introduction to Binary Convolutional Codes


Binary Convolutional Codes

1. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes. In contrast with a block code, whose encoder assigns an n-bit codeword to each block of k information bits, a convolutional encoder assigns code bits to an incoming information bit stream continuously.

2. Convolutional codes differ from block codes in that the encoder contains memory: the n encoder outputs at any time unit depend not only on the k current inputs but also on the m previous input blocks.

3. A binary convolutional code is denoted by a three-tuple (n, k, m) and is generated by a k-input, n-output linear sequential circuit with input memory m.

4. The current n outputs are linear combinations of not only the present k input bits but also the previous m·k input bits.

5. m denotes the number of previous k-bit input blocks that are memorized in the encoder.

6. m is called the memory order of the convolutional code. Typically, n and k are small integers with k < n, but the memory order m must be made large to achieve low error probabilities.

7. Then in 1967, Viterbi proposed a maximum-likelihood decoding scheme that was relatively easy to implement for codes with small memory orders.

8. This scheme, called Viterbi decoding, together with improved versions of sequential decoding, led to the application of convolutional codes to deep-space and satellite communication in the early 1970s.

9. The code rate is R_c = k/n.

Encoders for the Convolutional Codes

1. A binary convolutional encoder is structured as a mechanism of shift registers and modulo-2 adders, where the output bits are mod-2 additions of selected shift-register contents and present input bits.

2. n is the number of output sequences.

3. k is the number of input sequences (k = 1 is usually used).

4. m is the maximum length of the k parallel shift registers. If the number of stages of the j-th shift register is K_j, then m = max_{1 ≤ j ≤ k} K_j.

5. K = Σ_{j=1}^{k} K_j is the total memory utilized in the encoder. K is also called the overall constraint length.

6. The constraint length of a convolutional code is defined as m + 1.

Example 1: Encoder for a binary (2,1,2) code

u is the information sequence and v = MUX[v_1, v_2] is the corresponding codeword. In octal form, the generator is denoted as (7,5).
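The (7,5) encoder above can be sketched directly as a two-stage shift register with two mod-2 adders. This is a minimal illustration, not part of the original slides; the function name `encode_751` and the choice to terminate with m = 2 zeros are our own.

```python
# Shift-register encoder for the (2,1,2) code with octal generators (7,5),
# i.e. g1 = (1,1,1) and g2 = (1,0,1).
def encode_751(u):
    """Encode bit list u; return the multiplexed codeword
    v = [v1_0, v2_0, v1_1, v2_1, ...]."""
    s1 = s2 = 0               # the two shift-register stages (memory m = 2)
    v = []
    for bit in u + [0, 0]:    # append m = 2 zeros to return to the zero state
        v1 = bit ^ s1 ^ s2    # g1 = 1 + D + D^2  (octal 7)
        v2 = bit ^ s2         # g2 = 1 + D^2      (octal 5)
        v += [v1, v2]
        s1, s2 = bit, s1      # shift the register
    return v

print(encode_751([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

For the input 1011 the multiplexed output reads 11 10 00 01 01 11, a standard worked example for this code.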

Example 2: Encoder for a binary (3, 2, 2) convolutional code

Impulse Response and Convolution

1. The encoders of convolutional codes can be represented by linear time-invariant (LTI) systems.

2. v_j = u_1 * g_1^(j) + u_2 * g_2^(j) + ... + u_k * g_k^(j) = Σ_{i=1}^{k} u_i * g_i^(j), where * denotes the convolution operation and g_i^(j) denotes the impulse response of the i-th input sequence with respect to the j-th output.

3. g_i^(j) can be obtained by applying the discrete impulse (1, 0, 0, ...) at the i-th input and observing the j-th output, while all other inputs are fed the zero sequence (0, 0, 0, ...).

4. The impulse responses are also called the generator sequences of the encoder.
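The convolution v_j = u * g^(j) can be computed directly over GF(2). A minimal sketch (the helper name `conv_gf2` is our own); note that convolving the impulse (1, 0, 0, ...) with a generator sequence returns that generator sequence, as point 3 states:

```python
def conv_gf2(u, g):
    """Mod-2 convolution of input sequence u with generator sequence g."""
    n = len(u) + len(g) - 1
    v = [0] * n
    for t in range(n):
        for i, gi in enumerate(g):
            if 0 <= t - i < len(u):
                v[t] ^= u[t - i] & gi   # addition over GF(2) is XOR
    return v

# Impulse input recovers the generator sequence itself:
print(conv_gf2([1, 0, 0, 0], [1, 1, 1]))  # → [1, 1, 1, 0, 0, 0]
```

Multiplexing conv_gf2(u, g1) and conv_gf2(u, g2) for the (7,5) code of Example 1 reproduces the same codeword as the shift-register view.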

Example 3: Impulse response for a binary (2,1,2) code

Example 4: Impulse response for a binary (3, 2, 2) convolutional code

Generator Matrix in the Time Domain

1. The convolutional codes can be generated by a generator matrix multiplied by the information sequences.

2. Let u_1, u_2, ..., u_k be the information sequences and v_1, v_2, ..., v_n the output sequences.

3. Arrange the information sequences as
   u = (u_{1,0}, u_{2,0}, ..., u_{k,0}, u_{1,1}, u_{2,1}, ..., u_{k,1}, ..., u_{1,l}, u_{2,l}, ..., u_{k,l}, ...) = (w_0, w_1, ..., w_l, ...).

4. Arrange the output sequences as
   v = (v_{1,0}, v_{2,0}, ..., v_{n,0}, v_{1,1}, v_{2,1}, ..., v_{n,1}, ..., v_{1,l}, v_{2,l}, ..., v_{n,l}, ...) = (z_0, z_1, ..., z_l, ...).

5. The relation between v and u can be characterized as v = u · G,

where G is the generator matrix of the code:

       | G_0  G_1  G_2  ...  G_m                       |
   G = |      G_0  G_1  ...  G_{m-1}  G_m              |
       |           G_0  ...  G_{m-2}  G_{m-1}  G_m     |
       |                 .     .       .        .      |

with the k × n submatrices

         | g_{1,l}^(1)  g_{1,l}^(2)  ...  g_{1,l}^(n) |
   G_l = | g_{2,l}^(1)  g_{2,l}^(2)  ...  g_{2,l}^(n) |
         |     ...          ...      ...      ...     |
         | g_{k,l}^(1)  g_{k,l}^(2)  ...  g_{k,l}^(n) |

6. The elements g_{i,l}^(j), for i ∈ [1, k] and j ∈ [1, n], are the impulse responses of the i-th input with respect to the j-th output:
   g_i^(j) = (g_{i,0}^(j), g_{i,1}^(j), ..., g_{i,l}^(j), ..., g_{i,m}^(j)).

7. Since the codeword v is a linear combination of rows of the generator matrix G, an (n, k, m) convolutional code is a linear code.

8. For a finite information sequence of length kL bits, the corresponding codeword has length n(L + m), where the final nm outputs are generated after the last nonzero information block has entered the encoder.
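The banded structure of G can be built explicitly for the (2,1,2) code of Example 1, where G_0 = (1 1), G_1 = (1 0), G_2 = (1 1). This is a sketch under our own naming (`gen_matrix`), restricted to k = 1 for brevity; encoding then reduces to a matrix product over GF(2):

```python
import numpy as np

def gen_matrix(L, G_rows=((1, 1), (1, 0), (1, 1)), n=2):
    """Time-domain generator matrix for a k = 1 code with submatrices
    G_0..G_m, truncated to L input bits plus m termination columns."""
    m = len(G_rows) - 1
    G = np.zeros((L, n * (L + m)), dtype=int)
    for row in range(L):
        # each row is the previous row shifted right by one n-bit block
        for l, Gl in enumerate(G_rows):
            G[row, n * (row + l): n * (row + l) + n] = Gl
    return G

u = np.array([1, 0, 1, 1])
v = (u @ gen_matrix(len(u))) % 2   # v = u · G over GF(2)
print(v.tolist())  # → [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```

The result agrees with the convolution/shift-register description, as point 7 requires for a linear code.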

Example 5: Generator matrix of a binary (2,1,2) code

Example 6: Generator matrix of a binary (3, 2, 2) convolutional code

Generator Matrix in the Z Domain

1. In a linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication.

2. Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by the corresponding polynomial, and the convolution operation replaced by polynomial multiplication.

3. Under the Z transform (with indeterminate D),
   u_i → u_i(D) = Σ_{t=0}^{∞} u_{i,t} D^t,
   v_j → v_j(D) = Σ_{t=0}^{∞} v_{j,t} D^t,
   g_i^(j) → g_i^(j)(D) = Σ_{t=0}^{∞} g_{i,t}^(j) D^t.

4. The convolution property of the Z transform, Z{u * g} = u(D)g(D), is used to transform the convolution of input sequences and generator sequences into a multiplication in the Z domain.

5. v_j(D) = Σ_{i=1}^{k} u_i(D) g_i^(j)(D).

6. The g_i^(j)(D) are called generator polynomials.

7. The indeterminate D can be interpreted as a delay operator, with the power of D denoting the number of time units a bit is delayed with respect to the initial bit.

8. We can write the above equations as a matrix multiplication:
   V(D) = U(D) · G(D),
   where
   U(D) = (u_1(D), u_2(D), ..., u_k(D)),
   V(D) = (v_1(D), v_2(D), ..., v_n(D)),
   G(D) = [ g_i^(j)(D) ]  (a k × n matrix of generator polynomials).
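The claim that convolution becomes polynomial multiplication can be checked numerically. A minimal GF(2) polynomial multiplier (our own helper name, coefficients listed lowest degree first), applied to u(D) = 1 + D^2 + D^3 and the generators of the (7,5) code:

```python
def polymul_gf2(a, b):
    """Multiply two polynomials over GF(2), given as coefficient
    lists with the lowest-degree coefficient first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # mod-2 accumulation
    return out

u  = [1, 0, 1, 1]   # u(D)  = 1 + D^2 + D^3
g1 = [1, 1, 1]      # g1(D) = 1 + D + D^2
g2 = [1, 0, 1]      # g2(D) = 1 + D^2

print(polymul_gf2(u, g1))  # → [1, 1, 0, 0, 0, 1]  i.e. 1 + D + D^5
print(polymul_gf2(u, g2))  # → [1, 0, 0, 1, 1, 1]  i.e. 1 + D^3 + D^4 + D^5
```

These coefficient lists are exactly the two output sequences produced by time-domain convolution, confirming v_j(D) = u(D) g^(j)(D).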

Example 7: Generator matrix of a binary (2,1,2) code

Example 8: Generator matrix of a binary (3, 2, 2) convolutional code

FIR and IIR Systems

1. All examples on the previous slides are finite impulse response (FIR) systems, i.e., systems with finite impulse responses.

2. The example above is an infinite impulse response (IIR) system, i.e., a system with an infinite impulse response.

3. The generator sequences of the above example are
   g_1^(1) = (1)
   g_1^(2) = (1, 1, 1, 0, 1, 1, 0, 1, 1, 0, ...)

4. The infinite sequence g_1^(2) is caused by the recursive structure of the encoder.

5. By introducing the state variable x, we have
   x_t = u_t + x_{t-1} + x_{t-2},
   v_{2,t} = x_t + x_{t-2}.
   Accordingly, we obtain the following difference equations:
   v_{1,t} = u_t,
   v_{2,t} + v_{2,t-1} + v_{2,t-2} = u_t + u_{t-2}.

6. We then apply the Z transform to the second equation:
   Σ_{t=0}^{∞} v_{2,t} D^t + Σ_{t=0}^{∞} v_{2,t-1} D^t + Σ_{t=0}^{∞} v_{2,t-2} D^t = Σ_{t=0}^{∞} u_t D^t + Σ_{t=0}^{∞} u_{t-2} D^t,
   v_2(D) + D v_2(D) + D^2 v_2(D) = u(D) + D^2 u(D).

   Hence
   v_2(D) = (1 + D^2)/(1 + D + D^2) · u(D) = g_1^(2)(D) u(D).

7. The generator matrix is obtained:
   G(D) = ( 1   (1 + D^2)/(1 + D + D^2) ).
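The infinite impulse response claimed in point 3 can be reproduced by simulating the state recursion directly. A short sketch under our own naming (`rsc_impulse`), feeding in the discrete impulse (1, 0, 0, ...):

```python
def rsc_impulse(num_steps):
    """Impulse response of the recursive encoder
    x_t = u_t + x_{t-1} + x_{t-2},  v2_t = x_t + x_{t-2},
    i.e. the transfer function (1 + D^2)/(1 + D + D^2)."""
    x1 = x2 = 0                      # x_{t-1} and x_{t-2}
    resp = []
    for t in range(num_steps):
        u = 1 if t == 0 else 0       # discrete impulse (1, 0, 0, ...)
        x = u ^ x1 ^ x2              # state recursion over GF(2)
        resp.append(x ^ x2)          # v2_t
        x1, x2 = x, x1
    return resp

print(rsc_impulse(10))  # → [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
```

The output matches g_1^(2) = (1, 1, 1, 0, 1, 1, 0, 1, 1, 0, ...); the period-3 pattern never terminates because of the feedback.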

Polynomial/Systematic Form of the Generator Matrix

G(D), G_sys(D), and G_poly(D) all generate the same code.

Encoder Realizations and Classifications

1. g_i^(j)(D) may be a rational function, that is, g_i^(j)(D) has the form
   g_i^(j)(D) = a(D)/b(D) = (a_0 + a_1 D + ... + a_m D^m) / (1 + b_1 D + ... + b_m D^m),
   where we assume b_0 = 1.

2. It implies that
   v_j(D) = u_i(D) · a(D)/b(D),
   i.e.,
   v_{j,t} = a_0 u_{i,t} + a_1 u_{i,t-1} + ... + a_m u_{i,t-m} − b_1 v_{j,t-1} − b_2 v_{j,t-2} − ... − b_m v_{j,t-m}.

3. Fig. 4.2 depicts the Type I IIR realization of the transfer function g_i^(j)(D) = a(D)/b(D), while Fig. 4.3 depicts the Type II IIR realization.

4. The FIR case, i.e., without feedback, is the special case of Figs. 4.2 and 4.3 with b_1 = b_2 = ... = b_m = 0.

5. The Type I form is also called the controller canonical form or the direct canonical form.

6. The Type II form is also called the observer canonical form or the transposed canonical form.



Four Classes of Convolutional Codes

1. Parallel turbo codes require that the constituent convolutional codes be of the RSC type.

Termination

1. The effective code rate, R_effective, is defined as the average number of input bits carried by an output bit.

2. In practice, the input sequences have finite length.

3. In order to terminate a convolutional code, some bits are appended to the information sequence so that the shift registers return to the all-zero state.

4. Each of the k input sequences of length L bits is padded with m zeros, and these k input sequences jointly induce n(L + m) output bits.

5. The effective rate of the terminated convolutional code is now
   R_effective = kL / (n(L + m)) = R · L/(L + m).

6. When L is large, R_effective ≈ R.

7. All examples presented are terminated convolutional codes.
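The rate loss caused by the m termination bits is easy to quantify. A small sketch (our own helper name) for the (2,1,2) code, R = 1/2, m = 2:

```python
def effective_rate(k, n, m, L):
    """Effective rate R_effective = kL / (n(L + m)) of a terminated
    (n, k, m) convolutional code with input length L."""
    return (k * L) / (n * (L + m))

for L in (4, 100, 10000):
    print(L, effective_rate(1, 2, 2, L))
```

For L = 4 the rate drops to 1/3, while for L = 10000 it is already within 0.01% of R = 1/2, illustrating point 6.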

Truncation

1. The second option to terminate a convolutional code is simply to stop encoding after t = L, regardless of the contents of the shift registers.

2. The effective code rate is still R.

3. The generator matrix is clipped after the L-th (block) column:

            | G_0  G_1  ...  G_m                  |
            |      G_0  G_1  ...  G_m             |
   G^c[L] = |           .    .    .    .          |
            |                G_0  G_1  ...        |
            |                     .    .          |
            |                     G_0  G_1        |
            |                          G_0        |

Tail Biting

1. The drawback of the truncation method is that the last few blocks of the information sequence are less protected.

2. Tail biting starts the convolutional encoder in the same state (the same contents of all shift registers) in which it will stop after the input of L information blocks.

3. Equal protection of all information bits of the entire information sequence is then possible. The effective rate of the code is still R.

4. The generator matrix has to be clipped after the L-th column and manipulated

as follows:

            | G_0  G_1  ...  G_m                      |
            |      G_0  G_1  ...  G_m                 |
   G^c[L] = |           .    .    .    .              |
            |                G_0  G_1  ...  G_m       |
            | G_m                 G_0  ...  G_{m-1}   |
            | .    .                   .    .         |
            | G_1  ...  G_m                 G_0       |

where G^c[L] is a kL × nL matrix: the columns clipped from the tail wrap around into the lower-left corner, so the codeword length is exactly nL and the rate remains R = k/n.
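For a feedforward encoder the final state is simply the last m input bits, so tail biting can be implemented by preloading the shift register with them. A sketch for the (2,1,2) code of Example 1 (our own function name; assumes len(u) ≥ m = 2):

```python
def encode_tailbiting(u):
    """Tail-biting encoding for the (2,1,2) code g1 = 1+D+D^2, g2 = 1+D^2.
    The register is preloaded with the state the encoder will end in,
    so no termination bits are needed (circular encoding)."""
    s1, s2 = u[-1], u[-2]     # start state = end state = last m input bits
    v = []
    for bit in u:
        v += [bit ^ s1 ^ s2, bit ^ s2]
        s1, s2 = bit, s1
    assert (s1, s2) == (u[-1], u[-2])   # encoder returned to its start state
    return v

print(encode_tailbiting([1, 0, 1, 1]))  # → [1, 0, 0, 1, 0, 0, 0, 1]
```

The codeword has length nL = 8 rather than n(L + m) = 12, so the effective rate is exactly R = 1/2 with no termination overhead.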