Appendix D: Basics of convolutional codes

Convolutional encoder: In a convolutional code (B. P. Lathi, 2009; S. G. Wilson, 1996; E. Biglieri, 2005; T. Oberg, 2001), the block of n code bits generated by the encoder at a particular time instant depends not only on the block of k message bits within that time instant but also on the blocks of message bits within a span of N-1 previous time instants (N > 1). A convolutional coder with constraint length N consists of an N-stage shift register (SR) and ν modulo-2 adders. For the coder of Fig. D.1,

ν1 = s1 ⊕ s2 ⊕ s3;  ν2 = s1 ⊕ s3

Fig. D.1 Convolutional coder

Fig. D.1 shows such a coder for the case N = 3 and ν = 2. The message bits are applied at the input of the SR; the SR simply shifts each bit to the next stage at the next time instant. The coded digit stream is obtained at the commutator output: the commutator samples the ν modulo-2 adders in sequence, once during each input-bit interval.

Example: Assume that the input digits are 1010. Find the coded output sequence.

Solution: Initially the SR stages are s1 = s2 = s3 = 0.

When the first message bit 1 enters the SR, s1 = 1, s2 = s3 = 0. Then ν1 = 1, ν2 = 1. The coder output is 11.

When the second message bit 0 enters the SR, s1 = 0, s2 = 1, s3 = 0. Then ν1 = 1 and ν2 = 0. The coder output is 10.

When the third message bit 1 enters the SR, s1 = 1, s2 = 0 and s3 = 1. Then ν1 = 0 and ν2 = 0. The coder output is 00.

When the fourth message bit 0 enters the SR, s1 = 0, s2 = 1 and s3 = 0. Then ν1 = 1 and ν2 = 0. The coder output is 10.

In order to stop, we append N-1 zeros to the input stream, so that the last data digit (0 in this case) proceeds all the way through the SR and influences the full N groups of ν output digits. Hence for the input 1010 we actually apply 101000 to the SR, and the coder output, in time order, is 11 10 00 10 11 00. There are in all n = (k + N - 1)ν digits in the coded output for every k input bits.
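The shift-and-add steps above can be sketched in a few lines of Python. The tap connections ν1 = s1 ⊕ s2 ⊕ s3 and ν2 = s1 ⊕ s3 are those of Fig. D.1; the function name and list-based bit representation are our own illustrative choices.

```python
def conv_encode(bits, n_pad=2):
    """Rate-1/2, constraint-length N=3 encoder of Fig. D.1.
    Taps: nu1 = s1 ^ s2 ^ s3, nu2 = s1 ^ s3. Appends N-1 = n_pad
    zeros so the last message bit passes through the whole SR."""
    s1 = s2 = s3 = 0
    out = []
    for b in list(bits) + [0] * n_pad:
        s1, s2, s3 = b, s1, s2          # shift the register; new bit enters s1
        out += [s1 ^ s2 ^ s3, s1 ^ s3]  # commutator samples nu1 then nu2
    return out

print(conv_encode([1, 0, 1, 0]))
# -> [1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0], i.e. 11 10 00 10 11 00
```

Running it on the input 1010 reproduces the coded sequence 11 10 00 10 11 00 derived above, with the expected (k + N - 1)ν = (4 + 2) · 2 = 12 output digits.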

output = ν1 ν2

ν1 = s1 ⊕ s2 ⊕ s3
ν2 = s1 ⊕ s3

Fig. D.2 (a) 3-stage shift register showing states a, b, c and d (b) State diagram for the coder

State diagram: When a message bit enters the SR (stage s1), the coder outputs are determined not only by the message bit in s1 but also by the two previous bits already in s3 and s2. There are four possible combinations of the two previous bits in s3 and s2: 00, 01, 10, 11. We name these four states a, b, c, d respectively, as shown in Fig. D.2(a). The number of states is equal to 2^(N-1). A message bit 0 or 1 generates four different outputs depending on the encoder state. This entire behavior can be concisely expressed by the state diagram of Fig. D.2(b). This is a four-state directed graph

used to represent the input-output relation of the encoder. Convention: we will use solid lines when the input bit is 0, and dashed lines when the input bit is 1.

Interpretations from the state diagram of Fig. D.2(b):
(1) State a goes to state a when the input is 0 and the output is 00
(2) State a goes to state b when the input is 1 and the output is 11
(3) State b goes to state c when the input is 0 and the output is 10
(4) State b goes to state d when the input is 1 and the output is 01
(5) State c goes to state a when the input is 0 and the output is 11
(6) State c goes to state b when the input is 1 and the output is 00
(7) State d goes to state c when the input is 0 and the output is 01
(8) State d goes to state d when the input is 1 and the output is 10

Note that the encoder cannot go directly from state a to state c or d. From any given state, the encoder can reach only two states directly by inputting a single message bit.

Trellis diagram: The trellis diagram can be readily drawn from the state diagram. It starts from scratch (all 0s in the SR, i.e., state a) and makes a transition for each input data digit. A transition is drawn as a solid line when the next data digit is 0 and as a dashed line when it is 1. Thus when the first input digit is 0 the encoder output is 00 (solid line), and when it is 1 the output is 11 (dashed line). We continue this way for the second input digit and so on, as depicted in Fig. D.3.
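The eight transitions listed above follow mechanically from the encoder equations, which a minimal sketch can verify. The state labels (a = 00, b = 01, c = 10, d = 11, read as the bit pair in s3 and s2) and tap equations are taken from the text; everything else is illustrative.

```python
# Derive the state diagram of Fig. D.2(b) from the encoder equations
# nu1 = s1 ^ s2 ^ s3, nu2 = s1 ^ s3. A state is the pair of previous
# bits held in (s3, s2); labels follow the text: a=00, b=01, c=10, d=11.
names = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}

transitions = {}                          # (state, input bit) -> (next state, output)
for (s3, s2), state in names.items():
    for bit in (0, 1):
        s1 = bit                          # new message bit enters s1
        nu1, nu2 = s1 ^ s2 ^ s3, s1 ^ s3  # modulo-2 adder outputs
        nxt = names[(s2, s1)]             # (s3, s2) seen at the next step
        transitions[(state, bit)] = (nxt, f"{nu1}{nu2}")
        print(f"State {state} --input {bit}--> State {nxt}, output {nu1}{nu2}")
```

The printed lines reproduce interpretations (1) through (8), including the fact that each state has exactly two outgoing branches.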

Fig. D.3 Trellis diagram for the convolutional coder

Fig. D.4 Survivor paths after the 3rd branch of the trellis diagram for received sequence 01 00 01

Decoding: We shall consider maximum-likelihood (ML) decoding (Viterbi's algorithm). Among the various decoding methods for convolutional codes, Viterbi's ML algorithm is one of the best techniques in digital communications. As usual, the ML receiver selects the code word closest to the received word. Because there are 2^k code words (for k input data digits), the direct ML decision involves storage of 2^k code words and their comparison with the received word. The calculation

is extremely difficult for large k and results in an exponential increase in the complexity of the decoder. A major simplification was made by Viterbi, who noted in the ML calculation that each of the four nodes (a, b, c and d) has only two predecessors: each node can be reached through two nodes only, and only the path that agrees best with the received sequence (the minimum-distance path) need be retained for each node. Given a received sequence of bits, we therefore look for the path in the trellis diagram whose output digit sequence agrees best with the received sequence.

Example: Suppose that the first six received digits are 01 00 01. Find the survivor paths (the minimum-distance path to each node).

Solution:

Table D.1 Survivor paths after the 3rd branch of the trellis diagram for received sequence 01 00 01

Node (after 3rd branch) | Path | Distance from received sequence | Survivor?
a | 00 00 00 | 2 | Yes
a | 11 10 11 | 3 | No
b | 00 00 11 | 2 | Yes
b | 11 10 00 | 3 | No
c | 00 11 10 | 5 | No
c | 11 01 01 | 2 | Yes
d | 00 11 01 | 3 | Yes
d | 11 01 10 | 4 | No

With four paths eliminated as illustrated in Table D.1, the four survivor paths are the only contenders. What we need to remember is the four survivor paths and their distances from the received sequence. In general, the number of survivor paths is equal to the number of states, that is, 2^(N-1).

Once we have survivors at all the third-level nodes, we look at the next two received digits. To truncate the Viterbi algorithm, we must ultimately decide on one path rather than four. This is done by forcing the last two data digits to be 00, so that the received sequence becomes 01 00 01 00. When the first dummy 0 enters the register, we consider the survivors only at nodes a and c; the survivors at nodes b and d are discarded because those nodes can be reached only when the input bit is 1, as seen from the trellis diagram. When the second dummy 0 enters the register, we keep only the survivor at node a; the survivor at node c is discarded because the two dummy digits 00 lead the encoder back to state a. In terms of the trellis diagram, the number of states is reduced from four to two (a and c) by the insertion of the first zero, and to a single state (a) by the insertion of the second zero. With the Viterbi algorithm, storage and computational complexity are reduced considerably (proportional to 2^N), which makes it very attractive for constraint lengths N < 10.

References:
1) B. P. Lathi, Modern Digital and Analog Communication Systems, Oxford University Press, 2009
2) E. Biglieri, Coding for Wireless Channels, Springer, 2005
3) S. G. Wilson, Digital Modulation and Coding, Pearson, 1996
4) T. Oberg, Modulation, Detection and Coding, John Wiley and Sons, 2001
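The survivor computation of Table D.1 can be double-checked by brute force: with only 2^3 = 8 candidate 3-bit input sequences, we can encode each one, measure its Hamming distance to the received sequence 01 00 01, and keep the minimum-distance path into each node. This enumeration is a check of the table rather than the recursive Viterbi update itself; the encoder taps are those of Fig. D.1, and the state labels follow the text.

```python
from itertools import product

def encode(bits):
    """Fig. D.1 encoder (taps: nu1 = s1 ^ s2 ^ s3, nu2 = s1 ^ s3)."""
    s1 = s2 = s3 = 0
    out = []
    for b in bits:
        s1, s2, s3 = b, s1, s2
        out += [s1 ^ s2 ^ s3, s1 ^ s3]
    return out

received = [0, 1, 0, 0, 0, 1]                  # 01 00 01
names = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}  # (s3, s2)

best = {}                                      # node -> (distance, survivor inputs)
for bits in product([0, 1], repeat=3):
    dist = sum(r != o for r, o in zip(received, encode(bits)))
    node = names[(bits[1], bits[2])]           # state (s3, s2) after the 3rd branch
    if node not in best or dist < best[node][0]:
        best[node] = (dist, bits)

for node in 'abcd':
    d, bits = best[node]
    print(f"node {node}: survivor inputs {bits}, distance {d}")
```

The search recovers the four survivors of Table D.1: input sequences 000, 001, 110 and 011 reach nodes a, b, c and d with distances 2, 2, 2 and 3 respectively, matching the code words 00 00 00, 00 00 11, 11 01 01 and 00 11 01.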