Digital Communication Systems ECS 452. Asst. Prof. Dr. Prapun Suksompong 5.2 Binary Convolutional Codes

Digital Communication Systems ECS 452 Asst. Prof. Dr. Prapun Suksompong prapun@siit.tu.ac.th 5.2 Binary Convolutional Codes 35

Binary Convolutional Codes Introduced by Elias in 1955. There, they are referred to as "convolutional parity-check symbol codes." Peter Elias received the Claude E. Shannon Award in 1977 and the IEEE Richard W. Hamming Medal in 2002 for "fundamental and pioneering contributions to information theory and its applications." The encoder has memory. In other words, the encoder is a sequential circuit or a finite-state machine. It is easily implemented by shift register(s). The state of the encoder is defined as the contents of its memory. 36

Binary Convolutional Codes The encoding is done on a continuous, running basis rather than on blocks of k data digits. So, we use the terms bit streams or sequences for the input and output of the encoder. In theory, these sequences have infinite duration. In practice, the state of the convolutional code is periodically forced to a known state, and therefore code sequences are produced in a block-wise manner. 37

Binary Convolutional Codes In general, a rate-k/n convolutional encoder has k shift registers, one per input information bit, and n output coded bits that are given by linear combinations (over the binary field) of the contents of the registers and the input information bits. k and n are usually small. For simplicity of exposition, and for practical purposes, only rate-1/n binary convolutional codes are considered here (k = 1). These are the most widely used binary codes. 38

Example: n = 2, k = 1 39
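As a concrete companion to the figure, here is a minimal MATLAB sketch of a rate-1/2 encoder of this form. It assumes the generator taps 111 and 101 (octal [7 5]), which are the taps implied by the encoder table on slide 46; the two memory elements (s0 = newest, s1 = oldest) start at zero.

    % Minimal sketch of a rate-1/2 convolutional encoder (n = 2, k = 1),
    % assuming generators 111 (x(1) = b + s0 + s1) and 101 (x(2) = b + s1),
    % all additions mod 2, consistent with the encoder table on slide 46.
    b = [1 1 0 1 1 1];          % input bit stream
    s = [0 0];                  % register contents (s0 = newest, s1 = oldest)
    x = zeros(1, 2*length(b));  % two coded bits per information bit
    for t = 1:length(b)
        x(2*t-1) = mod(b(t) + s(1) + s(2), 2);  % x(1): taps 1 1 1
        x(2*t)   = mod(b(t) + s(2), 2);         % x(2): taps 1 0 1
        s = [b(t) s(1)];                        % shift the new bit in
    end
    disp(reshape(x, 2, []).')   % one output pair per row

For the input 1 1 0 1 1 1 used in the later slides, this prints the pairs 11, 01, 01, 00, 01, 10.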

Graphical Representations Three different but related graphical representations have been devised for the study of convolutional encoding: 1. the state diagram 2. the code tree 3. the trellis diagram 40

State (Transition) Diagram The encoder behavior can be seen from the perspective of a finite-state machine with its state (transition) diagram: a four-state directed graph that uniquely represents the input-output relation of the encoder. [State diagram with each branch labeled input/output.] 41

State Diagram 42

State Diagram [Figure: each transition is labeled with its input bit.] 43

State Diagram [Figure: each transition is labeled with its input bit and the corresponding output bits.] 44

State Diagram [Figure: the completed diagram with input/output labels on every branch.] 45

State Diagram The complete input/output behavior, where (s0, s1) is the encoder state and x(1), x(2) are the output bits:

b  s0 s1 | x(1) x(2)
0  0  0  |  0    0
1  0  0  |  1    1
0  0  1  |  1    1
1  0  1  |  0    0
0  1  0  |  1    0
1  1  0  |  0    1
0  1  1  |  0    1
1  1  1  |  1    0

46
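As a cross-check, the table can be regenerated by enumerating every input/state combination. A minimal sketch, assuming the same taps 111 and 101 (x(1) = b + s0 + s1, x(2) = b + s1, mod 2) that the table itself implies:

    % Enumerate all (b, s0, s1) combinations and print the encoder table.
    fprintf(' b  s0 s1 | x(1) x(2)\n');
    for s0 = 0:1
      for s1 = 0:1
        for b = 0:1
          x1 = mod(b + s0 + s1, 2);   % first adder: taps 1 1 1
          x2 = mod(b + s1, 2);        % second adder: taps 1 0 1
          fprintf(' %d  %d  %d  |  %d    %d\n', b, s0, s1, x1, x2);
        end
      end
    end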

State Diagram Following the transitions for the input 1 1 0 1 1 1 gives the output 11 01 01 00 01 10. 47

Parts for Code Tree [Figure: the branch pairs used to build the code tree.] Two branches initiate from each node, the upper one for input 0 and the lower one for input 1. 48

Code Tree Initially, we always assume that all the contents of the register are 0. Starting from the root, the tree shows the coded output for any possible sequence of data digits. [Code tree figure.] 49

Code Tree Tracing the input 1 1 0 1 through the tree gives the output 11 01 01 00. [Code tree figure.] 50

Code Trellis [Carlson & Crilly, 2009, p. 620] [Figure: one trellis section showing the branches from each current state to the next state, labeled input/output.] 51

Trellis Diagram Another useful way of representing the code tree. [Trellis diagram.] 52

Trellis Diagram Initially, we always assume that all the contents of the register are 0. Each path that traverses the trellis represents a valid codeword. [Trellis diagram.] 53

Trellis Diagram Tracing the input 1 1 0 1 1 1 through the trellis gives the output 11 01 01 00 01 10. 54

Directly Finding the Output For the input 1 1 0 1 1 1, read off one row per input bit from the encoder table, where (s0, s1) is the state just before the bit enters:

b  s0 s1 | x(1) x(2)
1  0  0  |  1    1
1  1  0  |  0    1
0  1  1  |  0    1
1  0  1  |  0    0
1  1  0  |  0    1
1  1  1  |  1    0

Input: 1 1 0 1 1 1. Output: 11 01 01 00 01 10. 55

Direct Minimum Distance Decoding Suppose y = [11 01 11]. Find the message which corresponds to the (valid) codeword with minimum (Hamming) distance from y. 56

Direct Minimum Distance Decoding Suppose y = [11 01 11]. Find the message which corresponds to the (valid) codeword with minimum (Hamming) distance from y. [Trellis diagram.] 57

Direct Minimum Distance Decoding Suppose y = [11 01 11]. Find the message which corresponds to the (valid) codeword with minimum (Hamming) distance from y. The number in parentheses on each branch is the branch metric, obtained by counting the differences between the encoded bits and the corresponding bits in y. [Trellis diagram with branch metrics; e.g., the all-zeros path has branch metrics (2), (1), (2).] 58

Direct Minimum Distance Decoding Suppose y = [11 01 11]. Find the message which corresponds to the (valid) codeword with minimum (Hamming) distance from y. 59

b     d(x, y)
000   2+1+2 = 5
001   2+1+0 = 3
010   2+1+1 = 4
011   2+1+1 = 4
100   0+2+0 = 2
101   0+2+2 = 4
110   0+0+1 = 1
111   0+0+1 = 1
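The table above is exactly what a brute-force search computes. A short sketch, using the same assumed [7 5] encoder as earlier, that encodes all 2^3 messages and tabulates their Hamming distances from y:

    % Brute-force minimum (Hamming) distance decoding for y = [11 01 11].
    y = [1 1 0 1 1 1];
    best = inf;
    for m = 0:7
        b = bitget(m, 3:-1:1);                  % 3-bit message, MSB first
        s = [0 0]; x = zeros(1, 6);
        for t = 1:3                             % same assumed [7 5] encoder
            x(2*t-1) = mod(b(t) + s(1) + s(2), 2);
            x(2*t)   = mod(b(t) + s(2), 2);
            s = [b(t) s(1)];
        end
        d = sum(x ~= y);                        % Hamming distance to y
        fprintf('b = %d%d%d   x = %d%d %d%d %d%d   d = %d\n', b, x, d);
        if d < best, best = d; bhat = b; end    % ties keep the first message
    end

Both 110 and 111 attain the minimum distance 1; the loop keeps whichever it meets first.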

Viterbi Decoding Developed by Andrew J. Viterbi, who also co-founded Qualcomm Inc. Published in the paper "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Transactions on Information Theory, vol. IT-13, pp. 260-269, April 1967. https://en.wikipedia.org/wiki/Andrew_Viterbi 60

Viterbi and His Decoding Algorithm 62 [http://viterbi.usc.edu/about/viterbi/viterbi_video.htm]

Andrew J. Viterbi 1991: Claude E. Shannon Award. 1952-1957: MIT BS & MS; studied electronics and communication theory under such renowned scholars as Norbert Wiener, Claude Shannon, Bruno Rossi and Roberto Fano. 1962: Earned one of the first doctorates in electrical engineering granted at the University of Southern California (USC); Ph.D. dissertation on error-correcting codes. 2004: USC Viterbi School of Engineering named in recognition of his $52 million gift. 63

Andrew J. Viterbi 64 [http://viterbi.usc.edu/about/viterbi/viterbi_video.htm]

Andrew J. Viterbi Cofounded Qualcomm; helped to develop the CDMA standard for cellular networks. 1998 Golden Jubilee Award for Technological Innovation: awarded to commemorate the 50th anniversary of information theory, to the authors of discoveries, advances and inventions that have had a profound impact on the technology of information transmission, processing and compression.
1. Norman Abramson: For the invention of the first random-access communication protocol.
2. Elwyn Berlekamp: For the invention of a computationally efficient algebraic decoding algorithm.
3. Claude Berrou, Alain Glavieux and Punya Thitimajshima: For the invention of turbo codes.
4. Ingrid Daubechies: For the invention of wavelet-based methods for signal processing.
5. Whitfield Diffie and Martin Hellman: For the invention of public-key cryptography.
6. Peter Elias: For the invention of convolutional codes.
7. G. David Forney, Jr: For the invention of concatenated codes and a generalized minimum-distance decoding algorithm.
8. Robert M. Gray: For the invention and development of training mode vector quantization.
9. David Huffman: For the invention of the Huffman minimum-length lossless data-compression code.
10. Kees A. Schouhamer Immink: For the invention of constrained codes for commercial recording systems.
11. Abraham Lempel and Jacob Ziv: For the invention of the Lempel-Ziv universal data compression algorithm.
12. Robert W. Lucky: For the invention of pioneering adaptive equalization methods.
13. Dwight O. North: For the invention of the matched filter.
14. Irving S. Reed: For the co-invention of the Reed-Solomon error correction codes.
15. Jorma Rissanen: For the invention of arithmetic coding.
16. Gottfried Ungerboeck: For the invention of trellis coded modulation.
17. Andrew J. Viterbi: For the invention of the Viterbi algorithm.
65

Viterbi Decoding Suppose y = [11 01 11]. Find the message which corresponds to the (valid) codeword with minimum (Hamming) distance from y. Each circled number at a node is the running (cumulative) path metric, obtained by summing branch metrics (distances) up to that node. [Trellis diagram with branch metrics and cumulative path metrics.] 61

Viterbi Decoding Suppose y = [11 01 11]. For the last column of nodes, each node has two branches going into it, so there are two possible cumulative distance values. We discard the larger-metric path because, regardless of what happens subsequently, this path will have a larger Hamming distance from y. [Trellis diagram showing both candidate metrics at each final node.] 62

Viterbi Decoding Suppose y = [11 01 11]. [Trellis diagram after discarding the larger-metric path at each final node; only the surviving paths remain.] 64

Viterbi Decoding Suppose y = [11 01 11]. So, the codewords which are nearest to y are [11 01 01] and [11 01 10]. The corresponding messages are [1 1 0] and [1 1 1], respectively. 65
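The discard rule above is essentially the whole algorithm, so a compact implementation is short. Below is a minimal hard-decision Viterbi decoder sketch for the same assumed [7 5] encoder; the names pm (path metrics) and paths (survivor input sequences) are illustrative only.

    % Hard-decision Viterbi decoding sketch; states (s0,s1) indexed 2*s0+s1+1.
    y = [1 1 0 1 1 1];                    % received bits, two per stage
    N = length(y)/2;
    nxt = zeros(4,2); out = cell(4,2);    % next state and output per (state, input)
    for s0 = 0:1, for s1 = 0:1, for b = 0:1
        i = 2*s0 + s1 + 1;
        out{i,b+1} = [mod(b+s0+s1,2) mod(b+s1,2)];  % assumed [7 5] taps
        nxt(i,b+1) = 2*b + s0 + 1;                  % shift-register update
    end, end, end
    pm = [0 inf inf inf];                 % path metrics; start in state (0,0)
    paths = {[] [] [] []};                % surviving input sequences
    for t = 1:N
        yt = y(2*t-1:2*t);
        newpm = inf(1,4); newpaths = cell(1,4);
        for i = 1:4, for b = 0:1
            if isinf(pm(i)), continue; end
            j = nxt(i,b+1);
            m = pm(i) + sum(out{i,b+1} ~= yt);      % add the branch metric
            if m < newpm(j)                         % keep the smaller-metric path
                newpm(j) = m; newpaths{j} = [paths{i} b];
            end
        end, end
        pm = newpm; paths = newpaths;
    end
    [~, i] = min(pm);                     % best final node (ties: first found)
    bhat = paths{i}                       % decoded message

For y = [11 01 11] this returns [1 1 0]; the tie with [1 1 1] is broken by whichever final node min finds first.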

Ex. Viterbi Decoding Suppose y = [ ]. After two stages, there is exactly one optimum (surviving) path to each state. 66

Ex. Viterbi Decoding Suppose y = [ ]. Each state at stage 3 has two possible paths. We keep the optimum path with the minimum distance (solid line). The same procedure is repeated for stages 4, 5, and 6. 67

Ex. Viterbi Decoding Suppose y = [ ]. [Trellis diagram for the next stage.] 68

Ex. Viterbi Decoding Suppose y = [ ]. [Trellis diagram for the next stage.] 69

Ex. Viterbi Decoding Suppose y = [ ]. The decoded codeword is x = [ ], and the corresponding message is b = [1 0 0 0 0 0]. 70

Viterbi Decoding Suppose y = [ ]. Find the decoded message. [Trellis diagram with branch metrics for all six stages.] 71

Viterbi Decoding Suppose y = [ ]. [Completed trellis with cumulative path metrics.] The decoded message is b = [1 1 0 1 1 1]. 72

MATLAB: Generator Polynomial Matrix Build a binary number representation by placing a 1 in each spot where a connection line from the shift register feeds into the adder, and a 0 elsewhere. The leftmost spot in the binary number represents the current input, while the rightmost spot represents the oldest input that still remains in the shift register. Convert this binary representation into an octal representation by considering consecutive triplets of bits. For example, interpret 1101010 as 1 101 010 and convert it to 152. str2num(dec2base(bin2dec('1101010'),8)) For example, the binary numbers corresponding to the upper and lower adders in the figure here are 110 and 111, respectively. These binary numbers are equivalent to the octal numbers 6 and 7, respectively, so the generator polynomial matrix is [6 7]. 78
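These conversions can be reproduced directly at the MATLAB prompt:

    str2num(dec2base(bin2dec('110'), 8))      % upper adder: returns 6
    str2num(dec2base(bin2dec('111'), 8))      % lower adder: returns 7
    str2num(dec2base(bin2dec('1101010'), 8))  % the longer example: returns 152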

MATLAB: To use the polynomial description with the functions convenc and vitdec, first convert it into a trellis description using the poly2trellis function. For example, trellis = poly2trellis(3,[6 7]); Constraint length = number of flip-flops + 1. 79

MATLAB: Trellis Description Each solid arrow shows how the encoder changes its state if the current input is zero. Each dashed arrow shows how the encoder changes its state if the current input is one. The octal numbers above each arrow indicate the current output of the encoder. 80

MATLAB: Trellis Structure trellis = struct('numInputSymbols',2,'numOutputSymbols',4,... 'numStates',4,'nextStates',[0 2;0 2;1 3;1 3],... 'outputs',[0 3;1 2;3 0;2 1]); 81
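As a quick sanity check (assuming the Communications Toolbox is available), istrellis validates the hand-built structure, and the structure should coincide with the one poly2trellis builds from the polynomial description:

    isok = istrellis(trellis)                       % expected: logical 1
    same = isequal(trellis, poly2trellis(3,[6 7]))  % expected: logical 1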

MATLAB: Convolutional Encoding x = convenc(b,trellis); 82

Example (from the exercise) trellis = poly2trellis(3,[6 7]); b = [1 0 1 1 0]; x = convenc(b,trellis) [ConvCode_Exer.m] >> ConvCode_Exer x = 1 1 1 1 1 0 0 0 1 0 83
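The encoding can be undone with the toolbox Viterbi decoder, vitdec. A minimal sketch, assuming hard decisions, the 'trunc' operating mode (encoder starts in the all-zeros state, no tail bits), and an illustrative traceback depth of 5:

    tblen = 5;                                   % traceback depth (illustrative)
    bhat = vitdec(x, trellis, tblen, 'trunc', 'hard')
    % with no channel errors, bhat reproduces b = [1 0 1 1 0]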

Reference Chapter 15 in [Lathi & Ding, 2009]. Chapter 13 in [Carlson & Crilly, 2009]. Section 7 in [Cover and Thomas, 2006]. 84