Chapter 7: Channel coding: convolutional codes

Chapter 7: Channel coding: Convolutional codes
University of Limoges, meghdadi@ensil.unilim.fr
References: Digital Communications by John Proakis; Wireless Communications by Andreas Goldsmith

Encoder representation

Communication system

Data → Channel encoder → Interleaver → Modulator → Channel → Demodulator → Deinterleaver → Channel decoder → Output

The data is first coded and then interleaved. Interleaving is used to break up the burst errors that the channel may introduce. The result is then transmitted over a noisy channel using some form of modulation. At the receiver, all of these operations must be inverted to estimate the transmitted data.

Channel codes can be classified into two categories:

Block coding: cyclic codes, BCH, Reed-Solomon, product codes, turbo codes, LDPC.
Trellis coding: convolutional coding, TCM (trellis-coded modulation), turbo codes (SCCC or PCCC), turbo TCM.

Here, we concentrate on convolutional coding.

This is the simplest convolutional encoder. The input sequence is $c = (\ldots, c_{-1}, c_0, c_1, \ldots, c_l, \ldots)$, where $l$ is the time index. The output sequences are:

$v^{(1)} = (\ldots, v^{(1)}_{-1}, v^{(1)}_0, v^{(1)}_1, \ldots, v^{(1)}_l, \ldots)$
$v^{(2)} = (\ldots, v^{(2)}_{-1}, v^{(2)}_0, v^{(2)}_1, \ldots, v^{(2)}_l, \ldots)$

Convolutional code characteristics

The constraint length $K$ of a CC is the number of input bits involved in generating each output bit; it equals the number of delay elements plus one. For the previous example, the constraint length is 3. Each time the $n$ outputs have been serialized out of the encoder, a right shift of $k$ input bits occurs. The code rate is defined as $R_c = k/n$, where $n$ is the number of outputs. For the previous example, $n = 2$, $k = 1$, and $R_c = 0.5$.

Describing a CC by its generators

In the previous example, assuming the all-zero initial state, the output sequence $v^{(1)}$ produced by a single 1 at the input (the impulse response) is $[1\ 0\ 1]$. At the same time, the sequence $v^{(2)}$ is $[1\ 1\ 1]$. Therefore there are two generators, $g_1 = [1\ 0\ 1]$ and $g_2 = [1\ 1\ 1]$, and the encoder is completely specified. It is convenient to represent the generators in octal form as $(g_1, g_2) = (5, 7)$. Exercise: Draw the encoder circuit of the code (23,35).
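To make this concrete, here is a minimal Python sketch of such an encoder (names like conv_encode and octal_to_taps are illustrative, not from the course):

```python
def octal_to_taps(g_octal, K):
    """Convert an octal generator, e.g. 5 -> [1, 0, 1], to K tap bits."""
    bits = bin(int(str(g_octal), 8))[2:].zfill(K)
    return [int(b) for b in bits]

def conv_encode(bits, generators=(5, 7), K=3):
    """Rate-1/n feedforward convolutional encoder, all-zero initial state."""
    taps = [octal_to_taps(g, K) for g in generators]
    state = [0] * (K - 1)                 # contents of the delay elements
    out = []
    for b in bits:
        window = [b] + state              # current input, then past inputs
        for t in taps:                    # one output bit per generator
            out.append(sum(x & y for x, y in zip(t, window)) % 2)
        state = [b] + state[:-1]          # shift the register to the right
    return out
```

With generators (5, 7), a single 1 at the input yields the impulse responses [1, 0, 1] and [1, 1, 1] described above.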

State diagram representation

[Figure: state diagram of the code (7,5), with the four states and transitions labeled for input 0 and input 1.]

Using this diagram one can easily compute the output of the encoder. Exercise: What is the coded sequence for the input [1]?
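The state diagram can also be tabulated programmatically. This sketch (reusing the hypothetical octal_to_taps above) lists, for each state and input bit, the next state and the output pair:

```python
def state_table(generators=(5, 7), K=3):
    """Map (state, input bit) -> (next state, n output bits)."""
    taps = [octal_to_taps(g, K) for g in generators]   # from the sketch above
    table = {}
    for s in range(2 ** (K - 1)):
        # register contents, most recent input bit first
        state = [(s >> i) & 1 for i in range(K - 2, -1, -1)]
        for b in (0, 1):
            window = [b] + state
            out = tuple(sum(x & y for x, y in zip(t, window)) % 2 for t in taps)
            nxt = int(''.join(map(str, [b] + state[:-1])), 2)
            table[(s, b)] = (nxt, out)
    return table
```

For instance, state 0 with input 1 moves to state 2 and outputs (1, 1), which can be checked against the diagram.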

Trellis representation

[Figure: trellis diagram of the code, with branches drawn for input 0 and input 1 and the output bits written on each branch.]

We assume that the trellis begins at the all-zero state. The bits on the diagram show the output of the encoder, $(v^{(1)}_t, v^{(2)}_t)$. Exercise: Using the above diagram, what is the coded sequence for the input [1 0 1 1 0 1]?
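As a cross-check on this exercise, the hypothetical conv_encode sketch above can trace the same trellis path from the all-zero state:

```python
coded = conv_encode([1, 0, 1, 1, 0, 1])
pairs = [tuple(coded[2 * i: 2 * i + 2]) for i in range(len(coded) // 2)]
print(pairs)   # one (v1, v2) branch label per information bit
```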

Modulation and the channel

The coded sequence is denoted by $C$. The output of the encoder is modulated and then sent over the channel. We assume BPSK modulation, mapping 0 → −1 volt and 1 → +1 volt. The channel adds white Gaussian noise to the signal, and the RX receives a real-valued sequence $R$. The decoder must estimate the transmitted sequence by observing the channel output. For the $i$-th information bit, corresponding to the $i$-th branch in the trellis, the code sequence of size $n$ is denoted by $C_{ij}$, $0 \le j \le n-1$. Assuming a memoryless channel, the received signal for the $i$-th information bit is:

$r_{ij} = \sqrt{E_c}\,(2C_{ij} - 1) + n_{ij}$
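A sketch of this channel model (parameter names are illustrative; the noise variance is $N_0/2$ per sample):

```python
import numpy as np

def bpsk_awgn(coded_bits, Ec=1.0, N0=1.0, rng=None):
    """Map bit 0 -> -sqrt(Ec), bit 1 -> +sqrt(Ec), then add white Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.sqrt(Ec) * (2 * np.asarray(coded_bits) - 1)
    return x + rng.normal(0.0, np.sqrt(N0 / 2), size=x.shape)   # the sequence R
```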

Maximum likelihood decoding

Given the received sequence $R$, the decoder decides that the coded sequence $C'$ was transmitted if

$p(R \mid C') \ge p(R \mid C) \quad \forall C$

This means the decoder finds the maximum likelihood path in the trellis given $R$. Over a memoryless AWGN channel, for a code of rate $1/n$ and an information sequence of length $L$, the likelihood can be written as:

$p(R \mid C) = \prod_{i=0}^{L-1} p(R_i \mid C_i) = \prod_{i=0}^{L-1} \prod_{j=0}^{n-1} p(R_{ij} \mid C_{ij})$

Log likelihood function

Taking the logarithm:

$\log p(R \mid C) = \sum_{i=0}^{L-1} \log p(R_i \mid C_i) = \sum_{i=0}^{L-1} \sum_{j=0}^{n-1} \log p(R_{ij} \mid C_{ij})$

The expression $B_i = \sum_{j=0}^{n-1} \log p(R_{ij} \mid C_{ij})$ is called the branch metric. Maximizing the likelihood function is equivalent to maximizing the log likelihood function. The log likelihood corresponding to a given path in the trellis is called the path metric; it is the sum of the branch metrics along the path.

ML and hard decoding

In hard decoding, the sequence $R$ is demodulated bit by bit and contains only 0s and 1s; this is called hard decision. The probability of error of a hard decision depends on the channel state and is denoted $p$. If $R$ and $C$ are $N$ symbols long and differ in $d$ places (i.e., $d$ is the Hamming distance), then

$p(R \mid C) = p^d (1-p)^{N-d}$

$\log p(R \mid C) = d \log\frac{p}{1-p} + N \log(1-p)$

The second term is independent of $C$, and since $p < 1/2$ the factor $\log\frac{p}{1-p}$ is negative, so maximizing the log likelihood means minimizing $d$. Conclusion: the sequence $C$ with minimum Hamming distance to the received sequence $R$ is the ML sequence.
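In code, the hard-decision branch metric is simply a Hamming distance (a trivial sketch, to be minimized):

```python
def hamming_branch_metric(r_bits, c_bits):
    """Number of positions where the hard-sliced bits differ from a branch label."""
    return sum(r != c for r, c in zip(r_bits, c_bits))
```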

Soft decoding

In an AWGN channel with zero-mean Gaussian noise of variance $\sigma^2 = N_0/2$,

$p(R_{ij} \mid C_{ij}) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(R_{ij} - \sqrt{E_c}\,(2C_{ij}-1))^2}{2\sigma^2}\right]$

ML decoding is therefore equivalent to choosing the $C_{ij}$ closest in Euclidean distance to $R_{ij}$. Taking the logarithm and neglecting all scaling factors and terms common to all $C_i$, the branch metric is:

$\mu_i = \sum_{j=0}^{n-1} R_{ij}\,(2C_{ij} - 1)$
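In code, the soft branch metric is a correlation between the received reals and the antipodal branch labels (to be maximized):

```python
def correlation_branch_metric(r_vals, c_bits):
    """mu_i = sum_j R_ij * (2*C_ij - 1)."""
    return sum(r * (2 * c - 1) for r, c in zip(r_vals, c_bits))
```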

Viterbi algorithm (1967)

Even for moderate values of $L$, there is a huge number of paths to test. The Viterbi algorithm reduces this number considerably: it performs ML decoding by systematically removing paths that cannot achieve the highest path metric. When two paths enter the same node, the path with the lower likelihood cannot be the survivor path, so it can be removed.

Survivor path

[Figure: three partial paths P1, P2, P3 entering a node N at depth n, with branch metrics $B_{n+1}, B_{n+2}, \ldots$ continuing along the ML path.]

P1 is the survivor path because its partial path metric is greater than those of P2 and P3, where $P_i = \sum_{k=0}^{n-1} B^i_k$. The ML path starting from node N to infinity has path metric $P_i + \sum_{k=n}^{\infty} B_k$, $i = 1, 2, 3$, and the continuation term is the same for all three. Thus the paths P2 and P3 cannot complete the path from $n$ to $\infty$ and give an ML path; they are discarded.
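Putting the pieces together, here is a compact hard-decision Viterbi decoder sketch (it reuses the hypothetical state_table above, keeps exactly one survivor per state, and, with no tail bits assumed, returns the path of the best final state):

```python
def viterbi_decode(r_bits, generators=(5, 7), K=3):
    n = len(generators)
    table = state_table(generators, K)        # from the sketch above
    num_states = 2 ** (K - 1)
    INF = float('inf')
    metric = [0.0] + [INF] * (num_states - 1)  # trellis starts in state 0
    paths = [[] for _ in range(num_states)]    # decoded bits of each survivor
    for i in range(0, len(r_bits), n):
        r = r_bits[i:i + n]                    # hard bits for this branch
        new_metric = [INF] * num_states
        new_paths = [None] * num_states
        for s in range(num_states):
            if metric[s] == INF:               # state not yet reachable
                continue
            for b in (0, 1):
                nxt, out = table[(s, b)]
                m = metric[s] + sum(x != y for x, y in zip(r, out))
                if m < new_metric[nxt]:        # keep only the survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric = new_metric
        paths = [p if p is not None else [] for p in new_paths]
    best = min(range(num_states), key=lambda s: metric[s])
    return paths[best]
```

A soft-decision version would replace the Hamming distance with the correlation metric $\mu_i$ and take maxima instead of minima; practical implementations also bound memory using the finite trace-back depth discussed next.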

Trace back

[Figure: surviving paths merging into a common stem some depth before the current trellis stage.]

All the surviving paths arrive at a common stem. Theoretically, the depth of the trace-back operation is infinite. However, it is observed that all the surviving paths eventually merge into one common node, which sets the decoding delay. Statistically speaking, for a sufficiently large trellis depth (typically five times the constraint length), all the paths have merged to a common node. Therefore a decoding delay of 5K is used.

Distance properties (example CC(7,5))

[Figure: trellis of CC(7,5) with branch Hamming weights, showing a minimum-weight path diverging from and remerging with the all-zero path.]

An error can occur when the survivor path is selected. If the Hamming distance between the two sequences entering a node is large, the probability of making an error is small. The free distance of CC(7,5) is 5, corresponding to the path highlighted in the figure.

Good codes

By testing all possible encoder combinations, the best codes for a given constraint length have been tabulated. Rate 1/2 maximum free distance CCs:

Constraint length   Generators (octal)   d_free
3                   (5,7)                5
4                   (15,17)              6
5                   (23,35)              7
6                   (53,75)              8
7                   (133,171)            10

Good codes

Rate 1/3 maximum free distance CCs:

Constraint length   Generators (octal)   d_free
3                   (5,7,7)              8
4                   (13,15,17)           10
5                   (25,33,37)           12
6                   (47,53,75)           13
7                   (133,145,175)        15

Error probability

A convolutional code is linear, so $P_e$ can be calculated by supposing that the all-zero sequence is sent. Coherent BPSK over AWGN is assumed. The energy per coded bit is related to the energy per information bit by $E_c = R_c E_b$. The probability of mistaking the all-zero sequence for another sequence at Hamming distance $d$ is:

$P_e(d) = Q\left(\sqrt{\frac{2E_c d}{N_0}}\right) = Q\left(\sqrt{2\gamma_b R_c d}\right)$

where $\gamma_b = E_b/N_0$. This probability is called the pairwise error probability. There is no analytic expression for the bit error rate, only upper bounds.
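A quick numeric sketch of this expression, using $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$ (function names are illustrative):

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2))

def pairwise_error(gamma_b, Rc, d):
    """P_e(d) = Q(sqrt(2 * gamma_b * Rc * d)), gamma_b = Eb/N0 (linear)."""
    return Q(sqrt(2 * gamma_b * Rc * d))

# e.g. the free-distance term of CC(7,5) at Eb/N0 = 5 dB:
print(pairwise_error(10 ** (5 / 10), 0.5, 5))
```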
