Introduction to Wireless & Mobile Systems. Chapter 4: Channel Coding and Error Control. Cengage Learning Engineering. All Rights Reserved.



Outline
- Introduction
- Block Codes
- Cyclic Codes
- CRC (Cyclic Redundancy Check)
- Convolutional Codes
- Interleaving
- Information Capacity Theorem
- Turbo Codes
- ARQ (Automatic Repeat Request)
  - Stop-and-wait ARQ
  - Go-back-N ARQ
  - Selective-repeat ARQ

Introduction

Transmitter: information to be transmitted -> source coding -> channel coding -> modulation -> antenna
Receiver: antenna -> demodulation -> channel decoding -> source decoding -> information received
(the two chains are connected over the air)

Forward Error Correction (FEC) (Introduction to Wireless & Mobile Systems, 4th Edition)

The key idea of FEC is to transmit enough redundant data to allow the receiver to recover from errors all by itself; no sender retransmission is required. The simplest form of redundancy is to attach a parity bit (e.g., to the word 1010110).

The major categories of FEC codes are:
- Block codes
- Cyclic codes
- Reed-Solomon codes (not covered here)
- Convolutional codes
- Turbo codes, etc.

Linear Block Codes

- Information is divided into blocks of length k
- r parity bits (check bits) are added to each block (total length n = k + r)
- Code rate R = k/n
- The decoder looks for the codeword closest to the received vector (code vector + error vector)
- Tradeoffs between efficiency, reliability, and encoding/decoding complexity

Modulo-2 addition:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0

Linear Block Codes: Example

Example: Find the linear block code encoder G if the code generator polynomial is g(x) = 1 + x + x^3 for a (7, 4) code.

n = total number of bits = 7, k = number of information bits = 4, r = number of parity bits = n - k = 3

The parity rows are the coefficients of 1, x, x^2 in the remainders:
p1 = Rem[x^3 / (1 + x + x^3)] = 1 + x        -> [110]
p2 = Rem[x^4 / (1 + x + x^3)] = x + x^2      -> [011]
p3 = Rem[x^5 / (1 + x + x^3)] = 1 + x + x^2  -> [111]
p4 = Rem[x^6 / (1 + x + x^3)] = 1 + x^2      -> [101]

G = [I | P] =
1000 110
0100 011
0010 111
0001 101

where I is the identity matrix and P is the parity matrix.
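The parity-row computation above can be sketched in Python. The representation (coefficient lists indexed by power of x) and the helper name `gf2_poly_mod` are my own choices, not from the text:

```python
# Sketch: compute the parity rows p_i of G = [I | P] for the (7, 4) code
# with g(x) = 1 + x + x^3 by polynomial division over GF(2).
# Polynomials are coefficient lists; index = power of x.

def gf2_poly_mod(dividend, divisor):
    """Remainder of dividend/divisor over GF(2)."""
    rem = list(dividend)
    dd, dv = len(rem) - 1, len(divisor) - 1
    for shift in range(dd - dv, -1, -1):
        if rem[shift + dv]:                    # leading term present?
            for j, cbit in enumerate(divisor): # subtract (XOR) divisor << shift
                rem[shift + j] ^= cbit
    return rem[:dv]                            # degree < deg(divisor)

n, k = 7, 4
g = [1, 1, 0, 1]                               # g(x) = 1 + x + x^3

rows = []
for i in range(1, k + 1):
    x_pow = [0] * (n - k + i - 1) + [1]        # x^(n-k+i-1)
    rows.append(gf2_poly_mod(x_pow, g))        # coefficients of 1, x, x^2

print(rows)   # [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
```

The four rows reproduce the parity matrix P of the slide: 110, 011, 111, 101.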

Linear Block Codes: Example

The generator polynomial determines the generator matrix G, which in turn gives the parity bits for a given data word m by multiplication (modulo 2):

m . G = [1011] .
1000 110
0100 011
0010 111
0001 101
= [1011 100]
  (data)(parity)

Data plus parity gives the code word c. The same can be done for the other combinations of data bits m, producing all other possible code words.

Linear Block Codes

With k data bits and r = n - k redundant bits:

c1 = m1
c2 = m2
...
ck = mk
c(k+1) = m1 p11 + m2 p21 + ... + mk pk1
...
cn = m1 p1(n-k) + m2 p2(n-k) + ... + mk pk(n-k)

(all sums modulo 2)

Linear Block Codes

Let the uncoded k data bits be represented by the vector m = (m1, m2, ..., mk), and the corresponding codeword by the n-bit vector c = (c1, c2, ..., ck, c(k+1), ..., c(n-1), cn). Each parity bit is a weighted modulo-2 sum of the data bits, where the symbol ⊕ denotes exclusive OR (modulo-2 addition).

Linear Block Codes

The codeword c of a linear block code is obtained as c = mG, where m is the k-bit information word and G is the k x n generator matrix

G = [Ik | P]

where Pi = Remainder of [x^(n-k+i-1) / g(x)] for i = 1, 2, ..., k, and Ik is the k x k identity matrix. At the receiving end, the parity check matrix is H = [P^T | I(n-k)], where P^T is the transpose of the parity matrix P.

Linear Block Codes

Example: the linear block code encoder G, with k data bits and r = n - k parity bits, has the form

G = [Ik | P] =
1 0 ... 0 | P1
0 1 ... 0 | P2
...
0 0 ... 1 | Pk

where Pi = Remainder of [x^(n-k+i-1) / g(x)], for i = 1, 2, ..., k

Linear Block Codes

The parity matrix P (a k by n-k matrix) is given by:

P =
p11 p12 ... p1(n-k)    <- P1
p21 p22 ... p2(n-k)    <- P2
...
pk1 pk2 ... pk(n-k)    <- Pk

where Pi = Remainder of [x^(n-k+i-1) / g(x)], for i = 1, 2, ..., k

Block Codes: Linear Block Codes

(Figure: transmitter maps message vector m through generator matrix G to code vector c, sent over the air; receiver multiplies the code vector by parity check matrix H^T to get the null vector 0)

Operations of the generator matrix and the parity check matrix: consider a (7, 4) linear block code given by

G =
1000 111
0100 110
0010 101
0001 011

H = [P^T | I(n-k)] =
1110 100
1101 010
1011 001

For convenience, the code vector is expressed as c = [m | cp], where cp = mP is the (n-k)-bit parity check vector.

Block Codes: Linear Block Codes

Define the matrix H^T as H^T = [P; I(n-k)], i.e., P stacked on top of an (n-k) x (n-k) identity matrix. Let the received code vector be x = c ⊕ e, where e is an error vector. The matrix H^T has the property

cH^T = [m | cp] [P; I(n-k)] = mP ⊕ cp = cp ⊕ cp = 0

The transpose of H^T is H = [P^T | I(n-k)], where I(n-k) is the (n-k) x (n-k) identity matrix and P^T is the transpose of the parity matrix P. H is called the parity check matrix. The syndrome is computed as

s = xH^T = (c ⊕ e)H^T = cH^T ⊕ eH^T = eH^T

Linear Block Codes

If s is 0, the message is correct; otherwise it contains errors, and from the known error patterns the correct message can be decoded. For the (7, 4) linear block code given by

G =
1000 111
0100 110
0010 101
0001 011

H =
1110 100
1101 010
1011 001

for m = [1 0 1 1], c = mG = [1 0 1 1 0 0 1]. If there is no error, the received vector x = c, and s = cH^T = [0 0 0].

Linear Block Codes

Let c suffer an error such that the received vector is

x = c ⊕ e = [1 0 1 1 0 0 1] ⊕ [0 0 1 0 0 0 0] = [1 0 0 1 0 0 1]

Then the syndrome is

s = xH^T = [1 0 0 1 0 0 1] .
111
110
101
011
100
010
001
= [1 0 1] = eH^T

This matches the third row of H^T, indicating the error position, and the corrected vector is [1 0 1 1 0 0 1].
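The whole (7, 4) example above, encoding, injecting a single-bit error, and locating it from the syndrome, can be sketched as follows (the helper names are mine, not from the text):

```python
# Sketch of the slide's (7, 4) example: encode m = 1011 with G = [I | P],
# flip bit 3, and correct it by matching the syndrome s = x H^T
# against the rows of H^T.

P = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]                      # parity part of G (one row per data bit)

def encode(m):
    parity = [0, 0, 0]
    for bit, row in zip(m, P):
        if bit:                      # modulo-2 accumulate the parity rows
            parity = [a ^ b for a, b in zip(parity, row)]
    return m + parity                # systematic codeword [m | cp]

def syndrome(x):
    # H^T: parity rows for the data bits, identity rows for the parity bits
    HT = P + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    s = [0, 0, 0]
    for bit, row in zip(x, HT):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return s, HT

m = [1, 0, 1, 1]
c = encode(m)                        # [1, 0, 1, 1, 0, 0, 1]
x = c[:]; x[2] ^= 1                  # error in position 3
s, HT = syndrome(x)                  # [1, 0, 1], the third row of H^T
x[HT.index(s)] ^= 1                  # flip the indicated bit back
print(c, s, x == c)
```

The syndrome equals the H^T row of the corrupted position, so flipping that bit restores the transmitted codeword.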

Cyclic Codes

A cyclic code is a block code that uses a shift register to perform encoding and decoding. The codeword with n bits is expressed as

c(x) = c1 x^(n-1) + c2 x^(n-2) + ... + cn

where each coefficient ci (i = 1, 2, ..., n) is either 1 or 0. The codeword can be expressed in terms of the data polynomial m(x) and the check polynomial cp(x) as

c(x) = m(x) x^(n-k) + cp(x)

where cp(x) is the remainder from dividing m(x) x^(n-k) by the generator g(x). If the received signal is c(x) + e(x), where e(x) is the error, then to check whether it is error free, the remainder from dividing c(x) + e(x) by g(x) is obtained (the syndrome). If this is 0, the received signal is considered error free; otherwise the error pattern is identified from the known error syndromes.

(Figure: shift-register encoder with input, output c, and registers D1, D2)

Cyclic Code: Example

Example: Find the codeword c(x) if m(x) = 1 + x + x^2 and g(x) = 1 + x + x^3 for a (7, 4) cyclic code.

We have n = total number of bits = 7, k = number of information bits = 4, r = number of parity bits = n - k = 3.

cp(x) = Rem[ m(x) x^(n-k) / g(x) ] = Rem[ (x^5 + x^4 + x^3) / (1 + x + x^3) ] = x

Then c(x) = m(x) x^(n-k) + cp(x) = x^5 + x^4 + x^3 + x
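A minimal sketch of this division, using Python integers as bit-vectors of polynomial coefficients (the `poly_mod2` helper is my own):

```python
# Sketch of the systematic cyclic encoding above:
# m(x) = 1 + x + x^2, g(x) = 1 + x + x^3, (7, 4) code.
# Integers hold the coefficients, highest power of x in the highest bit.

def poly_mod2(dividend: int, divisor: int) -> int:
    """Remainder of binary-polynomial division (long division with XOR)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011                     # g(x) = x^3 + x + 1
m = 0b0111                     # m(x) = x^2 + x + 1
r = 3                          # n - k parity bits

cp = poly_mod2(m << r, g)      # remainder of m(x) x^(n-k) / g(x)
c = (m << r) | cp              # c(x) = m(x) x^(n-k) + cp(x)
print(bin(cp), bin(c))         # cp(x) = x  ->  c(x) = x^5 + x^4 + x^3 + x
```

By construction, `poly_mod2(c, g)` is zero, which is exactly the error-free syndrome check described on the previous slide.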

Cyclic Redundancy Check (CRC)

The Cyclic Redundancy Check (CRC) is an error-checking code. The transmitter appends an extra bit sequence to every frame, called the Frame Check Sequence (FCS). The FCS holds redundant information about the frame that helps the receiver detect errors in the frame. CRC is based on polynomial manipulation using modulo-2 arithmetic: a block of input bits is taken as the coefficient set of the message polynomial, while the polynomial with constant, predefined coefficients is called the generator polynomial.

Cyclic Redundancy Check (CRC)

The (shifted) message polynomial is divided by the generator polynomial, giving a quotient and a remainder; the coefficients of the remainder form the bits of the final CRC.

Define:
- Q: the original frame (k bits) to be transmitted
- F: the resulting frame check sequence (FCS) of n - k bits to be added to Q (usually 8, 16, or 32 bits)
- J: the concatenation of Q and F
- P: the predefined CRC generator polynomial

The main idea of the CRC algorithm is that the FCS is generated so that J is exactly divisible by P.

Cyclic Redundancy Check (CRC)

The CRC creation process is defined as follows:
1. Get the block of the raw message Q.
2. Left-shift the raw message by n - k bits (i.e., multiply by x^(n-k)) and divide it by P.
3. Take the remainder R as the FCS F.
4. Append R to the raw message. The result J is the frame to be transmitted:
   J = Q.x^(n-k) + F
J is then exactly divisible by P: dividing Q.x^(n-k) by P gives
   Q.x^(n-k) / P = Q' + R/P
where Q' is the quotient and R the remainder, so J = Q.x^(n-k) + R yields a zero remainder for J/P (in modulo-2 arithmetic, adding R back cancels the remainder).
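The J = Q.x^(n-k) + F construction can be sketched like this; the toy generator P(x) = x^3 + x + 1 and the 10-bit message are assumed example values, not one of the standard CRC polynomials:

```python
# Sketch of the FCS procedure above; the names Q, F, J, P follow the
# slide's definitions. Toy values are assumed for illustration.

def poly_mod2(dividend: int, divisor: int) -> int:
    """Remainder of modulo-2 (XOR) polynomial long division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

P = 0b1011                      # generator polynomial, n - k = 3 check bits
Q = 0b1101011011                # raw message block

F = poly_mod2(Q << 3, P)        # FCS = remainder of Q * x^(n-k) / P
J = (Q << 3) | F                # transmitted frame: message followed by FCS

assert poly_mod2(J, P) == 0             # J is exactly divisible by P
assert poly_mod2(J ^ 0b100000, P) != 0  # a flipped bit leaves a nonzero remainder
print(bin(F), bin(J))
```

The receiver repeats the same division on the whole received frame: a zero remainder means the frame passed the check, any nonzero remainder flags an error.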

Common CRC Codes

Code       | Generator polynomial g(x)
CRC-12     | x^12 + x^11 + x^3 + x^2 + x + 1
CRC-16     | x^16 + x^15 + x^2 + 1
CRC-CCITT  | x^16 + x^12 + x^5 + 1
CRC-32     | x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Common CRC Codes

Code       | Generator polynomial g(x)         | Parity check bits
CRC-12     | 1 + x + x^2 + x^3 + x^11 + x^12   | 12
CRC-16     | 1 + x^2 + x^15 + x^16             | 16
CRC-CCITT  | 1 + x^5 + x^12 + x^16             | 16

Convolutional Codes

Convolutional codes are among the most widely used channel codes. They encode the information stream rather than information blocks. Decoding is mostly performed by the Viterbi algorithm (not covered here). The constraint length K of a convolutional code is defined as K = M + 1, where M is the maximum number of stages in any shift register. The code rate r is defined as r = k/n, where k is the number of parallel information bits and n is the number of parallel output encoded bits at one time interval. A convolutional encoder with n = 2 and k = 1 (code rate r = 1/2) is shown next.

Convolutional Codes: (n = 2, k = 1, M = 2) Encoder

(Figure: rate-1/2 encoder with one input bit, shift registers D1 and D2, and two output bits y1, y2 per input bit)

Input:        1  1  1  0  0  0
Output y1y2: 11 01 10 01 11 00

Input:        1  0  1  0  0  0
Output y1y2: 11 10 00 10 11 00
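The listed input/output pairs are consistent with generator taps g1 = 1 + D + D^2 and g2 = 1 + D^2 (octal 7, 5), the classic K = 3 example; the taps are my inference from the outputs, not stated in the text. A sketch reproducing them:

```python
# Sketch of the rate-1/2, M = 2 encoder above, assuming generators
# g1 = 1 + D + D^2 and g2 = 1 + D^2 (consistent with the listed outputs).

def conv_encode(bits):
    d1 = d2 = 0                       # shift-register contents (D1, D2)
    out = []
    for b in bits:
        y1 = b ^ d1 ^ d2              # g1: input + D1 + D2
        y2 = b ^ d2                   # g2: input + D2
        out.append(f"{y1}{y2}")
        d1, d2 = b, d1                # shift the registers
    return out

print(conv_encode([1, 1, 1, 0, 0, 0]))  # ['11', '01', '10', '01', '11', '00']
print(conv_encode([1, 0, 1, 0, 0, 0]))  # ['11', '10', '00', '10', '11', '00']
```

Both printed sequences match the slide's output streams bit for bit.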

State Diagram

(Figure: state diagram of the encoder with four states 00, 01, 10, 11; each transition is labeled output/input, e.g. 00/0 for the self-loop at state 00 and 11/1 leaving it)

Tree Diagram

(Figure: code tree of the encoder; from each node, one branch is taken on input 0 and the other on input 1, and each branch is labeled with the two output bits, starting 00/11 at the root)

Trellis

(Figure: trellis diagram of the encoder; the four states are drawn at each time step and every branch is labeled with its two output bits y1y2: 00, 11, 10, or 01)

Interleaving

Input data: a1, a2, a3, a4, a5, a6, a7, a8, a9, ...

Interleaving: the data are written row-wise into a matrix
a1,  a2,  a3,  a4
a5,  a6,  a7,  a8
a9,  a10, a11, a12
a13, a14, a15, a16
and read out column-wise.

Transmitted data (through the air) = received data: a1, a5, a9, a13, a2, a6, a10, a14, a3, ...

De-interleaving: the received data are written column-wise into the same matrix and read out row-wise.

Output data: a1, a2, a3, a4, a5, a6, a7, a8, a9, ...

Interleaving (Example)

Transmitted data on air: 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, ... (a burst error corrupts several consecutive bits in the air)

De-interleaving: the received data are written column-wise
0, 1, 0, 0
0, 1, 0, 0
1, 0, 0, 0
...
and read out row-wise.

Output data: 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, ...

The burst error has been converted into discrete single errors, which are far easier for an FEC code to correct.
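The 4 x 4 write-row/read-column interleaver above can be sketched as follows (the function names are mine):

```python
# Sketch of the 4x4 block interleaver: write row-wise, read column-wise;
# de-interleaving applies the inverse permutation, so a burst on the air
# is spread into isolated single errors after de-interleaving.

def interleave(data, rows=4, cols=4):
    # read the row-wise-written matrix out column by column
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows=4, cols=4):
    # write column-wise, read row-wise: the inverse permutation
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

tx = list(range(1, 17))            # stands for a1 .. a16
air = interleave(tx)               # 1, 5, 9, 13, 2, 6, 10, 14, ...
assert deinterleave(air) == tx     # round trip restores the original order
print(air)
```

Any burst of up to 4 consecutive hits on `air` lands in 4 different rows of the de-interleaved output, i.e., at most one error per 4-bit code block.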

Information Capacity Theorem (Shannon Limit)

The information capacity (or channel capacity) C of a continuous channel with bandwidth B Hertz, perturbed by additive white Gaussian noise of power spectral density N0/2, is

C = B log2(1 + P/(N0 B)) bits/second

where P is the average transmitted power, P = Eb Rb (for an ideal system, Rb = C), Eb is the transmitted energy per bit, and Rb is the transmission rate.

Shannon Limit

(Figure: bandwidth efficiency Rb/B, from 0.1 to 20, plotted against Eb/N0 in dB, from -1.6 to 30; the capacity boundary Rb = C separates the region Rb > C, where reliable transmission is impossible, from the region Rb < C; the Shannon limit is Eb/N0 = -1.6 dB)

where:
- Eb is the transmitted energy per bit
- N0 is the power spectral density of the Gaussian white noise
- Rb is the transmission rate
- B is the channel bandwidth
- C is the channel capacity
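The -1.6 dB figure can be checked numerically (the function name below is mine): setting Rb = C in the capacity formula gives Eb/N0 = (2^(Rb/B) - 1)/(Rb/B), which tends to ln 2, about -1.59 dB, as Rb/B goes to 0:

```python
# Sketch: minimum Eb/N0 on the capacity boundary Rb = C.
# From C = B log2(1 + (Eb/N0)(Rb/B)) with Rb = C:
#   Eb/N0 = (2^(Rb/B) - 1) / (Rb/B)
import math

def ebn0_db(spectral_efficiency):
    """Minimum Eb/N0 (in dB) at a given Rb/B for reliable transmission."""
    rho = spectral_efficiency
    return 10 * math.log10((2 ** rho - 1) / rho)

for rho in [2.0, 1.0, 0.1, 0.001]:
    print(f"Rb/B = {rho}: Eb/N0 >= {ebn0_db(rho):.2f} dB")

# Limit as Rb/B -> 0: 10 log10(ln 2), approximately -1.59 dB (the Shannon limit)
print(10 * math.log10(math.log(2)))
```

At Rb/B = 1 the bound is exactly 0 dB, and it keeps dropping toward the -1.6 dB Shannon limit as bandwidth efficiency decreases.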

Turbo Codes

A brief history of turbo codes: the turbo code concept was first introduced by C. Berrou in 1993. Today, turbo codes are considered among the most efficient coding schemes for FEC. The scheme uses known components (simple convolutional or block codes, an interleaver, a soft-decision decoder, etc.) and achieves performance close to the Shannon limit (Eb/N0 = -1.6 dB as Rb/B approaches 0) at modest complexity. Turbo codes have been proposed for low-power applications such as deep-space and satellite communications, as well as for interference-limited applications such as third-generation cellular, personal communication services, ad hoc, and sensor networks.

Turbo Codes: Encoder

(Figure: the data source feeds Convolutional Encoder 1 directly and Convolutional Encoder 2 through an interleaver, producing redundancy streams y1 and y2 alongside the information stream)

y = (y1, y2), where each yi is redundancy information accompanying the transmitted information bits.

Turbo Codes: Decoder

(Figure: Convolutional Decoder 1 works on the de-interleaved stream and y1; its output is interleaved and passed to Convolutional Decoder 2, which works on y2; the result is de-interleaved to produce the decoded information)

Automatic Repeat Request (ARQ)

(Figure: Source -> Encoder -> Transmit controller -> Modulation -> Channel -> Demodulation -> Decoder -> Transmit controller -> Destination, with an acknowledgement path from the receiver back to the transmitter)

Stop-And-Wait ARQ (SAW ARQ)

(Figure: the transmitter sends blocks 1, 2, 3 and waits for an acknowledgement after each; block 3 is received in error, a NAK triggers its retransmission, and the output delivers 1, 2, 3 in order)

ACK: acknowledgement
NAK: negative acknowledgement

Stop-And-Wait ARQ (SAW ARQ)

Given n = number of bits in a block, k = number of information bits in a block, D = round-trip delay, Rb = bit rate, Pb = BER of the channel, and P_ACK = (1 - Pb)^n, the throughput is

S_SAW = (1/T_SAW) . (k/n) = [(1 - Pb)^n / (1 + D.Rb/n)] . (k/n)

where T_SAW is the average transmission time in units of a block duration:

T_SAW = (1 + D.Rb/n).P_ACK + 2(1 + D.Rb/n).P_ACK(1 - P_ACK) + 3(1 + D.Rb/n).P_ACK(1 - P_ACK)^2 + ...
      = (1 + D.Rb/n) . P_ACK . Sum_{i=1..inf} i(1 - P_ACK)^(i-1)
      = (1 + D.Rb/n) . P_ACK / [1 - (1 - P_ACK)]^2
      = (1 + D.Rb/n) / P_ACK
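A numeric sketch of these formulas; the link parameters below (n, k, Rb, D, Pb) are assumed example values, not from the text:

```python
# Sketch: evaluate the SAW ARQ throughput formula for an assumed link:
# 1000-bit blocks with 900 information bits, Rb = 1 Mbit/s,
# D = 10 ms round trip, Pb = 1e-5.
n, k = 1000, 900
Rb = 1e6            # bit rate, bits/s
D = 10e-3           # round-trip delay, s
Pb = 1e-5           # channel BER

P_ack = (1 - Pb) ** n                 # probability a block arrives clean
T_saw = (1 + D * Rb / n) / P_ack      # average time per block, in block units
S_saw = (1 / T_saw) * (k / n)         # throughput efficiency

print(f"P_ACK = {P_ack:.4f}, T_SAW = {T_saw:.2f} blocks, S_SAW = {S_saw:.3f}")
```

The D.Rb/n = 10 term dominates: even with a nearly error-free channel, stop-and-wait idles through ten block times per block, so the efficiency collapses to about 8%.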

Go-Back-N ARQ (GBN ARQ)

(Figure: blocks 1-5 are sent back to back; block 3 is received in error, so the transmitter goes back and resends 3, 4, 5; a later error in block 5 forces another go-back; the output delivers 1, 2, 3, 4, 5 in order)

Go-Back-N ARQ (GBN ARQ)

Throughput:

S_GBN = (1/T_GBN) . (k/n) = [(1 - Pb)^n / ((1 - Pb)^n + N(1 - (1 - Pb)^n))] . (k/n)

where (each failed attempt costs N extra block times, so i failures cost 1 + iN)

T_GBN = 1.P_ACK + (N + 1).P_ACK.(1 - P_ACK) + (2N + 1).P_ACK.(1 - P_ACK)^2 + ...
      = P_ACK . Sum_{i=0..inf} (1 + iN)(1 - P_ACK)^i
      = P_ACK . [1/P_ACK + N(1 - P_ACK)/P_ACK^2]
      = 1 + N(1 - P_ACK)/P_ACK
      = 1 + (N [1 - (1 - Pb)^n]) / (1 - Pb)^n

Selective-Repeat ARQ (SR ARQ)

(Figure: blocks 1-9 are sent continuously; blocks 3 and 7 are received in error and only those are retransmitted; a buffer reorders the received blocks so the output delivers 1 through 9 in order)

Selective-Repeat ARQ (SR ARQ)

Throughput:

S_SR = (1/T_SR) . (k/n) = (1 - Pb)^n . (k/n)

where

T_SR = 1.P_ACK + 2.P_ACK(1 - P_ACK) + 3.P_ACK(1 - P_ACK)^2 + ...
     = P_ACK . Sum_{i=1..inf} i(1 - P_ACK)^(i-1)
     = P_ACK / [1 - (1 - P_ACK)]^2
     = 1/P_ACK = 1/(1 - Pb)^n,  where P_ACK = (1 - Pb)^n
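The three throughput formulas derived above can be compared on one assumed link (all numeric values below are illustrative, not from the text):

```python
# Sketch: compare S_SAW, S_GBN, and S_SR under one assumed link:
# n = 1000-bit blocks, k = 900 information bits, Pb = 1e-5,
# stop-and-wait idle factor D*Rb/n = 10, go-back-N window N = 11.
n, k, Pb = 1000, 900, 1e-5
a = 10                       # D * Rb / n (assumed)
N = 11                       # go-back-N window size (assumed)

Pc = (1 - Pb) ** n           # probability of an error-free block, = P_ACK

S_saw = (Pc / (1 + a)) * (k / n)
S_gbn = (Pc / (Pc + N * (1 - Pc))) * (k / n)
S_sr = Pc * (k / n)

print(f"SAW {S_saw:.3f}   GBN {S_gbn:.3f}   SR {S_sr:.3f}")
```

Selective repeat wastes the least: only the errored blocks are resent, so S_SR >= S_GBN >= S_SAW for these values, with stop-and-wait paying the full idle penalty on every block.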