Channel Coding and Interleaving


Lecture 6: Channel Coding and Interleaving

LORA: Future by Lund (www.futurebylund.se)
The network will be free for those who want to try their products, services and solutions in a pre-commercial stage. This implies that it is only for testing and not for commercial use.

Channel coding: Overview
Block codes, convolution codes, and fading channels with interleaving. Coding is a much more complicated topic than this; anyone interested should follow a course on channel coding.

2016-04-18 Ove Edfors - ETIN15

Simple example
Data to be sent: 1 1 0 1 0 1
Code: [1] => [1 1 1], [0] => [0 0 0]
Bit sequence transmitted: 1 1 1 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1

Simple example, cont.
Add noise: 0 0 0 1 0 0 1 0 1 1 0 0 0 1 0 1 0 0
Data received: 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1

Simple example, cont.
Decoder (simple, exact match only): 1 x x x x x
Better decoder (majority vote): 1 1 1 1 0 1
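As a sketch, the repetition example above can be reproduced in a few lines of Python (the function names are mine, not from the lecture):

```python
# Rate-1/3 repetition code from the slides: 1 -> 111, 0 -> 000.

def encode(bits):
    """Repeat each information bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_majority(rx):
    """Decode each 3-bit group by majority vote."""
    return [1 if sum(rx[i:i + 3]) >= 2 else 0 for i in range(0, len(rx), 3)]

data  = [1, 1, 0, 1, 0, 1]
tx    = encode(data)
noise = [0,0,0, 1,0,0, 1,0,1, 1,0,0, 0,1,0, 1,0,0]
rx    = [t ^ n for t, n in zip(tx, noise)]    # received = transmitted XOR noise

print(decode_majority(rx))   # -> [1, 1, 1, 1, 0, 1]
```

As on the slide, the majority decoder recovers all but the third bit, where two of the three repetitions were hit by noise.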

Channel coding: Basic types of codes
We can classify channel codes into two principal groups:
BLOCK CODES: encode data in blocks of k bits, using code words of length n.
CONVOLUTION CODES: encode data as a stream, without breaking it into blocks, creating a code sequence.

From source coding to channel coding: the original source data (music, speech) contains redundancy. Source coding (e.g. a speech coder) removes it, leaving pure information without redundancy. Channel coding then adds structured redundancy back. This structured redundancy is often called parity or a check sum.

Assume that we have a block code with k information bits per n-bit code word (n > k). Since there are only 2^k different information sequences, there can be only 2^k different code words. This leads to a larger distance between the valid code words than between arbitrary binary sequences of length n, which increases our chance of selecting the correct one after receiving a noisy version.

If we receive a sequence that is not a valid code word, we decode to the closest one. One thing remains... what do we mean by closest? We need a distance measure.

Channel coding: Distances
The distance measure used depends on the channel over which we transmit our code words (if we want the rule of decoding to the closest code word to give a low probability of error). Two common ones:
Hamming distance: measures the number of bits that differ between two binary words. Used for binary channels with random bit errors.
Euclidean distance: the same measure we have used for signal constellations. Used for AWGN channels.

Coding gain
When applying channel codes we decrease the Eb/N0 required to obtain some specified performance (BER). This coding gain depends on the code and the specified performance. It translates directly to a lower requirement on received power in the link budget. NOTE: Eb denotes energy per information bit, even in the coded case.
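The two distance measures above can be sketched as follows, mapping bits to antipodal signal points -1/+1 for the Euclidean case (that mapping is my assumption here; any constellation works):

```python
import math

def hamming(a, b):
    """Number of bit positions in which the two binary words differ."""
    return sum(x != y for x, y in zip(a, b))

def euclidean(a, b):
    """Euclidean distance between the antipodal signal points (0 -> -1, 1 -> +1)."""
    return math.sqrt(sum(((2*x - 1) - (2*y - 1))**2 for x, y in zip(a, b)))

print(hamming([1, 0, 1], [1, 1, 0]))     # -> 2
print(euclidean([1, 0, 1], [1, 1, 0]))   # -> 2.828..., i.e. 2*sqrt(d_H)
```

For antipodal signaling the two are directly related: each differing bit contributes 2^2 = 4 to the squared Euclidean distance.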

Channel coding: Bandwidth
When introducing coding we have essentially two ways of handling the increased number of (code) bits that need to be transmitted:
1) Accept that the raw bit rate, and thus the required radio bandwidth, will increase proportionally. This is the simplest way, but may not be possible, since we may have a limited bandwidth available.
2) Increase the signal constellation size to compensate for the increased number of bits, thus keeping the same bandwidth. Increasing the number of signal constellation points will decrease the distance between them; this decrease has to be compensated by the introduced coding.

Channel coding: Linear block codes
The encoding process of a linear block code can be written as c = G u, where u is the k-dimensional information vector, G is the n x k-dimensional generator matrix, and c is the n-dimensional code word vector. The matrix calculations are done in an appropriate arithmetic; we will primarily assume binary codes and modulo-2 arithmetic.
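As a minimal sketch of the matrix form c = G u, using a small made-up (5,2) generator matrix (not one given in the lecture):

```python
# Hypothetical (5,2) example: G is n x k as on the slide, so the code
# word is G times the information column vector, reduced modulo 2.

G = [[1, 0],
     [0, 1],
     [1, 1],
     [1, 0],
     [0, 1]]                    # 5 x 2 generator matrix (made up for illustration)

def encode(u, G):
    """Each code bit is a modulo-2 inner product of a row of G with u."""
    return [sum(g * x for g, x in zip(row, u)) % 2 for row in G]

c = encode([1, 0], G)
print(c)                        # -> [1, 0, 1, 1, 0]
```

Note that the first k code bits equal the information bits here, so this small example happens to be systematic.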

Channel coding: Some definitions
Code rate: R = k/n.
Minimum distance of the code (d_min): the smallest Hamming distance between any two distinct code words. The minimum distance of a code determines its error correcting performance in non-fading channels.
Modulo-2 arithmetic (XOR): 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0.
Hamming weight: the number of ones in a binary word.
Note: the textbook sometimes uses the name Hamming distance of the code (d_H) to denote its minimum distance.

Encoding example
For a specific (n,k) = (7,4) code we encode the information sequence 1 0 1 1 into a 7-bit code word. If the information bits are directly visible in the code word, we say that the code is systematic. In addition to the k information bits, there are n-k = 3 parity bits.

Encoding example, cont.
Encoding all 16 possible 4-bit information sequences gives 16 code words. [Table: information words, code words and Hamming weights; besides the all-zero word, there are seven code words of weight 3, seven of weight 4 and one of weight 7.] This code has a minimum distance of 3 (the minimum code word weight of a linear code, excluding the all-zero code word). This is a (7,4) Hamming code, capable of correcting one bit error.

Error correction capability
A binary block code with minimum distance d_min can correct J bit errors, where J = (d_min - 1)/2, rounded down to the nearest integer.
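A sketch of the (7,4) Hamming code: the lecture's exact generator matrix is not visible in the transcript, so the parity assignments below are one standard systematic choice. The code checks the minimum distance of 3, the error correction capability J = 1, and nearest-code-word decoding of a single bit error:

```python
from itertools import product

# One standard systematic generator for the (7,4) Hamming code
# (assumed here; the slide's exact G is not shown in the transcript).
G = [[1,0,0,0, 1,1,0],
     [0,1,0,0, 1,0,1],
     [0,0,1,0, 0,1,1],
     [0,0,0,1, 1,1,1]]

def encode(u):
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# All 2^k = 16 code words, mapped back to their information words.
codebook = {tuple(encode(u)): u for u in product([0, 1], repeat=4)}

# Minimum distance = minimum nonzero code word weight (linear code).
d_min = min(sum(c) for c in codebook if any(c))
J = (d_min - 1) // 2
print(d_min, J)                 # -> 3 1

# Nearest-code-word decoding corrects any single bit error.
c = encode([1, 0, 1, 1])
r = c.copy(); r[5] ^= 1         # flip one received bit
decoded = min(codebook, key=lambda cw: sum(a != b for a, b in zip(cw, r)))
print(codebook[decoded])        # -> (1, 0, 1, 1)
```

Brute-force nearest-code-word search is only practical for tiny codes like this; real decoders exploit the code's structure (e.g. syndrome decoding).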

Performance and code length
Longer codes (with the same rate) usually have better performance! [Figure: BER versus Eb/N0 for codes of different lengths; this example is for a non-fading channel.] Drawbacks of long codes are complexity and delay.

CONVOLUTION CODES

Encoder structure
In convolution codes, the coded bits are formed as convolutions between the incoming bits and a number of generator sequences. We will view the encoder as a shift register with memory M, length L and N generator sequences (convolution sums). M = 3

Encoding example
Input  State  Output  Next state
0      00     000     00
1      00     111     10
0      01     001     00
1      01     110     10
0      10     011     01
1      10     100     11
0      11     010     01
1      11     101     11
We usually start the encoder in the all-zero state!

Encoding example, cont.
We can view the encoding process in a trellis created from the table on the previous slide.

Termination
At the end of the information sequence, it is common to add a tail of L zeros to force the encoder to end (terminate) in the zero state. This improves performance, since the decoder then knows both the starting state and the ending state.

Decoding example
Received sequence: 010 000 1000 001 011 110 001 1

Improvements to Viterbi decoding: Soft decoding
We have given examples of hard decoding, using the Hamming distance. If we do not make hard decisions on ones and zeros before decoding our channel code, we can use soft decoding. In the AWGN channel, this means comparing Euclidean distances instead.
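A minimal hard-decision Viterbi decoder for this 4-state, rate-1/3 code, using the Hamming distance as branch metric (the variable names are mine, not from the lecture):

```python
# Hard-decision Viterbi decoding over the trellis of the table above.

def step(u, a, b):
    """One encoder step: (output bits, next state), per the state table."""
    return (u, u ^ a, u ^ a ^ b), (u, a)

def viterbi(rx):
    metrics = {(0, 0): (0, [])}           # start in the all-zero state
    for t in range(len(rx) // 3):
        r = rx[3 * t: 3 * t + 3]
        new = {}
        for state, (m, path) in metrics.items():
            for u in (0, 1):
                out, nxt = step(u, *state)
                branch = sum(x != y for x, y in zip(out, r))  # Hamming metric
                cand = (m + branch, path + [u])
                if nxt not in new or cand[0] < new[nxt][0]:
                    new[nxt] = cand       # keep one surviving path per state
        metrics = new
    return min(metrics.values())[1]       # path with the best final metric

# Encode 1 1 0 1, flip one received bit, and decode:
tx, a, b = [], 0, 0
for u in [1, 1, 0, 1]:
    out, (a, b) = step(u, a, b)
    tx += out
rx = tx.copy(); rx[1] ^= 1                # one channel bit error
print(viterbi(rx))                        # -> [1, 1, 0, 1]
```

Swapping the Hamming branch metric for a squared Euclidean distance over the unquantized received values turns this into the soft-decision decoder described above.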

Surviving paths
The Viterbi algorithm needs to keep track of one surviving path per state in the trellis. For long code sequences this causes a memory problem. In practice we only keep track of surviving paths within a window of a certain number of trellis steps; at the end of this window we force decisions on bits, based on the metrics in the latest decoding step.

FADING CHANNELS AND INTERLEAVING

Fading channels and interleaving
In fading channels, many received bits will be of low quality when we hit a fading dip. Coding may suffer greatly, since many low-quality bits in a code word may lead to a decoding error. To prevent all the low-quality bits in a fading dip from ending up in the same code word, we rearrange the bits between several code words before transmission... and rearrange them back at the receiver, before decoding. This strategy of breaking up fading dips is called interleaving.

[Figure: distribution of low-quality bits across code words, without and with interleaving.]

Block interleaver
The writing and reading of data in interleavers cause a delay in the system, which may cause other problems.

Interleaving - BER example
[Figure: BER of a R=1/3 repetition code over a Rayleigh-fading channel, with and without interleaving. Decoding strategy: majority selection.]
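A block interleaver can be sketched as writing code bits row by row into a rows x cols array and reading them out column by column (the dimensions below are arbitrary):

```python
# Block interleaver: write row by row, read column by column.
# The deinterleaver is the same operation with rows and cols swapped.

def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)   # reading the transpose undoes it

tx = list(range(12))                      # stand-ins for 12 code bits
il = interleave(tx, 3, 4)
print(il)                                 # -> [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(il, 3, 4) == tx)       # -> True
```

After interleaving, a burst of consecutive low-quality bits on the channel lands in different rows, i.e. is spread over several code words, which is exactly the dip-breaking effect described above. The depth of the interleaver (number of rows) sets the delay it introduces.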
