SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.1

Overview Basic Concepts of Channel Coding Block Codes I: Codes and Encoding Communication Channels Block Codes II: Decoding Convolutional Codes Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.2

Basic Concepts of Channel Coding System Model Examples Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.3

The Concept of Channel Coding Source → u → Encoder → x → Channel → y → Decoder → û → Destination. Source generates data. Destination accepts estimated data. Channel introduces noise and thus errors. Encoder adds redundancy. Decoder exploits redundancy to detect or correct errors. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.4

The Concept of Channel Coding Objective of channel coding Reliable transmission of digital data over noisy channels. Tools Introduction of redundancy for error detection or error correction. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.5

Examples for Channel Coding Mobile communications (GSM, UMTS, WLAN, Bluetooth) Channel: mobile radio channel Satellite communications (pictures from Mars) Channel: radio channel Cable modems (DSL) Channel: wireline, POTS Compact Disc, DVD (music, pictures, data) Channel: storage medium Memory Elements (data) Channel: storage medium In all digital communication systems, channel coding is applied to protect the transmitted data against transmission errors. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.6

System Model for Channel Coding I Source → u → Encoder → x → Channel → y → Decoder → û → Destination Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.7

System Model for Channel Coding I The Source Binary Symmetric Source (BSS): u ∈ F_2 := {0, 1}, p_U(u) = 1/2 for u = 0, 1 Binary Info(rmation) Word of length k: u = [u_0, u_1, ..., u_{k-1}] ∈ F_2^k; for a BSS, p_U(u) = 1/2^k Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.7

System Model for Channel Coding II Source → u → Encoder → x → Channel → y → Decoder → û → Destination Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.8

System Model for Channel Coding II The Encoder Binary Info(rmation) Word of length k: u = [u_0, u_1, ..., u_{k-1}] ∈ F_2^k Binary Code Word of length n: x = [x_0, x_1, ..., x_{n-1}] ∈ F_2^n Linear Binary Code of length n: C := {set of codewords x} Linear Binary Encoder: one-to-one mapping u → x; code rate R := k/n (R < 1) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.8

Examples for Binary Linear Codes Single Parity-Check Code (k = 2, n = 3): code word x = [x_0, x_1, x_2] with code constraint x_0 ⊕ x_1 ⊕ x_2 = 0; code C := {000, 110, 101, 011}; possible encoder: u = [u_0, u_1] → x = [u_0, u_1, u_0 ⊕ u_1] Repetition Code (k = 1, n = 3): code word x = [x_0, x_1, x_2] with code constraint x_0 = x_1 = x_2; code C := {000, 111}; possible encoder: u = [u_0] → x = [u_0, u_0, u_0] Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.9
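To make the two example encoders concrete, here is a minimal Python sketch (illustrative, not from the original slides; function names are my own) that maps infowords to codewords exactly as described above.

def encode_spc(u):
    """Single parity-check code (k = 2, n = 3): append the XOR of the info bits."""
    u0, u1 = u
    return [u0, u1, u0 ^ u1]

def encode_repetition(u, n=3):
    """Repetition code (k = 1, n = 3): repeat the single info bit n times."""
    return [u[0]] * n

# All codewords of the two codes:
spc_code = {tuple(encode_spc([u0, u1])) for u0 in (0, 1) for u1 in (0, 1)}
rep_code = {tuple(encode_repetition([u0])) for u0 in (0, 1)}
print(sorted(spc_code))  # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]
print(sorted(rep_code))  # [(0,0,0), (1,1,1)]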

System Model for Channel Coding III Source → u → Encoder → x → Channel → y → Decoder → û → Destination Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.10

System Model for Channel Coding III The Channel Binary-Input Symmetric Memoryless Channel (BISMC): binary input alphabet x ∈ F_2; real-valued output alphabet y ∈ R; transition probabilities p_{Y|X}(y|x); symmetry (see Cover/Thomas); memoryless (independent transmissions) Probabilistic mapping from x to y Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.10

Examples for BISMCs Binary Symmetric Channel (BSC): the input X is received correctly with probability 1 − ɛ and flipped with crossover probability ɛ. Binary Erasure Channel (BEC): the input X is received correctly with probability 1 − δ and erased with erasure probability δ. (Channel transition diagrams; precise definitions follow in the channel section.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.11

Examples for BISMCs Binary Symmetric Erasure Channel (BSEC): the input X is received correctly with probability 1 − ɛ − δ, flipped with crossover probability ɛ, and erased with erasure probability δ. (Channel transition diagram; the precise definition follows in the channel section.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.12

Examples for BISMCs Binary-Input AWGN Channel (BI-AWGNC): X → BPSK Map (0 → +1, 1 → −1) → X′, Y = X′ + N X ∈ {0, 1}, X′ ∈ {−1, +1}, N ∈ R, Y ∈ R Gaussian distributed noise N with noise variance σ_n²: p_N(n) = 1/√(2π σ_n²) · exp(−n² / (2σ_n²)) Conditional pdf: p_{Y|X′}(y|x′) = p_N(y − x′) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.13

System Model for Channel Coding IV Source → u → Encoder → x → Channel → y → Decoder → û → Destination Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.14

System Model for Channel Coding IV The Decoder Received Word of length n: y = [y_0, y_1, ..., y_{n-1}] ∈ R^n Decoder: error correction or error detection; estimation of the transmitted info word or code word Estimated Info Word of length k: û = [û_0, û_1, ..., û_{k-1}] ∈ F_2^k Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.14

Example for Decoding Repetition Code over BSC Code: C := {000, 111} Transmitted code word x = [111] BSC with crossover probability ɛ = 0.1 Received word y = [011] Error detection: y ∉ C ⇒ error detected Error correction: if x = [111] was transmitted, then one error occurred; if x = [000] was transmitted, then two errors occurred; one error is less likely than two errors ⇒ estimated code word x̂ = [111] (maximum-likelihood decoding) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.15

System Model for Channel Coding V Source → u → Encoder → x → Channel → y → Decoder → û → Destination Word Error Probability: P_w := Pr(û ≠ u) = Pr(x̂ ≠ x) Bit Error Probability: P_b := (1/k) Σ_{i=0}^{k-1} Pr(û_i ≠ u_i) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.16

Problems in Channel Coding Construction of good codes Construction of low-complexity decoders Existence of codes with certain parameters (Singleton bound, Hamming bound, Gilbert bound, Varshamov bound) Highest code rate for a given channel such that transmission is error-free (Channel Coding Theorem) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.17

Block Codes I: Codes and Encoding Properties of Codes and Encoders Hamming Distances and Hamming Weights Generator Matrix and Parity-Check Matrix Examples of Codes Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.18

Codes A not so accurate Definition (Galois Field) A Galois field GF(q) is a finite set of q elements with two operators (often called addition and multiplication) such that all operations are similar to ordinary addition and multiplication for real numbers. (For the exact definition, see any channel coding textbook.) Example 1: GF(5) = {0, 1, 2, 3, 4} with a ⊕ b := (a + b) mod 5 and a ⊗ b := (a · b) mod 5 for a, b ∈ GF(5) Example 2: Binary Field F_2 = GF(2) = {0, 1} with a ⊕ b := (a + b) mod 2 and a ⊗ b := (a · b) mod 2 for a, b ∈ F_2 Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.19

Codes Definition (Hamming distance) The Hamming distance between two vectors a and b (of the same length) is defined as the number of positions in which they differ, and it is denoted by d_H(a, b). Example a = [0111], b = [1100], d_H(a, b) = 3 Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.20

Codes Definition (Linear Binary Block Code) A binary linear (n, k, d_min) block code C is a subset of F_2^n with 2^k vectors that has the following properties: Minimum Distance: the minimum Hamming distance between pairs of distinct vectors in C is d_min, i.e., d_min := min_{a,b ∈ C, a ≠ b} d_H(a, b) Linearity: any sum of two vectors in C is again a vector in C, i.e., a ⊕ b ∈ C for all a, b ∈ C The ratio of k and n is called the code rate R = k/n. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.21

Codes Codeword The elements x of a code are called codewords, and they are written as x = [x_0, x_1, ..., x_{n-1}]. The elements x_i of a codeword are called code symbols. Infoword Each codeword x may be associated with a binary vector of length k. These vectors u are called infowords (information words), and they are written as u = [u_0, u_1, ..., u_{k-1}]. The elements u_i of an infoword are called info symbols (information symbols). Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.22

Codes Example This code is a linear binary block code. C = { [000000], [100111], [010001], [110110], [001110], [101001], [011111], [111000] } Code parameters: codeword length n = 6 info word length k = 3 minimum distance d min = 2 Code parameters in short notation (n, k, d min ) : (6, 3, 2) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.23

Encoding Encoder An encoder is a one-to-one map from infowords onto codewords. The encoder inverse is the inverse map from codewords to infowords. encoder: u → x = enc(u); encoder inverse: x → u = enc⁻¹(x) Linear Binary Encoder A linear binary encoder is an encoder for a linear binary code such that for all infowords u_1 and u_2: enc(u_1 ⊕ u_2) = enc(u_1) ⊕ enc(u_2). Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.24

Encoding Remark 1 The device implementing the encoding may also be called an encoder. Remark 2 An encoder is a linear map over F_2. A code is a linear vector subspace over F_2. (Compare to ordinary linear algebra over real numbers.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.25

Encoding Example This encoder for the (6, 3, 3) code is linear.
u → x = enc(u)
000 → 000000
001 → 011011
010 → 001110
011 → 010101
100 → 100111
101 → 111100
110 → 101001
111 → 110010
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.26

Encoding Systematic Encoder An encoder is called a systematic encoder if all info symbols are code symbols. Example Consider a (6, 3, 3) code with infowords and codewords denoted by u = [u_0, u_1, u_2] and x = [x_0, x_1, x_2, x_3, x_4, x_5]. The encoder [u_0, u_1, u_2] → [u_0, u_1, u_2, x_3, x_4, x_5], with systematic part [u_0, u_1, u_2] and parity part [x_3, x_4, x_5], is a systematic encoder. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.27

Distances and Weights Definition (Hamming distance) The Hamming distance d_H(a, b) between two vectors a and b (of the same length) is defined as the number of positions in which they differ. Definition (Hamming weight) The Hamming weight w_H(a) of a vector a is defined as the number of non-zero positions. Example a = [00111], b = [00011]; w_H(a) = 3, w_H(b) = 2, d_H(a, b) = 1. Notice: d_H(a, b) = w_H(a ⊕ b) (in F_2, addition and subtraction coincide). Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.28
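As a quick illustration (not part of the original slides), a few lines of Python verify the example above and the identity d_H(a, b) = w_H(a ⊕ b):

def hamming_weight(a):
    """Number of non-zero positions of a binary vector."""
    return sum(1 for ai in a if ai != 0)

def hamming_distance(a, b):
    """Number of positions in which two vectors of equal length differ."""
    return sum(1 for ai, bi in zip(a, b) if ai != bi)

a = [0, 0, 1, 1, 1]
b = [0, 0, 0, 1, 1]
a_xor_b = [ai ^ bi for ai, bi in zip(a, b)]
print(hamming_weight(a), hamming_weight(b), hamming_distance(a, b))  # 3 2 1
print(hamming_distance(a, b) == hamming_weight(a_xor_b))             # True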

Distances and Weights Consider a linear binary code C. The set of Hamming weights of the codewords in C is denoted by w_H(C) = {w_H(b) : b ∈ C}. The set of Hamming distances between a codeword a and the other codewords in C is denoted by d_H(a, C) = {d_H(a, b) : b ∈ C}. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.29

Distances and Weights Theorem For a linear binary code C, we have w_H(C) = d_H(a, C) for all a ∈ C. Proof (a) d_H(a, b) = w_H(a ⊕ b); (b) a ⊕ C = C for all a ∈ C. Thus for all a ∈ C: d_H(a, C) = {d_H(a, b) : b ∈ C} = {w_H(a ⊕ b) : b ∈ C} = {w_H(b) : b ∈ C} = w_H(C). Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.30

Distances and Weights The Hamming distances between codewords are closely related to the error correction/detection capabilities of the code. The set of Hamming distances is identical to the set of Hamming weights. Idea A code should not only be described by the parameters (n, k, d min ), but also by the distribution of the codeword weights. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.31

Distances and Weights Definition (Weight Distribution) Consider a linear code of length n. The weight distribution of this code is the vector A = [A_0, A_1, ..., A_n] with A_w denoting the number of codewords with Hamming weight w, w = 0, 1, ..., n. The weight enumerating function (WEF) of this code is the polynomial A(H) = A_0 + A_1·H + A_2·H² + ... + A_n·H^n, where A_w are the elements of the weight distribution, and H is a dummy variable. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.32

Distances and Weights Example Consider the (6, 3, 3) code C = { [000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100] } Weight distribution: A = [1, 0, 0, 4, 3, 0, 0] Weight enumerating function: A(H) = 1 + 4H³ + 3H⁴ Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.33
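A short Python check (illustrative, not from the slides) reproduces the weight distribution and the minimum distance of this example code:

from collections import Counter

code = ["000000", "100111", "010101", "110010",
        "001110", "101001", "011011", "111100"]

weights = Counter(cw.count("1") for cw in code)
n = len(code[0])
A = [weights.get(w, 0) for w in range(n + 1)]
d_min = min(w for w in range(1, n + 1) if A[w] > 0)
print(A)      # [1, 0, 0, 4, 3, 0, 0]
print(d_min)  # 3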

Generator Matrix and Parity-Check Matrix Consider a binary linear (n, k, d_min) code C. This code C is a k-dimensional vector subspace of the n-dimensional vector space F_2^n (due to the linearity). Every codeword x can be written as a linear combination of k basis vectors g_0, g_1, ..., g_{k-1} ∈ C: x = u_0·g_0 ⊕ u_1·g_1 ⊕ ... ⊕ u_{k-1}·g_{k-1} = [u_0, u_1, ..., u_{k-1}]·G = uG, where the rows of the matrix G are the basis vectors g_0, g_1, ..., g_{k-1}. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.34

Generator Matrix and Parity-Check Matrix Definition (Generator Matrix) Consider a binary linear (n, k, d_min) code C. A matrix G ∈ F_2^{k×n} is called a generator matrix of C if the set of generated words is equal to the code, i.e., if C = {x = uG : u ∈ F_2^k}. Remarks The rows of G are codewords. The rank of G is equal to k. The generator matrix defines an encoder: x = enc(u) := uG. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.35

Generator Matrix and Parity-Check Matrix Example Consider the (6, 3, 3) code C = { [000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100] } A generator matrix of this code is
G = [ 1 0 0 1 1 1 ]
    [ 0 1 0 1 0 1 ]
    [ 0 0 1 1 1 0 ]
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.36
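The following Python sketch (illustrative names, not from the slides) generates the complete code from this generator matrix by computing x = uG over F_2 for all 2^k infowords:

import itertools

G = [[1, 0, 0, 1, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def encode(u, G):
    """Compute x = uG over F_2: XOR the rows of G selected by the info bits."""
    n = len(G[0])
    x = [0] * n
    for u_i, row in zip(u, G):
        if u_i:
            x = [xi ^ ri for xi, ri in zip(x, row)]
    return x

code = sorted("".join(map(str, encode(u, G)))
              for u in itertools.product((0, 1), repeat=len(G)))
print(code)  # the eight codewords of the (6, 3, 3) example code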

Generator Matrix and Parity-Check Matrix Definition (Parity-Check Matrix) Consider a binary linear (n, k, d_min) code C. A matrix H ∈ F_2^{(n−k)×n} is called a parity-check matrix of C if C = {x ∈ F_2^n : xHᵀ = 0}. Remarks The rows of H are orthogonal to the codewords. The rank of H is equal to (n − k). (More general definition: H ∈ F_2^{m×n} with m ≥ n − k and rank H = n − k.) The parity-check matrix defines a code: x ∈ C ⇔ xHᵀ = 0 Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.37

Generator Matrix and Parity-Check Matrix Example Consider the (6, 3, 3) code C = { [000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100] } A parity-check matrix of this code is
H = [ 1 1 1 1 0 0 ]
    [ 1 0 1 0 1 0 ]
    [ 1 1 0 0 0 1 ]
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.38

Generator Matrix and Parity-Check Matrix Interpretation of Parity-Check Matrix The equation xHᵀ = 0 represents a system of parity-check equations. Example With the parity-check matrix H from the previous slide, xHᵀ = 0 for x = [x_0, x_1, x_2, x_3, x_4, x_5] can equivalently be written as
x_0 ⊕ x_1 ⊕ x_2 ⊕ x_3 = 0
x_0 ⊕ x_2 ⊕ x_4 = 0
x_0 ⊕ x_1 ⊕ x_5 = 0
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.39
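A minimal Python sketch (not from the slides) evaluates these parity-check equations, i.e. the condition xHᵀ = 0, to test whether a word belongs to the code:

H = [[1, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

def syndrome(x, H):
    """Compute xH^T over F_2; the all-zero syndrome means x is a codeword."""
    return [sum(xi & hi for xi, hi in zip(x, row)) % 2 for row in H]

print(syndrome([1, 0, 0, 1, 1, 1], H))  # [0, 0, 0]  -> codeword
print(syndrome([1, 0, 0, 1, 1, 0], H))  # [0, 0, 1]  -> not a codeword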

Generator Matrix and Parity-Check Matrix Definition (Systematic Generator Matrix) Consider a binary linear (n, k, d_min) code C. A systematic generator matrix G_syst of C represents a systematic encoder and has the structure G_syst = [I_k P], where I_k ∈ F_2^{k×k} is the identity matrix and P ∈ F_2^{k×(n−k)}. Theorem Consider a binary linear (n, k, d_min) code C. If G = [I_k P] is a generator matrix of C, then H = [Pᵀ I_{n−k}] is a parity-check matrix of C, and vice versa. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.40
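To illustrate the theorem, here is a small Python sketch (illustrative, not from the slides) that splits a systematic generator matrix G = [I_k P] and assembles the corresponding parity-check matrix H = [Pᵀ I_{n−k}]:

def parity_check_from_systematic(G):
    """Given G = [I_k | P] over F_2, return H = [P^T | I_{n-k}]."""
    k = len(G)
    n = len(G[0])
    P = [row[k:] for row in G]                                 # k x (n-k)
    Pt = [[P[i][j] for i in range(k)] for j in range(n - k)]   # (n-k) x k
    I = [[1 if i == j else 0 for j in range(n - k)] for i in range(n - k)]
    return [pt_row + i_row for pt_row, i_row in zip(Pt, I)]

G = [[1, 0, 0, 1, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]
for row in parity_check_from_systematic(G):
    print(row)   # [1,1,1,1,0,0], [1,0,1,0,1,0], [1,1,0,0,0,1]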

Generator Matrix and Parity-Check Matrix Example Consider the (6, 3, 3) code C = { [000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100] } A generator matrix and a parity-check matrix of this code are given by
G = [ 1 0 0 1 1 1 ]        H = [ 1 1 1 1 0 0 ]
    [ 0 1 0 1 0 1 ]            [ 1 0 1 0 1 0 ]
    [ 0 0 1 1 1 0 ]            [ 1 1 0 0 0 1 ]
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.41

Generator Matrix and Parity-Check Matrix Definition (Dual Code) Consider a binary linear (n, k) code C with generator matrix G ∈ F_2^{k×n} and parity-check matrix H ∈ F_2^{(n−k)×n}. The binary linear (n, n−k) code C⊥ with generator matrix G⊥ = H and parity-check matrix H⊥ = G is called the dual code of C. Remark The codewords of the original code and those of the dual code are orthogonal: a·bᵀ = 0 for all a ∈ C and b ∈ C⊥. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.42

Examples of Codes Binary Repetition Codes Linear binary (n, 1, n) codes: C = {x ∈ F_2^n : x_0 = x_1 = ... = x_{n−1}} Binary Single Parity-Check Codes Linear binary (n, n−1, 2) codes: C = {x ∈ F_2^n : x_0 ⊕ x_1 ⊕ ... ⊕ x_{n−1} = 0} Remark Repetition codes and single parity-check codes are dual codes. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.43

Examples of Codes Binary Hamming Codes Linear binary codes with d_min = 3 and maximal rate. (Dual codes of the binary Simplex codes.) Defined by the parity-check matrix H and an integer r ∈ N: the columns of H are all non-zero binary vectors of length r. Resulting code parameters: codeword length n = 2^r − 1, infoword length k = 2^r − 1 − r, minimum distance d_min = 3; thus: (2^r − 1, 2^r − 1 − r, 3) code. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.44

Examples of Codes Example: (7, 4, 3) Hamming code Follows from r = 3; parity-check matrix:
H = [ 0 0 0 1 1 1 1 ]
    [ 0 1 1 0 0 1 1 ]
    [ 1 0 1 0 1 0 1 ]
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.45
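The construction generalizes directly: a small Python sketch (illustrative, not from the slides) builds the parity-check matrix of the (2^r − 1, 2^r − 1 − r, 3) Hamming code by listing all non-zero r-bit columns.

def hamming_parity_check(r):
    """Parity-check matrix of the (2^r - 1, 2^r - 1 - r, 3) Hamming code:
    its columns are all non-zero binary vectors of length r."""
    n = 2 ** r - 1
    columns = [[(c >> (r - 1 - i)) & 1 for i in range(r)] for c in range(1, n + 1)]
    # transpose: r rows, n columns
    return [[col[i] for col in columns] for i in range(r)]

for row in hamming_parity_check(3):
    print(row)
# [0, 0, 0, 1, 1, 1, 1]
# [0, 1, 1, 0, 0, 1, 1]
# [1, 0, 1, 0, 1, 0, 1]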

Examples of Codes Binary Simplex Codes Linear binary codes with all non-zero codewords having the same weight. (Dual codes of the binary Hamming codes.) Defined by the generator matrix G and an integer r ∈ N: the columns of G are all non-zero binary vectors of length r. Resulting code parameters: codeword length n = 2^r − 1, infoword length k = r, minimum distance d_min = 2^{r−1}; thus: (2^r − 1, r, 2^{r−1}) code. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.46

Examples of Codes Example: (7, 3, 4) Simplex code Follows from r = 3; generator matrix:
G = [ 0 0 0 1 1 1 1 ]
    [ 0 1 1 0 0 1 1 ]
    [ 1 0 1 0 1 0 1 ]
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.47

Examples of Codes Repetition codes Single parity-check codes Hamming codes Simplex codes Golay codes Reed-Muller codes BCH codes Reed-Solomon codes Low-density parity-check codes Concatenated codes (turbo codes) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.48

Summary Linear Binary (n, k, d min ) Code Linear (Systematic) Encoder Weight distribution (Systematic) Generator matrix Parity-check matrix Dual code Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.49

Communication Channels Binary Symmetric Channel Binary Erasure Channel Binary Symmetric Erasure Channel Binary-Input AWGN Channel Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.50

Binary Symmetric Channel (BSC) Channel input symbols X ∈ {0, 1} Channel output symbols Y ∈ {0, 1} Transition probabilities: p_{Y|X}(y|x) = 1 − ɛ for y = x, and ɛ for y ≠ x Channel parameter: crossover probability ɛ Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.51

Binary Erasure Channel (BEC) Channel input symbols X ∈ {0, 1} Channel output symbols Y ∈ {0, Δ, 1} (Δ denotes an erasure) Transition probabilities: p_{Y|X}(y|x) = 1 − δ for y = x, δ for y = Δ, and 0 for y ≠ x with x, y ∈ {0, 1} Channel parameter: erasure probability δ Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.52

Binary Symmetric Erasure Channel (BSEC) Channel input symbols X ∈ {0, 1} Channel output symbols Y ∈ {0, Δ, 1} (Δ denotes an erasure) Transition probabilities: p_{Y|X}(y|x) = 1 − ɛ − δ for y = x, δ for y = Δ, and ɛ for y ≠ x with x, y ∈ {0, 1} Channel parameters: erasure probability δ, crossover probability ɛ Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.53

Binary-Input AWGN Channel (BI-AWGNC) X → BPSK Map (0 → +√E_s, 1 → −√E_s) → X′, Y = X′ + N with N ~ N(0, N_0/2) Code symbols X ∈ {0, 1} Modulation symbols X′ ∈ {−√E_s, +√E_s} with symbol energy E_s White Gaussian noise (WGN) N ∈ R with noise variance σ_N² = N_0/2 and pdf p_N(n) Channel output symbols Y ∈ R Signal-to-noise ratio (SNR) per code symbol: E_s/N_0 Transition probabilities: p_{Y|X}(y|x) = p_{Y|X′}(y|x′) = p_N(y − x′) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.54

Binary-Input AWGN Channel (BI-AWGNC) Equivalent normalized representation X → BPSK Map (0 → +1, 1 → −1) → X′, Y = X′ + N with N ~ N(0, N_0/(2E_s)) Code symbols X ∈ {0, 1} Modulation symbols X′ ∈ {−1, +1} White Gaussian noise (WGN) N ∈ R with noise variance σ_N² = N_0/(2E_s) Channel output symbols Y ∈ R Signal-to-noise ratio (SNR) per code symbol: E_s/N_0 Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.55

Binary-Input AWGN Channel (BI-AWGNC) Something about energies and SNRs Assume an (n, k, d_min) code with code rate R = k/n. Energy per code symbol: E_s Energy per info symbol: E_b = (1/R)·E_s SNR per code symbol: E_s/N_0 SNR per info symbol: E_b/N_0 = (1/R)·E_s/N_0 Logarithmic scale (often used in error-rate plots): [E_s/N_0]_dB = 10·log10(E_s/N_0), [E_b/N_0]_dB = 10·log10(E_b/N_0) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.56

Binary-Input AWGN Channel (BI-AWGNC) Error Probability Assume the detection rule x̂′ = +1 for y > 0 and x̂′ = −1 for y < 0, with x̂ = 0 for x̂′ = +1 and x̂ = 1 for x̂′ = −1. If y = 0, x̂′ is chosen at random from {−1, +1}. The error probability can be computed as Pr(X̂ = 0 | X = 1) = Pr(X̂′ = +1 | X′ = −1) = Pr(N ≥ 1) = ∫_1^∞ p_N(n) dn = Q(1/σ_N) = Q(√(2E_s/N_0)) with Q(a) := (1/√(2π)) ∫_a^∞ exp(−t²/2) dt Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.57

Binary-Input AWGN Channel (BI-AWGNC) Conversion of a BI-AWGNC into a BSC Assume the previous detection rule is applied to a BI-AWGNC with SNR E_s/N_0. The channel between X and X̂ is then a BSC with crossover probability ɛ = Q(√(2E_s/N_0)) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.58
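As a numerical illustration (not part of the slides), the hard-decision crossover probability ɛ = Q(√(2E_s/N_0)) can be evaluated with a few lines of Python:

import math

def Q(a):
    """Gaussian tail probability Q(a) = 0.5 * erfc(a / sqrt(2))."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def bsc_crossover(es_n0_db):
    """Crossover probability of the BSC obtained by hard-decision
    detection on a BI-AWGNC with the given E_s/N_0 in dB."""
    es_n0 = 10.0 ** (es_n0_db / 10.0)
    return Q(math.sqrt(2.0 * es_n0))

for snr_db in (0.0, 3.0, 6.0):
    print(snr_db, bsc_crossover(snr_db))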

Block Codes II: Decoding The Tasks Decoding Principles Guaranteed Performance Performance Bounds Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.59

The Tasks of Decoding Objective of error correction Given a received word, estimate the most likely (or at least a likely) transmitted infoword or codeword Objective of error detection Given a received word, detect transmission errors Problem Decoding complexity Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.60

Decoding Principles Optimum Word-wise Decoding minimization of word-error probability Maximum-a-posteriori (MAP) word-estimation Maximum-likelihood (ML) word-estimation Optimum Symbol-wise Decoding minimization of symbol-error probability Maximum-a-posteriori (MAP) symbol-estimation Maximum-likelihood (ML) symbol-estimation Remarks: Word-wise estimation is also called sequence estimation. Symbol-wise estimation is also called symbol-by-symbol estimation Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.61

Decoding Principles Maximum A-Posteriori (MAP) Word Estimation Estimation of the MAP infoword: û_MAP = argmax_{u ∈ F_2^k} p_{U|Y}(u|y) Equivalent two-step operation: estimation of the MAP codeword and subsequent determination of the MAP infoword: x̂_MAP = argmax_{x ∈ C} p_{X|Y}(x|y), û_MAP = enc⁻¹(x̂_MAP) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.62

Decoding Principles Maximum-Likelihood (ML) Word Estimation Estimation of the ML infoword: û_ML = argmax_{u ∈ F_2^k} p_{Y|U}(y|u) Equivalent two-step operation: estimation of the ML codeword and subsequent determination of the ML infoword: x̂_ML = argmax_{x ∈ C} p_{Y|X}(y|x), û_ML = enc⁻¹(x̂_ML) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.63

Decoding Principles Remarks ML word estimation and MAP word estimation are equivalent if the infowords (codewords) are uniformly distributed, i.e., if p_U(u) = 2^{−k}. The rules for symbol estimation are similar to the rules for word estimation. (For details, see textbooks.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.64

Decoding Principles ML Decoding for the BSC A binary linear (n, k, d_min) code C is used for transmission over a binary symmetric channel (BSC) with crossover probability ɛ < 1/2, i.e., p_{Y|X}(y|x) = 1 − ɛ for y = x, and ɛ for y ≠ x. Likelihood of a codeword x: p_{Y|X}(y|x) = ∏_{i=0}^{n−1} p_{Y|X}(y_i|x_i) = ɛ^{d_H(y,x)} · (1 − ɛ)^{n − d_H(y,x)} Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.65

Decoding Principles Log-likelihood of a codeword x: log p_{Y|X}(y|x) = log [ɛ^{d_H(y,x)} (1 − ɛ)^{n − d_H(y,x)}] = d_H(y, x)·log(ɛ/(1 − ɛ)) + n·log(1 − ɛ), where log(ɛ/(1 − ɛ)) < 0 for ɛ < 1/2. Maximum-likelihood word estimation: p_{Y|X}(y|x) → max ⇔ log p_{Y|X}(y|x) → max ⇔ d_H(y, x) → min Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.66

Decoding Principles ML Word-Estimation for the BSC The Hamming distance d_H(y, x) is a sufficient statistic for the received word y. ML estimation (in two steps): x̂ = argmin_{x ∈ C} d_H(y, x), û = enc⁻¹(x̂) Decoding for the BSC is also called hard-decision decoding. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.67
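For small codes this rule can be evaluated by exhaustive search. A minimal Python sketch (illustrative, not from the slides) decodes a hard-decision received word by picking the codeword at minimum Hamming distance:

def hamming_distance(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def ml_decode_bsc(y, code):
    """Brute-force ML decoding over the BSC: return the codeword
    closest to y in Hamming distance."""
    return min(code, key=lambda x: hamming_distance(y, x))

code = ["000000", "100111", "010101", "110010",
        "001110", "101001", "011011", "111100"]
y = "100101"                       # received word with one bit error
print(ml_decode_bsc(y, code))      # '100111'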

Decoding Principles ML Decoding for the BI-AWGNC Remember: the code symbols x_i ∈ F_2 are mapped to code symbols x′_i ∈ {−1, +1} according to the BPSK mapping x′_i = BPSK(x_i) = +1 for x_i = 0, and −1 for x_i = 1. For convenience, we define the BPSK-mapped codewords x′ = BPSK(x) ∈ {−1, +1}^n and the BPSK-mapped code C′ = BPSK(C). Notice the one-to-one relation between u, x, and x′. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.68

Decoding Principles ML Word Estimation for the BI-AWGNC The squared Euclidean distance d_E²(y, x′) = ||y − x′||² is a sufficient statistic for the received word y. ML estimation (in three steps): x̂′ = argmin_{x′ ∈ C′} d_E²(y, x′), x̂ = BPSK⁻¹(x̂′), û = enc⁻¹(x̂). Decoding for the BI-AWGNC is also called soft-decision decoding. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.69
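Analogously, a soft-decision ML decoder searches over the BPSK-mapped code for the codeword of minimum squared Euclidean distance. A minimal Python sketch (illustrative, not from the slides), using the (6, 3, 3) example code:

def bpsk(x):
    """Map code symbols 0/1 to modulation symbols +1/-1."""
    return [1.0 if xi == 0 else -1.0 for xi in x]

def ml_decode_awgn(y, code):
    """Brute-force soft-decision ML decoding: minimize the squared
    Euclidean distance between y and the BPSK-mapped codewords."""
    def dist2(x):
        return sum((yi - si) ** 2 for yi, si in zip(y, bpsk(x)))
    return min(code, key=dist2)

code = [[0,0,0,0,0,0], [1,0,0,1,1,1], [0,1,0,1,0,1], [1,1,0,0,1,0],
        [0,0,1,1,1,0], [1,0,1,0,0,1], [0,1,1,0,1,1], [1,1,1,1,0,0]]
y = [0.8, 1.1, -0.2, 0.9, 1.3, 0.7]   # noisy observation of the all-zero word
print(ml_decode_awgn(y, code))         # [0, 0, 0, 0, 0, 0]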

Guaranteed Performance System Model Binary linear (n, k, d_min) code C Binary symmetric channel (BSC) with crossover probability ɛ < 1/2 ML decoder, i.e., a decoder that applies the rule x̂ = argmin_{x ∈ C} d_H(y, x) Questions How many errors t can be guaranteed to be corrected? How many errors r can be guaranteed to be detected? Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.70

Guaranteed Performance Most Likely Scenario Consider the transmitted codeword x and a codeword a that has the minimum distance from x, i.e., d_H(x, a) = d_min. (Sketch: decoding spheres around x and a for d_min = 3 and d_min = 4.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.71

Guaranteed Performance Number of errors that can be corrected for sure: t = ⌊(d_min − 1)/2⌋ Number of errors that can be detected for sure: r = d_min − 1 (Sketch: decoding spheres around x and a for d_min = 3 and d_min = 4.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.72

Decoding Principles Bounded Minimum Distance (BMD) Decoding Decoding rule: if there is a codeword a ∈ C such that d_H(y, a) ≤ t = ⌊(d_min − 1)/2⌋, output the estimated codeword x̂ = a; otherwise, indicate a decoding failure. Remark Received words are decoded only if they are within spheres around codewords with radius t (Hamming distance), called decoding spheres. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.73

Performance Bounds Coding Scheme Binary linear (n, k, d_min) code C with WEF A(H) ML decoder, i.e., a decoder that applies the rule x̂ = argmax_{x ∈ C} p_{Y|X}(y|x) Question Bounds on the word-error probability P_w = Pr(X̂ ≠ X) for a given channel model, based on A(H)? Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.74

Performance Bounds Approach The code is linear, and thus P_w = Pr(X̂ ≠ X) = Pr(X̂ ≠ X | X = 0) = Pr(X̂ ≠ 0 | X = 0) = Pr(X̂ ∈ C\{0} | X = 0) Lower bound: for any codeword a ∈ C with w_H(a) = d_min, Pr(X̂ ∈ C\{0} | X = 0) ≥ Pr(X̂ = a | X = 0) Upper bound: by the union-bound argument, Pr(X̂ ∈ C\{0} | X = 0) ≤ Σ_{a ∈ C\{0}} Pr(X̂ = a | X = 0) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.75

Performance Bounds for the BSC Pairwise word-error probability (Bhattacharyya bound for two codewords): Pr(X̂ = a | X = b) ≤ (√(4ɛ(1 − ɛ)))^{d_H(a,b)} Lower bound: P_w ≥ (√(4ɛ(1 − ɛ)))^{d_min} Upper bound: P_w ≤ Σ_{d=d_min}^{n} A_d · (√(4ɛ(1 − ɛ)))^d Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.76
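These bounds are easy to evaluate numerically from the weight distribution. A short Python sketch (illustrative, not from the slides) computes the two BSC bounds for the (6, 3, 3) example code:

def bsc_bounds(A, eps):
    """Lower and upper (union/Bhattacharyya) bounds on the word-error
    probability over a BSC, from the weight distribution A = [A_0, ..., A_n]."""
    gamma = (4.0 * eps * (1.0 - eps)) ** 0.5      # Bhattacharyya parameter
    d_min = min(d for d in range(1, len(A)) if A[d] > 0)
    lower = gamma ** d_min
    upper = sum(A[d] * gamma ** d for d in range(d_min, len(A)))
    return lower, upper

A = [1, 0, 0, 4, 3, 0, 0]          # weight distribution of the (6, 3, 3) code
print(bsc_bounds(A, 0.01))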

Performance Bounds for the BI-AWGNC Pairwise word-error probability: Pr(X̂ = a | X = b) = Q(√(2·d_H(a, b)·R·E_b/N_0)) Lower bound: P_w ≥ Q(√(2·d_min·R·E_b/N_0)) Upper bound: P_w ≤ Σ_{d=d_min}^{n} A_d · Q(√(2·d·R·E_b/N_0)) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.77

Performance Bounds For improving channel quality, the gap between the lower bound and the upper bound becomes very small. (For A_{d_min} = 1, it vanishes.) Improving channel quality means for the BSC: ɛ → 0; for the BI-AWGNC: E_b/N_0 → ∞ Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.78

Asymptotic Coding-Gain for the BI-AWGNC Error probability for the uncoded system (n = k = 1, d_min = 1, R = 1): P_w = Q(√(2E_b/N_0)) Error probability for an (n, k, d_min) code of rate R = k/n in the case of high SNR (corresponds to the lower bound): P_w ≈ Q(√(2·d_min·R·E_b/N_0)) The asymptotic coding gain G_asy is the gain in SNR (reduction of SNR) such that the same error probability is achieved as for the uncoded system: G_asy = 10·log10(d_min·R) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.79

Summary Guaranteed Performance Number of errors that can be guaranteed to be corrected Number of errors that can be guaranteed to be detected Maximum-likelihood decoding BSC ("hard-decision decoding"): Hamming distance BI-AWGNC ("soft-decision decoding"): squared Euclidean distance Performance bounds based on the weight enumerating function Asymptotic coding gain for the BI-AWGNC Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.80

Convolutional Codes General Properties Encoding Convolutional Encoder State Transition Diagram Trellis Diagram Decoding ML Sequence Decoding Viterbi Algorithm Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.81

Convolutional Codes General Properties Continuous encoding of info symbols to code symbols Generated code symbols depend on current info symbol and previously encoded info symbols Convolutional encoder is a finite state machine (with memory) Certain decoders allow continuous decoding of received sequence Convolutional codes enable continuous transmission, whereas block codes allow only blocked transmission. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.82

Convolutional Encoder Defined by generator sequences or generator polynomials: g^(0) = [101] corresponding to g^(0)(D) = 1 + D², g^(1) = [111] corresponding to g^(1)(D) = 1 + D + D², or by the shorthand octal notation (5, 7)_8 Shift register representation of the encoder (two delay elements): x_t^(0) = u_t ⊕ u_{t−2}, x_t^(1) = u_t ⊕ u_{t−1} ⊕ u_{t−2} Memory length m = number of delay elements Code rate R = (# info symbols)/(# code symbols) = 1/2 Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.83
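A minimal Python sketch (illustrative, not from the slides) implements this rate-1/2 (5, 7)_8 encoder; the optional termination appends m = 2 zero tail bits so that the encoder ends in the all-zero state, as in the terminated example later.

def conv_encode_57(u, terminate=True):
    """Rate-1/2 (5,7)_8 convolutional encoder:
    x0 = u_t ^ u_{t-2}, x1 = u_t ^ u_{t-1} ^ u_{t-2}."""
    bits = list(u) + ([0, 0] if terminate else [])   # m = 2 tail bits
    s = [0, 0]                                       # state [u_{t-1}, u_{t-2}]
    x = []
    for b in bits:
        x.append(b ^ s[1])            # generator 1 + D^2  (octal 5)
        x.append(b ^ s[0] ^ s[1])     # generator 1 + D + D^2  (octal 7)
        s = [b, s[0]]
    return x

print(conv_encode_57([1, 0, 1]))   # 10 code bits for K = 3 info bits plus termination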

Convolutional Encoder Objects of an encoding step at time index t ∈ {0, 1, 2, ...}: info symbol u_t ∈ F_2; encoder state s_t = [s_t^(0), s_t^(1)] ∈ F_2² (in general F_2^m); code-symbol block x_t = [x_t^(0), x_t^(1)] ∈ F_2² (in general F_2^{1/R}) Generalizations are straightforward (see textbooks). Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.84

State Transition Diagram All possible state transitions of the convolutional encoder may be depicted in a state transition diagram. Notation: states s_t^(0) s_t^(1), branch labels u_t / x_t^(0) x_t^(1). For the (5, 7)_8 encoder:
from 00: 0/00 → 00, 1/11 → 10
from 01: 0/11 → 00, 1/00 → 10
from 10: 0/01 → 01, 1/10 → 11
from 11: 0/10 → 01, 1/01 → 11
Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.85

Trellis Diagram All sequences of state transitions, and thus all codewords, can be depicted in a trellis diagram: the states 00, 10, 01, 11 are drawn as nodes in every trellis section, and each section carries the branches of the state transition diagram with their output labels (u_t = 0: solid line, u_t = 1: dashed line). Free distance d_free: minimal codeword weight of a detour from the all-zero path. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.86

Block Codes from Convolutional Codes Encoding of an info sequence of length K with a convolutional encoder of rate R to a code sequence of length N yields an (N, K, d_min) block code of rate R_BC = K/N. Encoding Strategies Trellis-truncation construction: K trellis sections, s_0 = 0, s_K arbitrary ⇒ N = (1/R)·K, d_min ≤ d_free Trellis-termination construction: K + m trellis sections, s_0 = s_{K+m} = 0 ⇒ N = (1/R)·(K + m), d_min = d_free Tail-biting construction: K trellis sections, s_0 = s_K ⇒ N = (1/R)·K, d_min ≤ d_free (better than truncation) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.87

Decoding of Convolutional Codes Goal: ML word estimation x̂ = argmax_{x ∈ C} p_{Y|X}(y|x) Possible evaluation: compute p_{Y|X}(y|x) for all x ∈ C and determine the codeword x̂ with the largest likelihood Problem: computational complexity Solution: Viterbi algorithm Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.88

Branches, Paths, and Metrics Branch in the trellis: one state transition s_[t,t+1] = [s_t, s_{t+1}] Block of code symbols associated to a branch, and block of observations corresponding to these code symbols: x(s_[t,t+1]) = x_t = [x_t^(0), x_t^(1)], y_t = [y_t^(0), y_t^(1)] Path through the trellis: sequence of state transitions s_[t1,t2+1] = [s_{t1}, s_{t1+1}, ..., s_{t2}, s_{t2+1}] Partial code word associated to a path: x(s_[t1,t2+1]) = [x_{t1}, x_{t1+1}, ..., x_{t2−1}, x_{t2}] Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.89

Branches, Paths, and Metrics Branch metric: distance measure between a code-symbol block and the corresponding observation block, e.g. the Hamming distance for the BSC: µ(s_[t,t+1]) = d_H(y_t, x_t) Path metric: distance measure between a code-symbol sequence and the sequence of observations: µ(s_[t1,t2+1]) = d_H([y_{t1}, ..., y_{t2}], [x_{t1}, ..., x_{t2}]) The metric should be an additive metric to allow for a recursive computation of the path metric: µ(s_[0,t+1]) (path metric) = µ(s_[0,t]) (path metric) + µ(s_[t,t+1]) (branch metric) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.90

Decoding of Convolutional Codes Consider a convolutional code with T trellis sections. ML decoding rule Search for the path with the smallest metric and determine the associated codeword: ŝ_[0,T] = argmin_{s_[0,T] in trellis} µ(s_[0,T]), x̂ = x(ŝ_[0,T]) Viterbi algorithm (VA): trellis-based search algorithm; recursive evaluation of the decoding rule in the trellis; most efficient way to implement ML decoding Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.91

Viterbi Algorithm 1. Compute branch metrics. (May be done when necessary.) 2. Step through the trellis sections, t = 0, 1, ..., T − 1. For each trellis state, (a) add previous path metric and current branch metric; (b) compare the resulting new path metrics; (c) select the survivor (path with the smallest metric). 3. Trace back to find the ML path. The resulting ML path corresponds to the ML codeword and thus to the ML infoword. Remark: The VA may be applied in any situation where (i) the search space can be represented in a trellis and (ii) an additive metric can be defined. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.92
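To make the add-compare-select recursion concrete, here is a compact Python sketch (illustrative, not from the slides) of hard-decision Viterbi decoding for the terminated (5, 7)_8 code, using the Hamming branch metric:

# State = (u_{t-1}, u_{t-2}); outputs follow x0 = u ^ u_{t-2}, x1 = u ^ u_{t-1} ^ u_{t-2}.
def branch(state, u):
    s0, s1 = state
    out = (u ^ s1, u ^ s0 ^ s1)
    return (u, s0), out                       # next state, code-symbol block

def viterbi_57(y, K):
    """Hard-decision Viterbi decoding of the terminated (5,7)_8 code.
    y: received bits (length 2*(K+2)), K: number of info bits."""
    T = K + 2                                 # trellis sections incl. termination
    metric = {(0, 0): 0}                      # path metrics, start in state 00
    paths = {(0, 0): []}                      # surviving info sequences
    for t in range(T):
        y_t = y[2 * t: 2 * t + 2]
        inputs = (0, 1) if t < K else (0,)    # termination: only zeros in the tail
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for u in inputs:
                nxt, out = branch(state, u)
                bm = (out[0] != y_t[0]) + (out[1] != y_t[1])  # Hamming branch metric
                if nxt not in new_metric or m + bm < new_metric[nxt]:  # compare, select
                    new_metric[nxt] = m + bm
                    new_paths[nxt] = paths[state] + [u]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:K]                  # trace back: survivor ending in state 00

y = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]            # encoding of u = [1, 0, 1] (see encoder sketch)
print(viterbi_57(y, 3))                        # [1, 0, 1]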

Viterbi Algorithm: Example Consider a terminated (5, 7)_8 convolutional code with infoword length K = 3 and thus T = 5 trellis sections. (Full trellis diagram with all branches of the 5 sections.) Remove the parts contradicting s_0 = s_{K+m} = 0. Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.93

Viterbi Algorithm: Example Consider a terminated (5, 7)_8 convolutional code with infoword length K = 3 and thus T = 5 trellis sections. (Trellis diagram after removing the branches that contradict s_0 = s_{K+m} = 0.) Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.94

Summary Convolutional encoder State transition diagram Trellis diagram Path metrics and branch metrics Viterbi algorithm Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.95