SIPCom8-1: Information Theory and Coding
Linear Binary Codes
Ingmar Land (2005 Spring)



Overview
- Basic Concepts of Channel Coding
- Block Codes I: Codes and Encoding
- Communication Channels
- Block Codes II: Decoding
- Convolutional Codes

Basic Concepts of Channel Coding
- System Model
- Examples

The Concept of Channel Coding
Source --u--> Channel --y--> Destination (û)
- Source generates data
- Destination accepts estimated data

The Concept of Channel Coding
Source --u--> Channel --y--> Destination (û)
- Source generates data
- Destination accepts estimated data
- Channel introduces noise and thus errors

The Concept of Channel Coding
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination
- Source generates data
- Destination accepts estimated data
- Channel introduces noise and thus errors
- Encoder adds redundancy
- Decoder exploits redundancy to detect or correct errors

The Concept of Channel Coding
Objective of channel coding: reliable transmission of digital data over noisy channels.
Tool: introduction of redundancy for error detection or error correction.

Examples for Channel Coding
- Mobile communications (GSM, UMTS, WLAN, Bluetooth); channel: mobile radio channel
- Satellite communications (pictures from Mars); channel: radio channel
- Cable modems (DSL); channel: wireline, POTS
- Compact Disc, DVD (music, pictures, data); channel: storage medium
- Memory elements (data); channel: storage medium
In all digital communication systems, channel coding is applied to protect the transmitted data against transmission errors.

System Model for Channel Coding I
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination

System Model for Channel Coding I
The Source
- Binary Symmetric Source (BSS): u ∈ F_2 := {0, 1}, p_U(u) = 1/2 for u = 0, 1
- Binary info(rmation) word of length k: u = [u_0, u_1, ..., u_{k-1}] ∈ F_2^k; for a BSS, p_U(u) = 1/2^k

System Model for Channel Coding II
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination

System Model for Channel Coding II
The Encoder
- Binary info(rmation) word of length k: u = [u_0, u_1, ..., u_{k-1}] ∈ F_2^k
- Binary code word of length n: x = [x_0, x_1, ..., x_{n-1}] ∈ F_2^n
- Linear binary code of length n: C := {set of codewords x}
- Linear binary encoder: one-to-one mapping u -> x; code rate R := k/n (R < 1)

Examples for Binary Linear Codes
Single parity-check code (k = 2, n = 3)
- Code word x = [x_0, x_1, x_2] with code constraint x_0 ⊕ x_1 ⊕ x_2 = 0
- Code C := {000, 110, 101, 011}
- Possible encoder: u = [u_0, u_1] -> x = [u_0, u_1, u_0 ⊕ u_1]
Repetition code (k = 1, n = 3)
- Code word x = [x_0, x_1, x_2] with code constraint x_0 = x_1 = x_2
- Code C := {000, 111}
- Possible encoder: u = [u_0] -> x = [u_0, u_0, u_0]
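The two example encoders are easy to state in code. Here is a minimal Python sketch (the function names are mine, not from the slides):

```python
# Minimal sketch of the two example encoders over F_2 (XOR arithmetic).

def encode_spc(u):
    """Single parity-check encoder: [u0, u1] -> [u0, u1, u0 XOR u1]."""
    u0, u1 = u
    return [u0, u1, u0 ^ u1]

def encode_repetition(u):
    """Repetition encoder: [u0] -> [u0, u0, u0]."""
    return [u[0]] * 3

# Enumerate both codes by encoding every infoword.
print([encode_spc([u0, u1]) for u0 in (0, 1) for u1 in (0, 1)])
# [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print([encode_repetition([u0]) for u0 in (0, 1)])
# [[0, 0, 0], [1, 1, 1]]
```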

System Model for Channel Coding III
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination

System Model for Channel Coding III
The Channel
Binary-Input Symmetric Memoryless Channel (BISMC)
- binary input alphabet: x ∈ F_2
- real-valued output alphabet: y ∈ R
- transition probabilities p_{Y|X}(y|x)
- symmetry (see Cover/Thomas)
- memoryless: independent transmissions
Probabilistic mapping from x to y.

Examples for BISMCs
Binary Symmetric Channel (BSC): inputs X ∈ {0, 1}, outputs Y ∈ {0, 1}; each input is received correctly with probability 1 - ɛ and flipped with probability ɛ. Channel parameter: crossover probability ɛ.
Binary Erasure Channel (BEC): inputs X ∈ {0, 1}, outputs Y ∈ {0, Δ, 1}; each input is received correctly with probability 1 - δ and erased (Y = Δ) with probability δ. Channel parameter: erasure probability δ.

Examples for BISMCs
Binary Symmetric Erasure Channel (BSEC): inputs X ∈ {0, 1}, outputs Y ∈ {0, Δ, 1}; each input is received correctly with probability 1 - ɛ - δ, flipped with probability ɛ, and erased with probability δ. Channel parameters: crossover probability ɛ, erasure probability δ.

Examples for BISMCs
Binary-Input AWGN Channel (BI-AWGNC): X --BPSK Map--> X', then Y = X' + N
- X ∈ {0, 1}, X' ∈ {-1, +1}, N ∈ R, Y ∈ R
- Gaussian distributed noise N with noise variance σ_n²:
  p_N(n) = 1/√(2π σ_n²) · exp(-n²/(2σ_n²))
- Conditional pdf: p_{Y|X'}(y|x') = p_N(y - x')

System Model for Channel Coding IV
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination

System Model for Channel Coding IV
The Decoder
- Received word of length n: y = [y_0, y_1, ..., y_{n-1}] ∈ R^n
- Decoder: error correction or error detection; estimation of the transmitted info word or code word
- Estimated info word of length k: û = [û_0, û_1, ..., û_{k-1}] ∈ F_2^k

Example for Decoding
Repetition code over the BSC
- Code: C := {000, 111}
- Transmitted code word x = [111]
- BSC with crossover probability ɛ = 0.1
- Received word y = [011]
Error detection: y ∉ C, so an error is detected.
Error correction:
- if x = [111] was transmitted, then one error occurred
- if x = [000] was transmitted, then two errors occurred
- one error is less likely than two errors, so the estimated code word is x̂ = [111] (maximum-likelihood decoding)
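Numerically, with ɛ = 0.1: p(y|x = [111]) = ɛ(1 - ɛ)² = 0.081, whereas p(y|x = [000]) = ɛ²(1 - ɛ) = 0.009, so x̂ = [111] is indeed the ML decision.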

System Model for Channel Coding V
Source --u--> Encoder --x--> Channel --y--> Decoder --û--> Destination
- Word error probability: P_w := Pr(u ≠ û) = Pr(x ≠ x̂)
- Bit error probability: P_b := (1/k) Σ_{i=0}^{k-1} Pr(u_i ≠ û_i)

Problems in Channel Coding
- Construction of good codes
- Construction of low-complexity decoders
- Existence of codes with certain parameters (Singleton bound, Hamming bound, Gilbert bound, Varshamov bound)
- Highest code rate for a given channel such that transmission is error-free (Channel Coding Theorem)

Block Codes I: Codes and Encoding
- Properties of Codes and Encoders
- Hamming Distances and Hamming Weights
- Generator Matrix and Parity-Check Matrix
- Examples of Codes

Codes
A not so accurate definition (Galois field): A Galois field GF(q) is a finite set of q elements with two operations (often called addition and multiplication) such that all operations are similar to ordinary addition and multiplication for real numbers. (For the exact definition, see any channel coding textbook.)
Example 1: GF(5) = {0, 1, 2, 3, 4} with
  a ⊕ b := (a + b) mod 5 for a, b ∈ GF(5)
  a ⊙ b := (a · b) mod 5 for a, b ∈ GF(5)
Example 2: binary field F_2 = GF(2) = {0, 1} with
  a ⊕ b := (a + b) mod 2 for a, b ∈ F_2
  a ⊙ b := (a · b) mod 2 for a, b ∈ F_2

Codes
Definition (Hamming distance): The Hamming distance between two vectors a and b (of the same length) is defined as the number of positions in which they differ; it is denoted by d_H(a, b).
Example: a = [0111], b = [1100], d_H(a, b) = 3

Codes
Definition (linear binary block code): A binary linear (n, k, d_min) block code C is a subset of F_2^n with 2^k vectors that has the following properties:
- Minimum distance: the minimum Hamming distance between pairs of distinct vectors in C is d_min, i.e., d_min := min {d_H(a, b) : a, b ∈ C, a ≠ b}
- Linearity: any sum of two vectors in C is again a vector in C, i.e., a ⊕ b ∈ C for all a, b ∈ C.
The ratio of k and n is called the code rate R = k/n.

Codes
Codeword: The elements x of a code are called codewords, and they are written as x = [x_0, x_1, ..., x_{n-1}]. The elements x_i of a codeword are called code symbols.
Infoword: Each codeword x may be associated with a binary vector of length k. These vectors u are called infowords (information words), and they are written as u = [u_0, u_1, ..., u_{k-1}]. The elements u_i of an infoword are called info symbols (information symbols).

Codes
Example: The code
C = {[000000], [100111], [010001], [110110], [001110], [101001], [011111], [111000]}
is a linear binary block code with parameters:
- codeword length n = 6
- infoword length k = 3
- minimum distance d_min = 2
Code parameters in short notation (n, k, d_min): (6, 3, 2)

Encoding
Encoder: An encoder is a one-to-one map from infowords onto codewords. The encoder inverse is the inverse map from codewords to infowords.
  encoder: u -> x = enc(u)
  encoder inverse: x -> u = enc^{-1}(x)
Linear binary encoder: A linear binary encoder is an encoder for a linear binary code such that for all infowords u_1 and u_2,
  enc(u_1 ⊕ u_2) = enc(u_1) ⊕ enc(u_2).

Encoding
Remark 1: The device implementing the encoding may also be called an encoder.
Remark 2: An encoder is a linear map over F_2. A code is a linear vector subspace over F_2. (Compare to ordinary linear algebra over the real numbers.)

Encoding
Example: The encoder u -> x = enc(u) = uG for the (6, 3, 3) code, with G the generator matrix given on the later slides, is linear.

Encoding
Systematic encoder: An encoder is called a systematic encoder if all info symbols are code symbols.
Example: Consider a (6, 3, 3) code with infowords and codewords denoted by u = [u_0, u_1, u_2] and x = [x_0, x_1, x_2, x_3, x_4, x_5]. The encoder
  [u_0, u_1, u_2] -> [u_0, u_1, u_2, x_3, x_4, x_5]
with systematic part [u_0, u_1, u_2] and parity part [x_3, x_4, x_5] is a systematic encoder.

Distances and Weights
Definition (Hamming distance): The Hamming distance d_H(a, b) between two vectors a and b (of the same length) is defined as the number of positions in which they differ.
Definition (Hamming weight): The Hamming weight w_H(a) of a vector a is defined as the number of non-zero positions.
Example: a = [00111], b = [00011]; w_H(a) = 3, w_H(b) = 2, d_H(a, b) = 1.
Notice: d_H(a, b) = w_H(a - b) = w_H(a ⊕ b) (over F_2, subtraction and addition coincide).
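Both quantities and the stated identity are straightforward to check in code; a small Python sketch (helper names are mine):

```python
def hamming_weight(a):
    """Number of non-zero positions of the vector a."""
    return sum(1 for ai in a if ai != 0)

def hamming_distance(a, b):
    """Number of positions in which a and b differ."""
    return sum(1 for ai, bi in zip(a, b) if ai != bi)

a = [0, 0, 1, 1, 1]
b = [0, 0, 0, 1, 1]
assert hamming_weight(a) == 3 and hamming_weight(b) == 2
assert hamming_distance(a, b) == 1
# d_H(a, b) = w_H(a XOR b):
assert hamming_distance(a, b) == hamming_weight([ai ^ bi for ai, bi in zip(a, b)])
```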

Distances and Weights
Consider a linear binary code C.
- The set of Hamming weights of the codewords in C is denoted by w_H(C) = {w_H(b) : b ∈ C}.
- The set of Hamming distances between a codeword a and the other codewords in C is denoted by d_H(a, C) = {d_H(a, b) : b ∈ C}.

Distances and Weights
Theorem: For a linear binary code C, we have w_H(C) = d_H(a, C) for all a ∈ C.
Proof: Use that (a) d_H(a, b) = w_H(a ⊕ b), and (b) a ⊕ C = C for all a ∈ C. Thus for all a ∈ C,
  d_H(a, C) = {d_H(a, b) : b ∈ C} = {w_H(a ⊕ b) : b ∈ C} = {w_H(b) : b ∈ C} = w_H(C).

Distances and Weights
- The Hamming distances between codewords are closely related to the error correction/detection capabilities of the code.
- The set of Hamming distances is identical to the set of Hamming weights.
Idea: A code should be described not only by the parameters (n, k, d_min) but also by the distribution of the codeword weights.

Distances and Weights
Definition (weight distribution): Consider a linear code of length n. The weight distribution of this code is the vector A = [A_0, A_1, ..., A_n] with A_w denoting the number of codewords of Hamming weight w, w = 0, 1, ..., n.
The weight enumerating function (WEF) of this code is the polynomial
  A(H) = A_0 + A_1 H + A_2 H² + ... + A_n H^n,
where the A_w are the elements of the weight distribution and H is a dummy variable.

Distances and Weights
Example: Consider the (6, 3, 3) code
C = {[000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100]}
- Weight distribution: A = [1, 0, 0, 4, 3, 0, 0]
- Weight enumerating function: A(H) = 1 + 4H³ + 3H⁴
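The weight distribution can be read off by counting ones in each codeword; a short Python check of the numbers above:

```python
# Weight distribution of the (6, 3, 3) example code.
code = ["000000", "100111", "010101", "110010",
        "001110", "101001", "011011", "111100"]

n = 6
A = [0] * (n + 1)
for x in code:
    A[x.count("1")] += 1          # one count per codeword weight
print(A)  # [1, 0, 0, 4, 3, 0, 0]  ->  A(H) = 1 + 4H^3 + 3H^4
```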

Generator Matrix and Parity-Check Matrix
Consider a binary linear (n, k, d_min) code C. This code C is a k-dimensional vector subspace of the n-dimensional vector space F_2^n (due to the linearity). Every codeword x can be written as a linear combination of k basis vectors g_0, g_1, ..., g_{k-1} ∈ C:
  x = u_0 g_0 ⊕ u_1 g_1 ⊕ ... ⊕ u_{k-1} g_{k-1} = uG,
where u = [u_0, u_1, ..., u_{k-1}] and G is the matrix with rows g_0, g_1, ..., g_{k-1}.

Generator Matrix and Parity-Check Matrix
Definition (generator matrix): Consider a binary linear (n, k, d_min) code C. A matrix G ∈ F_2^{k×n} is called a generator matrix of C if the set of generated words is equal to the code, i.e., if C = {x = uG : u ∈ F_2^k}.
Remarks:
- The rows of G are codewords.
- The rank of G is equal to k.
- The generator matrix defines an encoder: x = enc(u) := uG.

Generator Matrix and Parity-Check Matrix
Example: Consider the (6, 3, 3) code
C = {[000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100]}
A generator matrix of this code is
  G = [1 0 0 1 1 1]
      [0 1 0 1 0 1]
      [0 0 1 1 1 0]
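A quick Python check, assuming this reconstructed G, confirms that x = uG (mod 2) enumerates exactly the eight codewords listed above:

```python
import numpy as np

# Generator matrix consistent with the codeword list above.
G = np.array([[1, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])

# x = uG over F_2 for all 2^3 infowords.
for i in range(8):
    u = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1])
    x = u @ G % 2
    print(u, "->", x)
```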

Generator Matrix and Parity-Check Matrix
Definition (parity-check matrix): Consider a binary linear (n, k, d_min) code C. A matrix H ∈ F_2^{(n-k)×n} is called a parity-check matrix of C if C = {x ∈ F_2^n : xHᵀ = 0}.
Remarks:
- The rows of H are orthogonal to the codewords.
- The rank of H is equal to n - k. (More general definition: H ∈ F_2^{m×n} with m ≥ n - k and rank H = n - k.)
- The parity-check matrix defines a code: x ∈ C ⟺ xHᵀ = 0.

Generator Matrix and Parity-Check Matrix
Example: Consider the (6, 3, 3) code
C = {[000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100]}
A parity-check matrix of this code is
  H = [1 1 1 1 0 0]
      [1 0 1 0 1 0]
      [1 1 0 0 0 1]

Generator Matrix and Parity-Check Matrix
Interpretation of the parity-check matrix: the equation xHᵀ = 0 represents a system of parity-check equations.
Example: With H as above, xHᵀ = 0 for x = [x_0, x_1, x_2, x_3, x_4, x_5] can equivalently be written as
  x_0 ⊕ x_1 ⊕ x_2 ⊕ x_3 = 0
  x_0 ⊕ x_2 ⊕ x_4 = 0
  x_0 ⊕ x_1 ⊕ x_5 = 0
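A small Python sketch of the membership test xHᵀ = 0, using the H above (variable names are mine):

```python
import numpy as np

# Parity-check matrix matching the three equations above.
H = np.array([[1, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]])

x = np.array([1, 0, 0, 1, 1, 1])   # a codeword of the (6, 3, 3) code
y = np.array([1, 0, 0, 0, 1, 1])   # the same word with one bit flipped

print(x @ H.T % 2)  # [0 0 0]: x satisfies all parity checks
print(y @ H.T % 2)  # [1 0 0]: non-zero syndrome, error detected
```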

Generator Matrix and Parity-Check Matrix
Definition (systematic generator matrix): Consider a binary linear (n, k, d_min) code C. A systematic generator matrix G_syst of C represents a systematic encoder and has the structure G_syst = [I_k P], where I_k ∈ F_2^{k×k} is the identity matrix and P ∈ F_2^{k×(n-k)}.
Theorem: Consider a binary linear (n, k, d_min) code C. If G = [I_k P] is a generator matrix of C, then H = [Pᵀ I_{n-k}] is a parity-check matrix of C, and vice versa.
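The theorem translates directly into code; a minimal Python sketch, assuming G is given in systematic form (the function name is mine):

```python
import numpy as np

def parity_check_from_systematic(G_syst):
    """For G = [I_k | P] over F_2, return H = [P^T | I_{n-k}]."""
    k, n = G_syst.shape
    P = G_syst[:, k:]
    return np.hstack([P.T, np.eye(n - k, dtype=int)])

G = np.array([[1, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])
H = parity_check_from_systematic(G)
print(H)
print(G @ H.T % 2)  # all-zero: every row of G satisfies all checks
```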

Generator Matrix and Parity-Check Matrix
Example: Consider the (6, 3, 3) code
C = {[000000], [100111], [010101], [110010], [001110], [101001], [011011], [111100]}
A generator matrix and a parity-check matrix of this code are given by
  G = [1 0 0 1 1 1]      H = [1 1 1 1 0 0]
      [0 1 0 1 0 1]          [1 0 1 0 1 0]
      [0 0 1 1 1 0]          [1 1 0 0 0 1]
Note that G = [I_3 P] and H = [Pᵀ I_3].

Generator Matrix and Parity-Check Matrix
Definition (dual code): Consider a binary linear (n, k) code C with generator matrix G ∈ F_2^{k×n} and parity-check matrix H ∈ F_2^{(n-k)×n}. The binary linear (n, n-k) code C⊥ with generator matrix G⊥ = H and parity-check matrix H⊥ = G is called the dual code of C.
Remark: The codewords of the original code and those of the dual code are orthogonal: abᵀ = 0 for all a ∈ C and b ∈ C⊥.

Examples of Codes
Binary repetition codes: linear binary (n, 1, n) codes
  C = {x ∈ F_2^n : x_0 = x_1 = ... = x_{n-1}}
Binary single parity-check codes: linear binary (n, n-1, 2) codes
  C = {x ∈ F_2^n : x_0 ⊕ x_1 ⊕ ... ⊕ x_{n-1} = 0}
Remark: Repetition codes and single parity-check codes are dual codes.

Examples of Codes
Binary Hamming codes: linear binary codes with d_min = 3 and maximal rate. (Dual codes of the binary simplex codes.)
Defined by the parity-check matrix H and an integer r ∈ N: the columns of H are all non-zero binary vectors of length r.
Resulting code parameters:
- codeword length n = 2^r - 1
- infoword length k = 2^r - 1 - r
- minimum distance d_min = 3
Thus: (2^r - 1, 2^r - 1 - r, 3) code.

Examples of Codes
Example: (7, 4, 3) Hamming code, obtained for r = 3; one possible parity-check matrix, with the columns ordered as the binary representations of 1, ..., 7:
  H = [1 0 1 0 1 0 1]
      [0 1 1 0 0 1 1]
      [0 0 0 1 1 1 1]
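The construction generalizes to any r; a short Python sketch, assuming the columns are ordered as the binary representations of 1, ..., 2^r - 1 (one of many valid orderings; the function name is mine):

```python
import numpy as np

def hamming_parity_check(r):
    """Parity-check matrix of the (2^r - 1, 2^r - 1 - r, 3) Hamming code.
    Columns are all non-zero binary vectors of length r, here ordered as
    the binary representations of 1, ..., 2^r - 1 (one valid choice)."""
    n = 2**r - 1
    cols = [[(j >> i) & 1 for i in range(r)] for j in range(1, n + 1)]
    return np.array(cols).T

H = hamming_parity_check(3)
print(H.shape)  # (3, 7)
print(H)
```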

Examples of Codes
Binary simplex codes: linear binary codes with all non-zero codewords having the same weight. (Dual codes of the binary Hamming codes.)
Defined by the generator matrix G and an integer r ∈ N: the columns of G are all non-zero binary vectors of length r.
Resulting code parameters:
- codeword length n = 2^r - 1
- infoword length k = r
- minimum distance d_min = 2^{r-1}
Thus: (2^r - 1, r, 2^{r-1}) code.

Examples of Codes
Example: (7, 3, 4) simplex code, obtained for r = 3; one possible generator matrix (the parity-check matrix of the (7, 4, 3) Hamming code above):
  G = [1 0 1 0 1 0 1]
      [0 1 1 0 0 1 1]
      [0 0 0 1 1 1 1]

Examples of Codes
- Repetition codes
- Single parity-check codes
- Hamming codes
- Simplex codes
- Golay codes
- Reed-Muller codes
- BCH codes
- Reed-Solomon codes
- Low-density parity-check codes
- Concatenated codes (turbo codes)

Summary
- Linear binary (n, k, d_min) code
- Linear (systematic) encoder
- Weight distribution
- (Systematic) generator matrix
- Parity-check matrix
- Dual code

Communication Channels
- Binary Symmetric Channel
- Binary Erasure Channel
- Binary Symmetric Erasure Channel
- Binary-Input AWGN Channel

Binary Symmetric Channel (BSC)
- Channel input symbols X ∈ {0, 1}
- Channel output symbols Y ∈ {0, 1}
- Transition probabilities:
  p_{Y|X}(y|x) = 1 - ɛ for y = x
                 ɛ     for y ≠ x
- Channel parameter: crossover probability ɛ

Binary Erasure Channel (BEC)
- Channel input symbols X ∈ {0, 1}
- Channel output symbols Y ∈ {0, Δ, 1} (Δ = erasure)
- Transition probabilities:
  p_{Y|X}(y|x) = 1 - δ for y = x
                 δ     for y = Δ
                 0     for y ≠ x and x, y ∈ {0, 1}
- Channel parameter: erasure probability δ

Binary Symmetric Erasure Channel (BSEC)
- Channel input symbols X ∈ {0, 1}
- Channel output symbols Y ∈ {0, Δ, 1} (Δ = erasure)
- Transition probabilities:
  p_{Y|X}(y|x) = 1 - ɛ - δ for y = x
                 δ         for y = Δ
                 ɛ         for y ≠ x and x, y ∈ {0, 1}
- Channel parameters: erasure probability δ, crossover probability ɛ

Binary-Input AWGN Channel (BI-AWGNC)
X --BPSK Map--> X' (0 -> +√E_s, 1 -> -√E_s), Y = X' + N with N ~ N(0, N_0/2)
- Code symbols X ∈ {0, 1}
- Modulation symbols X' ∈ {-√E_s, +√E_s} with symbol energy E_s
- White Gaussian noise (WGN) N ∈ R with noise variance σ_N² = N_0/2 and pdf p_N(n)
- Channel output symbols Y ∈ R
- Signal-to-noise ratio (SNR) per code symbol: E_s/N_0
- Transition probabilities: p_{Y|X}(y|x) = p_{Y|X'}(y|x') = p_N(y - x')

Binary-Input AWGN Channel (BI-AWGNC)
Equivalent normalized representation: X --BPSK Map--> X', Y = X' + N with N ~ N(0, N_0/(2E_s))
- Code symbols X ∈ {0, 1}
- Modulation symbols X' ∈ {-1, +1}
- White Gaussian noise (WGN) N ∈ R with noise variance σ_N² = N_0/(2E_s)
- Channel output symbols Y ∈ R
- Signal-to-noise ratio (SNR) per code symbol: E_s/N_0

Binary-Input AWGN Channel (BI-AWGNC)
Something about energies and SNRs. Assume an (n, k, d_min) code with code rate R = k/n.
- Energy per code symbol: E_s
- Energy per info symbol: E_b = (1/R) E_s
- SNR per code symbol: E_s/N_0
- SNR per info symbol: E_b/N_0 = (1/R) E_s/N_0
- Logarithmic scale (often used in error-rate plots):
  [E_s/N_0]_dB = 10 log10(E_s/N_0) dB,   [E_b/N_0]_dB = 10 log10(E_b/N_0) dB
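For example, a rate R = 1/2 code has E_b/N_0 = 2 E_s/N_0, i.e., [E_b/N_0]_dB = [E_s/N_0]_dB + 10 log10(2) ≈ [E_s/N_0]_dB + 3 dB.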

Binary-Input AWGN Channel (BI-AWGNC)
Error probability. Assume the detection rule (for the normalized representation)
  x̂' = +1 for y > 0, x̂' = -1 for y < 0;   x̂ = 0 for x̂' = +1, x̂ = 1 for x̂' = -1.
If y = 0, x̂' is randomly chosen out of {-1, +1}. The error probability can be computed as
  Pr(X̂ = 0 | X = 1) = Pr(X̂' = +1 | X' = -1) = Pr(N ≥ 1) = ∫_1^∞ p_N(n) dn = Q(1/σ_N) = Q(√(2E_s/N_0))
with Q(a) := (1/√(2π)) ∫_a^∞ exp(-ξ²/2) dξ.

Binary-Input AWGN Channel (BI-AWGNC)
Conversion of a BI-AWGNC into a BSC: Assume the previous detection rule is applied to a BI-AWGNC with SNR E_s/N_0. The channel between X and X̂ is then a BSC with crossover probability
  ɛ = Q(√(2E_s/N_0)).
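A small Python sketch of this conversion, expressing Q via the complementary error function (function names are mine):

```python
from math import erfc, sqrt

def Q(a):
    """Gaussian tail function, Q(a) = 0.5 * erfc(a / sqrt(2))."""
    return 0.5 * erfc(a / sqrt(2))

def bsc_crossover(es_n0_db):
    """Crossover probability of the BSC obtained by hard-deciding a BI-AWGNC."""
    es_n0 = 10 ** (es_n0_db / 10)
    return Q(sqrt(2 * es_n0))

for snr_db in (0, 2, 4, 6):
    print(snr_db, "dB ->", bsc_crossover(snr_db))
```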

Block Codes II: Decoding
- The Tasks
- Decoding Principles
- Guaranteed Performance
- Performance Bounds

The Tasks of Decoding
- Objective of error correction: given a received word, estimate the most likely (or at least a likely) transmitted infoword or codeword.
- Objective of error detection: given a received word, detect transmission errors.
- Problem: decoding complexity.

Decoding Principles
Optimum word-wise decoding (minimization of the word-error probability):
- Maximum-a-posteriori (MAP) word estimation
- Maximum-likelihood (ML) word estimation
Optimum symbol-wise decoding (minimization of the symbol-error probability):
- Maximum-a-posteriori (MAP) symbol estimation
- Maximum-likelihood (ML) symbol estimation
Remarks: Word-wise estimation is also called sequence estimation. Symbol-wise estimation is also called symbol-by-symbol estimation.

Decoding Principles
Maximum-a-posteriori (MAP) word estimation: estimation of the MAP infoword
  û_MAP = argmax_{u ∈ F_2^k} p_{U|Y}(u|y)
Equivalent two-step operation: estimation of the MAP codeword and subsequent determination of the MAP infoword
  x̂_MAP = argmax_{x ∈ C} p_{X|Y}(x|y),   û_MAP = enc^{-1}(x̂_MAP)

Decoding Principles
Maximum-likelihood (ML) word estimation: estimation of the ML infoword
  û_ML = argmax_{u ∈ F_2^k} p_{Y|U}(y|u)
Equivalent two-step operation: estimation of the ML codeword and subsequent determination of the ML infoword
  x̂_ML = argmax_{x ∈ C} p_{Y|X}(y|x),   û_ML = enc^{-1}(x̂_ML)

Decoding Principles
Remarks:
- ML word estimation and MAP word estimation are equivalent if the infowords (codewords) are uniformly distributed, i.e., if p_U(u) = 2^{-k}.
- The rules for symbol estimation are similar to the rules for word estimation. (For details, see textbooks.)

Decoding Principles
ML decoding for the BSC: A binary linear (n, k, d_min) code C is used for transmission over a binary symmetric channel (BSC) with crossover probability ɛ < 1/2, i.e.,
  p_{Y|X}(y|x) = 1 - ɛ for y = x,   ɛ for y ≠ x.
Likelihood of a codeword x:
  p_{Y|X}(y|x) = Π_{i=0}^{n-1} p_{Y|X}(y_i|x_i) = ɛ^{d_H(y,x)} (1 - ɛ)^{n - d_H(y,x)}

Decoding Principles
Log-likelihood of a codeword x:
  log p_{Y|X}(y|x) = log [ɛ^{d_H(y,x)} (1 - ɛ)^{n - d_H(y,x)}]
                   = d_H(y, x) · log(ɛ/(1 - ɛ)) + n log(1 - ɛ),
where log(ɛ/(1 - ɛ)) < 0 for ɛ < 1/2. Maximum-likelihood word estimation:
  p_{Y|X}(y|x) max ⟺ log p_{Y|X}(y|x) max ⟺ d_H(y, x) min

Decoding Principles
ML word estimation for the BSC: The Hamming distance d_H(y, x) is a sufficient statistic for the received word y. ML estimation (in two steps):
  x̂ = argmin_{x ∈ C} d_H(y, x),   û = enc^{-1}(x̂)
Decoding for the BSC is also called hard-decision decoding.
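For small codes, this rule can be evaluated by brute force; a minimal Python sketch reproducing the earlier repetition-code example (the function name is mine):

```python
def ml_decode_bsc(y, code):
    """Brute-force ML decoding over the BSC: return the codeword at
    minimum Hamming distance from the received word y."""
    return min(code, key=lambda x: sum(yi != xi for yi, xi in zip(y, x)))

code = [(0, 0, 0), (1, 1, 1)]          # repetition code of length 3
print(ml_decode_bsc((0, 1, 1), code))  # (1, 1, 1)
```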

Decoding Principles
ML decoding for the BI-AWGNC. Remember: the code symbols x_i ∈ F_2 are mapped to modulation symbols x'_i ∈ {-1, +1} according to the BPSK mapping
  x'_i = BPSK(x_i) = +1 for x_i = 0,   -1 for x_i = 1.
For convenience, we define the BPSK-mapped codewords and the BPSK-mapped code
  x' = BPSK(x) ∈ {-1, +1}^n,   C' = BPSK(C).
Notice the one-to-one relation between u, x, and x'.

Decoding Principles
ML word estimation for the BI-AWGNC: The squared Euclidean distance
  d_E²(y, x') = ||y - x'||²
is a sufficient statistic for the received word y. ML estimation (in three steps):
  x̂' = argmin_{x' ∈ C'} d_E²(y, x'),   x̂ = BPSK^{-1}(x̂'),   û = enc^{-1}(x̂).
Decoding for the BI-AWGNC is also called soft-decision decoding.
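The soft-decision counterpart of the previous sketch, again by brute force over the code; the received word is an illustrative value of my choosing:

```python
def ml_decode_biawgnc(y, code):
    """Brute-force soft-decision ML decoding: minimize the squared
    Euclidean distance between y and the BPSK image of each codeword."""
    def sq_dist(x):
        x_bpsk = [1.0 if xi == 0 else -1.0 for xi in x]  # 0 -> +1, 1 -> -1
        return sum((yi - xi) ** 2 for yi, xi in zip(y, x_bpsk))
    return min(code, key=sq_dist)

code = [(0, 0, 0), (1, 1, 1)]                     # repetition code of length 3
print(ml_decode_biawgnc((0.8, -0.1, 0.3), code))  # (0, 0, 0)
```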

Guaranteed Performance
System model:
- Binary linear (n, k, d_min) code C
- Binary symmetric channel (BSC) with crossover probability ɛ < 1/2
- ML decoder, i.e., a decoder that applies the rule x̂ = argmin_{x ∈ C} d_H(y, x)
Questions:
- How many errors t are guaranteed to be corrected?
- How many errors r are guaranteed to be detected?

Guaranteed Performance
Most likely scenario: Consider the transmitted codeword x and a codeword a that has minimum distance from x, i.e., d_H(x, a) = d_min.
(Figures: codewords x and a at distance d_min = 3 and at distance d_min = 4.)

Guaranteed Performance
Number of errors that can be corrected for sure:
  t = ⌊(d_min - 1)/2⌋
Number of errors that can be detected for sure:
  r = d_min - 1
(Figures: codewords x and a at distance d_min = 3 and at distance d_min = 4.)
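For example, d_min = 3 gives t = ⌊2/2⌋ = 1 and r = 2, while d_min = 4 gives t = ⌊3/2⌋ = 1 and r = 3: the extra distance improves guaranteed detection but not guaranteed correction.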

Decoding Principles
Bounded minimum distance (BMD) decoding. Decoding rule: If there is a codeword a ∈ C such that
  d_H(y, a) ≤ t = ⌊(d_min - 1)/2⌋,
output the estimated codeword x̂ = a. Otherwise, indicate a decoding failure.
Remark: Received words are decoded only if they are within spheres of radius t (Hamming distance) around codewords, called decoding spheres.

Performance Bounds
Coding scheme:
- Binary linear (n, k, d_min) code C with WEF A(H)
- ML decoder, i.e., a decoder that applies the rule x̂ = argmax_{x ∈ C} p_{Y|X}(y|x)
Question: Bounds on the word-error probability P_w = Pr(X̂ ≠ X) for a given channel model, based on A(H)?

Performance Bounds
Approach: The code is linear, and thus
  Pr(X̂ ≠ X) = Pr(X̂ ≠ X | X = 0) = Pr(X̂ ≠ 0 | X = 0) = Pr(X̂ ∈ C\{0} | X = 0)
Lower bound: for any codeword a ∈ C with w_H(a) = d_min,
  Pr(X̂ ∈ C\{0} | X = 0) ≥ Pr(X̂ = a | X = 0)
Upper bound: by the union-bound argument,
  Pr(X̂ ∈ C\{0} | X = 0) ≤ Σ_{a ∈ C\{0}} Pr(X̂ = a | X = 0)

Performance Bounds for the BSC
Pairwise word-error probability (Bhattacharyya bound for two codewords):
  Pr(X̂ = a | X = b) ≤ (√(4ɛ(1 - ɛ)))^{d_H(a,b)}
Lower bound:
  P_w ≥ (√(4ɛ(1 - ɛ)))^{d_min}
Upper bound:
  P_w ≤ Σ_{d=d_min}^{n} A_d (√(4ɛ(1 - ɛ)))^{d}

Performance Bounds for the BI-AWGNC
Pairwise word-error probability:
  Pr(X̂ = a | X = b) = Q(√(2 d_H(a,b) R E_b/N_0))
Lower bound:
  P_w ≥ Q(√(2 d_min R E_b/N_0))
Upper bound:
  P_w ≤ Σ_{d=d_min}^{n} A_d Q(√(2 d R E_b/N_0))
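The BI-AWGNC bounds follow mechanically from the weight distribution; a Python sketch evaluating the union upper bound for the (6, 3, 3) example code (the SNR values are illustrative):

```python
from math import erfc, sqrt

def Q(a):
    return 0.5 * erfc(a / sqrt(2))

def union_bound_biawgnc(A, R, eb_n0_db):
    """Union upper bound on P_w from the weight distribution A = [A_0, ..., A_n]."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return sum(A[d] * Q(sqrt(2 * d * R * eb_n0)) for d in range(1, len(A)))

A = [1, 0, 0, 4, 3, 0, 0]   # weight distribution of the (6, 3, 3) example code
for snr_db in (2, 4, 6):
    print(snr_db, "dB ->", union_bound_biawgnc(A, 3 / 6, snr_db))
```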

Performance Bounds
For improving channel quality, the gap between the lower bound and the upper bound becomes very small. (For A_{d_min} = 1, it vanishes.)
Improving channel quality means:
- for the BSC: ɛ → 0
- for the BI-AWGNC: E_b/N_0 → ∞

Asymptotic Coding Gain for the BI-AWGNC
Error probability for the uncoded system (n = k = 1, d_min = 1, R = 1):
  P_w = Q(√(2 E_b/N_0))
Error probability for an (n, k, d_min) code of rate R = k/n at high SNR (corresponds to the lower bound):
  P_w ≈ Q(√(2 d_min R E_b/N_0))
The asymptotic coding gain G_asy is the gain in SNR (reduction of SNR) such that the same error probability is achieved as for the uncoded system:
  G_asy = 10 log10(d_min R) dB
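For example, the (7, 4, 3) Hamming code achieves G_asy = 10 log10(3 · 4/7) ≈ 2.3 dB.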

Summary
- Guaranteed performance: number of errors that are guaranteed to be corrected, number of errors that are guaranteed to be detected
- Maximum-likelihood decoding: BSC (hard-decision decoding): Hamming distance; BI-AWGNC (soft-decision decoding): squared Euclidean distance
- Performance bounds based on the weight enumerating function
- Asymptotic coding gain for the BI-AWGNC

Convolutional Codes
- General Properties
- Encoding: Convolutional Encoder, State Transition Diagram, Trellis Diagram
- Decoding: ML Sequence Decoding, Viterbi Algorithm

Convolutional Codes
General properties:
- Continuous encoding of info symbols to code symbols
- Generated code symbols depend on the current info symbol and previously encoded info symbols
- A convolutional encoder is a finite state machine (with memory)
- Certain decoders allow continuous decoding of the received sequence
Convolutional codes enable continuous transmission, whereas block codes allow only blocked transmission.

Convolutional Encoder
Defined by generator sequences or generator polynomials
  g^(0) = [101] corresponding to g^(0)(D) = 1 + D²
  g^(1) = [111] corresponding to g^(1)(D) = 1 + D + D²
or by the shorthand octal notation (5, 7)_8.
Shift-register representation of the encoder: the info symbol u enters a register of two delay elements; x^(0) is the modulo-2 sum of u and the second delay element, x^(1) is the modulo-2 sum of u and both delay elements.
- Memory length m = number of delay elements (here m = 2)
- Code rate R = (# info symbols)/(# code symbols) = 1/2
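A minimal Python sketch of this shift-register encoder (the function name and test input are mine):

```python
def conv_encode(u):
    """Rate-1/2 (5, 7)_8 convolutional encoder (trellis truncation).
    g0 = 1 + D^2, g1 = 1 + D + D^2; the state holds the last two info bits."""
    s0 = s1 = 0                 # delay elements (s0 newer, s1 older)
    out = []
    for ut in u:
        x0 = ut ^ s1            # taps of g0 = [1 0 1]
        x1 = ut ^ s0 ^ s1       # taps of g1 = [1 1 1]
        out += [x0, x1]
        s0, s1 = ut, s0         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 0, 1, 0, 0, 1, 0]
```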

Convolutional Encoder
Objects of an encoding step at time index t ∈ {0, 1, 2, ...}:
- info symbol u_t ∈ F_2
- encoder state s_t = [s_t^(0), s_t^(1)] ∈ F_2² (in general F_2^m)
- code-symbol block x_t = [x_t^(0), x_t^(1)] ∈ F_2² (in general F_2^{1/R})
Generalizations are straightforward (see textbooks).

State Transition Diagram
All possible state transitions of the convolutional encoder may be depicted in a state transition diagram. Notation: states s_t^(0) s_t^(1), labels u_t / x_t^(0) x_t^(1).
For the (5, 7)_8 encoder:
  00 --0/00--> 00,   00 --1/11--> 10
  10 --0/01--> 01,   10 --1/10--> 11
  01 --0/11--> 00,   01 --1/00--> 10
  11 --0/10--> 01,   11 --1/01--> 11

Trellis Diagram
All sequences of state transitions, and thus all codewords, can be depicted in a trellis diagram (u_t = 0: solid line, u_t = 1: dashed line).
Free distance d_free: minimal codeword weight of a detour from the all-zero path.

Block Codes from Convolutional Codes
Encoding of an info sequence of length K with a convolutional encoder of rate R to a code sequence of length N yields an (N, K, d_min) block code of rate R_BC.
Encoding strategies:
- Trellis-truncation construction: K trellis sections, s_0 = 0, s_K arbitrary; N = K/R, d_min ≤ d_free
- Trellis-termination construction: K + m trellis sections, s_0 = s_{K+m} = 0; N = (K + m)/R, d_min = d_free
- Tail-biting construction: K trellis sections, s_0 = s_K; N = K/R, d_min ≤ d_free (better than truncation)

Decoding of Convolutional Codes
Goal: ML word estimation
  x̂ = argmax_{x ∈ C} p_{Y|X}(y|x)
Possible evaluation: compute p_{Y|X}(y|x) for all x ∈ C and determine the codeword x̂ with the largest likelihood.
Problem: computational complexity.
Solution: Viterbi algorithm.

Branches, Paths, and Metrics
Branch in the trellis: one state transition s_[t,t+1] = [s_t, s_{t+1}].
Block of code symbols associated with a branch, and block of observations corresponding to these code symbols:
  x(s_[t,t+1]) = x_t = [x_t^(0), x_t^(1)],   y_t = [y_t^(0), y_t^(1)]
Path through the trellis: sequence of state transitions
  s_[t1,t2+1] = [s_{t1}, s_{t1+1}, ..., s_{t2}, s_{t2+1}]
Partial codeword associated with a path:
  x(s_[t1,t2+1]) = [x_{t1}, x_{t1+1}, ..., x_{t2-1}, x_{t2}]

Branches, Paths, and Metrics
Branch metric: distance measure between the code-symbol block and the observation block, e.g., the Hamming distance for the BSC:
  μ(s_[t,t+1]) = d_H(y_t, x_t)
Path metric: distance measure between the code-symbol sequence and the sequence of observations:
  μ(s_[t1,t2+1]) = d_H([y_{t1}, ..., y_{t2}], [x_{t1}, ..., x_{t2}])
The metric should be additive to allow a recursive computation of the path metric:
  μ(s_[0,t+1]) = μ(s_[0,t]) + μ(s_[t,t+1])
  (new path metric = old path metric + branch metric)

Decoding of Convolutional Codes
Consider a convolutional code with T trellis sections.
ML decoding rule: search for the path with the smallest metric and determine the associated codeword:
  ŝ_[0,T] = argmin_{s_[0,T] in trellis} μ(s_[0,T]),   x̂ = x(ŝ_[0,T])
Viterbi algorithm (VA):
- trellis-based search algorithm
- recursive evaluation of the decoding rule in the trellis
- most efficient way to implement ML decoding

Viterbi Algorithm
1. Compute the branch metrics. (May be done when necessary.)
2. Step through the trellis sections, t = 0, 1, ..., T - 1. For each trellis state,
   (a) add the previous path metric and the current branch metric;
   (b) compare the resulting new path metrics;
   (c) select the survivor (the path with the smallest metric).
3. Trace back to find the ML path.
The resulting ML path corresponds to the ML codeword and thus to the ML infoword.
Remark: The VA may be applied in any situation where (i) the search space can be represented in a trellis and (ii) an additive metric can be defined.
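A compact Python sketch of the Viterbi algorithm for the terminated (5, 7)_8 code over the BSC with Hamming branch metrics; for brevity it stores each survivor's info bits directly instead of doing an explicit traceback (all names and the test vector are mine):

```python
def viterbi_bsc(y, K):
    """Viterbi decoding of the terminated (5, 7)_8 code over the BSC.
    y: list of received (y0, y1) pairs, K: number of info bits;
    the m = 2 tail sections force the final state back to 00."""
    def step(state, u):                      # one encoder transition
        s0, s1 = state
        return (u, s0), (u ^ s1, u ^ s0 ^ s1)

    INF = float("inf")
    metric = {(0, 0): (0, [])}               # state -> (path metric, info bits)
    for t, yt in enumerate(y):
        new = {}
        for state, (mu, path) in metric.items():
            inputs = (0, 1) if t < K else (0,)          # tail bits are zero
            for u in inputs:
                nxt, (x0, x1) = step(state, u)
                branch = (yt[0] != x0) + (yt[1] != x1)  # Hamming branch metric
                if mu + branch < new.get(nxt, (INF,))[0]:
                    new[nxt] = (mu + branch, path + [u])  # survivor
        metric = new
    mu, path = metric[(0, 0)]
    return path[:K], mu

# u = [1, 0, 1] encodes (with tail bits) to 11 01 00 01 11; flip one bit:
y = [(1, 1), (1, 1), (0, 0), (0, 1), (1, 1)]
print(viterbi_bsc(y, K=3))  # ([1, 0, 1], 1): one channel error corrected
```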

Viterbi Algorithm: Example
Consider a terminated (5, 7)_8 convolutional code with infoword length K = 3 and thus T = 5 trellis sections. Remove the parts of the trellis contradicting s_0 = s_{K+m} = 0.

Viterbi Algorithm: Example
Consider a terminated (5, 7)_8 convolutional code with infoword length K = 3 and thus T = 5 trellis sections. (Figure: the pruned trellis with branch metrics and survivor paths.)

Summary
- Convolutional encoder
- State transition diagram
- Trellis diagram
- Path metrics and branch metrics
- Viterbi algorithm


More information

Unequal Error Protection Turbo Codes

Unequal Error Protection Turbo Codes Unequal Error Protection Turbo Codes Diploma Thesis Neele von Deetzen Arbeitsbereich Nachrichtentechnik School of Engineering and Science Bremen, February 28th, 2005 Unequal Error Protection Turbo Codes

More information

Lecture 4 : Introduction to Low-density Parity-check Codes

Lecture 4 : Introduction to Low-density Parity-check Codes Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were

More information

The E8 Lattice and Error Correction in Multi-Level Flash Memory

The E8 Lattice and Error Correction in Multi-Level Flash Memory The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M. Kurkoski kurkoski@ice.uec.ac.jp University of Electro-Communications Tokyo, Japan ICC 2011 IEEE International Conference on Communications

More information

Polar Code Construction for List Decoding

Polar Code Construction for List Decoding 1 Polar Code Construction for List Decoding Peihong Yuan, Tobias Prinz, Georg Böcherer arxiv:1707.09753v1 [cs.it] 31 Jul 2017 Abstract A heuristic construction of polar codes for successive cancellation

More information

Trellis-based Detection Techniques

Trellis-based Detection Techniques Chapter 2 Trellis-based Detection Techniques 2.1 Introduction In this chapter, we provide the reader with a brief introduction to the main detection techniques which will be relevant for the low-density

More information

SIDDHARTH GROUP OF INSTITUTIONS :: PUTTUR Siddharth Nagar, Narayanavanam Road UNIT I

SIDDHARTH GROUP OF INSTITUTIONS :: PUTTUR Siddharth Nagar, Narayanavanam Road UNIT I SIDDHARTH GROUP OF INSTITUTIONS :: PUTTUR Siddharth Nagar, Narayanavanam Road 517583 QUESTION BANK (DESCRIPTIVE) Subject with Code : CODING THEORY & TECHNIQUES(16EC3810) Course & Branch: M.Tech - DECS

More information

SENS'2006 Second Scientific Conference with International Participation SPACE, ECOLOGY, NANOTECHNOLOGY, SAFETY June 2006, Varna, Bulgaria

SENS'2006 Second Scientific Conference with International Participation SPACE, ECOLOGY, NANOTECHNOLOGY, SAFETY June 2006, Varna, Bulgaria SENS'6 Second Scientific Conference with International Participation SPACE, ECOLOGY, NANOTECHNOLOGY, SAFETY 4 6 June 6, Varna, Bulgaria SIMULATION ANALYSIS OF THE VITERBI CONVOLUTIONAL DECODING ALGORITHM

More information

Decomposition Methods for Large Scale LP Decoding

Decomposition Methods for Large Scale LP Decoding Decomposition Methods for Large Scale LP Decoding Siddharth Barman Joint work with Xishuo Liu, Stark Draper, and Ben Recht Outline Background and Problem Setup LP Decoding Formulation Optimization Framework

More information

Performance of small signal sets

Performance of small signal sets 42 Chapter 5 Performance of small signal sets In this chapter, we show how to estimate the performance of small-to-moderate-sized signal constellations on the discrete-time AWGN channel. With equiprobable

More information

Decision-Point Signal to Noise Ratio (SNR)

Decision-Point Signal to Noise Ratio (SNR) Decision-Point Signal to Noise Ratio (SNR) Receiver Decision ^ SNR E E e y z Matched Filter Bound error signal at input to decision device Performance upper-bound on ISI channels Achieved on memoryless

More information

Lattices for Communication Engineers

Lattices for Communication Engineers Lattices for Communication Engineers Jean-Claude Belfiore Télécom ParisTech CNRS, LTCI UMR 5141 February, 22 2011 Nanyang Technological University - SPMS Part I Introduction Signal Space The transmission

More information

CHAPTER 8 Viterbi Decoding of Convolutional Codes

CHAPTER 8 Viterbi Decoding of Convolutional Codes MIT 6.02 DRAFT Lecture Notes Fall 2011 (Last update: October 9, 2011) Comments, questions or bug reports? Please contact hari at mit.edu CHAPTER 8 Viterbi Decoding of Convolutional Codes This chapter describes

More information

Coding Techniques for Data Storage Systems

Coding Techniques for Data Storage Systems Coding Techniques for Data Storage Systems Thomas Mittelholzer IBM Zurich Research Laboratory /8 Göttingen Agenda. Channel Coding and Practical Coding Constraints. Linear Codes 3. Weight Enumerators and

More information

Lecture 4: Linear Codes. Copyright G. Caire 88

Lecture 4: Linear Codes. Copyright G. Caire 88 Lecture 4: Linear Codes Copyright G. Caire 88 Linear codes over F q We let X = F q for some prime power q. Most important case: q =2(binary codes). Without loss of generality, we may represent the information

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

Mapper & De-Mapper System Document

Mapper & De-Mapper System Document Mapper & De-Mapper System Document Mapper / De-Mapper Table of Contents. High Level System and Function Block. Mapper description 2. Demodulator Function block 2. Decoder block 2.. De-Mapper 2..2 Implementation

More information

HIGH DIMENSIONAL TRELLIS CODED MODULATION

HIGH DIMENSIONAL TRELLIS CODED MODULATION AFRL-IF-RS-TR-2002-50 Final Technical Report March 2002 HIGH DIMENSIONAL TRELLIS CODED MODULATION Ohio University APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED. AIR FORCE RESEARCH LABORATORY INFORMATION

More information

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM Journal of ELECTRICAL ENGINEERING, VOL. 63, NO. 1, 2012, 59 64 SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM H. Prashantha Kumar Udupi Sripati K. Rajesh

More information

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes

The BCH Bound. Background. Parity Check Matrix for BCH Code. Minimum Distance of Cyclic Codes S-723410 BCH and Reed-Solomon Codes 1 S-723410 BCH and Reed-Solomon Codes 3 Background The algebraic structure of linear codes and, in particular, cyclic linear codes, enables efficient encoding and decoding

More information

Lecture 3 : Introduction to Binary Convolutional Codes

Lecture 3 : Introduction to Binary Convolutional Codes Lecture 3 : Introduction to Binary Convolutional Codes Binary Convolutional Codes 1. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes. In contrast with a block

More information

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x)

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x) Outline Recall: For integers Euclidean algorithm for finding gcd s Extended Euclid for finding multiplicative inverses Extended Euclid for computing Sun-Ze Test for primitive roots And for polynomials

More information

ABriefReviewof CodingTheory

ABriefReviewof CodingTheory ABriefReviewof CodingTheory Pascal O. Vontobel JTG Summer School, IIT Madras, Chennai, India June 16 19, 2014 ReliableCommunication Oneofthemainmotivationforstudyingcodingtheoryisthedesireto reliably transmit

More information

Making Error Correcting Codes Work for Flash Memory

Making Error Correcting Codes Work for Flash Memory Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging

More information

Codes on graphs. Chapter Elementary realizations of linear block codes

Codes on graphs. Chapter Elementary realizations of linear block codes Chapter 11 Codes on graphs In this chapter we will introduce the subject of codes on graphs. This subject forms an intellectual foundation for all known classes of capacity-approaching codes, including

More information

Information redundancy

Information redundancy Information redundancy Information redundancy add information to date to tolerate faults error detecting codes error correcting codes data applications communication memory p. 2 - Design of Fault Tolerant

More information

Channel Coding 1. Sportturm (SpT), Room: C3165

Channel Coding 1.   Sportturm (SpT), Room: C3165 Channel Coding Dr.-Ing. Dirk Wübben Institute for Telecommunications and High-Frequency Techniques Department of Communications Engineering Room: N3, Phone: 4/8-6385 Sportturm (SpT), Room: C365 wuebben@ant.uni-bremen.de

More information

Summary: SER formulation. Binary antipodal constellation. Generic binary constellation. Constellation gain. 2D constellations

Summary: SER formulation. Binary antipodal constellation. Generic binary constellation. Constellation gain. 2D constellations TUTORIAL ON DIGITAL MODULATIONS Part 8a: Error probability A [2011-01-07] 07] Roberto Garello, Politecnico di Torino Free download (for personal use only) at: www.tlc.polito.it/garello 1 Part 8a: Error

More information

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class Error Correcting Codes: Combinatorics, Algorithms and Applications Spring 2009 Homework Due Monday March 23, 2009 in class You can collaborate in groups of up to 3. However, the write-ups must be done

More information

MATH 291T CODING THEORY

MATH 291T CODING THEORY California State University, Fresno MATH 291T CODING THEORY Fall 2011 Instructor : Stefaan Delcroix Contents 1 Introduction to Error-Correcting Codes 3 2 Basic Concepts and Properties 6 2.1 Definitions....................................

More information

Communication Theory II

Communication Theory II Communication Theory II Lecture 24: Error Correction Techniques Ahmed Elnakib, PhD Assistant Professor, Mansoura University, Egypt May 14 th, 2015 1 Error Correction Techniques olinear Block Code Cyclic

More information

A new analytic approach to evaluation of Packet Error Rate in Wireless Networks

A new analytic approach to evaluation of Packet Error Rate in Wireless Networks A new analytic approach to evaluation of Packet Error Rate in Wireless Networks Ramin Khalili Université Pierre et Marie Curie LIP6-CNRS, Paris, France ramin.khalili@lip6.fr Kavé Salamatian Université

More information