Section 3 Error Correcting Codes (ECC): Fundamentals


1 Section 3 Error Correcting Codes (ECC): Fundamentals. Communication systems and channel models. Definition and examples of ECCs. Distance. For the contents relevant to distance, Lin & Xing's book, Chapter 2, should be helpful. W. Dai (IC) EE4.07 Coding Theory ECC 2017 page 3-1

2 Communication Systems (block diagram of a communication system; figure not reproduced)

3 Abstract Channel Model: Binary Symmetric Channel (BSC). Binary Symmetric Channel: a memoryless channel such that Pr(0 received | 1 sent) = Pr(1 received | 0 sent) = p and Pr(1 received | 1 sent) = Pr(0 received | 0 sent) = 1 − p. p is called the transition (cross-over) probability. Memoryless channel: a channel that satisfies Pr(y received | x sent) = ∏_{i=1}^n Pr(y_i received | x_i sent).
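A BSC is easy to simulate; a minimal Python sketch (the function name bsc is ours, not from the notes):

```python
import random

def bsc(x, p, rng):
    """Pass a binary word through a BSC: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in x]

rng = random.Random(0)
word = [0, 1, 1, 0, 1]
print(bsc(word, 0.0, rng))  # p = 0: noiseless, word is unchanged -> [0, 1, 1, 0, 1]
print(bsc(word, 1.0, rng))  # p = 1: every bit flipped           -> [1, 0, 0, 1, 0]
```

For 0 < p < 1 the output is random; the two extreme cases above are deterministic.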

4 The Memoryless q-ary Symmetric Channel. Define an alphabet set F_q. Both the channel input x_i and the channel output y_i are from F_q. Crossover probability p: Pr(y_i | x_i) = 1 − p if y_i = x_i, and p/(q − 1) if y_i ≠ x_i.

5 The Memoryless Binary Erasure Channel (BEC). Binary Erasure Channel: each transmitted symbol is either received correctly or erased. Examples: Internet traffic: a packet gets lost. Cloud storage: one copy of a file is corrupted.

6 What is a Code? Definition 3.1 (Code) A code is a set C containing (row) vectors of elements from F_q. An (n, M) block code: C ⊆ F_q^n and |C| = M. A codeword: a vector in C. Codeword length: n. Code size: M. Dimension: k = log_q M. Rate: r = k/n. Example 1: F_2 = {0, 1}, C = {0000, 1100, 1111}. n = 4, M = 3, k = log_2 3 ≈ 1.58, r ≈ 0.40. Example 2: F_3 = {0, 1, 2}, C = {00000, 12121, 20202}. n = 5, M = 3, k = log_3 3 = 1, r = 0.2.

7 Triple Repetition Code. Encoding: 0 → 000, 1 → 111. Decoding: majority voting: 111, 110, 101, 011 → 1; 000, 001, 010, 100 → 0. Error probability computation: P(ŝ = 1 | s = 0) = P(111 | 0) + P(110 | 0) + P(101 | 0) + P(011 | 0) = p³ + 3p²(1 − p) = 0.028 when p = 0.1. Much better than an uncoded system. The price to pay: data rate 1/3. Coding theory: the tradeoff between error correction and data rate.
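The majority-vote rule and the error-probability formula can be checked numerically (Python sketch; names are ours):

```python
def encode(bit):           # triple repetition: 0 -> 000, 1 -> 111
    return [bit] * 3

def decode(word):          # majority vote over three received bits
    return int(sum(word) >= 2)

# Analytic block-error probability: a decoding error occurs iff >= 2 of 3 bits flip.
p = 0.1
P_err = p**3 + 3 * p**2 * (1 - p)

assert decode(encode(1)) == 1 and decode(encode(0)) == 0
assert decode([1, 1, 0]) == 1 and decode([0, 0, 1]) == 0
print(round(P_err, 3))     # 0.028
```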

8 Performance Comparison. Uncoded case (f = 0.1) vs. the triple repetition code over a noisy channel (figure not reproduced). From David J.C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.

9 The 2nd example: (7, 4) Hamming code. Encoding: encode every 4 information bits into 7 bits. (Details are omitted.) Code rate: r = 4/7 ≈ 0.57. Much higher rate, but a slightly larger P_e. From David J.C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.

10 Another example: low-density parity-check code. Details are omitted here; only a simulation is presented. BSC with p = 7.5%. LDPC (n = 20 000, k = 10 000), r = 0.5. From David J.C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.

11 Distance: Definition. Definition 3.2 (Distance) A distance d on a set X is a function d : X × X → R such that for all x, y, z ∈ X, the following conditions hold. Positive definiteness: d(x, y) ≥ 0, where equality holds iff x = y. Symmetry: d(x, y) = d(y, x). Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z). In this course, d is also translation invariant, that is, d(x, y) = d(x + z, y + z).

12 Examples of Commonly Used Distances. Let x, y ∈ R^n be two vectors of length n, for example, x = [9, 1, 0], y = [6, 1, 4] ∈ R³. l2-norm (Euclidean) distance: d_2(x, y) = √(Σ_{i=1}^n (x_i − y_i)²) = √(3² + 0² + 4²) = 5. l1-norm distance: d_1(x, y) = Σ_{i=1}^n |x_i − y_i| = 3 + 0 + 4 = 7. Hamming distance: d_H(x, y) = Σ_{i=1}^n δ_{x_i y_i} = 1 + 0 + 1 = 2, where δ_{x_i y_i} = 1 if x_i ≠ y_i and δ_{x_i y_i} = 0 if x_i = y_i.
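The three distances for this example can be verified directly (Python, standard library only):

```python
import math

x, y = [9, 1, 0], [6, 1, 4]
d2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))   # Euclidean distance
d1 = sum(abs(a - b) for a, b in zip(x, y))                # l1 distance
dH = sum(a != b for a, b in zip(x, y))                    # Hamming distance
print(d2, d1, dH)  # 5.0 7 2
```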

13 Hamming Distance. Definition 3.3 (Hamming Distance) For x, y ∈ F^n, the Hamming distance is given by d_H(x, y) = Σ_{i=1}^n δ_{x_i y_i} = |{i : x_i ≠ y_i}|. Fact 3.4 The Hamming distance is a well defined distance. To prove this fact, the only non-trivial part is the triangle inequality.

14 Proof of the Triangle Inequality for Hamming Distance. 1. Scalar case: d_H(x_i, z_i) ≤ d_H(x_i, y_i) + d_H(y_i, z_i). If x_i = z_i, the inequality holds trivially. If x_i ≠ z_i, then LHS = 1, and there are three cases: (y_i = x_i and y_i ≠ z_i), (y_i = z_i and y_i ≠ x_i), or (y_i ≠ x_i and y_i ≠ z_i); in every case RHS ≥ 1. 2. Vector case: d_H(x, z) = Σ_{i=1}^n d_H(x_i, z_i) ≤ Σ_{i=1}^n (d_H(x_i, y_i) + d_H(y_i, z_i)) = d_H(x, y) + d_H(y, z).

15 Hamming Distance: Properties. Fact 3.5 The Hamming distance is translation invariant: d_H(x_1, x_2) = d_H(x_1 + y, x_2 + y). Definition 3.6 (Weight) The weight of a vector x ∈ F_q^n is defined as its Hamming distance from the zero vector: wt(x) = d_H(x, 0). Examples: x = [9, 1, 4], y = [0, 1, 4]: d_H(x, y) = 1. x = [1, 2, 1, 2, 1], y = [2, 0, 2, 0, 2]: d_H(x, y) = 5. x = [0, 1, 0, 1]: wt(x) = 2.

16 Decoding Techniques. Suppose that a codeword c ∈ C ⊆ F_q^n is transmitted and a word y is received. The decoding function is defined as the mapping D : F_q^n → C, y ↦ ĉ ∈ C. Popular decoding strategies include: Maximum likelihood (ML) decoding: ĉ_ML = D_ML(y) = argmax_{c ∈ C} Pr(y received | c sent). Minimum distance (MD) decoding: ĉ_MD = D_MD(y) = argmin_{c ∈ C} d_H(y, c). They are equivalent for many channels.

17 Equivalence Between ML and MD Decoding. Theorem 3.7 Consider a memoryless binary symmetric channel (BSC) with cross-over probability p < 1/2. Then ĉ_ML = ĉ_MD. Proof: Pr(y received | c sent) = ∏_{i=1}^n Pr(y_i received | c_i sent) = p^{d_H(y,c)} (1 − p)^{n − d_H(y,c)} = (1 − p)^n (p/(1 − p))^{d_H(y,c)}. That p < 1/2 implies that p/(1 − p) < 1. Hence, Pr(y received | c sent) is a monotonically decreasing function of d_H(y, c). The maximum Pr(y | c) is achieved when d_H(y, c) is minimized.
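Theorem 3.7 can be verified by brute force on a toy code (Python sketch; we compare the achieved likelihood values rather than the argmax itself, since ties may be broken differently):

```python
from itertools import product

def dH(x, y):
    return sum(a != b for a, b in zip(x, y))

def likelihood(y, c, p):
    """Memoryless BSC: p^d (1-p)^(n-d) with d = d_H(y, c)."""
    d = dH(y, c)
    return p**d * (1 - p)**(len(y) - d)

C = [(0, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 1)]
p = 0.1
for y in product((0, 1), repeat=4):          # every possible received word
    c_ml = max(C, key=lambda c: likelihood(y, c, p))
    c_md = min(C, key=lambda c: dH(y, c))
    assert likelihood(y, c_ml, p) == likelihood(y, c_md, p)
print("ML and MD decoding agree on every received word")
```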

18 Distance of a Code. Definition 3.8 The distance of a code C is defined as d_H(C) = min_{x_1, x_2 ∈ C, x_1 ≠ x_2} d_H(x_1, x_2). Notation: an (n, M, d)-code is a code of codeword length n, size M, and distance d. Example: consider the binary code C = {00000, 00111, 11111}. It is a binary (5, 3, 2)-code. Example: consider the ternary code C = {000000, , }. It is a ternary (6, 3, 3)-code.
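Computing d_H(C) by checking all pairs (Python sketch):

```python
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def code_distance(C):
    """Minimum pairwise Hamming distance over all distinct codeword pairs."""
    return min(hamming(c1, c2) for c1, c2 in combinations(C, 2))

C = ["00000", "00111", "11111"]
print(code_distance(C))  # 2  (achieved by 00111 vs 11111)
```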

19 Error Detection. Error detector: if the received word y ∈ C, let ĉ = y and claim no error; if y ∉ C, claim that errors happened. Theorem 3.9 Let C ⊆ F_q^n be an (n, M, d) code. The above error detector detects every pattern of up to d − 1 errors. Proof: 1. Every pattern of at most d − 1 errors is detectable. Since at least one and at most d − 1 errors happened, 0 < d_H(c, y) < d := d(C), and hence y ∉ C. The receiver will claim that errors happened. 2. There exists a pattern of d errors that is not detectable. By the definition of the code distance, there exist c_1, c_2 ∈ C s.t. d_H(c_1, c_2) = d. Suppose that c_1 is the transmitted codeword and the channel errors happen to be e = c_2 − c_1 (d errors happened). Then y = c_2 is received. As c_2 ∈ C, the detector claims that no error happened and sets ĉ = c_2.

20 Error Correction. Theorem 3.10 Let C ⊆ F_q^n be an (n, M, d) code. The minimum distance decoder can correct every pattern of up to t := ⌊(d − 1)/2⌋ errors. Side remark: t = ⌊(d − 1)/2⌋ = (d − 1)/2 if d is odd, and d/2 − 1 if d is even; equivalently, 2t + 1 ≤ d(C) ≤ 2t + 2. Example: the previous ternary (6, 3, 3) code is exactly 1-error-correcting.

21 Error Correction: Proof. Proof: Let D_MD be the minimum distance decoder. Let c and y be the transmitted codeword and received word respectively. Let ĉ = D_MD(y). 1. If d_H(y, c) ≤ t = ⌊(d − 1)/2⌋, then ĉ = c. Suppose that ĉ ≠ c. By the way the decoder D_MD is defined, d_H(y, ĉ) ≤ d_H(y, c) ≤ t. On the other hand, by the definition of the code distance, d ≤ d_H(c, ĉ) ≤ d_H(c, y) + d_H(y, ĉ) ≤ 2t ≤ d − 1, which is a contradiction. 2. There exists a pair (c, y) ∈ C × F_q^n s.t. d_H(y, c) = t + 1 and it may happen that D_MD(y) ≠ c. By the definition of the code distance, there exist c, c′ ∈ C s.t. d_H(c, c′) = d. W.l.o.g., assume the first d positions of c, c′ are different. Define y such that y_i = c′_i for i = 1, 2, …, t + 1 and y_i = c_i for i = t + 2, …, n. It is clear that d_H(y, c) = t + 1 and d_H(y, c′) = d − (t + 1) ≤ t + 1 = d_H(y, c). Hence, it may happen that ĉ = D_MD(y) ≠ c.

22 Section 4 Linear Codes. Definition. Decoding. Generator matrices. Parity-check matrices. Remark: Why linear codes? Low complexity in encoding, decoding, and distance computation. For the contents relevant to distance, Lin & Xing's book, Chapter 2, should be helpful.

23 Linear Codes: Definition. Block codes: all codewords are of the same length, C ⊆ F_q^n. Definition 4.1 (Linear Codes) A linear code is a code for which any linear combination of codewords is also a codeword. That is, let u, v ∈ C ⊆ F_q^n. Then λu + µv ∈ C for all λ, µ ∈ F_q. Examples of linear codes: C = {0000, 0011, 1100, 1111} ⊆ F_2^4. C = {v ∈ F_2^4 : wt(v) is even}. Examples of nonlinear codes: C = {0000, 1100, 1111}. C = {v ∈ F_3^4 : wt(v) is even}.

24 Basis. Definition 4.2 (Basis) Let B = {v_1, …, v_k} ⊆ F^n. It is a basis of a set C ⊆ F^n if it satisfies the following conditions. Linear independence: for all λ_1, …, λ_k ∈ F, if Σ_i λ_i v_i = 0, then necessarily λ_1 = ⋯ = λ_k = 0. Spanning property: for every c ∈ C, there exist λ_1, …, λ_k ∈ F s.t. c = Σ_i λ_i v_i. dim(C) = k: the number of vectors in a basis. The basis B is not unique in general, but the dimension is. Example: let C = {0000, 0011, 1100, 1111}. B_1 = {0011, 1100} is a basis for C. B_2 = {0011, 1111} is also a basis for C. dim(C) = 2.

25 Construct a Basis. Definition 4.3 (Linear Span) For any subset V ⊆ F^n, define ⟨V⟩ as the linear span of V: ⟨V⟩ = {Σ_i λ_i v_i : λ_i ∈ F, v_i ∈ V}. Construct a basis for a linear code C ⊆ F^n: 1. From C, take a nonzero vector, say v_1. 2. Take a nonzero vector, say v_2, from C \ ⟨{v_1}⟩. 3. Take a nonzero vector, say v_3, from C \ ⟨{v_1, v_2}⟩. 4. Continue this process until C \ ⟨{v_1, v_2, …, v_k}⟩ = ∅. 5. Set B = {v_1, v_2, …, v_k}.

26 The Size of a Linear Code. Theorem 4.4 Let C ⊆ F_q^n be a linear code with dim(C) = k; then |C| = q^k. Proof: 1. dim(C) = k ⇒ there is a basis B = {v_1, …, v_k} for C. 2. |C| ≤ q^k: the definition of the basis gives C = ⟨B⟩ = {Σ_{i=1}^k λ_i v_i : λ_i ∈ F_q}. There are q^k possible linear combinations. Hence |C| ≤ q^k (repetitions may exist). 3. |C| = q^k: it suffices to show that there is no repetition. Let λ^(1) ≠ λ^(2), and let x^(1) = Σ_{i=1}^k λ_i^(1) v_i and x^(2) = Σ_{i=1}^k λ_i^(2) v_i. Then x^(1) − x^(2) = Σ_{i=1}^k (λ_i^(1) − λ_i^(2)) v_i ≠ 0 by the linear independence of the v_i's and the fact that λ^(1) ≠ λ^(2). Hence there is no repetition in the set {Σ_{i=1}^k λ_i v_i : λ_i ∈ F_q}.

27 Generator Matrix. Definition 4.5 (Generator Matrix) A generator matrix G for a linear code C ⊆ F^n is a matrix whose rows form a basis for C. A given (n, k) linear code C ⊆ F^n can be defined via its generator matrix G ∈ F^{k×n}. The encoding function that maps information symbols to a codeword is given by E : F^k → C ⊆ F^n, s ↦ c = sG ∈ C. Remark: encoding of a linear code is efficient: a vector-matrix product. Encoding of a nonlinear code is via a look-up table and hence computationally less efficient.
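Encoding as a vector-matrix product over F_2 (Python sketch; the matrix G below is one assumed basis for the code {0000, 1010, 0101, 1111}):

```python
from itertools import product

G = [[1, 0, 1, 0],   # assumed basis: rows generate {0000, 1010, 0101, 1111}
     [0, 1, 0, 1]]

def encode(s, G, q=2):
    """c = s G, computed entrywise modulo q."""
    n = len(G[0])
    return tuple(sum(s[i] * G[i][j] for i in range(len(G))) % q for j in range(n))

codewords = {encode(s, G) for s in product(range(2), repeat=2)}
print(sorted(codewords))
# [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)]
```

Note that the 2 information bits yield |C| = 2² = 4 codewords, matching Theorem 4.4.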

28 Examples. Example 1: the (3, 1) repetition code: G = [1 1 1]. Example 2: the (7, 4) Hamming code: G = [ ]. Example 3: the generator matrix is not unique. G = [1 0 1 0; 0 1 0 1] and G′ = [1 0 1 0; 1 1 1 1] generate the same code C = {0000, 1010, 0101, 1111} ⊆ F_2^4.

29 Dual Code. Definition 4.6 (Dual Code) Let C ⊆ F_q^n be a non-empty code. Its dual code C⊥ is defined as C⊥ = {v ∈ F_q^n : vc^T = Σ_i v_i c_i = 0 for all c ∈ C}. Lemma 4.7 For any non-empty code C ⊆ F_q^n (linear or nonlinear), its dual code C⊥ is always linear. Proof: take arbitrary v_1, v_2 ∈ C⊥. Then for all λ_1, λ_2 ∈ F_q and for all c ∈ C, (λ_1 v_1 + λ_2 v_2) c^T = λ_1 v_1 c^T + λ_2 v_2 c^T = 0, which implies λ_1 v_1 + λ_2 v_2 ∈ C⊥.

30 Parity Check Matrix. Definition 4.8 (Parity-Check Matrix) A parity-check matrix H for a linear code C ⊆ F_q^n is a generator matrix for the dual code C⊥. For a code C [n, k], it holds that G ∈ F_q^{k×n}, H ∈ F_q^{(n−k)×n}, and H G^T = 0. Define a linear code via its parity-check matrix: C = {c ∈ F_q^n : cH^T = 0}.

31 Examples. The (3, 1) repetition code: G = [1 1 1] and H = [1 1 0; 1 0 1]. The (7, 4) Hamming code: G = [ ] and H = [ ]. A self-dual code is a code s.t. C = C⊥. Example: C = {0000, 1010, 0101, 1111}, where G = [1 0 1 0; 0 1 0 1] = H. Self-dual codes do not exist for the vector spaces R^n or C^n.

32 Relation Between G and H. Consider C [n, k] ⊆ F_q^n. Write G and H in systematic form: let G = [I_k A] ∈ F_q^{k×n}, where A ∈ F_q^{k×(n−k)}, and let H = [B I_{n−k}] ∈ F_q^{(n−k)×n}, where B ∈ F_q^{(n−k)×k}. Lemma 4.9 It holds that B = −A^T. Proof: HG^T = [B I_{n−k}] [I_k; A^T] = B I_k + I_{n−k} A^T = −A^T + A^T = 0 ∈ F_q^{(n−k)×k}. Systematic form: Why? Easy to compute H from G, and vice versa. How? Gauss-Jordan elimination.
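Lemma 4.9 can be checked numerically over F_2, where −A^T = A^T (Python sketch; the block A is an arbitrary illustrative choice):

```python
k, n = 2, 4
A = [[1, 1],
     [0, 1]]                                  # arbitrary k x (n-k) block for illustration
G = [[1, 0] + A[0],
     [0, 1] + A[1]]                           # G = [I_k | A]
H = [[A[0][0], A[1][0], 1, 0],
     [A[0][1], A[1][1], 0, 1]]                # H = [A^T | I_{n-k}]  (= [-A^T | I] over F_2)

# Verify H G^T = 0 (mod 2): every row of H is orthogonal to every row of G.
for h in H:
    for g in G:
        assert sum(hi * gi for hi, gi in zip(h, g)) % 2 == 0
print("HG^T = 0 (mod 2)")
```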

33 Hamming Weight. Hamming weight of x: wt(x) = |{i : x_i ≠ 0}| = d(x, 0). Theorem 4.10 For a linear code C, d_H(C) = min_{x ∈ C\{0}} wt(x). Proof: d_H(c_1, c_2) = wt(c_1 − c_2) = wt(c′) for some c′ ∈ C (by the definition of linear codes). Notation: C [n, k, d]: n: codeword length; k: dimension; d: distance.

34 Distance and Parity Check Matrix. Theorem 4.11 Let C be a linear code defined by the parity-check matrix H. Then d(C) = d is equivalent to: 1. Every d − 1 columns of H are linearly independent. 2. There exist d linearly dependent columns.

35 Two Confusing Concepts. Given a matrix H: spark: the minimum number of linearly dependent columns; column rank: the maximum number of linearly independent columns. Theorem 4.11 suggests that spark(H) = d(C). Example 4.12 1. H = [ ]: spark(H) = 3 and column rank(H) = 2. 2. H = [ ], an n × (n + 1) matrix: spark(H) = 2 and column rank(H) = n.

36 Application of Theorem 4.11: Binary Hamming Codes. Definition 4.13 (Binary Hamming Codes) The parity-check matrix of the binary Hamming code H [2^r − 1, 2^r − 1 − r, 3] consists of all the nonzero binary vectors (columns) of length r. (Also denoted by H_r.) Example 4.14 With one ordering of the columns, H_2 [3, 1, 3] is given by H = [1 0 1; 0 1 1], and H_3 [7, 4, 3] is given by H = [0 0 0 1 1 1 1; 0 1 1 0 0 1 1; 1 0 1 0 1 0 1].

37 The Distance of Binary Hamming Codes. Corollary 4.15 The distance of a binary Hamming code is 3, i.e., d(H_r) = 3. Proof: we apply Theorem 4.11. That there is no zero column implies that the minimum number of linearly dependent columns is at least 2, i.e., d(C) = spark(H) ≥ 2. In the binary field, that every two columns are distinct implies that every two columns are linearly independent. Hence, d(C) = spark(H) ≥ 3. It is easy to see that there exist three columns that are linearly dependent (for example, two columns and their sum, which also appears as a column). Therefore d(C) = 3. Corollary 4.16 Binary Hamming codes correct up to one error.

38 Theorem 4.11: Proof. Proof: Let h_i be the i-th column of H. For c ∈ C, let i_1, …, i_K be the locations where c_i ≠ 0. By the definition of the parity-check matrix, 0 = Σ_{i=1}^n c_i h_i = Σ_{k=1}^K c_{i_k} h_{i_k}. d(C) = d ⇒ Claim 2: d(C) = d implies that there exists c ∈ C s.t. wt(c) = d. That is, Σ_{k=1}^d c_{i_k} h_{i_k} = 0, i.e., h_{i_1}, …, h_{i_d} are linearly dependent. d(C) = d ⇒ Claim 1: suppose not, i.e., some h_{i_1}, …, h_{i_{d−1}} are linearly dependent: Σ_{k=1}^{d−1} x_{i_k} h_{i_k} = 0 with the x_{i_k} not all zero. Let x = [0 ⋯ x_{i_1} ⋯ x_{i_{d−1}} ⋯ 0]. Then wt(x) ≤ d − 1 and x ∈ C. Hence d(C) ≤ d − 1, a contradiction with d(C) = d. Claims 1 & 2 ⇒ d(C) = d: that every d − 1 columns are linearly independent implies there is no nonzero codeword of weight ≤ d − 1. That there exist d linearly dependent columns means the existence of a nonzero codeword of weight ≤ d. Hence d(C) = min_{x ∈ C\{0}} wt(x) = d.

39 Syndrome Vector. Let H ∈ F_q^{(n−k)×n} be a parity-check matrix of a linear code C [n, k] ⊆ F_q^n. Suppose that the received word is given by y ∈ F_q^n. Define the syndrome vector s := yH^T. It depends only on the error vector, not the transmitted codeword. In particular, let y = x + e, where x ∈ C is the transmitted codeword and e ∈ F_q^n is the error vector introduced by the channel. It holds that s = yH^T = (x + e) H^T = eH^T.

40 Syndrome Decoding. MD decoding: find ĉ = argmin_{c ∈ C} d_H(c, y). Syndrome decoding: 1. For the received word y, compute the syndrome vector s := yH^T. 2. Find the error vector e with the minimum weight (MD decoding): ê = argmin_e wt(e) s.t. s = eH^T. (1) 3. Decode y as ĉ = y − ê. Comments: in general, it is computationally challenging to solve (1). However, all practical codes have particular structures in the parity-check matrix so that the decoding problem can be solved efficiently.

41 Decoding of Binary Hamming Codes. Take H_3 (Definition 4.13) as an example. Assume that y = [ ]. Find the MD decoded codeword ĉ ∈ C. Since d(H_3) = 3, the code corrects up to 1 error. For any e s.t. wt(e) = 1, let e_i ≠ 0 for some i ∈ [n]. Then s = eH^T = e_i h_i^T = h_i^T, i.e., the syndrome equals the transpose of the i-th column of H. In the example, s = [001], e = [ ], and ĉ = [ ].
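A sketch of single-error syndrome decoding in Python. The column ordering of H_r below (column j is the binary expansion of j) is our assumption; the lecture's H_3 may order the columns differently, which only permutes the code coordinates:

```python
r = 3
n = 2**r - 1
# Column j of H (1 <= j <= n) holds the r binary digits of j.
H_cols = [[(j >> b) & 1 for b in range(r)] for j in range(1, n + 1)]

def syndrome(y):
    """s = y H^T over F_2, one parity bit per row of H."""
    return [sum(y[j] * H_cols[j][b] for j in range(n)) % 2 for b in range(r)]

def correct(y):
    """With this column ordering the syndrome, read as a binary number,
    is exactly the index (1-based) of the flipped bit."""
    s = syndrome(y)
    pos = sum(s[b] << b for b in range(r))
    if pos:
        y = y[:]
        y[pos - 1] ^= 1
    return y

c = [0] * n              # the zero word is always a codeword
y = c[:]; y[4] ^= 1      # inject a single error at position 5
assert correct(y) == c
print("single error corrected")
```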

42 Section 5 Coding Bounds. Sphere packing (Hamming) bound. Sphere covering (Gilbert-Varshamov) bound. Singleton bound and MDS codes. The lectures will only cover the sphere packing, sphere covering, and Singleton bounds and relevant contents. Reference: Lin & Xing's book, Chapter 5.

43 Coding Bounds: Motivation. Consider the Hamming code H_r: r = 2: [3, 1, 3]; r = 3: [7, 4, 3]; r = 4: [15, 11, 3]. Questions: Can we do better? What is the best that we can do? It is possible to construct linear codes with parameters [7, 4, 4] over F_8 and [15, 11, 5] over F_16.

44 Hamming Bound. Theorem 5.1 (Hamming bound, sphere-packing bound) For a code of length n and distance d, the number of codewords is upper bounded by M ≤ q^n / (Σ_{i=0}^t (n choose i) (q − 1)^i), where t := ⌊(d − 1)/2⌋.

45 Examples. Definition 5.2 (Perfect Codes) A perfect code is a code that attains the Hamming bound. The binary Hamming code H_r [2^r − 1, 2^r − 1 − r, 3] is a perfect code: d = 3 ⇒ t = ⌊(d − 1)/2⌋ = 1. Ball volume: Σ_{i=0}^t (n choose i) (q − 1)^i = 1 + (2^r − 1) = 2^r. Hamming bound: q^n / Σ_{i=0}^t (n choose i) (q − 1)^i = 2^{2^r − 1} / 2^r = 2^{2^r − 1 − r} = 2^k. Perfect codes are rare (binary Hamming codes & Golay codes).
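The perfect-code computation can be checked for small r (Python, standard library only):

```python
from math import comb

def hamming_bound(n, d, q=2):
    """Sphere-packing bound q^n / Vol(t); integer division is exact for perfect codes."""
    t = (d - 1) // 2
    ball = sum(comb(n, i) * (q - 1)**i for i in range(t + 1))
    return q**n // ball

for r in (2, 3, 4):
    n = 2**r - 1
    k = n - r
    assert hamming_bound(n, 3) == 2**k   # bound met with equality: H_r is perfect
print("binary Hamming codes attain the sphere-packing bound")
```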

46 Hamming Bound: Proof (1). Define a ball in F_q^n centered at x ∈ F_q^n with radius t by B(x, t) = {y ∈ F_q^n : d(x, y) ≤ t}. Step one: the balls B(c, t), c ∈ C, are disjoint. For all c ≠ c′ ∈ C, it holds that B(c, t) ∩ B(c′, t) = ∅. For y ∈ B(c, t), then y ∉ B(c′, t) for all c′ ≠ c. By the triangle inequality, d ≤ d_H(c, c′) ≤ d_H(c, y) + d_H(y, c′). Then d_H(y, c′) ≥ d − d_H(c, y) ≥ d − t ≥ d − (d − 1)/2 = (d + 1)/2 > (d − 1)/2 ≥ t, which implies y ∉ B(c′, t).

47 Hamming Bound: Proof (2). Step two: consider the union of these balls. Clearly ∪_{c ∈ C} B(c, t) ⊆ F_q^n. Hence, Vol(∪_{c ∈ C} B(c, t)) = Σ_{c ∈ C} Vol(B(c, t)) ≤ Vol(F_q^n) = q^n, where the first equality holds because the balls do not overlap. The volume of each ball is Vol(B(c, t)) = Σ_{i=0}^t (n choose i) (q − 1)^i. Therefore M · Vol(B(c, t)) ≤ q^n, i.e., M ≤ q^n / Σ_{i=0}^t (n choose i) (q − 1)^i.

48 Gilbert-Varshamov Bound. Theorem 5.3 (Gilbert-Varshamov bound, sphere covering bound) For a given code length n and distance d, there exists a code such that ⌈q^n / Vol(d − 1)⌉ ≤ M, where Vol(d − 1) := Σ_{i=0}^{d−1} (n choose i) (q − 1)^i. Comment: different from the sphere packing bound, which holds for all codes, the sphere covering bound claims the existence of a code. That means some badly designed codes may not satisfy this bound.

49 Gilbert-Varshamov Bound: Proof. It is proved by construction. Let M_0 = ⌈q^n / Vol(d − 1)⌉ > 1. It suffices to show that there exists a code with M_0 codewords. Take an arbitrary word c_1 ∈ F_q^n. Since M_0 > 1, i.e., q^n > Vol(d − 1), it holds that F_q^n \ B(c_1, d − 1) ≠ ∅. Take an arbitrary word c_2 ∈ F_q^n \ B(c_1, d − 1). It is clear that d(c_1, c_2) ≥ d (as c_2 ∉ B(c_1, d − 1)). Continue this process inductively. Suppose codewords c_1, …, c_{M_0 − 1} are obtained in this way. Since Vol(∪_{i=1}^{M_0 − 1} B(c_i, d − 1)) ≤ (M_0 − 1) Vol(d − 1) < q^n, it holds that F_q^n \ ∪_{i=1}^{M_0 − 1} B(c_i, d − 1) ≠ ∅. Take an arbitrary c_{M_0} from this set. Let C = {c_1, …, c_{M_0}}. By construction, d(c, c′) > d − 1 for all c ≠ c′ ∈ C. Hence d(C) ≥ d.
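The greedy construction in the proof can be run directly for small parameters (Python sketch):

```python
from itertools import product
from math import comb, ceil

q, n, d = 2, 7, 3
vol = sum(comb(n, i) * (q - 1)**i for i in range(d))      # Vol(d-1)
M0 = ceil(q**n / vol)

def dH(x, y):
    return sum(a != b for a, b in zip(x, y))

remaining = set(product(range(q), repeat=n))
code = []
while remaining:
    c = min(remaining)        # any choice works; min() just makes the run deterministic
    code.append(c)
    # remove the whole ball B(c, d-1): at most Vol(d-1) words per step
    remaining -= {w for w in remaining if dH(w, c) <= d - 1}

assert len(code) >= M0        # at least ceil(q^n / Vol(d-1)) codewords survive
assert all(dH(a, b) >= d for a in code for b in code if a != b)
print(len(code), ">=", M0)
```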

50 Illustration for Sphere Packing and Covering. (Figures illustrating sphere packing and sphere covering; not reproduced.)

51 Singleton Bound and MDS. Theorem 5.4 (Singleton Bound) The size of any code C ⊆ F_q^n with M codewords and distance d satisfies M ≤ q^{n−d+1}. In particular, if the code is linear and M = q^k, then d ≤ n − k + 1. Definition 5.5 (MDS) Codes that attain the Singleton bound are maximum distance separable (MDS). Binary Hamming codes H_r [2^r − 1, 2^r − 1 − r, 3] are not MDS in general: n − k + 1 = r + 1 > 3 for all r > 2.

52 Singleton Bound: Proof. Proof of the general case: let C be of length n and distance d. For c ∈ C, let c_{1:n−d+1} ∈ F^{n−d+1} be the vector containing the first n − d + 1 entries of c, and c_{n−d+2:n} ∈ F^{d−1} be the vector composed of the last d − 1 elements of c. For c ≠ c′ ∈ C, d ≤ d_H(c, c′) = d_H(c_{1:n−d+1}, c′_{1:n−d+1}) + d_H(c_{n−d+2:n}, c′_{n−d+2:n}). But d_H(c_{n−d+2:n}, c′_{n−d+2:n}) ≤ d − 1. Hence, d_H(c_{1:n−d+1}, c′_{1:n−d+1}) ≥ d − (d − 1) = 1. The truncated codewords are therefore all distinct. Hence, M ≤ q^{n−d+1}. Proof for linear codes: note that the parity-check matrix H ∈ F^{(n−k)×n} contains n − k rows. Every n − k + 1 columns must be linearly dependent. By Theorem 4.11, d ≤ n − k + 1.

53 Dual of MDS Codes. Theorem 5.6 If a linear code C is MDS, then its dual code C⊥ is also MDS. Let the linear code C [n, k] be MDS. According to Theorem 5.6, one has: C: parity-check matrix H ∈ F^{(n−k)×n}, generator matrix G ∈ F^{k×n}, parameters (n, k, n − k + 1). C⊥: parity-check matrix G ∈ F^{k×n}, generator matrix H ∈ F^{(n−k)×n}, parameters (n, n − k, k + 1). Key for the proof: if C [n, k] is MDS, then every set of n − k columns of H is linearly independent (by Theorem 4.11 with d = n − k + 1).

54 Dual of MDS Codes (Theorem 5.6): Proof. Suppose d(C⊥) < k + 1. Then there exists a nonzero codeword c ∈ C⊥ with at most k nonzero entries and at least n − k zeros. Since permuting the coordinates preserves the codeword weights (i.e., the distance), w.l.o.g. assume that the last n − k coordinates of c are zeros. Write the generator matrix of C⊥ (the parity-check matrix of C) as H = [A, H′], where A ∈ F^{(n−k)×k} and H′ ∈ F^{(n−k)×(n−k)}. By the definition of the generator matrix, there exists s ∈ F^{n−k} such that c = sH. As C is MDS, by Theorem 4.11 the columns of H′ are linearly independent; that is, H′ is invertible. That the last n − k coordinates of c are zeros implies that s = c_{k+1:n} (H′)^{−1} = 0. But s = 0 implies c = sH = 0, which contradicts the assumption that c ≠ 0. Hence, d(C⊥) ≥ k + 1. By the Singleton bound, d(C⊥) = k + 1.

55 Section 6 RS & BCH Codes. Reed-Solomon codes: definition and properties; decoding. Cyclic and BCH codes. The contents in this section are a significant re-organization and condensation of the material of many sources, including Lin & Xing's book, Chapters 7 and 8, and Roth's book, Chapters 5, 6 and 8.

56 Reed-Solomon Codes. Our heroes: Irving S. Reed and Gustave Solomon. Used in: magnetic recording (all computer hard disks use RS codes); digital versatile disks (CDs, DVDs, etc.); optical fiber networks (ITU-TG.795); ADSL transceivers (ITU-TG.992.1); wireless telephony (3G systems, 4G systems); digital satellite broadcast (ETS S, ETS ); deep space exploration (all NASA probes).

57 RS Codes: Evaluation Mapping. Definition 6.1 (Evaluation Mapping) Let F_q be a finite field. Let n ≤ q − 1 (typically n = q − 1). Let A = {α_1, …, α_n} ⊆ F_q. For any polynomial f(x) ∈ F_q[x], define the evaluation mapping eval(f(x)) that maps f to a vector c ∈ (F_q)^n: f ↦ c = [c_1, …, c_n], where c_i = f(α_i). Example 6.2 F_7 = {0, 1, 2, 3, 4, 5, 6}. Choose the primitive element α = 3. Let A = {1, α, …, α^5} = {1, 3, 2, 6, 4, 5}. f(x) = 2x + 1: c = eval(f) = [3, 0, 5, 6, 2, 4]. f(x) = 3x² + x + 2: c = eval(f) = [6, 4, 2, 4, 5, 5].
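Example 6.2 can be reproduced in a few lines (Python; the helper name evaluate is ours):

```python
q = 7
alpha = 3                                   # primitive element of F_7
A = [pow(alpha, i, q) for i in range(6)]    # defining set {1, 3, 2, 6, 4, 5}

def evaluate(coeffs, A, q):
    """eval(f): f given by coefficients [a0, a1, ...], lowest degree first."""
    return [sum(a * pow(x, i, q) for i, a in enumerate(coeffs)) % q for x in A]

print(A)                           # [1, 3, 2, 6, 4, 5]
print(evaluate([1, 2], A, q))      # f(x) = 2x + 1       -> [3, 0, 5, 6, 2, 4]
print(evaluate([2, 1, 3], A, q))   # f(x) = 3x^2 + x + 2 -> [6, 4, 2, 4, 5, 5]
```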

58 RS Codes: Definition. Definition 6.3 (Reed-Solomon Codes) Given A = {α_1, …, α_n} ⊆ F_q, an [n, k] q-ary RS code is C = {eval(f) : 0 ≤ deg f ≤ k − 1}. The set A is called a defining set of points of C. A common choice of the defining set of points of C is A = {1, α, …, α^{q−2}}, where α is a primitive element in F_q. In this case, n = q − 1.

59 RS Codes: Properties. Theorem 6.4 1. RS codes are linear codes. 2. RS codes are MDS, i.e., the distance of the RS code is d = n − k + 1. Proof: 1. Let c_1 = eval(f_1) and c_2 = eval(f_2), where deg f_1 ≤ k − 1 and deg f_2 ≤ k − 1. Then αc_1 + βc_2 = eval(g) with g = αf_1 + βf_2. Since deg g ≤ k − 1, eval(g) ∈ C. 2. A nonzero polynomial of degree at most k − 1 can have at most k − 1 zeros. Hence, every c ∈ C s.t. c ≠ 0, c = eval(f), has weight at least n − (k − 1) = n − k + 1.

60 RS Codes: Conventional Definition. Theorem 6.5 Let the defining set of points be {1, α, …, α^{n−1}} with order(α) = n (typically n = q − 1). The generated RS code has generator matrix G with rows (1, 1, …, 1), (1, α, …, α^{n−1}), …, (1, α^{k−1}, …, α^{(k−1)(n−1)}), i.e., G_{l,i} = α^{(l−1)(i−1)} for l = 1, …, k, and parity-check matrix H with rows (1, α, …, α^{n−1}), (1, α², …, α^{2(n−1)}), …, (1, α^{n−k}, …, α^{(n−k)(n−1)}), i.e., H_{j,i} = α^{j(i−1)} for j = 1, …, n − k.
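The orthogonality GH^T = 0 of Theorem 6.5 can be checked numerically over the prime field F_7, where arithmetic mod q suffices (Python sketch):

```python
q, n, k = 7, 6, 3
alpha = 3                                  # primitive in F_7, so order(alpha) = 6 = n

# G_{l,i} = alpha^{(l-1)(i-1)}, H_{j,i} = alpha^{j(i-1)} (0-based indices below)
G = [[pow(alpha, l * i, q) for i in range(n)] for l in range(k)]
H = [[pow(alpha, j * i, q) for i in range(n)] for j in range(1, n - k + 1)]

# Each inner product is a geometric sum of alpha^{(l+j)i} with 0 < l+j < n, hence 0.
for g in G:
    for h in H:
        assert sum(gi * hi for gi, hi in zip(g, h)) % q == 0
print("G H^T = 0 over F_7")
```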

61 Generator Matrix: Justification. For any c ∈ C, there exists a polynomial f(x) = a_0 + a_1 x + ⋯ + a_{k−1} x^{k−1} s.t. c = eval(f) = [f(1), f(α), …, f(α^{n−1})] ∈ C. Note that for all i ∈ [n], c_i = f(α^{i−1}) = Σ_{l=0}^{k−1} a_l (α^{i−1})^l = [a_0, …, a_{k−1}] G_i, where G_i is the i-th column of the G matrix. One has C = {sG : s ∈ F_q^k}, and G is a generator matrix of C. Remark: in the definition of the generator matrix (Def. 4.5), the rows of G are required to be linearly independent. We shall prove this later.

62 Parity-Check Matrix: Justification. Lemma 6.6 The G and H matrices defined in Theorem 6.5 satisfy GH^T = 0. Proof: Let A := GH^T ∈ F_q^{k×(n−k)}. For i ∈ [k] and j ∈ [n − k], it holds that A_{i,j} = Σ_{l=1}^n α^{(l−1)(i−1)} α^{(l−1)j} = Σ_{l=1}^n α^{(i+j−1)(l−1)} (a) = (α^{(i+j−1)n} − 1) / (α^{i+j−1} − 1) (b) = 0, where (a) is the geometric-series sum, valid since 0 < i + j − 1 < n and hence α^{i+j−1} ≠ 1, and (b) holds because α^n = 1, so the numerator α^{(i+j−1)n} − 1 = 0.

63 Row Rank of the G/H Matrix. Theorem 6.7 The rows of the G/H matrix in Theorem 6.5 are linearly independent. Proof: it is sufficient to show that any k-column sub-matrix of G (respectively, any (n − k)-column sub-matrix of H) has full rank. Note that a k-column sub-matrix of G, with columns indexed by i_1, …, i_k, has (l + 1)-th row (α^{l i_1}, α^{l i_2}, …, α^{l i_k}) for l = 0, …, k − 1, which is a Vandermonde matrix in the distinct elements α^{i_1}, …, α^{i_k} (defined and analysed later). A Vandermonde matrix with distinct nodes has full rank. Hence the rows of G are linearly independent.

64 Vandermonde Matrix. Definition 6.8 (Vandermonde Matrix) A Vandermonde matrix V ∈ F^{n×n} is of the form V with rows (1, 1, …, 1), (α_1, α_2, …, α_n), …, (α_1^{n−1}, α_2^{n−1}, …, α_n^{n−1}). Theorem 6.9 The determinant of a Vandermonde matrix V ∈ F^{n×n} is |V| = ∏_{i<j} (α_j − α_i). As a result, if α_i ≠ α_j for all 1 ≤ i ≠ j ≤ n, then |V| ≠ 0 and V is of full rank.

65 Determinant: A Recap. Definition 6.10 (Determinant) For A ∈ F^{n×n}, its determinant |A| is computed via |A| = Σ_{j=1}^n (−1)^{i+j} a_{i,j} |M_{i,j}| (for any fixed row i), where M_{i,j} is the minor matrix obtained by deleting row i and column j from A. Lemma 6.11 1. |AB| = |A| |B|. 2. If B results from A by adding a multiple of one row/column to another row/column, then |B| = |A|. 3. |A| ≠ 0 iff A is of full rank.

66 Theorem 6.9: Proof (1). We prove Theorem 6.9 by induction. Recall that V_n has rows (1, …, 1), (α_1, …, α_n), …, (α_1^{n−1}, …, α_n^{n−1}). Subtract the first column from every other column: (Ṽ_n)_{:,j} = (V_n)_{:,j} − (V_n)_{:,1} for j = 2, …, n. The resulting matrix Ṽ_n has first row (1, 0, …, 0), and for j > 1 its column j has entries α_j^m − α_1^m, m = 1, …, n − 1. These operations do not change the determinant, so |Ṽ_n| = |V_n|.

67 Theorem 6.9: Proof (2). Next subtract α_1 times each row from the row below it, working from the bottom up: (Ṽ_n)_{m,:} ← (Ṽ_n)_{m,:} − α_1 (Ṽ_n)_{m−1,:} for m = n, …, 2. In column j > 1, the entry in row m becomes α_j^{m−1} − α_1 α_j^{m−2} = (α_j − α_1) α_j^{m−2}. Expanding along the first column and factoring (α_j − α_1) out of each column j > 1 gives |V_n| = (∏_{j>1} (α_j − α_1)) |V_{n−1}(α_2, …, α_n)|. Iterating yields |V_n| = ∏_{i<j} (α_j − α_i).

68 Decoding with Known Error Locations. Let e be the error vector. Let I = {i : e_i ≠ 0} be the set of error locations. e_I, H_I: the sub-vector of e and the sub-matrix of H (columns indexed by I), respectively. If we knew the error locations I: solve H_I e_I^T = s^T, i.e., e_I^T = H_I^† s^T. Complexity of the pseudo-inverse H_I^†: O(d³).

69 Erasure Correction. Recall the erasure channel model. Suppose that c ∈ C was transmitted. Receive r = [c_1 ⋯ c_{i−1} ? c_{i+1} ⋯ c_n] (at most d − 1 symbols erased). Decoding: set the missing symbols to zero, i.e., r_I = 0. Then r = c + e, where e vanishes outside I (e_{I^c} = 0). s^T = Hr^T = He^T = H_I e_I^T. With erasure locations i_1, …, i_s, this is the s × s linear system with coefficient matrix whose (j, m) entry is α^{j(i_m − 1)}, unknowns [e_1, …, e_s]^T (the errors at the erased positions), and right-hand side [s_1, …, s_s]^T.
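Since RS codes are MDS, any k intact coordinates determine f, so erasures can also be corrected by Lagrange interpolation instead of the matrix-inverse route. A Python sketch over the prime field F_7 (a stand-in for the F_8 example of the next computation; helper names are ours):

```python
q, n, k = 7, 6, 3
alpha = 3
A = [pow(alpha, i, q) for i in range(n)]     # evaluation points {1, 3, 2, 6, 4, 5}

def evaluate(coeffs, xs):
    return [sum(a * pow(x, i, q) for i, a in enumerate(coeffs)) % q for x in xs]

def interpolate(pts):
    """Lagrange interpolation mod q from points (x, y); returns values at all of A."""
    def L(x):
        total = 0
        for xi, yi in pts:
            num, den = 1, 1
            for xj, _ in pts:
                if xj != xi:
                    num = num * (x - xj) % q
                    den = den * (xi - xj) % q
            total += yi * num * pow(den, q - 2, q)   # Fermat inverse in F_q
        return total % q
    return [L(x) for x in A]

c = evaluate([2, 0, 1], A)                # codeword for f(x) = x^2 + 2, deg f <= k-1
received = list(c)
received[1] = received[4] = None          # two erasures (up to d - 1 = 3 are allowed)
known = [(A[i], received[i]) for i in range(n) if received[i] is not None][:k]
assert interpolate(known) == c            # any k intact symbols recover the codeword
print("erasures corrected by interpolation")
```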

70 A Specific Example
Consider a [7, 4, 4] RS code over F_8 = F_2[x]/(x^3 + x + 1). Let α be a primitive element (a root of f(x) = x^3 + x + 1), so α^3 = α + 1 and F_8 = {0, 1, α, α^2, α^3, α^4, α^5, α^6}.
Encoded message m(x) = αx^3 + αx^2 + x. c = eval(m) = [1 α^5 α 1 α^5 α^6 α^5]. Received: r = [1 α^5 α 1 ? ? α^5].
Since cH^T = 0 with
H = [1 α α^2 α^3 α^4 α^5 α^6; 1 α^2 α^4 α^6 α α^3 α^5; 1 α^3 α^6 α^2 α^5 α α^4],
setting the erased symbols to zero and moving the unknowns to the left-hand side gives
[α^4 α^5; α α^3; α^5 α] · [c_5; c_6] = [α; 1; α],
and a matrix inverse (of a 2×2 subsystem) yields [c_5; c_6] = [α^5; α^6].
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-16
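The evaluation step of this example can be replayed directly with a few lines of field arithmetic. A sketch (helper names are ours): represent F_8 elements as integers 0..7, where bit i is the coefficient of x^i, so α = 2 and, e.g., α^3 = 3 = α + 1.

```python
# Verify the evaluation codeword of the [7,4,4] RS example over
# F_8 = F_2[x]/(x^3 + x + 1), elements encoded as ints 0..7.
ALPHA = 2

def gf_mul(a, b):
    # carry-less multiply, then reduce modulo x^3 + x + 1 (0b1011)
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for i in (5, 4, 3):
        if (p >> i) & 1:
            p ^= 0b1011 << (i - 3)
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def evaluate(coeffs, x):
    # coeffs[i] is the coefficient of z^i; addition in F_8 is XOR
    acc = 0
    for c in reversed(coeffs):          # Horner's rule
        acc = gf_mul(acc, x) ^ c
    return acc

# m(z) = alpha*z^3 + alpha*z^2 + z, evaluated at 1, alpha, ..., alpha^6
m = [0, 1, ALPHA, ALPHA]
c = [evaluate(m, gf_pow(ALPHA, j)) for j in range(7)]
print(c)   # [1, 7, 2, 1, 7, 5, 7] = [1, a^5, a, 1, a^5, a^6, a^5]
```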

71 Error Correction
In the previous example: the error (erasure) locations were known, and the error values were found via a matrix inverse.
For error correction we must also find the error locations. Exhaustive search has complexity (n choose t) = O(n^t); in practice there are methods to find the error locations efficiently.
Correcting errors with given error locations: there are likewise methods that avoid the matrix inverse.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-17

72 Efficient Error Correction: Definitions
Definitions 6.12
Syndrome polynomial: S(z) = Σ_{j=0}^{n−k−1} s_j z^j, where s = rH^T = eH^T.
Error locator polynomial (information about error locations): L(z) = Π_{i∈I} (1 − α^i z).
Error evaluator polynomial (information about error values): E(z) = L(z) Σ_{i∈I} e_i α^i / (1 − α^i z) = Σ_{i∈I} e_i α^i Π_{j∈I\{i}} (1 − α^j z).
Remark: The receiver can compute the syndrome vector and the syndrome polynomial easily.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-18

73 Information Encoded in L(z) and E(z)
If we know L(z), we can find the error locations: L(α^{−k}) = 0 if k ∈ I, and L(α^{−k}) ≠ 0 if k ∉ I. The error locations can be found by exhaustively computing L(α^{−k}) for 0 ≤ k ≤ n − 1.
E(z) helps find the error values e_i, i ∈ I, without a matrix inverse: for k ∈ I,
E(α^{−k}) = e_k α^k Π_{j∈I\{k}} (1 − α^j α^{−k}) ≠ 0,
so e_k = E(α^{−k}) / (α^k Π_{j∈I\{k}} (1 − α^j α^{−k})), or e_k = E(α^{−k}) / L'(α^{−k}), where L'(z) = (d/dz) L(z) is the derivative of L(z).
Complexity is reduced from O(n^3) to O(n^2).
Decoding strategy: from S(z), find L(z) and E(z).
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-19

74 An Example of L(z) and E(z)
I = {1, 2, 5}.
L(z) = (1 − αz)(1 − α^2 z)(1 − α^5 z); L(z) = 0 if z = α^{−1}, α^{−2}, or α^{−5}, and L(z) ≠ 0 otherwise.
E(z) = e_1 α (1 − α^2 z)(1 − α^5 z) [term T1] + e_2 α^2 (1 − αz)(1 − α^5 z) [term T2] + e_5 α^5 (1 − αz)(1 − α^2 z) [term T3].
At z = α^{−1}: T1 ≠ 0, T2 = 0, T3 = 0, so E(α^{−1}) ≠ 0.
At z = α^{−2}: T1 = 0, T2 ≠ 0, T3 = 0, so E(α^{−2}) ≠ 0.
At z = α^{−5}: T1 = 0, T2 = 0, T3 ≠ 0, so E(α^{−5}) ≠ 0.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-20
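This root pattern can be checked concretely over F_8 (ints 0..7, α = 2, as before). A sketch with made-up error values, used only to build E(z); helper names are ours:

```python
# Build L(z) and E(z) for I = {1, 2, 5} over F_8 and check that L vanishes
# exactly at z = alpha^{-k} for k in I, while E does not vanish there.
ALPHA = 2

def gf_mul(a, b):
    # multiply in F_8 = F_2[x]/(x^3 + x + 1); elements are ints 0..7
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for i in (5, 4, 3):
        if (p >> i) & 1:
            p ^= 0b1011 << (i - 3)
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 6)    # a^7 = 1 for every nonzero a in F_8

def poly_mul(p, q):
    # coefficient lists, ascending powers; addition in F_8 is XOR
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def evaluate(p, x):
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x) ^ c
    return acc

I = [1, 2, 5]
e = {1: 1, 2: 3, 5: 6}     # hypothetical error values, for illustration only

L = [1]
for i in I:                 # factor 1 - alpha^i z (minus = plus in char 2)
    L = poly_mul(L, [1, gf_pow(ALPHA, i)])

E = [0] * len(I)
for i in I:
    term = [gf_mul(e[i], gf_pow(ALPHA, i))]
    for j in I:
        if j != i:
            term = poly_mul(term, [1, gf_pow(ALPHA, j)])
    E = [a ^ b for a, b in zip(E, term)]

roots = [k for k in range(7) if evaluate(L, gf_inv(gf_pow(ALPHA, k))) == 0]
print(roots)    # [1, 2, 5] -- exactly the error locations
```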

75 Properties of L(z) and E(z)
Let t = ⌊(d − 1)/2⌋.
Theorem 6.13
1. gcd(L(z), E(z)) = 1.
2. The key equation: E(z) ≡ L(z) S(z) mod z^{d−1}.
3. (Uniqueness) Let a(z), b(z) ∈ F_q[z] be such that deg(a(z)) ≤ t − 1, deg(b(z)) ≤ t, gcd(a(z), b(z)) = 1 and a(z) ≡ S(z) b(z) mod z^{d−1}. Then a(z) and b(z) are unique up to a constant. That is, we can write a(z) = cE(z), b(z) = cL(z), where E(z) and L(z) are generated from an error vector e s.t. wt(e) ≤ t.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-21

76 Decoding Process
1. Compute the syndrome vector s and the syndrome polynomial S(z).
2. Apply the Euclidean algorithm to z^{d−1} and S(z), i.e.,
z^{d−1} = q_1(z) S(z) + r_1(z), S(z) = q_2(z) r_1(z) + r_2(z), …, r_{l−2}(z) = q_l(z) r_{l−1}(z) + r_l(z),
stopping when deg(r_l(z)) ≤ t − 1.
3. By Bézout's identity (Lem. 1.5), one has r_l(z) = a(z) S(z) + b(z) z^{d−1} ≡ a(z) S(z) mod z^{d−1}.
4. Let c = a(0), the constant coefficient of a(z), so that c^{−1} a(z) has constant term 1 (matching L(0) = 1). By Theorem 6.13, set L(z) = c^{−1} a(z) and E(z) = c^{−1} r_l(z).
5. Find the error locations i ∈ I from L(z) and the error values e_i from E(z). ĉ = y − e.
The complexity is greatly reduced!
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-22

77 Theorem 6.13, Part 1: Proof
Proof: L(z) has roots α^{−i}, i ∈ I, and none of them is a root of E(z) (as computed on slide 73, E(α^{−i}) ≠ 0). Since L(z) splits into distinct linear factors, L(z) and E(z) do not share any factor, so gcd(L(z), E(z)) = 1.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-23

78 Theorem 6.13, Part 2: Proof
Theorem 6.13 part 2 is a direct consequence of the lemma below.
Lemma 6.14 S(z) ≡ Σ_{i∈I} e_i α^i / (1 − α^i z) mod z^{d−1}.
Proof: As s = rH^T = eH^T, it follows that s_j = Σ_{i=0}^{n−1} e_i α^{i(j+1)} = Σ_{i∈I} e_i α^{i(j+1)}, 0 ≤ j ≤ d − 2. By the definition of S(z), it holds that
S(z) = Σ_{j=0}^{d−2} s_j z^j = Σ_{j=0}^{d−2} Σ_{i∈I} e_i α^{i(j+1)} z^j = Σ_{i∈I} e_i α^i Σ_{j=0}^{d−2} (α^i z)^j ≡ Σ_{i∈I} e_i α^i Σ_{j≥0} (α^i z)^j mod z^{d−1} = Σ_{i∈I} e_i α^i · 1/(1 − α^i z).
Multiplying both sides by L(z) then gives the key equation E(z) ≡ L(z) S(z) mod z^{d−1}.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-24
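The truncated-geometric-series step Σ_{j=0}^{d−2} (α^i z)^j ≡ 1/(1 − α^i z) mod z^{d−1} can be verified mechanically: multiplying the truncated series by (1 − az) should leave 1 modulo z^{d−1}. A sketch over F_8 (ints 0..7, α = 2) with d − 1 = 4:

```python
# For every nonzero a in F_8, check that (1 + a z)(1 + a z + ... + (a z)^3)
# equals 1 modulo z^4 (minus = plus in characteristic 2).
def gf_mul(a, b):
    # multiply in F_8 = F_2[x]/(x^3 + x + 1); elements are ints 0..7
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for i in (5, 4, 3):
        if (p >> i) & 1:
            p ^= 0b1011 << (i - 3)
    return p

D = 4                                    # d - 1
truncs = []
for a in range(1, 8):                    # every nonzero a in F_8
    series = [1]
    for _ in range(D - 1):
        series.append(gf_mul(series[-1], a))      # [1, a, a^2, a^3]
    prod = [0] * (len(series) + 1)
    for i, c in enumerate(series):                # multiply by 1 + a z
        prod[i] ^= c
        prod[i + 1] ^= gf_mul(a, c)
    truncs.append(prod[:D])                       # keep coefficients mod z^4
print(all(t == [1, 0, 0, 0] for t in truncs))     # True
```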

79 Theorem 6.13, Part 3: Proof
Proof: To prove uniqueness, assume that there exist two pairs (E_1(z), L_1(z)) ≠ (E_2(z), L_2(z)) satisfying the conditions, i.e. E_1(z) ≡ S(z) L_1(z) mod z^{d−1} and E_2(z) ≡ S(z) L_2(z) mod z^{d−1}. It follows that
E_1(z) L_2(z) ≡ S(z) L_1(z) L_2(z) ≡ E_2(z) L_1(z) mod z^{d−1}. (2)
By assumption, deg(E_i(z)) ≤ t − 1 and deg(L_i(z)) ≤ t, so deg(E_1(z) L_2(z)) ≤ 2t − 1 ≤ d − 2, and the same is true for E_2(z) L_1(z). As a result, (2) becomes the exact equality E_1(z) L_2(z) = E_2(z) L_1(z).
Note gcd(E_1(z), L_1(z)) = 1; by Lemma 1.12, E_1(z) | E_2(z) and L_1(z) | L_2(z). Similarly, from gcd(E_2(z), L_2(z)) = 1, E_2(z) | E_1(z) and L_2(z) | L_1(z). Hence E_2(z) = cE_1(z) and L_2(z) = cL_1(z) for some nonzero c ∈ F_q.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-25

80 An Example
Example: Consider the [7, 3] RS code over F_8, with α a primitive element satisfying α^3 = α + 1 as before. Let the received signal be y = [α^3, α, 1, α^2, 0, α^3, 1]. Find ĉ.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-26

81 Solutions to the Example
1. Parameters: n − k = 4, d = 5 (t = 2), and H ∈ F_8^{4×7}:
H = [1 α α^2 α^3 α^4 α^5 α^6; 1 α^2 α^4 α^6 α α^3 α^5; 1 α^3 α^6 α^2 α^5 α α^4; 1 α^4 α α^5 α^2 α^6 α^3].
2. Syndromes: s = yH^T = [α^3, α^4, α^4, 0]. S(z) = α^4 z^2 + α^4 z + α^3.
3. Key polynomials: apply the Euclidean algorithm to z^4 and S(z).
3.1 z^4 = (α^3 z^2 + α^3 z + α^5) S(z) + (z + α).
3.2 L(z) = α^3 z^2 + α^3 z + α^5, E(z) = z + α.
3.3 Normalizing so that L(0) = 1: L(z) = α^5 z^2 + α^5 z + 1, E(z) = α^2 z + α^3.
4. Find ĉ:
4.1 Plug 1, α^{−1}, …, α^{−6} into L(z): L(α^{−2}) = L(α^{−3}) = 0.
4.2 From E(z) we obtain e_2 = α^3 and e_3 = α^6.
4.3 ĉ = y − e = y + e = [α^3, α, α, 1, 0, α^3, 1].
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes RS Code - Decoding 2017 page 6-27
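The whole worked decoding can be replayed in a few lines under the usual integer representation of F_8 (ints 0..7, α = 2, α^3 = α + 1); helper names are ours. Note the recovered error values are e_2 = α^3 = 3 and e_3 = α^6 = 5, consistent with the final ĉ:

```python
# Euclidean decoding of the [7,3] RS example over F_8.
def gf_mul(a, b):
    # multiply in F_8 = F_2[x]/(x^3 + x + 1); elements are ints 0..7
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for i in (5, 4, 3):
        if (p >> i) & 1:
            p ^= 0b1011 << (i - 3)
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 6)    # a^7 = 1 for nonzero a

def evaluate(p, x):
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x) ^ c
    return acc

def poly_divmod(num, den):
    # long division over F_8; coefficient lists in ascending powers
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv_lead = gf_inv(den[-1])
    for k in range(len(q) - 1, -1, -1):
        coef = gf_mul(num[k + len(den) - 1], inv_lead)
        q[k] = coef
        for i, c in enumerate(den):
            num[k + i] ^= gf_mul(coef, c)
    while len(num) > 1 and num[-1] == 0:
        num.pop()
    return q, num

A = 2
y = [3, 2, 1, 4, 0, 3, 1]          # [a^3, a, 1, a^2, 0, a^3, 1]
S = [3, 6, 6]                      # S(z) = a^4 z^2 + a^4 z + a^3 (s_4 = 0)
q, r = poly_divmod([0, 0, 0, 0, 1], S)       # z^4 = q(z) S(z) + r(z)
c0 = gf_inv(q[0])                  # normalise so that L(0) = 1
L = [gf_mul(c0, v) for v in q]     # 1 + a^5 z + a^5 z^2
E = [gf_mul(c0, v) for v in r]     # a^3 + a^2 z
I = [k for k in range(7) if evaluate(L, gf_inv(gf_pow(A, k))) == 0]
e = [0] * 7
for k in I:                        # Forney-style evaluation, no matrix inverse
    denom = gf_pow(A, k)
    for j in I:
        if j != k:
            denom = gf_mul(denom, 1 ^ gf_mul(gf_pow(A, j), gf_inv(gf_pow(A, k))))
    e[k] = gf_mul(evaluate(E, gf_inv(gf_pow(A, k))), gf_inv(denom))
c_hat = [u ^ v for u, v in zip(y, e)]
print(I, c_hat)    # [2, 3] [3, 2, 2, 1, 0, 3, 1]
```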

82 Towards Cyclic and BCH Codes
Have seen:
Binary Hamming codes: d = 3.
Reed-Solomon codes: MDS (d = n − k + 1), but they require large fields (typically q = n + 1).
Will introduce cyclic codes:
Reed-Solomon codes are a special case of cyclic codes.
BCH codes are another special case.
A systematic way to construct binary codes with large distance.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-28

83 Cyclic Codes
Definition 6.15 An [n, k] linear code is cyclic if for every codeword c = c_0 c_1 … c_{n−2} c_{n−1}, the right cyclic shift of c, c_{n−1} c_0 c_1 … c_{n−2}, is also a codeword.
Example: The [7, 4] Hamming code has a 3 × 7 parity-check matrix H. Its dual code (viewing H as a generator matrix) is a cyclic [7, 3] code: its eight codewords are the all-zero word together with the seven right cyclic shifts of any one nonzero codeword.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-29

84 Generating Function
Definition 6.16 The generating function of a codeword c = [c_0 … c_{n−1}] is c(x) = c_0 + c_1 x + … + c_{n−1} x^{n−1}.
It will be convenient to use c(x) to represent a codeword c. The right cyclic shift c = (c_0, …, c_{n−1}) ↦ c' = (c_{n−1}, c_0, …, c_{n−2}) can be obtained by c'(x) = x · c(x) mod x^n − 1, as
x · c(x) = c_0 x + c_1 x^2 + … + c_{n−2} x^{n−1} + c_{n−1} x^n ≡ c_{n−1} + c_0 x + c_1 x^2 + … + c_{n−2} x^{n−1} mod x^n − 1.
Lemma 6.17 Let c(x) ∈ C. For an arbitrary u(x), u(x) c(x) mod x^n − 1 is in C.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-30
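The shift-as-multiplication identity can be sketched directly for F_2 coefficient lists (a toy helper, not from the notes): multiplying by x moves every coefficient up one degree, and the top coefficient wraps around because x^n = 1 mod x^n − 1.

```python
def right_shift_via_poly(c):
    # x * c(x) mod x^n - 1 for a binary coefficient list of length n:
    # c_{n-1} x^n wraps down to the constant term since x^n = 1
    shifted = [0] + c[:-1]
    shifted[0] ^= c[-1]
    return shifted

c = [1, 1, 0, 1, 0, 0, 0]                    # c(x) = 1 + x + x^3, n = 7
print(right_shift_via_poly(c))                # [0, 1, 1, 0, 1, 0, 0]
print(right_shift_via_poly([0] * 6 + [1]))    # x^6 wraps: [1, 0, 0, 0, 0, 0, 0]
```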

85 Generator Polynomial
Theorem 6.18 For a cyclic code C, there is a unique monic polynomial g(x) s.t. for all c(x) ∈ C, c(x) = u(x) g(x) for some u(x).
Proof: Let g(x) ∈ C be a nonzero polynomial of least degree. Since C is linear, w.l.o.g. assume that g(x) is monic. For any c(x) ∈ C, write c(x) = u(x) g(x) + r(x) with deg(r(x)) < deg(g(x)). By Lemma 6.17, u(x) g(x) mod x^n − 1 ∈ C, hence r(x) ∈ C by linearity of C. But deg(r(x)) < deg(g(x)), which implies r(x) = 0.
The uniqueness of g(x) can be proved by contradiction. Suppose there are two monic polynomials g_1(x) ≠ g_2(x) of the same (least) degree that both generate C. Then g_1(x) − g_2(x) ∈ C and deg(g_1 − g_2) < deg(g_1), which forces g_1(x) − g_2(x) = 0.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-31

86 Properties of the Generator Polynomial
Corollary 6.19 g(x) | x^n − 1.
Proof: Write x^n − 1 = q(x) g(x) + r(x). Take mod x^n − 1 on both sides: 0 = (x^n − 1) mod x^n − 1 ∈ C, and q(x) g(x) mod x^n − 1 ∈ C by Lemma 6.17. Hence r(x) mod x^n − 1 ∈ C, so r(x) ∈ C, and since deg(r(x)) < deg(g(x)), r(x) = 0.
Remark: Let n = q^m − 1. We know how to factor x^n − 1 in terms of minimal polynomials; g(x) must be a product of minimal polynomials.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-32

87 Generator Matrices of Cyclic Codes
Theorem 6.20 The generator matrix of a cyclic code C [n, k] is
G = [g(x); x g(x); …; x^{k−1} g(x)] = [g_0 g_1 … g_{n−k} 0 … 0; 0 g_0 g_1 … g_{n−k} 0 … 0; …; 0 … 0 g_0 g_1 … g_{n−k}],
i.e. row i + 1 holds the coefficients of x^i g(x).
Observations: deg(g(x)) = n − k. Easy for implementation: encoding can be implemented using shift registers (flip-flops).
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-33
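Theorem 6.20 translates directly into code: the rows of G are the coefficient vectors of x^i g(x), i = 0, …, k − 1. A sketch for g(x) = 1 + x + x^3 (which divides x^7 − 1 over F_2) with n = 7, k = 4; the helper name is ours:

```python
def generator_matrix(g, n):
    # g: ascending coefficients of g(x); row i holds x^i g(x), i = 0..k-1
    k = n - (len(g) - 1)               # deg g(x) = n - k
    return [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

G = generator_matrix([1, 1, 0, 1], 7)
for row in G:
    print(row)
# [1, 1, 0, 1, 0, 0, 0]
# [0, 1, 1, 0, 1, 0, 0]
# [0, 0, 1, 1, 0, 1, 0]
# [0, 0, 0, 1, 1, 0, 1]
```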

88 Parity-Check Matrices of Cyclic Codes
Recall g(x) | x^n − 1. Define h(x) such that g(x) h(x) = x^n − 1. Then h(x) is a monic polynomial of degree k. Write h(x) = Σ_{i=0}^{k} h_i x^i.
Definition 6.21 The reciprocal polynomial h_R(x) of h(x) is given by h_R(x) = h_k + h_{k−1} x + … + h_0 x^k = x^k h(1/x).
Example: h(x) = 1 + x^2 + x^3 gives h_R(x) = 1 + x + x^3.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-34

89 Parity-Check Matrix
Theorem 6.22 The parity-check matrix of the cyclic code C [n, k] is
H = [h_R(x); x h_R(x); …; x^{n−k−1} h_R(x)] = [h_k h_{k−1} … h_0 0 … 0; 0 h_k h_{k−1} … h_0 0 … 0; …; 0 … 0 h_k h_{k−1} … h_0].
Corollary 6.23 The dual of a cyclic code, C⊥, is also cyclic. h_0^{−1} h_R(x) is the generator polynomial of C⊥.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-35

90 Theorem 6.22: Proof
By assumption, x^n − 1 = g(x) h(x). Note that
g(x) h(x) = (Σ_{i=0}^{n−k} g_i x^i)(Σ_{i=0}^{k} h_i x^i) = Σ_{i=0}^{n} (Σ_l g_l h_{i−l}) x^i = Σ_{i=0}^{n} a_i x^i,
where a_n = g_{n−k} h_k = 1 · 1 = 1 and a_i = Σ_l g_l h_{i−l} = 0 for 1 ≤ i ≤ n − 1.
Let A = GH^T, where A_{i,j} is the inner product of the i-th row of G, [0, …, 0 (i − 1 zeros), g_0, …, g_{n−k}, 0, …, 0], with the j-th row of H, [0, …, 0 (j − 1 zeros), h_k, …, h_0, 0, …, 0].
It can be verified that A_{1,1} = a_k, A_{1,2} = a_{k+1}, …, and
A = GH^T = [a_k a_{k+1} … a_{n−1}; a_{k−1} a_k … a_{n−2}; …; a_1 a_2 … a_{n−k}] = 0 ∈ F^{k×(n−k)}.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes Cyclic Codes 2017 page 6-36

91 Cyclic Codes: An Example
To construct a cyclic code over F_q, we realize that M^{(i)}(x) ∈ F_q[x] and M^{(i)}(x) | x^{q^m − 1} − 1.
Definition 6.24 A BCH code over F_q of length n = q^m − 1 is the cyclic code generated by g(x) = lcm(M^{(a)}(x), …, M^{(a+δ−2)}(x)) for some integer a. (The code is called narrow-sense if a = 1.)
Lemma 6.25 A BCH code defined in Definition 6.24 has d ≥ δ. δ is referred to as the designed distance.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-37

92 Distance of BCH Codes: Proof of Lemma 6.25
Let α be a primitive element of F_{q^m}. By construction, α^a, …, α^{a+δ−2} are roots of the generator polynomial g(x). That is, for all c ∈ C, the generating function c(x) satisfies c(α^i) = 0, a ≤ i ≤ a + δ − 2. In matrix form,
[1 α^a … α^{a(n−1)}; 1 α^{a+1} … α^{(a+1)(n−1)}; …; 1 α^{a+δ−2} … α^{(a+δ−2)(n−1)}] · [c_0; c_1; …; c_{n−1}] = 0. (3)
Any (δ − 1)-column submatrix is a Vandermonde-type matrix and hence of full rank, so any δ − 1 columns are linearly independent. This implies d ≥ δ.
Remark: The matrix in (3) is in F_{q^m}^{(δ−1)×n} while the vector c ∈ C ⊆ F_q^n. Hence the matrix is not a parity-check matrix when m > 1.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-38

93 Example: Reed-Solomon Codes
Recall that an RS code C [n, k, n − k + 1] is built over F_q, typically with n = q − 1. Comparing the parity-check matrix of an RS code (Theorem 6.5) with Equation (3), it is clear that an RS code is a special case of a BCH code with m = 1.
In particular, suppose that we are asked to build a BCH code over F_q with n = q − 1 and d ≥ δ = n − k + 1. We need M^{(i)}(x) ∈ F_q[x] for 1 ≤ i ≤ 1 + δ − 2 = n − k. Since α^i ∈ F_q, its minimal polynomial is M^{(i)}(x) = x − α^i. Hence g(x) = Π_{i=1}^{n−k} (x − α^i), and the generator matrix can be constructed (in a different form from that in Theorem 6.5, and good for implementation). Its parity-check matrix is given by the matrix in Equation (3). The RS decoder can be directly applied for decoding.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-39

94 Example: Binary BCH Codes
We have learned binary Hamming codes; their distance is always 3. The question is how to construct a binary code with larger distance. For example, how to construct a binary code of length 15 with d ≥ 5?
1. For binary codes, use F_2. n = 15 = 2^4 − 1, hence m = 4.
2. δ = 5 implies g(x) = lcm(M^{(1)}(x), M^{(2)}(x), M^{(3)}(x), M^{(4)}(x)).
3. The relevant cyclotomic cosets of 2 modulo 15 are C_1 = {1, 2, 4, 8} and C_3 = {3, 6, 9, 12}. Hence M^{(1)}(x) = Π_{i∈C_1} (x − α^i) = M^{(2)}(x) = M^{(4)}(x) and M^{(3)}(x) = Π_{i∈C_3} (x − α^i). Furthermore, g(x) = M^{(1)}(x) M^{(3)}(x).
4. Find the generator matrix and parity-check matrix according to Theorems 6.20 and 6.22 respectively.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-40
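Step 3 can be checked mechanically: a cyclotomic coset is the orbit of an exponent under multiplication by q modulo n, and its size is the degree of the corresponding minimal polynomial. A small sketch (the helper name is ours):

```python
def cyclotomic_coset(q, n, a):
    # orbit of a under repeated multiplication by q modulo n
    out, x = [], a % n
    while x not in out:
        out.append(x)
        x = (x * q) % n
    return sorted(out)

C1 = cyclotomic_coset(2, 15, 1)
C3 = cyclotomic_coset(2, 15, 3)
print(C1, C3)                  # [1, 2, 4, 8] [3, 6, 9, 12]
deg_g = len(C1) + len(C3)      # deg M^(1) + deg M^(3)
print(deg_g)                   # 8, so the code has k = 15 - 8 = 7
```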

95 From Hamming to BCH
Example: a binary code of length 15 and d ≥ 5?
g(x) = lcm(M^{(1)}(x), M^{(2)}(x), M^{(3)}(x), M^{(4)}(x)) = lcm(M^{(1)}(x), M^{(3)}(x)) = M^{(1)}(x) M^{(3)}(x).
RS codes are special cases of BCH codes (m = 1).
Have learned the [7, 4, 3] Hamming code, whose 3 × 7 parity-check matrix H has all nonzero vectors of F_2^3 as columns. Another view: let α be a primitive element of F_8 that satisfies α^3 = α + 1. The parity-check matrix can be written as [1 α α^2 α^3 α^4 α^5 α^6]. (Column vectors in F_2^3 correspond to elements in F_8.)
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-41

96 From Hamming to BCH: Larger Distance
Binary BCH codes with d ≥ 5: g(x) = lcm(M^{(1)}(x), M^{(2)}(x), M^{(3)}(x), M^{(4)}(x)) = lcm(M^{(1)}(x), M^{(3)}(x)).
The constraint matrix of Equation (3) is
[1 α α^2 α^3 α^4 α^5 α^6; 1 α^2 α^4 α^6 α α^3 α^5; 1 α^3 α^6 α^2 α^5 α α^4; 1 α^4 α α^5 α^2 α^6 α^3],
but c(α) = 0 implies c(α^2) = c(α)^2 = 0 and c(α^4) = c(α)^4 = 0, so rows 2 and 4 are redundant:
H = [1 α α^2 α^3 α^4 α^5 α^6; 1 α^3 α^6 α^2 α^5 α α^4].
Eventually, we get a [7, 1, 7] code C = {0000000, 1111111}.
W. Dai (IC) EE4.07 Coding Theory RS & BCH Codes BCH Codes 2017 page 6-42
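The final claim is a one-line check over F_2: with α a root of x^3 + x + 1, the two minimal polynomials are M^{(1)}(x) = x^3 + x + 1 and M^{(3)}(x) = x^3 + x^2 + 1, and their product is the all-ones polynomial, so the code generated by g(x) is the 7-fold repetition code.

```python
def poly_mul_f2(p, q):
    # polynomial product over F_2; coefficient lists in ascending powers
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= a & b
    return r

g = poly_mul_f2([1, 1, 0, 1], [1, 0, 1, 1])   # M^(1)(x) * M^(3)(x)
print(g)   # [1, 1, 1, 1, 1, 1, 1] -- the all-ones polynomial of degree 6
```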


More information

: Error Correcting Codes. November 2017 Lecture 2

: Error Correcting Codes. November 2017 Lecture 2 03683072: Error Correcting Codes. November 2017 Lecture 2 Polynomial Codes and Cyclic Codes Amnon Ta-Shma and Dean Doron 1 Polynomial Codes Fix a finite field F q. For the purpose of constructing polynomial

More information

x n k m(x) ) Codewords can be characterized by (and errors detected by): c(x) mod g(x) = 0 c(x)h(x) = 0 mod (x n 1)

x n k m(x) ) Codewords can be characterized by (and errors detected by): c(x) mod g(x) = 0 c(x)h(x) = 0 mod (x n 1) Cyclic codes: review EE 387, Notes 15, Handout #26 A cyclic code is a LBC such that every cyclic shift of a codeword is a codeword. A cyclic code has generator polynomial g(x) that is a divisor of every

More information

Outline. MSRI-UP 2009 Coding Theory Seminar, Week 2. The definition. Link to polynomials

Outline. MSRI-UP 2009 Coding Theory Seminar, Week 2. The definition. Link to polynomials Outline MSRI-UP 2009 Coding Theory Seminar, Week 2 John B. Little Department of Mathematics and Computer Science College of the Holy Cross Cyclic Codes Polynomial Algebra More on cyclic codes Finite fields

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

7.1 Definitions and Generator Polynomials

7.1 Definitions and Generator Polynomials Chapter 7 Cyclic Codes Lecture 21, March 29, 2011 7.1 Definitions and Generator Polynomials Cyclic codes are an important class of linear codes for which the encoding and decoding can be efficiently implemented

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

Lecture 14: Hamming and Hadamard Codes

Lecture 14: Hamming and Hadamard Codes CSCI-B69: A Theorist s Toolkit, Fall 6 Oct 6 Lecture 4: Hamming and Hadamard Codes Lecturer: Yuan Zhou Scribe: Kaiyuan Zhu Recap Recall from the last lecture that error-correcting codes are in fact injective

More information

Coding Techniques for Data Storage Systems

Coding Techniques for Data Storage Systems Coding Techniques for Data Storage Systems Thomas Mittelholzer IBM Zurich Research Laboratory /8 Göttingen Agenda. Channel Coding and Practical Coding Constraints. Linear Codes 3. Weight Enumerators and

More information

Lecture 12: Reed-Solomon Codes

Lecture 12: Reed-Solomon Codes Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 007) Lecture 1: Reed-Solomon Codes September 8, 007 Lecturer: Atri Rudra Scribe: Michel Kulhandjian Last lecture we saw the proof

More information

ERROR CORRECTING CODES

ERROR CORRECTING CODES ERROR CORRECTING CODES To send a message of 0 s and 1 s from my computer on Earth to Mr. Spock s computer on the planet Vulcan we use codes which include redundancy to correct errors. n q Definition. A

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

Cyclic Redundancy Check Codes

Cyclic Redundancy Check Codes Cyclic Redundancy Check Codes Lectures No. 17 and 18 Dr. Aoife Moloney School of Electronics and Communications Dublin Institute of Technology Overview These lectures will look at the following: Cyclic

More information

The Hamming Codes and Delsarte s Linear Programming Bound

The Hamming Codes and Delsarte s Linear Programming Bound The Hamming Codes and Delsarte s Linear Programming Bound by Sky McKinley Under the Astute Tutelage of Professor John S. Caughman, IV A thesis submitted in partial fulfillment of the requirements for the

More information

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013

Coding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013 Coding Theory and Applications Linear Codes Enes Pasalic University of Primorska Koper, 2013 2 Contents 1 Preface 5 2 Shannon theory and coding 7 3 Coding theory 31 4 Decoding of linear codes and MacWilliams

More information

The extended Golay code

The extended Golay code The extended Golay code N. E. Straathof July 6, 2014 Master thesis Mathematics Supervisor: Dr R. R. J. Bocklandt Korteweg-de Vries Instituut voor Wiskunde Faculteit der Natuurwetenschappen, Wiskunde en

More information

ECEN 655: Advanced Channel Coding

ECEN 655: Advanced Channel Coding ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History

More information

Channel Coding I. Exercises SS 2017

Channel Coding I. Exercises SS 2017 Channel Coding I Exercises SS 2017 Lecturer: Dirk Wübben Tutor: Shayan Hassanpour NW1, Room N 2420, Tel.: 0421/218-62387 E-mail: {wuebben, hassanpour}@ant.uni-bremen.de Universität Bremen, FB1 Institut

More information

Lecture B04 : Linear codes and singleton bound

Lecture B04 : Linear codes and singleton bound IITM-CS6845: Theory Toolkit February 1, 2012 Lecture B04 : Linear codes and singleton bound Lecturer: Jayalal Sarma Scribe: T Devanathan We start by proving a generalization of Hamming Bound, which we

More information

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei

COMPSCI 650 Applied Information Theory Apr 5, Lecture 18. Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei COMPSCI 650 Applied Information Theory Apr 5, 2016 Lecture 18 Instructor: Arya Mazumdar Scribe: Hamed Zamani, Hadi Zolfaghari, Fatemeh Rezaei 1 Correcting Errors in Linear Codes Suppose someone is to send

More information

LDPC Codes. Slides originally from I. Land p.1

LDPC Codes. Slides originally from I. Land p.1 Slides originally from I. Land p.1 LDPC Codes Definition of LDPC Codes Factor Graphs to use in decoding Decoding for binary erasure channels EXIT charts Soft-Output Decoding Turbo principle applied to

More information

EE512: Error Control Coding

EE512: Error Control Coding EE51: Error Control Coding Solution for Assignment on BCH and RS Codes March, 007 1. To determine the dimension and generator polynomial of all narrow sense binary BCH codes of length n = 31, we have to

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem

Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem IITM-CS6845: Theory Toolkit February 08, 2012 Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem Lecturer: Jayalal Sarma Scribe: Dinesh K Theme: Error correcting codes In the previous lecture,

More information

IN this paper, we will introduce a new class of codes,

IN this paper, we will introduce a new class of codes, IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 44, NO 5, SEPTEMBER 1998 1861 Subspace Subcodes of Reed Solomon Codes Masayuki Hattori, Member, IEEE, Robert J McEliece, Fellow, IEEE, and Gustave Solomon,

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3

More information

Support weight enumerators and coset weight distributions of isodual codes

Support weight enumerators and coset weight distributions of isodual codes Support weight enumerators and coset weight distributions of isodual codes Olgica Milenkovic Department of Electrical and Computer Engineering University of Colorado, Boulder March 31, 2003 Abstract In

More information

MAS309 Coding theory

MAS309 Coding theory MAS309 Coding theory Matthew Fayers January March 2008 This is a set of notes which is supposed to augment your own notes for the Coding Theory course They were written by Matthew Fayers, and very lightly

More information

Algebra for error control codes

Algebra for error control codes Algebra for error control codes EE 387, Notes 5, Handout #7 EE 387 concentrates on block codes that are linear: Codewords components are linear combinations of message symbols. g 11 g 12 g 1n g 21 g 22

More information

Hamming Codes 11/17/04

Hamming Codes 11/17/04 Hamming Codes 11/17/04 History In the late 1940 s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability to not only detect errors, but

More information

Week 3: January 22-26, 2018

Week 3: January 22-26, 2018 EE564/CSE554: Error Correcting Codes Spring 2018 Lecturer: Viveck R. Cadambe Week 3: January 22-26, 2018 Scribe: Yu-Tse Lin Disclaimer: These notes have not been subjected to the usual scrutiny reserved

More information

CONSTRUCTION OF QUASI-CYCLIC CODES

CONSTRUCTION OF QUASI-CYCLIC CODES CONSTRUCTION OF QUASI-CYCLIC CODES by Thomas Aaron Gulliver B.Sc., 1982 and M.Sc., 1984 University of New Brunswick A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

More information

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x].

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x]. The second exam will be on Friday, October 28, 2. It will cover Sections.7,.8, 3., 3.2, 3.4 (except 3.4.), 4. and 4.2 plus the handout on calculation of high powers of an integer modulo n via successive

More information

Zigzag Codes: MDS Array Codes with Optimal Rebuilding

Zigzag Codes: MDS Array Codes with Optimal Rebuilding 1 Zigzag Codes: MDS Array Codes with Optimal Rebuilding Itzhak Tamo, Zhiying Wang, and Jehoshua Bruck Electrical Engineering Department, California Institute of Technology, Pasadena, CA 91125, USA Electrical

More information

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0 Coding Theory Massoud Malek Binary Linear Codes Generator and Parity-Check Matrices. A subset C of IK n is called a linear code, if C is a subspace of IK n (i.e., C is closed under addition). A linear

More information

G Solution (10 points) Using elementary row operations, we transform the original generator matrix as follows.

G Solution (10 points) Using elementary row operations, we transform the original generator matrix as follows. EE 387 October 28, 2015 Algebraic Error-Control Codes Homework #4 Solutions Handout #24 1. LBC over GF(5). Let G be a nonsystematic generator matrix for a linear block code over GF(5). 2 4 2 2 4 4 G =

More information

Coding theory: Applications

Coding theory: Applications INF 244 a) Textbook: Lin and Costello b) Lectures (Tu+Th 12.15-14) covering roughly Chapters 1,9-12, and 14-18 c) Weekly exercises: For your convenience d) Mandatory problem: Programming project (counts

More information

Lattices. SUMS lecture, October 19, 2009; Winter School lecture, January 8, Eyal Z. Goren, McGill University

Lattices. SUMS lecture, October 19, 2009; Winter School lecture, January 8, Eyal Z. Goren, McGill University Lattices SUMS lecture, October 19, 2009; Winter School lecture, January 8, 2010. Eyal Z. Goren, McGill University Lattices A lattice L in R n is L = {a 1 v 1 + + a n v n : a 1,..., a n Z}, where the v

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Prelude Information transmission 0 0 0 0 0 0 Channel Information transmission signal 0 0 threshold

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Binary Primitive BCH Codes. Decoding of the BCH Codes. Implementation of Galois Field Arithmetic. Implementation of Error Correction

Binary Primitive BCH Codes. Decoding of the BCH Codes. Implementation of Galois Field Arithmetic. Implementation of Error Correction BCH Codes Outline Binary Primitive BCH Codes Decoding of the BCH Codes Implementation of Galois Field Arithmetic Implementation of Error Correction Nonbinary BCH Codes and Reed-Solomon Codes Preface The

More information