Linear Codes and Syndrome Decoding

These notes are intended to be used as supplementary reading to Sections 16.7-9 of Grimaldi's Discrete and Combinatorial Mathematics. The proofs of the theorems are left as exercises. In the following, an (n, m) block code will be the image C = E(Z_2^m) of an encoding function E : Z_2^m → Z_2^n, where we assume, except in Section 5, that the first m bits of the code word E(w) ∈ Z_2^n agree with the first m bits of the message word w ∈ Z_2^m.

1. Generator Matrices and Linear Codes

Let G be an m × n matrix over Z_2. We will assume that G has the form G = [I_m A], where I_m is the m × m identity matrix and A is an m × (n - m) matrix over Z_2. We will consider the elements of Z_2^k, for k ≥ 1, as row k-vectors (in other words, 1 × k matrices) over Z_2. We define an encoding function E_G : Z_2^m → Z_2^n by E_G(w) = wG for w ∈ Z_2^m. G is called the generator matrix of the code C = E_G(Z_2^m) ⊆ Z_2^n, and C is called the code generated by G.

Example 1: Let G be a 3 × 6 matrix of this form, so that E_G : Z_2^3 → Z_2^6. For any message word w, E_G(w) = wG is the sum of the rows of G selected by the 1s of w, so the code consists of the words obtained by adding any number of rows of G. Hence C contains 2^3 = 8 code words.

Example 2: Let E : Z_2^2 → Z_2^6 be the (6, 2) triple repetition code defined by E(w) = www for all w ∈ Z_2^2. Then

    G = [ 1 0 1 0 1 0 ]
        [ 0 1 0 1 0 1 ]

is the generator matrix for this code.

Example 3: Let E : Z_2^5 → Z_2^6 be the (6, 5) parity-check code defined by E(w) = w p_w for all w ∈ Z_2^5, where p_w is 1 if w has an odd number of 1s, and 0 if w has an even number of 1s. Then

    G = [ 1 0 0 0 0 1 ]
        [ 0 1 0 0 0 1 ]
        [ 0 0 1 0 0 1 ]
        [ 0 0 0 1 0 1 ]
        [ 0 0 0 0 1 1 ]

is the generator matrix for this code.
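
These matrix encodings are easy to experiment with on a computer. The following short Python/NumPy sketch (not part of the original notes; the helper name encode is ours) lists all four code words of the (6, 2) triple repetition code of Example 2 by computing wG over Z_2.

```python
import numpy as np

# Generator matrix of the (6, 2) triple repetition code from Example 2:
# each row is the code word of a standard basis message word.
G = np.array([[1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1]], dtype=int)

def encode(w, G):
    """Encode a message word w (a 0/1 row vector) as E_G(w) = wG over Z_2."""
    return (np.array(w, dtype=int) @ G) % 2

# All four message words of Z_2^2 and their code words.
for w in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(w, "->", encode(w, G))
```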

Definition: A code C is called a linear code if c_1 + c_2 ∈ C for all c_1, c_2 ∈ C.

All of the codes in the examples above are linear codes. More generally, we have the following theorem.

Theorem 1: Let C = E(Z_2^m) ⊆ Z_2^n be a code with the encoding function E : Z_2^m → Z_2^n. Then the following conditions are equivalent:

a. C is a linear code.
b. C is generated by a generator matrix.
c. E(w_1 + w_2) = E(w_1) + E(w_2) for all w_1, w_2 ∈ Z_2^m.

2. Parity-Check Matrices and Error Correction

For a generator matrix G = [I_m A], we define the parity-check matrix H by H = [A^t I_{n-m}], where A^t is the transpose of A, and I_{n-m} is the (n - m) × (n - m) identity matrix. Note that H is an (n - m) × n matrix over Z_2.

Example 4: For the G of Example 1, H = [A^t I_3] is the corresponding 3 × 6 parity-check matrix.

Example 5: For the (6, 2) triple repetition code the parity-check matrix is

    H = [ 1 0 1 0 0 0 ]
        [ 0 1 0 1 0 0 ]
        [ 1 0 0 0 1 0 ]
        [ 0 1 0 0 0 1 ].

Example 6: For the (6, 5) parity-check code the parity-check matrix is H = [1 1 1 1 1 1].
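
As a quick check of these constructions, here is a small Python/NumPy sketch (illustrative only; the helper name build_H is ours) that forms H = [A^t I_{n-m}] from a generator matrix G = [I_m A]; applied to the matrices of Examples 2 and 3 it should reproduce the parity-check matrices of Examples 5 and 6.

```python
import numpy as np

def build_H(G):
    """Given G = [I_m | A], return the parity-check matrix H = [A^t | I_(n-m)]."""
    m, n = G.shape
    A = G[:, m:]                                     # the m x (n - m) block A
    return np.hstack([A.T, np.eye(n - m, dtype=int)])

# Generator matrices of Examples 2 and 3.
G_rep = np.array([[1, 0, 1, 0, 1, 0],
                  [0, 1, 0, 1, 0, 1]])               # (6, 2) triple repetition code
G_par = np.hstack([np.eye(5, dtype=int),
                   np.ones((5, 1), dtype=int)])      # (6, 5) parity-check code

print(build_H(G_rep))   # should match the 4 x 6 matrix of Example 5
print(build_H(G_par))   # should match the 1 x 6 matrix [1 1 1 1 1 1] of Example 6
```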

Example 7: Consider the code of Examples 1 and 4. For a code word x of this code we find that Hx^t = 0, while for a word x ∈ Z_2^6 that is not a code word we find that Hx^t ≠ 0.

What is observed in Example 7 is true in general.

Theorem 2: Let C be the code generated by G. Then for x ∈ Z_2^n, x ∈ C if and only if Hx^t = 0.

Remark: We will follow the usual convention and denote zero vectors of any size by 0, and assume that the size is clear from the context.

Suppose that the code word c ∈ C is transmitted, and r ∈ Z_2^n is received, where r = c + e and e ∈ Z_2^n is the error pattern. Then Hr^t = He^t by Theorem 2.

Example 8: Again consider the code of Examples 1, 4 and 7, and suppose the received word r has syndrome Hr^t equal to the second column of H. If e ∈ Z_2^6 is the error pattern, then by the observation above, He^t = Hr^t. Therefore the error pattern e = 010000 is a possibility; it has weight one. The same vector Hr^t is also the sum of the fifth and the sixth columns of H, and the sum of the first, the fourth and the sixth columns. These correspond to the error patterns e = 000011 and e = 100101 of weights two and three, respectively. In a binary symmetric channel, for a given received word, the error pattern with the lowest weight, or equivalently, the code word closest to the received word, is the most likely one. So we will assume that e = 010000 is the error pattern, correct the received word as c = r + e, and decode it as the message word w given by the first three bits of c.

On the other hand, if the syndrome Hr^t of the received word is nonzero but not equal to any column of H, then r is not a code word and an error of weight one is not possible. For this code there are then three possibilities for error patterns of weight two.

3. Syndromes and Error Correction

Definition: Let C ⊆ Z_2^n be a linear code with the parity-check matrix H. For a received word r ∈ Z_2^n, Hr^t ∈ Z_2^{n-m} is called the syndrome of r. The set of all received words with the same syndrome is called a coset of the code C.

Having the same syndrome is an equivalence relation, and the cosets are the equivalence classes of this equivalence relation. We will denote the set {x + c : c ∈ C} by x + C. Note that x ∈ x + C.
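
The reasoning of Example 8 is easy to mechanize for weight-one errors: compute the syndrome and look for it among the columns of H. Below is a rough Python/NumPy sketch (the helper name correct_single_error is ours); since the matrix of Example 1 is not reproduced here, it uses the parity-check matrix of the (6, 2) triple repetition code from Example 5.

```python
import numpy as np

# Parity-check matrix of the (6, 2) triple repetition code (Example 5).
H = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1]], dtype=int)

def correct_single_error(r, H):
    """If the syndrome matches a column of H, flip that bit (assumes a weight-one error)."""
    r = np.array(r, dtype=int)
    s = (H @ r) % 2                    # the syndrome H r^t
    if not s.any():
        return r                       # zero syndrome: r is already a code word
    for i in range(H.shape[1]):
        if np.array_equal(H[:, i], s):
            r[i] ^= 1                  # error pattern e has its single 1 in position i+1
            return r
    raise ValueError("syndrome is not a column of H; a weight-one error is impossible")

# 101010 is a code word; flip its 4th bit and correct it back.
print(correct_single_error([1, 0, 1, 1, 1, 0], H))   # -> [1 0 1 0 1 0]
```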

Theorem 3: Each coset of the linear code C ⊆ Z_2^n is of the form x + C for some x ∈ Z_2^n. In particular, each coset has |C| = 2^m elements, and there are 2^{n-m} cosets.

Definition: An element of a coset whose weight is smaller than or equal to the weights of all other elements in that coset is called a coset leader for the coset.

Example 9: Consider the coset table for the code of Examples 1, 4 and 7: the elements of C are listed in the first row, each row consists of the elements of one coset of C, and the element in the first column is a coset leader for the coset in that row. The coset leader is unique for the cosets in rows 1-7. The coset in the eighth row has three coset leaders, each of weight two; we choose the first of them when making the table.

Using this table we correct errors as follows: if a word r is received, we find it in the table, read off the coset leader e in the first column of its row, and correct the received word as c = r + e.

Since the cosets are in a one-to-one correspondence with the syndromes, we can also do this using a smaller table with two columns, in which the first column lists the syndrome corresponding to a coset and the second column lists the coset leader of that coset. For a received word r, we simply compute Hr^t, find it in the first column, and read the coset leader e from the second column.

We can prepare the syndrome table for a linear code as follows: starting with weight zero, and in order of increasing weight, compute the syndrome of each x ∈ Z_2^n. (We order the elements of the same weight in some way.) If the syndrome we compute is not in the table, we add it to the first column of the table, and add x to the second column as the corresponding coset leader; if the syndrome we compute is already in the table, we move on to the next x. We stop when there are 2^{n-m} syndromes in the table.
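
The procedure just described translates directly into code. The following Python/NumPy sketch (illustrative only; the function name syndrome_table is ours) builds the syndrome/coset-leader table for any parity-check matrix H by scanning the words of Z_2^n in order of increasing weight; applied to the matrix of Example 5 it reproduces the table of Example 10 below.

```python
import numpy as np
from itertools import product

def syndrome_table(H):
    """Build {syndrome: coset leader} by scanning words in order of increasing weight,
    as in the procedure above; stops once all 2^(n-m) syndromes have appeared."""
    n_minus_m, n = H.shape
    table = {}
    # all words of Z_2^n, sorted by weight (ties broken by the tuple order itself)
    words = sorted(product([0, 1], repeat=n), key=lambda x: (sum(x), x))
    for x in words:
        s = tuple((H @ np.array(x)) % 2)
        if s not in table:
            table[s] = x               # first (hence lowest-weight) word with this syndrome
        if len(table) == 2 ** n_minus_m:
            break
    return table

# Parity-check matrix of the (6, 2) triple repetition code (Example 5).
H = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1]], dtype=int)
for s, e in syndrome_table(H).items():
    print("".join(map(str, s)), "".join(map(str, e)))
```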

Example 10: We apply this algorithm to get the following syndrome table for the (6, 2) triple repetition code (the syndromes Hr^t are written here as rows):

    Hr^t    e
    0000    000000
    1010    100000
    0101    010000
    1000    001000
    0100    000100
    0010    000010
    0001    000001
    1111    110000
    1110    100100
    1011    100001
    1101    011000
    0111    010010
    1100    001100
    1001    001001
    0110    000110
    0011    000011

Suppose r ∈ Z_2^n is the received word, and let e ∈ Z_2^n be the coset leader of the coset r + C. Then wt(e) = min{wt(r + c) : c ∈ C}. If we apply the syndrome method, then we correct r as c = r + e. Since

    d(c, r) = wt(c + r) = wt(e) ≤ wt(r + c_1) = d(c_1, r)

for all c_1 ∈ C, c is one of the code words closest to r in Z_2^n. In other words, syndrome error correction is nearest-neighbor error correction for a linear code.

Exercise 1: Note that in both Examples 9 and 10, every e ∈ Z_2^6 of weight one occurs as a coset leader in the syndrome table, but this is not true for the weight-two elements. There are (6 choose 2) = 15 such elements, but only 9 of them occur as coset leaders in Example 10, and only one in Example 9. If t is the largest integer such that every e ∈ Z_2^n with wt(e) ≤ t is the unique coset leader of its coset, what can you say about t?

Exercise 2: Check that the remaining 15 - 9 = 6 weight-two elements in Example 10 lie in cosets whose coset leaders have weight one. Therefore every coset of the (6, 2) triple repetition code has a unique coset leader. On the other hand, the last coset in the table of Example 9 has three coset leaders, as we observed there. What can you say about codes every coset of which has a unique coset leader?

4. Parity-Check Matrix and Minimum Distance

Let C ⊆ Z_2^n be a linear code with minimum distance d and parity-check matrix H.

Theorem 4: For a linear code C, d = min{wt(c) : c ∈ C, c ≠ 0}.

Theorem 5: Suppose that there are d_1 columns in H whose sum is zero, but that for all i < d_1, the sum of any i columns of H is different from zero. Then d = d_1.
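
Theorems 4 and 5 both reduce the computation of d to a finite search, which is easy to carry out by machine for small codes. The sketch below (illustrative only; both helper names are ours) computes d in both ways for the (6, 2) triple repetition code; each search returns 3.

```python
import numpy as np
from itertools import product, combinations

def min_distance_from_G(G):
    """Theorem 4: d equals the smallest weight of a nonzero code word."""
    m = G.shape[0]
    weights = [int(((np.array(w) @ G) % 2).sum())
               for w in product([0, 1], repeat=m) if any(w)]
    return min(weights)

def min_distance_from_H(H):
    """Theorem 5: d is the smallest number of columns of H summing to zero."""
    n = H.shape[1]
    for d in range(1, n + 1):
        for cols in combinations(range(n), d):
            if not np.any(H[:, list(cols)].sum(axis=1) % 2):
                return d
    return None

# The (6, 2) triple repetition code of Examples 2 and 5: both searches give d = 3.
G = np.array([[1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1]], dtype=int)
H = np.hstack([G[:, 2:].T, np.eye(4, dtype=int)])
print(min_distance_from_G(G), min_distance_from_H(H))
```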

5. Hamming Codes

Let r be a positive integer, and let H be an r × (2^r - 1) matrix whose columns consist of all possible nonzero elements of Z_2^r. The corresponding generator matrix G is (2^r - 1 - r) × (2^r - 1). The code generated by G is called the (2^r - 1, 2^r - 1 - r) Hamming code.

Example 11: Let r = 3. Then H is a 3 × 7 matrix of the form [A^t I_3] whose columns are the seven nonzero elements of Z_2^3, and G = [I_4 A] is the generator matrix of the (7, 4) Hamming code. By Theorem 5, this code has minimum distance 3. Therefore it corrects errors of weight one. Moreover, it has 2^{7-4} = 8 cosets, and each coset has a unique coset leader of weight less than or equal to one.

Example 12: So far we have followed the convention that the first m bits of each code word form the corresponding message word. This is equivalent to the conditions that G has the form [I_m A] and H has the form [A^t I_{n-m}]. Now we will forgo this condition. Consider the matrices

    H_1 = [ 0 0 0 1 1 1 1 ]
          [ 0 1 1 0 0 1 1 ]
          [ 1 0 1 0 1 0 1 ]

and

    G_1 = [ 1 1 1 0 0 0 0 ]
          [ 1 0 0 1 1 0 0 ]
          [ 0 1 0 1 0 1 0 ]
          [ 1 1 0 1 0 0 1 ].

E_{G_1} : Z_2^4 → Z_2^7 gives an encoding such that

    E(w_1 w_2 w_3 w_4) = (w_1 + w_2 + w_4)(w_1 + w_3 + w_4) w_1 (w_2 + w_3 + w_4) w_2 w_3 w_4.

Therefore the message word bits occur as the third, the fifth, the sixth, and the seventh bits of the code word. H_1 is the parity-check matrix for this code. Note that for 1 ≤ i ≤ 7, the ith column of H_1 has the digits of the binary expansion of i from top to bottom. For instance, 3 = 0·2^2 + 1·2^1 + 1·2^0 = (011)_2, and the third column is [0 1 1]^t.

The advantage of using this set-up is in the error correction. Suppose a word r is received whose syndrome is H_1 r^t = [0 1 1]^t. This tells us directly that the corresponding weight-one error pattern has its only 1 in the 3rd position; that is, e = 0010000.
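
The decoding rule of Example 12 is particularly pleasant to implement: the syndrome, read as a binary number, is the position of the (assumed single) error. Here is a minimal Python/NumPy sketch of it; the names H1 and hamming_correct are ours, but the matrix follows the rule of Example 12 that column i holds the binary digits of i.

```python
import numpy as np

# Parity-check matrix of Example 12: column i holds the binary digits of i (most
# significant bit on top), so the syndrome of a weight-one error spells out its position.
H1 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def hamming_correct(r, H):
    """Correct at most one bit error in a received word r of the (7, 4) code of Example 12."""
    r = np.array(r, dtype=int)
    s = (H @ r) % 2
    pos = int("".join(map(str, s)), 2)   # read the syndrome as a binary number
    if pos:                              # nonzero syndrome: flip the bit in that position
        r[pos - 1] ^= 1
    return r

# Encode the message 1011 with the E of Example 12, flip its 3rd bit, and correct it.
codeword = np.array([0, 1, 1, 0, 0, 1, 1])   # E(1011); check: H1 @ codeword = 0 (mod 2)
received = codeword.copy()
received[2] ^= 1                             # introduce a single error in position 3
print(hamming_correct(received, H1))         # recovers the original code word
```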