
Coding Theory                                                        Massoud Malek

Binary Linear Codes

Generator and Parity-Check Matrices. A subset C of IK^n is called a linear code if C is a subspace of IK^n (i.e., C is closed under addition). A linear code of dimension k contains precisely 2^k codewords. The distance of a linear code is the minimum weight of any nonzero codeword. If C is a block code (not necessarily linear) of length n and P is an n × n permutation matrix, then the block code C' = { vP : v ∈ C } is said to be equivalent to C.

A k × n matrix G is a generator matrix for some linear code C if the rows of G are linearly independent, that is, if the rank of G equals k. A linear code generated by a k × n generator matrix G is called an (n, k) code. An (n, k) code with distance d is said to be an (n, k, d) code. If G_1 is row equivalent to G, then G_1 also generates the same linear code C. If G_2 is column equivalent to G, then the linear code C_2 generated by G_2 is equivalent to C.

Consider a 3 × 5 generator matrix G of rank 3. By using row operations, we obtain another generator matrix G_1 = [ I_3  B ] in reduced row echelon form (RREF) for the same linear code. This linear code has an information rate of 3/5 (i.e., G and G_1 accept all the messages in IK^3 and change them into words of length 5). A generator matrix of the form G = [ I_3  B ] is said to be in standard form, and the code C generated by G is called a systematic code.

Not every linear code has a generator matrix in standard form. For example, the linear code C = {000, 001, 100, 101} has six generator matrices,

    G_1 = [ 0 0 1 ]    G_2 = [ 0 0 1 ]    G_3 = [ 1 0 0 ]
          [ 1 0 0 ]          [ 1 0 1 ]          [ 0 0 1 ]

    G_4 = [ 1 0 0 ]    G_5 = [ 1 0 1 ]    G_6 = [ 1 0 1 ]
          [ 1 0 1 ]          [ 0 0 1 ]          [ 1 0 0 ]

and none of these matrices is in standard form, since the second column of every generator matrix of C is zero. Note that the matrix

    G' = [ 1 0 0 ]
         [ 0 1 0 ]

in standard form generates the code C' = {000, 100, 010, 110}, which is equivalent to C.

If G is in RREF, then any column of G which is equal to the standard basis vector e_i is called a leading column. If m ∈ IK^k is the message and v = mG ∈ IK^n is the codeword of a systematic code, then the first k digits of v, which reproduce the message m, are called information digits, while the last n - k digits are called redundancy or parity-check digits. If C is not a systematic code, then to recover the message from a codeword we select the digits corresponding to the leading columns e_1, e_2, ..., e_k. For example, if

    G = [ 0 0 1 ] = [ e_2  θ  e_1 ]
        [ 1 0 0 ]

and v = 001, then we recover the message m = 10 from the last digit and the first digit of v, respectively.
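As a quick illustration of systematic encoding, here is a short Matlab sketch; the standard-form generator matrix below is an assumed example, not one of the matrices above.

    % Encode a message with a standard-form generator G = [ I_3  B ] over GF(2)
    % and read the message back from the information digits.
    G = [1 0 0 1 1;
         0 1 0 1 0;
         0 0 1 0 1];        % assumed 3 x 5 generator in standard form
    m = [1 0 1];            % message in IK^3
    v = mod(m*G, 2)         % codeword in IK^5; its first 3 digits reproduce m
    m_recovered = v(1:3)    % information digits give back the message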

Let S be a subset of IK^n. The set of all vectors orthogonal to S is denoted by S^⊥ and is called the orthogonal complement of S. It can readily be shown that S^⊥ is a linear code. If C = <S>, then C^⊥ = S^⊥, which is also a linear code, is called the dual code of C.

A matrix H is called a parity-check matrix for a linear code C of length n generated by the matrix G if the columns of H form a basis for the dual code C^⊥. If v is a word in C, then vH = θ.

A code C is called self-dual if C^⊥ = C. In this case n must be even and C must be an (n, n/2) code. If G is a generator matrix of a self-dual code, then H = G^t is a parity-check matrix. A self-dual code, like any linear code, has many generator matrices, but only one of them is in RREF. If G = [ I  B ] is a generator matrix of a self-dual code, then B B^t = I.

Theorem 1. Let H be a parity-check matrix for a linear code C generated by the k × n matrix G. Then
(i) the rows of G are linearly independent;
(ii) the columns of H are linearly independent;
(iii) GH = Z, where Z is the k × (n - k) zero matrix;
(iv) by permuting the columns of H, we obtain another parity-check matrix for C;
(v) dim(C) = rank(G), dim(C^⊥) = rank(H), and dim(C) + dim(C^⊥) = n;
(vi) H^t is a generator matrix for C^⊥, with G^t a parity-check matrix for C^⊥;
(vii) if C is self-dual with generator G = [ I_k  B ], then G_1 = [ B^t  I_k ] also generates C;
(viii) C has distance d if and only if every set of d - 1 rows of H is linearly independent, and at least one set of d rows of H is linearly dependent.

Algorithms for Finding Generator and Parity-Check Matrices.

Example 1. Let S = { w_1, w_2, w_3, w_4 } be a subset of IK^5 generating the linear code C. Using the words in S as rows, we define a 4 × 5 matrix M. After some row operations and deleting a zero row, we obtain a 3 × 5 generator matrix G in RREF; here the RREF happens to be in standard form, G = [ I_3  B ]. With B the 3 × 2 matrix formed by the last two columns of G, a parity-check matrix of C is

    H = [ B   ]
        [ I_2 ]

the 5 × 2 matrix obtained by stacking B on top of I_2.
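Example 1 can be carried out numerically. The Matlab sketch below reuses the assumed standard-form generator from the sketch above (illustrative data, not the matrix obtained from S), builds H = [ B ; I_2 ], and verifies Theorem 1 (iii).

    % Parity-check matrix from a standard-form generator G = [ I_3  B ].
    G = [1 0 0 1 1;
         0 1 0 1 0;
         0 0 1 0 1];        % assumed 3 x 5 generator in standard form
    B = G(:, 4:5);
    H = [B; eye(2)];        % columns of H form a basis for the dual code
    disp(mod(G*H, 2))       % 3 x 2 zero matrix, as in Theorem 1 (iii)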

Example 2. Let S = { w_1, ..., w_5 } be a linearly independent subset of IK^10 generating C, and let G be the 5 × 10 generator matrix whose rows are the words of S. Suppose G is in RREF but not in standard form, with its leading columns e_1, e_2, e_3, e_4, e_5 occurring in positions 1, 4, 5, 7, 9. We permute the columns of G into the order 1, 4, 5, 7, 9, 2, 3, 6, 8, 10 to form the matrix

    G_1 = G P = [ I_5  B ],

where P is the corresponding 10 × 10 permutation matrix and B is the 5 × 5 matrix formed by the columns of G in positions 2, 3, 6, 8, 10. Then we form the matrix

    H_1 = [ B   ]
          [ I_5 ]

whose rows correspond to the positions 1, 4, 5, 7, 9, 2, 3, 6, 8, 10, and finally rearrange the rows of H_1 into their natural order 1, 2, ..., 10 to form the parity-check matrix H = P H_1. The columns of H form a basis for C^⊥.

Matlab. To permute the columns of G into the order 1, 4, 5, 7, 9, 2, 3, 6, 8, 10 and form the matrix G_1, first define the permutation matrix P:

    P = eye(10); T = P;
    P(:,2) = T(:,4); P(:,3) = T(:,5); P(:,4) = T(:,7); P(:,5) = T(:,9);
    P(:,6) = T(:,2); P(:,7) = T(:,3); P(:,8) = T(:,6); P(:,9) = T(:,8);
    G1 = G*P

Finally, H is obtained as follows:

    B = G1(:, 6:10); H1 = [B; eye(5)];
    H = P*H1
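Here is a usage sketch of the whole computation on an assumed 5 × 10 generator matrix in RREF with leading columns in positions 1, 4, 5, 7, 9; the matrix is illustrative and is not the matrix of Example 2.

    % Build a parity-check matrix from a generator matrix in RREF whose leading
    % columns sit in positions 1, 4, 5, 7, 9, then verify that G*H = 0 mod 2.
    G = [1 0 1 0 0 1 0 1 0 1;
         0 0 0 1 0 1 0 0 0 1;
         0 0 0 0 1 0 0 1 0 1;
         0 0 0 0 0 0 1 1 0 0;
         0 0 0 0 0 0 0 0 1 1];
    order = [1 4 5 7 9 2 3 6 8 10];   % leading columns first, then the rest
    P = eye(10); P = P(:, order);     % permutation matrix, so G1 = G*P = [ I_5  B ]
    G1 = G*P;
    B  = G1(:, 6:10);
    H1 = [B; eye(5)];
    H  = P*H1;                        % rows of H1 put back into natural order
    disp(mod(G*H, 2))                 % 5 x 5 zero matrix: columns of H span the dual code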

Maximum Likelihood Decoding (MLD) for Linear Codes. We will describe a procedure for either CMLD or IMLD (complete or incomplete maximum likelihood decoding) for a linear code. If C ⊆ IK^n is a linear code of dimension k and u ∈ IK^n, we define the coset of C determined by u, denoted û, as follows:

    û = C + u = { v + u : v ∈ C }.

There are exactly 2^(n-k) distinct cosets of C in IK^n, each of order 2^k, and every word in IK^n is contained in exactly one of the cosets.

Theorem 2. Let C be a linear code. Then
(i) θ̂ = C;
(ii) if v ∈ û = C + u, then v̂ = û;
(iii) u + v ∈ C if and only if u and v are in the same coset.

The parity-check matrix and the cosets of the code play fundamental roles in the decoding process. Let C be a linear code. Assume the codeword v in C is transmitted and the word w is received, resulting in the error pattern u = v + w. Then w + u = v is in C, so the error pattern u and the received word w are in the same coset of C. Since error patterns of small weight are the most likely to occur, we choose a word u of least weight in the coset û (which must contain w) and conclude that v = w + u was the word sent.

Let C ⊆ IK^n be a linear code of dimension k and let H be a parity-check matrix for C. For any word w ∈ IK^n, the syndrome of w is the word s(w) = wH in IK^(n-k).

Theorem 3. Let H be a parity-check matrix for a linear code C. Then
(i) wH = θ if and only if w is a codeword in C;
(ii) w_1 H = w_2 H if and only if w_1 and w_2 lie in the same coset of C;
(iii) if u is the error pattern in a received word w, then uH is the sum of the rows of H that correspond to the positions in which errors occurred in transmission.

A table which matches each syndrome with its coset leader is called a standard decoding array, or SDA. To construct an SDA, first list all the cosets of the code and choose from each coset a word of least weight as the coset leader u. Then find a parity-check matrix for the code and, for each coset leader u, calculate its syndrome uH.

Example. Here is the list of all the cosets of C = {0000, 1011, 0101, 1110}, generated by the generator matrix

    G = [ 1 0 1 1 ]
        [ 0 1 0 1 ]

with a parity-check matrix

    H = [ 1 1 ]
        [ 0 1 ]
        [ 1 0 ]
        [ 0 1 ]

    C + 0000 = {0000, 1011, 0101, 1110}
    C + 1000 = {1000, 0011, 1101, 0110}
    C + 0100 = {0100, 1111, 0001, 1010}
    C + 0010 = {0010, 1001, 0111, 1100}

Here is the SDA for the code:

    Coset leader u      Syndrome uH
    0000                00
    1000                11
    0010                10
    0100 or 0001        01*
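The following Matlab sketch carries out syndrome decoding with this SDA; the coset leaders are entered by hand, and for the tied syndrome 01 the leader 0100 is chosen arbitrarily (an illustrative choice).

    % Syndrome decoding for the (4, 2) code above using a hand-entered SDA.
    H = [1 1; 0 1; 1 0; 0 1];
    leaders   = [0 0 0 0; 1 0 0 0; 0 0 1 0; 0 1 0 0];  % one leader per coset
    syndromes = mod(leaders*H, 2);                      % 00, 11, 10, 01
    w = [1 1 0 1];                                      % received word
    s = mod(w*H, 2);                                    % its syndrome
    i = find(ismember(syndromes, s, 'rows'));           % matching SDA row
    v = mod(w + leaders(i,:), 2)                        % CMLD estimate: 0 1 0 1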

The syndrome marked with a * indicates a retransmission in the case of IMLD. Notice that the set of error patterns that can be corrected using IMLD is equal to the set of unique coset leaders.

If w = 1101 is received, then the syndrome of w is s(w) = wH = 11. Notice that the word of least weight in the coset ŵ is u = 1000, and the syndrome of u is s(u) = uH = 11 = wH. Furthermore, CMLD concludes that v = w + u = 1101 + 1000 = 0101 was sent, so there was an error in the first digit. Notice also that s(w) = 11 picks up the first row of H, corresponding to the location of the most likely error; also, the coset leader in the SDA is 1000. The calculations

    d(0000, 1101) = 3    d(0101, 1101) = 1    d(1011, 1101) = 2    d(1110, 1101) = 2

give the distances between w and each codeword in C and show that v = 0101 is indeed the closest word in C to w.

For w = 1111 received, however, the same calculations

    d(0000, 1111) = 4    d(0101, 1111) = 2    d(1011, 1111) = 1    d(1110, 1111) = 1

reveal a tie for the closest word in C to w. This is not surprising, since there was a choice of coset leader for the syndrome 1111 H = 01. In the case of CMLD, we arbitrarily choose a coset leader, which in effect arbitrarily selects one of the codewords in C closest to w. Using IMLD, we ask for retransmission.
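To see where the tie comes from, the following Matlab sketch groups all sixteen words of IK^4 by syndrome and reports, for each syndrome, the minimum weight in the corresponding coset and how many words attain it; a count greater than one is exactly the IMLD retransmission case.

    % Enumerate IK^4, compute syndromes, and look for coset-leader ties.
    H = [1 1; 0 1; 1 0; 0 1];
    words = mod(floor((0:15)' * 2.^(-3:0)), 2);       % all binary words of length 4
    syndromes = mod(words*H, 2);
    for s = 0:3
        t = mod(floor(s * 2.^(-1:0)), 2);             % syndrome written as a 2-bit row
        rows = find(ismember(syndromes, t, 'rows'));  % words in the corresponding coset
        wts = sum(words(rows,:), 2);
        fprintf('syndrome %d%d: min weight %d, attained by %d word(s)\n', ...
                t(1), t(2), min(wts), sum(wts == min(wts)));
    end

Running this reports a unique minimum-weight word for the syndromes 00, 11, and 10, and two words of weight 1 (0100 and 0001) for the syndrome 01, which is the tie observed above.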