Mathematics Department
Matthew Pressland, Room 7.355 V57
WT 2017/18

Advanced Higher Mathematics for INFOTECH
Exercise Sheet 2

1. Let C ⊆ F_3^6 be the linear code defined by the generator matrix

G = 2 2

(a) Find a generator matrix G' in standard form for C.
(b) Find a parity-check matrix H for C.
(c) Find the minimum distance d_min of C, and all codewords.

Repeat (a)-(c) for the linear code C ⊆ F_4^5 defined by the generator matrix

G = γ + γ + γ γ γ + γ γ

Solution. First for the code in F_3^6:

(a) Performing elementary row operations does not change the row span C of G. So, subtracting the second row from the first, multiplying the third row by 2, and reordering the rows, we find the generator matrix

G' = 2 2 2

for C, which is in standard form.

(b) Using the matrix G' = (I | A), we compute the parity-check matrix

H = (-A^T | I) = 2 2 2 2

(c) No column of H is zero, and no two columns are multiples of each other (any two columns have their zeros in different positions), but the second column is a linear combination of the fourth and fifth. Since the minimum distance is the minimal number of linearly dependent columns of H, we get d_min = 3.

Computing aG for each a ∈ F_3^3, we see that the codewords are , 22, 22,, 22, 22, 222, 22, 22,, 22, 22, 22, 2222, 2, 2, 222,, 2222, 222, 222, 22, 2222, 2, 222, 22, 22. We can check that the minimum weight of a non-zero codeword is 3, confirming that d_min = 3.
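The codeword enumeration and minimum-weight check in part (c) are easy to reproduce mechanically. The following Python sketch illustrates the computation; the generator matrix in it is an arbitrary standard-form example over F_3, chosen only to show the method, and is not claimed to be the matrix of the exercise.

from itertools import product

# Enumerate all codewords aG of a linear code over F_3 and compute the
# minimum distance as the minimum weight of a non-zero codeword.
q = 3
G = [  # an assumed example G = (I_3 | A) over F_3, for illustration only
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 2],
    [0, 0, 1, 2, 0, 1],
]
k, n = len(G), len(G[0])

def encode(a):
    """Return the codeword aG, computed coordinate-wise modulo q."""
    return tuple(sum(a[i] * G[i][j] for i in range(k)) % q for j in range(n))

codewords = [encode(a) for a in product(range(q), repeat=k)]
weights = [sum(1 for x in c if x != 0) for c in codewords if any(c)]
print(len(codewords), "codewords, minimum distance", min(weights))

For codes of this size a brute-force loop over all q^k = 27 message vectors is instantaneous; for the second code of the exercise one would replace arithmetic modulo 3 by arithmetic in F_4 = {0, 1, γ, 1 + γ}.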

Now for the code in F_4^5:

(a) Multiplying both rows by 1 + γ and then subtracting 1 + γ times the second row from the first gives the generator matrix

G' = + γ γ γ γ

for C, which is in standard form.

(b) From G', we compute the parity-check matrix

H = + γ γ γ γ

(c) As for the first code, we see from the columns of H that d_min = 3. Computing aG for each a ∈ F_4^2, we see that the codewords are (,,,, ), (, γ, + γ,, γ), (, + γ,,, + γ), (,, γ,, ), (γ,, + γ, + γ, γ), (γ, + γ,, + γ, ), (γ, γ, γ, + γ, ), (γ,,, + γ, + γ), ( + γ, γ,,, + γ), ( + γ,, γ,, ), ( + γ,,,, ), ( + γ, + γ, + γ,, γ), (, + γ, γ, γ, ), (,,, γ, + γ), (,, + γ, γ, γ), (, γ,, γ, ). Again we can check that the minimum weight of a non-zero codeword is 3, confirming that d_min = 3.

2. Find a (15, 11) binary Hamming code in standard form: there is more than one possibility for this form!

(a) Find the 15-bit codeword corresponding to the 11-bit data word.
(b) For the received message, use nearest-neighbour decoding to find the transmitted Hamming codeword, and the corresponding 11-bit data word.

Solution. We have (15, 11) = (2^m - 1, 2^m - 1 - m) for m = 4, so the code is specified by a 4 × 15 parity-check matrix. A possible matrix in standard form is any H over F_2 of this size of the form (-A^T | I_4) with pairwise distinct non-zero columns.

(a) For H = (-A^T | I_4) as above, the check symbols of the data word x are computed as A^T x^T, so the codeword corresponding to x is x followed by these check symbols.

(b) The syndrome of the received word y is H y^T, which here is the 5th column of H. Thus the coset leader is e_5, and the transmitted codeword was y - e_5; the data word is obtained from it by deleting the check digits.
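To make parts (a) and (b) concrete, here is a Python sketch under the assumption of one particular standard-form choice: the 15 columns of H are taken to be all non-zero vectors of F_2^4, with the four unit vectors placed last (over F_2 the standard form (-A^T | I_4) is simply (A^T | I_4)). The data word x and the flipped bit below are made-up examples.

m = 4
n, k = 2**m - 1, 2**m - 1 - m                 # (15, 11)

# Columns of H: every non-zero vector of F_2^4, unit vectors last.
cols = [[(v >> i) & 1 for i in range(m)] for v in range(1, 2**m)]
AT = [col for col in cols if sum(col) >= 2]   # the 11 columns of A^T
I4 = [col for col in cols if sum(col) == 1]   # the four unit vectors
H = [[col[r] for col in AT + I4] for r in range(m)]   # 4 x 15, H = (A^T | I_4)

def encode(x):
    """Append the check symbols A^T x^T to the data word x, as in part (a)."""
    checks = [sum(AT[j][r] * x[j] for j in range(k)) % 2 for r in range(m)]
    return x + checks

def decode(y):
    """Nearest-neighbour decoding of at most one flipped bit, as in part (b)."""
    s = [sum(H[r][j] * y[j] for j in range(n)) % 2 for r in range(m)]
    if any(s):
        # A single error gives a syndrome equal to the column of H at the
        # error position; the columns are distinct, so that position is unique.
        pos = next(j for j in range(n) if [H[r][j] for r in range(m)] == s)
        y = y[:pos] + [1 - y[pos]] + y[pos + 1:]
    return y[:k]                              # delete the check digits

x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]         # made-up 11-bit data word
y = encode(x)
y[7] ^= 1                                     # one bit flipped in transmission
assert decode(y) == x                         # the data word is recovered

Because the columns of H are pairwise distinct and non-zero, the syndrome of any single error picks out the error position exactly, which is all the decoding step needs.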

3. Define the scalar product on F_q^n by u · v = u_1 v_1 + ... + u_n v_n = u v^T ∈ F_q for u, v ∈ F_q^n. Given a k-dimensional linear code C ⊆ F_q^n, its dual code is the (n - k)-dimensional linear code

C⊥ = {u ∈ F_q^n : u · v = 0 for all v ∈ C}.

Let G and H be a generator matrix and a parity-check matrix for C, and show that H and G are, respectively, a generator matrix and a parity-check matrix for C⊥. You may use without proof the fact that rank(A) = rank(A^T) for any matrix A.

Solution. First we show that H is a generator matrix for C⊥, i.e. that the rows of H span C⊥. We claim that any element of this row span, of the form aH for a ∈ F_q^{n-k}, lies in C⊥; to see this, we check that aH · v = 0 for all v ∈ C. Any such v is of the form bG for some b ∈ F_q^k, so we have aH · bG = aH(bG)^T = 0, since H is a parity-check matrix for C and bG ∈ C. Thus aH ∈ C⊥. Note that since, as we have just shown, HG^T b^T = H(bG)^T = 0 for all b ∈ F_q^k, we have HG^T = 0.

Now we need to show that any element of C⊥ is of the form aH for some a ∈ F_q^{n-k}. The dimension of the image of the map F_q^{n-k} → F_q^n, a ↦ aH, is rank(H^T) (since (aH)^T = H^T a^T), and rank(H^T) = rank(H), which is n - k since H is a parity-check matrix. Since dim(C⊥) = n - k and the image is contained in C⊥, the image must be all of C⊥, as required.

Since C is a k-dimensional code, rank(G) = k, and so the dimension of the kernel of the map a ↦ Ga^T is n - k. Since GH^T = (HG^T)^T = 0^T = 0, it follows from a Lemma in the lectures that G is a parity-check matrix for C⊥.
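Both halves of this argument can be checked numerically for a concrete standard-form pair. The sketch below works over F_3 with an arbitrary example matrix A, assumed purely for illustration; it verifies that GH^T = 0 and that the row span of H is exactly the dual code.

from itertools import product

q, k = 3, 3
A = [[1, 1, 0], [0, 1, 2], [2, 0, 1]]         # assumed example over F_3
n = k + len(A[0])

# Standard-form pair: G = (I_k | A) and H = (-A^T | I_{n-k}).
G = [[int(i == j) for j in range(k)] + A[i] for i in range(k)]
H = [[(-A[j][r]) % q for j in range(k)] + [int(r == s) for s in range(n - k)]
     for r in range(n - k)]

# G H^T = 0, so every row of H is orthogonal to every codeword of C.
assert all(sum(G[i][j] * H[r][j] for j in range(n)) % q == 0
           for i in range(k) for r in range(n - k))

# The row span of H has q^(n-k) elements and is contained in the dual code,
# which also has q^(n-k) elements, so the rows of H generate the whole dual.
span = {tuple(sum(a[r] * H[r][j] for r in range(n - k)) % q for j in range(n))
        for a in product(range(q), repeat=n - k)}
dual = {u for u in product(range(q), repeat=n)
        if all(sum(u[j] * G[i][j] for j in range(n)) % q == 0 for i in range(k))}
assert span == dual and len(span) == q ** (n - k)
print("rows of H span the dual code:", len(span), "elements")

The dimension count mirrors the argument above: the identity block makes the rows of H linearly independent, so their span already has q^(n-k) elements.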

4. Let C⊥ ⊆ F_3^6 be the dual of the first code from Exercise 1.

(a) Find a generator matrix, a parity-check matrix, and all codewords of C⊥.
(b) Describe all cosets of C⊥, by finding a coset leader for each, and calculate the syndrome of each coset leader.
(c) If the received message is v = 2, which codeword in C⊥ was most likely sent? What about if the received message is w = 2? (In the event that there is not a unique most likely sent message, give all maximally likely possibilities.)

Solution. (a) Using the solutions to Exercises 1 and 3, we see that C⊥ has generator matrix

G = 2 2 2 2

and parity-check matrix

H = 2 2

We find the codewords by calculating aG for all a ∈ F_3^3, obtaining , 22, 2, 2,, 22, 222, 22, 2222, 2, 22, 222, 22,, 2, 2, 22, 222, 22, 22, 22, 222, 222, 222, 22, 22, 22222.

(b) The cosets are u + C⊥ = {u + c : c ∈ C⊥} for u ∈ F_3^6. Since there are 27 codewords in C⊥, and 729 = 27^2 elements of F_3^6, there are 27 cosets, corresponding to the 27 possible syndromes in F_3^3.

To get started, we note that any two different elements of F_3^6 of weight at most 1 lie in different cosets: their difference has weight at most 2, and thus is not an element of C⊥, in which non-zero elements have weight at least 3. Such elements must also be coset leaders because of their low weight (since every non-zero element of C⊥ has weight at least 3, an element of weight 1 cannot be in the same coset as an element of weight 0, i.e. lie in the coset C⊥ itself). This gives us 13 coset leaders:

u_1 = , S(u_1) = ,
u_2 = , S(u_2) = 2,
u_3 = 2, S(u_3) = 22,
u_4 = , S(u_4) = ,
u_5 = 2, S(u_5) = 22,
u_6 = , S(u_6) = ,
u_7 = 2, S(u_7) = 22,
u_8 = , S(u_8) = ,
u_9 = 2, S(u_9) = 22,
u_10 = , S(u_10) = ,
u_11 = 2, S(u_11) = 2,
u_12 = , S(u_12) = 2,
u_13 = 2, S(u_13) = .

Now we can look for vectors with the remaining 14 possible syndromes, either by trial and error or by solving linear equations. We also need to check that these elements are coset leaders; this is automatic if they have weight 2, since we have already computed all of the cosets of vectors of weight at most 1 above. For example:

u_14 = , S(u_14) = ,
u_15 = 2, S(u_15) = 2,
u_16 = 2, S(u_16) = 2,
u_17 = 2, S(u_17) = 2,
u_18 = , S(u_18) = 2,
u_19 = , S(u_19) = ,
u_20 = , S(u_20) = 2,
u_21 = , S(u_21) = 2,
u_22 = 22, S(u_22) = 22,
u_23 = , S(u_23) = 2,
u_24 = , S(u_24) = 2,
u_25 = 2, S(u_25) = 2,
u_26 = 22, S(u_26) = 22,
u_27 = 2, S(u_27) = 222.
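Tables like the one above can be produced by brute force. The sketch below does this over F_3 for an assumed 3 × 6 parity-check matrix (an example chosen for illustration, not the matrix of this exercise), and it also implements the decoding rule used in part (c) below: subtract the coset leader of the received word's coset.

from itertools import product

q, n = 3, 6
H = [[1, 2, 0, 1, 0, 0],     # assumed example parity-check matrix over F_3
     [2, 0, 1, 0, 1, 0],
     [0, 1, 2, 0, 0, 1]]

def syndrome(u):
    return tuple(sum(row[j] * u[j] for j in range(n)) % q for row in H)

def weight(u):
    return sum(x != 0 for x in u)

# Keep one minimum-weight word per syndrome; there are q^3 = 27 cosets.
# When a coset has several minimum-weight words, any of them may serve as
# leader; this loop simply keeps the first one found.
leaders = {}
for u in product(range(q), repeat=n):
    s = syndrome(u)
    if s not in leaders or weight(u) < weight(leaders[s]):
        leaders[s] = u

def decode(v):
    """Most likely sent codeword: v minus the coset leader of v's coset."""
    e = leaders[syndrome(v)]
    return tuple((v[j] - e[j]) % q for j in range(n))

print(len(leaders), "cosets")
print(decode((1, 0, 2, 0, 0, 1)))   # an arbitrary received word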

(c) The syndrome of v is 22, which is S(u_9), so v lies in the coset with leader u_9 = 2. This is the unique element of that coset of minimal weight: every other element has the form u_9 + c with c a non-zero codeword, and since u_9 has Hamming distance at least 2 from every non-zero element of C⊥, such elements have weight at least 2. Thus the most likely sent message is v - u_9 = 22.

The syndrome of w is S(u_19), so w lies in the coset with leader u_19. However, this is not the unique element of the coset with minimal weight: a vector u_19 + c with c ∈ C⊥ has weight 2 whenever the second and fifth entries of c are 2 and c has exactly two other non-zero entries. The codewords with this property are 222, 22 and 2222, giving other possible coset leaders 2, and 22 for u_19 + C⊥. Thus the most likely sent messages are , 22, 2, 222, all of which are equally likely.

5. (Optional) Let K be a field and f : K^n → K^m the linear map defined by f(x) = Ax for some m × n matrix A with entries in K. Recalling that Ker f is a vector subspace of K^n (see Exercise 3(a) on Sheet 2), prove that

rank(A) + dim(Ker f) = n.

You may use, without proof, the following fact: if W is a vector subspace of V, then any basis of W may be extended to a basis of V.

Solution. Write k = dim(Ker f), let v_1, ..., v_k be a basis of Ker f, and extend this to a basis v_1, ..., v_k, v_{k+1}, ..., v_n of K^n. We now wish to show that rank(A) = n - k. By definition, rank(A) is the dimension of the image of f, so we should find a basis of this image of size n - k; as a candidate, we have the vectors Av_{k+1}, ..., Av_n, which all lie in this image. To see that they form a basis, we need to check that they span the image and are linearly independent.

First we check that they span. Any element of the image is of the form f(x) = Ax for some x ∈ K^n. Since v_1, ..., v_n is a basis, we can find (unique) λ_1, ..., λ_n ∈ K such that x = λ_1 v_1 + ... + λ_n v_n. Then by linearity

Ax = A(λ_1 v_1 + ... + λ_n v_n) = λ_1 Av_1 + ... + λ_n Av_n = λ_{k+1} Av_{k+1} + ... + λ_n Av_n,

since v_i ∈ Ker f, meaning Av_i = 0, whenever i ≤ k. Thus Av_{k+1}, ..., Av_n span the image of f.

Now we show that these vectors are linearly independent. Let λ_{k+1}, ..., λ_n ∈ K be such that λ_{k+1} Av_{k+1} + ... + λ_n Av_n = 0; we wish to show that λ_{k+1} = ... = λ_n = 0. By linearity, we have

0 = λ_{k+1} Av_{k+1} + ... + λ_n Av_n = A(λ_{k+1} v_{k+1} + ... + λ_n v_n),

meaning, by definition, that λ_{k+1} v_{k+1} + ... + λ_n v_n lies in Ker f. Since v_1, ..., v_k is a basis of Ker f, there are (unique) λ_1, ..., λ_k ∈ K such that

λ_1 v_1 + ... + λ_k v_k = λ_{k+1} v_{k+1} + ... + λ_n v_n,

giving a linear dependence between v_1, ..., v_n. But these vectors are a basis of K^n, so this dependence must be trivial, i.e. λ_i = 0 for all 1 ≤ i ≤ n. In particular, λ_{k+1} = ... = λ_n = 0, as we wanted.
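The rank-nullity statement proved above is easy to sanity-check on concrete matrices. The sketch below works over F_3 (so that the kernel can be enumerated exactly); the matrix A in it is an arbitrary example, and rank(A) is computed by Gaussian elimination modulo 3.

from itertools import product

q = 3
A = [[1, 2, 0, 1],
     [2, 1, 0, 2],        # an assumed example: this row is twice the first
     [0, 1, 1, 1]]
m, n = len(A), len(A[0])

def rank_mod_q(M):
    """Rank of M over F_q, by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] % q != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [(x * inv) % q for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] % q != 0:
                M[i] = [(M[i][j] - M[i][c] * M[r][j]) % q for j in range(n)]
        r += 1
    return r

# Ker f enumerated directly: it has q^(dim Ker f) elements.
kernel = [x for x in product(range(q), repeat=n)
          if all(sum(A[i][j] * x[j] for j in range(n)) % q == 0 for i in range(m))]

r = rank_mod_q(A)
assert len(kernel) == q ** (n - r)    # i.e. rank(A) + dim(Ker f) = n
print("rank =", r, " dim Ker f =", n - r)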