Lecture 14: Hamming and Hadamard Codes

CSCI-B609: A Theorist's Toolkit, Fall 2016                                        Oct 6
Lecturer: Yuan Zhou                                                Scribe: Kaiyuan Zhu

1 Recap

Recall from the last lecture that error-correcting codes are injective maps from k symbols to n symbols over an alphabet Σ,

    Enc: Σ^k → Σ^n,

where k and n are referred to as the message dimension and the block length, respectively. We call the image of the encoding function the code, usually denoted by C, i.e. C = Im(Enc), and we call an element y ∈ C a codeword. The minimum distance d is defined as the smallest Hamming distance between two distinct codewords,

    d = min_{y ≠ y' ∈ C} Δ(y, y') = min_{y ≠ y' ∈ C} |{i : y_i ≠ y'_i}|.

We want d to be large so that more errors can be tolerated, but a larger d limits how many codewords we can pack into Σ^n. Therefore we have to sacrifice the rate k/n to keep the same number of codewords. In many ways, coding theory is about exploring this tradeoff.

2 Linear Codes

In coding theory, a linear code is an error-correcting code in which any linear combination of codewords is still a codeword. Linear codes have the following advantages: (i) it is easy to determine the minimum distance; and (ii) they admit simple encoding and decoding algorithms.

Definition 1 (Linear code). Let Σ = F_q be a finite field with q elements. Then C ⊆ F_q^n is linear if for all y, y' ∈ C we have y + y' ∈ C. Equivalently, let G ∈ F_q^{n×k} be a full-rank n × k matrix (making the map injective); then Enc: F_q^k → F_q^n given by x ↦ Gx defines a linear code with generator matrix G.

Example. Let q = 2, n = 3 and k = 2, and take the generator matrix

    G = ( 1 0 )
        ( 0 1 )
        ( 1 1 ),

so that

    G (x_1, x_2)^T = (x_1, x_2, x_1 + x_2)^T.

Thus C = Im(G) = {000, 011, 101, 110}.
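To make the map x ↦ Gx concrete, here is a small Python sketch (my illustration, not part of the original notes; the helper name encode is an assumption) that applies the generator matrix of the example above to every message in F_2^2 and lists the resulting codewords.

from itertools import product

def encode(G, x):
    """Multiply the n x k generator matrix G by the message x over F_2."""
    n, k = len(G), len(G[0])
    return [sum(G[i][j] * x[j] for j in range(k)) % 2 for i in range(n)]

# Generator matrix of the [3, 2]_2 example: Gx = (x_1, x_2, x_1 + x_2).
G = [[1, 0],
     [0, 1],
     [1, 1]]

# Enumerate all messages in F_2^2 and list the codewords.
codewords = [encode(G, list(x)) for x in product([0, 1], repeat=2)]
print(codewords)  # [[0,0,0], [0,1,1], [1,0,1], [1,1,0]], i.e. C = {000, 011, 101, 110}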

Note that for linear codes we henceforth use the notation [n, k(, d)]_q, where n is the block length, k is the message dimension, and d is the minimum distance if known.

Definition 2 (Hamming weight). The Hamming weight of x ∈ F_q^n is wt(x) = Δ(x, 0).

Fact 1. In a linear code, the minimum distance d is equal to the minimum Hamming weight of a nonzero codeword.

Proof.

    d = min_{y ≠ y' ∈ C} Δ(y, y') = min_{y ≠ y' ∈ C} Δ(y - y', 0) = min_{y ∈ C, y ≠ 0} wt(y).

Definition 3 (Dual code). Given an [n, k]_q code C, the orthogonal space C^⊥ = {y ∈ F_q^n : y^T x = 0 for all x ∈ C} is called the dual code of C. Note that C^⊥ has parameters [n, n - k]_q.

Definition 4 (Parity check matrix). The parity check matrix H of C is the (n - k) × n matrix such that C^⊥ = Im(Enc^⊥), where Enc^⊥: F_q^{n-k} → F_q^n maps w to H^T w. In other words, H^T is the generator matrix of C^⊥.

Example. Reconsider the previous example, in which C = {000, 011, 101, 110}. Then C^⊥ = {000, 111} and H = (1, 1, 1).

Fact 2. y ∈ C if and only if Hy = 0. (This re-expresses the code as the null space of the parity check matrix.)

Proof. Notice that H^T is the generator matrix of C^⊥, i.e. C^⊥ is the row span of H. Writing h_1^T, ..., h_{n-k}^T for the rows of H,

    Hx = 0  ⟺  h_i^T x = 0 for every i ∈ [n - k]
            ⟺  Σ_{i=1}^{n-k} a_i h_i^T x = 0 for all a_1, ..., a_{n-k} ∈ F_q
            ⟺  y^T x = 0 for all y ∈ C^⊥
            ⟺  x ∈ (C^⊥)^⊥ = C.
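The two facts above are easy to sanity-check by brute force on the [3, 2]_2 example. The following Python sketch (again only illustrative; mat_vec_mod2 is an assumed helper) verifies that the minimum pairwise distance equals the minimum weight of a nonzero codeword (Fact 1), and that Hy = 0 holds exactly for the codewords (Fact 2).

from itertools import product

def mat_vec_mod2(M, v):
    """Matrix-vector product over F_2."""
    return [sum(a * b for a, b in zip(row, v)) % 2 for row in M]

G = [[1, 0], [0, 1], [1, 1]]   # generator matrix of the example
H = [[1, 1, 1]]                # its parity check matrix

codewords = {tuple(mat_vec_mod2(G, list(x))) for x in product([0, 1], repeat=2)}

# Fact 1: minimum pairwise distance equals minimum nonzero weight (both are 2 here).
dist = lambda y, z: sum(a != b for a, b in zip(y, z))
min_distance = min(dist(y, z) for y in codewords for z in codewords if y != z)
min_weight = min(sum(y) for y in codewords if any(y))
assert min_distance == min_weight == 2

# Fact 2: a word y lies in C if and only if Hy = 0.
for y in product([0, 1], repeat=3):
    assert (mat_vec_mod2(H, list(y)) == [0]) == (tuple(y) in codewords)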

Corollary 3. The minimum distance d is the minimum number of columns of H that are linearly dependent.

Proof.

    d = min_{y ∈ C, y ≠ 0} wt(y) = min{wt(y) : y ≠ 0, Hy = 0},

and a nonzero y with Hy = 0 and wt(y) = w gives a vanishing nontrivial combination of exactly w columns of H, and vice versa.

3 Hamming Code

The Hamming code [1] is a linear code over q = 2 with excellent rate k/n but low distance, as we will see.

Definition 5 (Hamming code). Let r ∈ N^+. Define the parity check matrix of the Hamming code to be the matrix H ∈ F_2^{r × (2^r - 1)} whose columns are the 2^r - 1 distinct nonzero column vectors of F_2^r.

Example. For r = 2,

    H = ( 1 0 1 )
        ( 0 1 1 )

and C = {000, 111}.

Theorem 4. The Hamming code is a [2^r - 1, 2^r - 1 - r, 3]_2 code.

Proof. We only need to prove d = 3, which by Corollary 3 is equivalent to saying that the minimum number of linearly dependent columns of H is 3. Since 0 is not a column of H, every 2 columns are linearly independent. But there obviously exist triples of linearly dependent columns, for instance

    (0, ..., 0, 0, 1)^T + (0, ..., 0, 1, 0)^T + (0, ..., 0, 1, 1)^T = 0.

Remark. Letting n = 2^r - 1, the Hamming code is an [n, n - log_2(n + 1), 3]_2 code. Since the distance is 3, the Hamming code is uniquely decodable for up to ⌊(3 - 1)/2⌋ = 1 error. In fact, we can correct one error easily. Let y ∈ C be any codeword, and let z = y + e_i be the received message. Then

    Hz = H(y + e_i) = He_i,

which is just the i-th column of H; if instead z = y, then Hz = 0, which indicates that y was not modified. For example, if a codeword y is received with position 3 flipped, then Hz equals the third column of H, indicating that index 3 has changed.
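The decoding rule just described amounts to matching the syndrome Hz against the columns of H. Below is a hedged Python sketch (my own, not from the scribe notes; the names syndrome and correct_one_error are assumptions) that builds the parity check matrix of the r = 3, i.e. [7, 4, 3], Hamming code and corrects a single flipped bit.

r = 3
n = 2**r - 1  # block length 7

# Columns of H: all nonzero vectors of F_2^r (binary expansions of 1..n, most significant bit first).
columns = [[(i >> b) & 1 for b in reversed(range(r))] for i in range(1, n + 1)]
H = [[col[row] for col in columns] for row in range(r)]  # r x n parity check matrix

def syndrome(z):
    """Compute Hz over F_2."""
    return [sum(h * zi for h, zi in zip(row, z)) % 2 for row in H]

def correct_one_error(z):
    """If Hz is nonzero, flip the position whose column of H equals the syndrome."""
    s = syndrome(z)
    if any(s):
        i = columns.index(s)  # the error is at position i (0-indexed)
        z = list(z)
        z[i] ^= 1
    return z

# Take a codeword (Hy = 0), flip one bit, and recover it.
y = [0] * n                 # the all-zeros word is a codeword of every linear code
z = list(y)
z[4] ^= 1                   # single error at index 4 (0-indexed)
assert syndrome(y) == [0] * r
assert correct_one_error(z) == y

The column ordering used here (binary expansions of 1 through n) is just one convenient choice; any ordering of the nonzero vectors yields an equivalent code.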

Definition 6 (Perfect code). C is a perfect code if the Hamming balls of radius t (the maximum number of correctable errors) centered at the codewords partition Σ^n exactly.

Theorem 5. The Hamming code is perfect.

Proof. For every x ∈ F_2^n, if Hx = 0 then x ∈ C. Otherwise Hx = h_i for some i, where h_i is the i-th column of H, and hence H(x + e_i) = 0 and therefore x + e_i ∈ C. So every word lies within distance 1 of some codeword, and since d = 3 the radius-1 balls around distinct codewords are disjoint; hence they partition F_2^n.

4 Hadamard Code

The Hadamard code is a code with extremely low rate but high distance. It is often used for error detection and correction when transmitting messages over very noisy or unreliable channels.

Definition 7 (Hadamard code). Let r ∈ N^+. The generator matrix of the Hadamard code is the 2^r × r matrix G whose rows are all possible binary strings in F_2^r.

Example. For r = 2, we have

    G = ( 0 0 )
        ( 0 1 )
        ( 1 0 )
        ( 1 1 ),

which maps the message x = (x_1, x_2)^T to Gx = (0, x_2, x_1, x_1 + x_2)^T.

Fact 6. The Hadamard code is a [2^r, r, 2^{r-1}]_2 code.

Proof. It suffices to prove that the minimum weight of a nonzero codeword is 2^{r-1}. Let x ≠ 0 in F_2^r, i.e. there exists k such that x_k = 1. Writing g_i^T for the i-th row of G,

    wt(Gx) = 2^r · Pr_{i ∈ [2^r]} [ g_i^T x = 1 ]
           = 2^r · Pr_{y ∈ F_2^r} [ y^T x = 1 ]
           = 2^r · Pr_{y_{[r]\{k}} ∈ F_2^{r-1}, y_k ∈ F_2} [ y_k x_k + Σ_{i ≠ k} y_i x_i = 1 ]
           = 2^r · E_{y_{[r]\{k}} ∈ F_2^{r-1}} [ Pr_{y_k ∈ F_2} [ y_k = 1 + Σ_{i ≠ k} y_i x_i ] ]
           = 2^r · (1/2) = 2^{r-1}.

Remark. In other words, the Hadamard code is an [n, log_2 n, n/2]_2 code with n = 2^r.
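To see Fact 6 numerically, the following short sketch (illustrative only; encode is again an assumed helper) builds the 2^r × r generator matrix for r = 3 and checks that every nonzero codeword has weight exactly 2^{r-1} = 4.

from itertools import product

r = 3
rows = [list(y) for y in product([0, 1], repeat=r)]  # the 2^r x r generator matrix, one row per y in F_2^r

def encode(x):
    """Codeword of the message x in F_2^r: one inner product y^T x per row y."""
    return [sum(a * b for a, b in zip(y, x)) % 2 for y in rows]

for x in product([0, 1], repeat=r):
    if any(x):
        # Fact 6: every nonzero codeword has Hamming weight 2^(r-1).
        assert sum(encode(list(x))) == 2**(r - 1)
print("all", 2**r - 1, "nonzero codewords have weight", 2**(r - 1))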

References

[1] Hamming, R. W. (1950). Error detecting and error correcting codes. Bell System Technical Journal, 29(2), 147-160.

[2] http://www.cs.cmu.edu/~odonnell/toolkit3/lecture.pdf