Hamming Codes 11/17/04


History In the late 1940s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability not only to detect errors but to correct them. His search for error-correcting codes led to the Hamming Codes, perfect 1-error correcting codes, and the extended Hamming Codes, 1-error correcting and 2-error detecting codes.

Uses Hamming Codes are still widely used in computing, telecommunications, and other applications. Hamming Codes have also been applied in data compression, in some solutions to the popular puzzle The Hat Game, and in Block Turbo Codes.

A [7,4] binary Hamming Code Let our codeword be (x_1 x_2 ... x_7) ∈ F_2^7. The bits x_3, x_5, x_6, x_7 are chosen according to the message (perhaps the message itself is (x_3 x_5 x_6 x_7)). The remaining bits are parity checks: x_4 := x_5 + x_6 + x_7 (mod 2), x_2 := x_3 + x_6 + x_7, x_1 := x_3 + x_5 + x_7.
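To make the encoding rule concrete, here is a minimal Python sketch (the function name is illustrative, not from the source) that places a 4-bit message in positions 3, 5, 6, 7 and fills in the parity bits x_1, x_2, x_4 as above.

def encode_hamming74(m3, m5, m6, m7):
    """Encode 4 message bits into a [7,4] Hamming codeword (x1..x7).

    Message bits occupy positions 3, 5, 6, 7; parity bits occupy 1, 2, 4.
    """
    x3, x5, x6, x7 = m3, m5, m6, m7
    x4 = (x5 + x6 + x7) % 2
    x2 = (x3 + x6 + x7) % 2
    x1 = (x3 + x5 + x7) % 2
    return [x1, x2, x3, x4, x5, x6, x7]

# Example: the message (1, 0, 1, 0) becomes the codeword (1 0 1 1 0 1 0).
print(encode_hamming74(1, 0, 1, 0))  # [1, 0, 1, 1, 0, 1, 0]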

[7,4] binary Hamming codewords

A [7,4] binary Hamming Code Let a = x_4 + x_5 + x_6 + x_7 (= 1 iff one of these bits is in error). Let b = x_2 + x_3 + x_6 + x_7. Let c = x_1 + x_3 + x_5 + x_7. If there is an error (assuming at most one), then abc will be the binary representation of the subscript of the offending bit.

A [7,4] binary Hamming Code If (y_1 y_2 ... y_7) is received and abc ≠ 000, then we assume the bit in position abc is in error and switch it. If abc = 000, we assume there were no errors (so if there are three or more errors we may recover the wrong codeword).
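A minimal decoding sketch along these lines (the function name is illustrative): it computes a, b, c from the received word and flips the indicated bit. It recovers the codeword from the example worked a few slides below.

def decode_hamming74(y):
    """Correct at most one error in a received [7,4] Hamming word y = [y1..y7]."""
    y1, y2, y3, y4, y5, y6, y7 = y
    a = (y4 + y5 + y6 + y7) % 2
    b = (y2 + y3 + y6 + y7) % 2
    c = (y1 + y3 + y5 + y7) % 2
    pos = 4 * a + 2 * b + c          # read abc as a binary number
    x = list(y)
    if pos != 0:                     # a nonzero syndrome points at the bad bit
        x[pos - 1] ^= 1              # positions are 1-based in the slides
    return x

# Received (1 0 1 0 0 1 0): abc = 100 = 4, so bit 4 is flipped.
print(decode_hamming74([1, 0, 1, 0, 0, 1, 0]))  # [1, 0, 1, 1, 0, 1, 0]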

Definition: Generator and Check Matrices For an [n, k] linear code, the generator matrix is a k × n matrix whose row space is the given code. A check matrix for an [n, k] code is a generator matrix for the dual code. In other words, it is an (n-k) × n matrix M for which Mx^T = 0 for all x in the code.

A Construction for binary Hamming Codes For a given r, form an r × (2^r - 1) matrix M, the columns of which are the binary representations (r bits long) of 1, ..., 2^r - 1. The linear code for which this is the check matrix is a [2^r - 1, 2^r - 1 - r] binary Hamming Code = {x = (x_1 x_2 ... x_n) : Mx^T = 0}.

Example Check Matrix A check matrix for a [7,4] binary Hamming Code, with column i the binary representation of i:

L_3 =
0 0 0 1 1 1 1
0 1 1 0 0 1 1
1 0 1 0 1 0 1

Syndrome Decoding Let y = (y_1 y_2 ... y_n) be a received word. The syndrome of y is S := L_r y^T. If S = 0 then there was no error. If S ≠ 0 then S is the binary representation of some integer 1 ≤ t ≤ n = 2^r - 1, and the intended codeword is x = (y_1 ... y_t + 1 ... y_n), i.e., y with its t-th bit flipped.

Example Using L_3 Suppose (1 0 1 0 0 1 0) is received. The syndrome is S = L_3 y^T = (1 0 0)^T; 100 is 4 in binary, so the intended codeword was (1 0 1 1 0 1 0).
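The general construction and the syndrome decoder can be sketched together in a few lines of Python (names such as hamming_check_matrix are illustrative, and numpy is assumed); for r = 3 this reproduces the L_3 example above.

import numpy as np

def hamming_check_matrix(r):
    """r x (2^r - 1) check matrix: column i is the binary representation of i."""
    n = 2**r - 1
    return np.array([[(i >> (r - 1 - row)) & 1 for i in range(1, n + 1)]
                     for row in range(r)])

def syndrome_decode(y, r):
    """Flip the bit indicated by the syndrome S = M y^T (at most one error assumed)."""
    M = hamming_check_matrix(r)
    S = M.dot(y) % 2
    t = int("".join(map(str, S)), 2)   # read the syndrome as a binary number
    x = list(y)
    if t != 0:
        x[t - 1] ^= 1
    return x

print(syndrome_decode([1, 0, 1, 0, 0, 1, 0], 3))  # [1, 0, 1, 1, 0, 1, 0]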

Extended [8,4] binary Hamming Code As with the [7,4] binary Hamming Code: x_3, x_5, x_6, x_7 are chosen according to the message, and x_4 := x_5 + x_6 + x_7, x_2 := x_3 + x_6 + x_7, x_1 := x_3 + x_5 + x_7. Add a new bit x_0 such that x_0 = x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7, i.e., the new bit makes the sum of all the bits zero. x_0 is called a parity check.
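A small sketch of the extension step (names illustrative): the overall parity bit x_0 is prepended so that the eight bits sum to zero mod 2.

def extend_with_parity(codeword7):
    """Prepend the overall parity bit x0 to a [7,4] Hamming codeword."""
    x0 = sum(codeword7) % 2          # x0 = x1 + ... + x7 (mod 2)
    return [x0] + list(codeword7)    # the eight bits now sum to 0 (mod 2)

c = [1, 0, 1, 1, 0, 1, 0]            # the [7,4] codeword from the earlier example
print(extend_with_parity(c))         # [0, 1, 0, 1, 1, 0, 1, 0]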

Extended binary Hamming Code The minimum distance between any two codewords is now 4, so an extended Hamming Code is a 1-error correcting and 2-error detecting code. The general construction of a [2^r, 2^r - 1 - r] extended code from a [2^r - 1, 2^r - 1 - r] binary Hamming Code is the same: add a parity check bit.

Check Matrix Construction of Extended Hamming Code The check matrix of an extended Hamming Code can be constructed from the check matrix of a Hamming code by adding a zero column on the left and a row of 1s to the bottom.
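A minimal numpy sketch of this construction for the [8,4] extended code (variable names are illustrative): start from L_3 and border it with a zero column and a row of ones.

import numpy as np

# Check matrix of the [7,4] Hamming code (columns = binary representations of 1..7).
L3 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]])

# Extended check matrix: a zero column on the left, then a row of 1s at the bottom.
zero_col = np.zeros((L3.shape[0], 1), dtype=int)
ones_row = np.ones((1, L3.shape[1] + 1), dtype=int)
L3_ext = np.vstack([np.hstack([zero_col, L3]), ones_row])
print(L3_ext)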

Decoding Extended Hamming Code

q-ary Hamming Codes The binary construction generalizes to Hamming Codes over an alphabet A = {0, ..., q-1}, q ≥ 2 (q a prime power, so that A can be identified with F_q). For a given r, form an r × (q^r - 1)/(q - 1) matrix M over A, any two columns of which are linearly independent. M determines a [(q^r - 1)/(q - 1), (q^r - 1)/(q - 1) - r] (= [n, k]) q-ary Hamming Code for which M is the check matrix.
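One standard way to pick such columns (a sketch, not necessarily the choice made in these notes) is to take, from each 1-dimensional subspace of F_q^r, the representative whose first nonzero entry is 1. For prime q this can be sketched as follows.

from itertools import product

def qary_hamming_check_matrix(q, r):
    """One column per 1-dimensional subspace of F_q^r: the representative whose
    first nonzero entry is 1. Any two such columns are linearly independent.
    Arithmetic mod q is field arithmetic when q is prime."""
    cols = []
    for v in product(range(q), repeat=r):
        nonzero = [x for x in v if x != 0]
        if nonzero and nonzero[0] == 1:      # normalized: first nonzero entry is 1
            cols.append(v)
    # Arrange the columns into an r x (q^r - 1)/(q - 1) matrix (list of rows).
    return [[col[row] for col in cols] for row in range(r)]

# Ternary example: a 2 x 4 check matrix for a [4, 2] ternary Hamming code.
for row in qary_hamming_check_matrix(3, 2):
    print(row)   # [0, 1, 1, 1] and [1, 0, 1, 2]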

Example: ternary [4, 2] Hamming Two check matrices for some [4, 2] ternary Hamming Codes:

Syndrome decoding: the q-ary case The syndrome of a received word y, S := My^T, will be a multiple of one of the columns of M, say S = α m_i, with α a scalar and m_i the i-th column of M. Assume an error vector of weight 1 was introduced: y = x + (0 ... α ... 0), with α in the i-th spot.
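A hedged sketch of the q-ary decoder for prime q (the check matrix below is the one produced by the sketch above, not necessarily the one on the slides, so the worked numbers differ from the slide example): find the column m_i and scalar α with S = α m_i, then subtract α from position i.

def qary_syndrome_decode(y, M, q):
    """Correct at most one q-ary error: find S = alpha * m_i and subtract alpha at i."""
    r, n = len(M), len(M[0])
    S = [sum(M[row][j] * y[j] for j in range(n)) % q for row in range(r)]
    if all(s == 0 for s in S):
        return list(y)                        # zero syndrome: no error detected
    for i in range(n):                        # find the column S is a multiple of
        for alpha in range(1, q):
            if all((alpha * M[row][i]) % q == S[row] for row in range(r)):
                x = list(y)
                x[i] = (x[i] - alpha) % q     # remove the error alpha at position i
                return x
    return list(y)                            # more than one error: give up

# Ternary [4,2] example with check matrix rows (0 1 1 1) and (1 0 1 2):
M = [[0, 1, 1, 1],
     [1, 0, 1, 2]]
print(qary_syndrome_decode([0, 1, 0, 1], M, 3))  # [0, 1, 1, 1]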

Example: q-ary Syndrome Take a [4,2] ternary code with the check matrix from the previous slide, and suppose the word (0 1 1 1) is received. The syndrome is 2 times the third column of M, so decode (0 1 1 1) as (0 1 1 1) - (0 0 2 0) = (0 1 2 1).

Perfect 1-error correcting Hamming Codes are perfect 1-error correcting codes. That is, any received word with at most one error will be decoded correctly, and the code is as large as possible (has the largest possible number of codewords) among codes of that length that do this. For a given r, any perfect 1-error correcting binary linear code of length n = 2^r - 1 and dimension n - r is a Hamming Code.

Proof: 1-error correcting A code will be 1-error correcting if the minimum distance between any two codewords is at least 3, since then spheres of radius 1 centered at codewords are disjoint; it will in addition be perfect if those spheres cover the whole codespace.

Proof: 1-error correcting Suppose codewords x, y differ in 1 bit. Then x - y is a codeword of weight 1, so M(x-y)^T is a nonzero multiple of a column of M and hence M(x-y)^T ≠ 0, contradicting the fact that x - y is in the code. If x, y differ in 2 bits, then M(x-y)^T is the difference of two nonzero multiples of distinct columns of M. No two columns of M are linearly dependent, so M(x-y)^T ≠ 0, another contradiction. Thus the minimum distance is at least 3.

Perfect A sphere of radius δ centered at x is S_δ(x) = {y in A^n : d_H(x, y) ≤ δ}, where A is the alphabet, F_q, and d_H is the Hamming distance. A sphere of radius e contains C(n,0) + C(n,1)(q-1) + ... + C(n,e)(q-1)^e words. If C is an e-error correcting code, then the spheres of radius e centered at its codewords are disjoint, so |C| · (C(n,0) + C(n,1)(q-1) + ... + C(n,e)(q-1)^e) ≤ q^n.

Perfect This last inequality is called the sphere packing bound for an e-error correcting code C of length n over F_q: |C| ≤ q^n / (C(n,0) + C(n,1)(q-1) + ... + C(n,e)(q-1)^e), where n is the length of the code and, in our case, e = 1. A code for which equality holds is called perfect.

Proof: Perfect The right side of this bound, for e = 1, is q^n / (1 + n(q-1)). The left side is |C| = q^{n-r}, where n = (q^r - 1)/(q-1). These are equal because q^{n-r}(1 + n(q-1)) = q^{n-r}(1 + (q^r - 1)) = q^n. For example, for the binary [7,4] code, 2^4 · (1 + 7) = 2^7.

Applications Data compression, Turbo Codes, and The Hat Game.

Data Compression Hamming Codes can be used for a form of lossy compression. If n = 2^r - 1 for some r, then any n-tuple of bits x is within distance at most 1 of a Hamming codeword c. Let G be a generator matrix for the Hamming Code, and let mG = c. For compression, store x as m. For decompression, expand m back to c = mG. This saves r bits of space but corrupts (at most) 1 bit.
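A minimal sketch of this scheme for the [7,4] code (function names are illustrative): compress a 7-bit word to the 4 message bits of the nearest codeword, and decompress by re-encoding; at most one of the 7 bits changes.

def compress7(x):
    """Map any 7-bit word to the 4 message bits of the nearest [7,4] codeword."""
    y1, y2, y3, y4, y5, y6, y7 = x
    a = (y4 + y5 + y6 + y7) % 2
    b = (y2 + y3 + y6 + y7) % 2
    c = (y1 + y3 + y5 + y7) % 2
    pos = 4 * a + 2 * b + c
    nearest = list(x)
    if pos != 0:
        nearest[pos - 1] ^= 1                 # move to the codeword within distance 1
    return [nearest[2], nearest[4], nearest[5], nearest[6]]   # bits x3, x5, x6, x7

def decompress7(m):
    """Re-encode the stored message bits into the 7-bit codeword."""
    x3, x5, x6, x7 = m
    x4 = (x5 + x6 + x7) % 2
    x2 = (x3 + x6 + x7) % 2
    x1 = (x3 + x5 + x7) % 2
    return [x1, x2, x3, x4, x5, x6, x7]

x = [1, 0, 1, 0, 0, 1, 0]                     # an arbitrary 7-bit word
print(decompress7(compress7(x)))              # [1, 0, 1, 1, 0, 1, 0]: differs in 1 bit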

The Hat Game A group of n players enters a room, whereupon each player receives a hat. Each player can see everyone else's hat but not his own. The players must each simultaneously guess a hat color, or pass. The group loses if any player guesses the wrong hat color or if every player passes. Players are not necessarily anonymous; they can be numbered.

The Hat Game Assignment of hats is assumed to be random. The players can meet beforehand to devise a strategy. The goal is to devise the strategy that gives the highest probability of winning.

Source Notes on Coding Theory by J.I. Hall http://www.mth.msu.edu/~jhall/classes/codenotes/coding-notes.html