Reed Muller Error Correcting Codes

Ben Cooke

Abstract. This paper starts by defining key terms and operations used with Reed Muller codes and binary numbers. Reed Muller codes are then defined and encoding matrices are discussed. Finally, a method of decoding is explained and an example is given to clarify the method.

1. Introduction. Reed Muller codes are some of the oldest error correcting codes. Error correcting codes are very useful in sending information over long distances or through channels where errors might occur in the message. They have become more prevalent as telecommunications have expanded and developed a use for codes that can self-correct. Reed Muller codes were invented in 1954 by D. E. Muller and I. S. Reed. In 1972, a Reed Muller code was used by Mariner 9 to transmit black and white photographs of Mars; see [7, p. 47]. Reed Muller codes are relatively easy to decode, and first-order codes are especially efficient.

In Section 2, we discuss vector operations and describe how to form a vector from a polynomial. In Section 3, we introduce Reed Muller codes and show how to form encoding matrices and encode messages. In Section 4, we give a method for decoding Reed Muller encoded messages. Finally, we give an example of decoding an encoded message in Section 5.

2. Definition of Terms and Operations. The vector spaces used in this paper consist of strings of length $2^m$, where $m$ is a positive integer, of numbers in $\mathbf{F}_2 = \{0, 1\}$. The codewords of a Reed Muller code form a subspace of such a space.

Vectors can be manipulated by three main operations: addition, multiplication, and the dot product. For two vectors $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$, where each $x_i$ and $y_i$ is either 1 or 0, addition is defined by
$$x + y = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n),$$
where
$$1 + 1 = 0, \quad 0 + 1 = 1, \quad 1 + 0 = 1, \quad 0 + 0 = 0.$$
For example, if $x$ and $y$ are defined as $x = (10011110)$ and $y = (11100001)$, then the sum of $x$ and $y$ is
$$x + y = (10011110) + (11100001) = (01111111).$$
The addition of a scalar $a \in \mathbf{F}_2$ to a vector $x$ is defined by
$$a + x = (a + x_1, a + x_2, \ldots, a + x_n).$$
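To make the arithmetic concrete, here is a minimal Python sketch of these two additions, with vectors represented as tuples of 0s and 1s; the helper names are ours, not the paper's:

```python
# A sketch (not from the paper): componentwise addition over F_2.
def vec_add(x, y):
    """Add two F_2 vectors of equal length, coordinate by coordinate (XOR)."""
    assert len(x) == len(y)
    return tuple((a + b) % 2 for a, b in zip(x, y))

def scalar_add(a, x):
    """Add the scalar a in {0, 1} to every coordinate of x."""
    return tuple((a + xi) % 2 for xi in x)

x = (1, 0, 0, 1, 1, 1, 1, 0)            # x = (10011110)
y = (1, 1, 1, 0, 0, 0, 0, 1)            # y = (11100001)
print(vec_add(x, y))                     # (0, 1, 1, 1, 1, 1, 1, 1), i.e. (01111111)
print(scalar_add(1, (0, 0, 0, 1, 1, 1))) # (1, 1, 1, 0, 0, 0), i.e. 1 + (000111) = (111000)
```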

The complement $\bar x$ of a vector $x$ is the vector equal to $1 + x$. An example of the addition of a constant to a vector is $1 + (000111) = (111000)$.

Multiplication is defined by the formula
$$x * y = (x_1 y_1, x_2 y_2, \ldots, x_n y_n),$$
where each $x_i$ and $y_i$ is either 1 or 0 and
$$1 \cdot 1 = 1, \quad 0 \cdot 1 = 0, \quad 1 \cdot 0 = 0, \quad 0 \cdot 0 = 0.$$
For example, using the same $x$ and $y$ as above, the product of $x$ and $y$ is
$$x * y = (10011110) * (11100001) = (10000000).$$
The multiplication of a vector $x$ by a constant $a \in \mathbf{F}_2$ is defined by
$$a x = (a x_1, a x_2, \ldots, a x_n).$$
An example is $0 \cdot (111001) = (000000)$.

The dot product of $x$ and $y$ is defined by
$$x \cdot y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n.$$
For example, using $x$ and $y$ from above,
$$x \cdot y = (10011110) \cdot (11100001) = 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 1.$$
All three of these operations require vectors with the same number of coordinates.
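The multiplication and dot product admit the same kind of sketch (again our own helper names, with the same tuple representation as above):

```python
# A sketch continuing the same representation: componentwise product and dot product over F_2.
def vec_mul(x, y):
    """Multiply two F_2 vectors of equal length, coordinate by coordinate (AND)."""
    assert len(x) == len(y)
    return tuple(a * b for a, b in zip(x, y))

def dot(x, y):
    """Dot product over F_2: the sum of the coordinatewise products, reduced mod 2."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % 2

x = (1, 0, 0, 1, 1, 1, 1, 0)   # (10011110)
y = (1, 1, 1, 0, 0, 0, 0, 1)   # (11100001)
print(vec_mul(x, y))           # (1, 0, 0, 0, 0, 0, 0, 0), i.e. (10000000)
print(dot(x, y))               # 1
```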

Vectors can be associated with Boolean polynomials. A Boolean polynomial is a linear combination of Boolean monomials with coefficients in $\mathbf{F}_2$. A Boolean monomial $p$ in the variables $x_1, \ldots, x_m$ is an expression of the form
$$p = x_1^{r_1} x_2^{r_2} \cdots x_m^{r_m},$$
where $r_i \in \{0, 1, 2, \ldots\}$ and $1 \le i \le m$. The reduced form $\bar p$ of $p$ is obtained by applying the rules
$$x_i x_j = x_j x_i \quad \text{and} \quad x_i^2 = x_i$$
until the factors are distinct. The degree of $p$ is the ordinary degree of $\bar p$, which is the number of distinct variables in $\bar p$. A Boolean polynomial is in reduced form if each of its monomials is in reduced form. The degree of a Boolean polynomial $q$ is the ordinary degree of its reduced form $\bar q$. An example of a Boolean polynomial in reduced form with degree three is
$$q = x_1 + x_2 + x_1 x_2 + x_2 x_3 x_4.$$
See [6, pp. 268–69].

We can now describe how to associate a Boolean monomial in $m$ variables to a vector with $2^m$ entries. The degree-zero monomial is 1, and the degree-one monomials are $x_1, x_2, \ldots, x_m$. First we define the vectors associated with these monomials. The vector associated with the monomial 1 is simply a vector of length $2^m$ in which every entry is 1. So, in a space of size $2^3$, the vector associated with 1 is (11111111). The vector associated with the monomial $x_1$ is $2^{m-1}$ ones followed by $2^{m-1}$ zeros. The vector associated with the monomial $x_2$ is $2^{m-2}$ ones, followed by $2^{m-2}$ zeros, then another $2^{m-2}$ ones, followed by another $2^{m-2}$ zeros. In general, the vector associated with a monomial $x_i$ is a pattern of $2^{m-i}$ ones followed by $2^{m-i}$ zeros, repeated until $2^m$ values have been defined. For example, in a space of size $2^4$, the vector associated with $x_4$ is (1010101010101010).

To form the vector for a monomial $x_1^{r_1} x_2^{r_2} \cdots x_m^{r_m}$, first put the monomial in reduced form. Then multiply the vectors associated with each variable $x_i$ appearing in the reduced form. For example, in a space with $m = 3$, the vector associated with the monomial $x_1 x_2 x_3$ can be found by multiplying $(11110000) * (11001100) * (10101010)$, which gives (10000000).

To form the vector for a polynomial, simply reduce all of the monomials in the polynomial, and find the vector associated with each of the monomials. Then add all of these vectors together to form the vector associated with the polynomial. This gives us a bijection between reduced polynomials and vectors. From now on, we will treat a reduced polynomial and the vector associated with it interchangeably.
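A rough Python sketch of this correspondence (our own helper names; a monomial is represented by the set of its variable indices, so the monomial 1 is the empty set):

```python
# A sketch of the monomial-to-vector and polynomial-to-vector correspondence described above.
from functools import reduce

def x_vector(i, m):
    """Vector of length 2**m for x_i: 2**(m-i) ones followed by 2**(m-i) zeros, repeated."""
    block = 2 ** (m - i)
    return tuple(1 if (j // block) % 2 == 0 else 0 for j in range(2 ** m))

def monomial_vector(variables, m):
    """Vector for a reduced monomial given as a set of variable indices (empty set = the monomial 1)."""
    ones = tuple(1 for _ in range(2 ** m))
    vecs = [x_vector(i, m) for i in sorted(set(variables))]
    return reduce(lambda u, v: tuple(a * b for a, b in zip(u, v)), vecs, ones)

def polynomial_vector(monomials, m):
    """Vector for a Boolean polynomial given as a list of reduced monomials (each a set of indices)."""
    total = tuple(0 for _ in range(2 ** m))
    for mono in monomials:
        total = tuple((a + b) % 2 for a, b in zip(total, monomial_vector(mono, m)))
    return total

print(monomial_vector({1, 2, 3}, 3))   # (1, 0, 0, 0, 0, 0, 0, 0): x1*x2*x3 -> (10000000)
print(x_vector(4, 4))                  # the alternating pattern (1010101010101010)
```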

3. Reed Muller Code and Encoding Matrices. The $r$th order Reed Muller code R(r, m) is the set of all binary strings (vectors) of length $n = 2^m$ associated with the Boolean polynomials $p(x_1, x_2, \ldots, x_m)$ of degree at most $r$. The 0th order Reed Muller code R(0, m) consists of the binary strings associated with the constant polynomials 0 and 1; that is, $R(0, m) = \{\mathbf{0}, \mathbf{1}\} = \mathrm{Rep}(2^m)$. Thus, R(0, m) is just a repetition of either zeros or ones of length $2^m$. At the other extreme, the $m$th order Reed Muller code R(m, m) consists of all binary strings of length $2^m$; see [6, pp. 270–71].

To define the encoding matrix of R(r, m), let the first row of the encoding matrix be 1, the vector of length $2^m$ with all entries equal to 1. If $r$ is equal to 0, then this row is the only one in the encoding matrix. If $r$ is equal to 1, then add $m$ rows corresponding to the vectors $x_1, x_2, \ldots, x_m$ to the R(0, m) encoding matrix. To form an R(r, m) encoding matrix where $r$ is greater than 1, add $\binom{m}{r}$ rows to the R(r-1, m) encoding matrix. These added rows consist of all the possible reduced degree-$r$ monomials that can be formed from $x_1, x_2, \ldots, x_m$. For example, when $m = 3$ we have

$$
R(1, 3):\quad
\begin{array}{c|cccccccc}
1   & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
x_1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
x_2 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
x_3 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0
\end{array}
$$

The rows $x_1 x_2 = (11000000)$, $x_1 x_3 = (10100000)$, and $x_2 x_3 = (10001000)$ are added to form

$$
R(2, 3):\quad
\begin{array}{c|cccccccc}
1       & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
x_1     & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
x_2     & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
x_3     & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
x_1 x_2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
x_1 x_3 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
x_2 x_3 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{array}
$$

Finally, the row $x_1 x_2 x_3 = (10000000)$ is added to form

$$
R(3, 3):\quad
\begin{array}{c|cccccccc}
1           & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
x_1         & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
x_2         & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
x_3         & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
x_1 x_2     & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
x_1 x_3     & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
x_2 x_3     & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
x_1 x_2 x_3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
$$

Another example of a Reed Muller encoding matrix is

$$
R(2, 4):\quad
\begin{array}{c|cccccccccccccccc}
1       & 1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1 \\
x_1     & 1&1&1&1&1&1&1&1&0&0&0&0&0&0&0&0 \\
x_2     & 1&1&1&1&0&0&0&0&1&1&1&1&0&0&0&0 \\
x_3     & 1&1&0&0&1&1&0&0&1&1&0&0&1&1&0&0 \\
x_4     & 1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0 \\
x_1 x_2 & 1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0 \\
x_1 x_3 & 1&1&0&0&1&1&0&0&0&0&0&0&0&0&0&0 \\
x_1 x_4 & 1&0&1&0&1&0&1&0&0&0&0&0&0&0&0&0 \\
x_2 x_3 & 1&1&0&0&0&0&0&0&1&1&0&0&0&0&0&0 \\
x_2 x_4 & 1&0&1&0&0&0&0&0&1&0&1&0&0&0&0&0 \\
x_3 x_4 & 1&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0
\end{array}
$$

Encoding a message using the Reed Muller code R(r, m) is straightforward. The dimension of R(r, m) is
$$k = 1 + \binom{m}{1} + \binom{m}{2} + \cdots + \binom{m}{r};$$
in other words, the encoding matrix has $k$ rows; see [6, pp. 271–72]. We send messages in blocks of length $k$. If $m = (m_1, m_2, \ldots, m_k)$ is a block, the encoded message $M_c$ is
$$M_c = \sum_{i=1}^{k} m_i R_i,$$
where $R_i$ is the $i$th row of the encoding matrix of R(r, m). For example, using R(1, 3) to encode $m = (0110)$ gives
$$0 \cdot (11111111) + 1 \cdot (11110000) + 1 \cdot (11001100) + 0 \cdot (10101010) = (00111100).$$
Similarly, using R(2, 4) to encode $m = (10101110010)$ gives (0011100100000101).
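As a concreteness check (not part of the paper), the matrix construction and the encoding sum can be sketched in Python; the function names are ours, the small helpers are repeated so the sketch stands alone, and the row order follows the matrices above, degree by degree:

```python
# A sketch of building the R(r, m) encoding matrix and of encoding a message block.
from itertools import combinations

def x_vector(i, m):
    """Vector of length 2**m for x_i: 2**(m-i) ones followed by 2**(m-i) zeros, repeated."""
    block = 2 ** (m - i)
    return tuple(1 if (j // block) % 2 == 0 else 0 for j in range(2 ** m))

def monomial_vector(variables, m):
    """Vector for a reduced monomial given as a tuple of variable indices (empty tuple = 1)."""
    vec = tuple(1 for _ in range(2 ** m))
    for i in variables:
        vec = tuple(a * b for a, b in zip(vec, x_vector(i, m)))
    return vec

def rm_encoding_matrix(r, m):
    """Rows of the R(r, m) encoding matrix: one row per reduced monomial of degree at most r."""
    rows = []
    for degree in range(r + 1):
        for variables in combinations(range(1, m + 1), degree):
            rows.append(monomial_vector(variables, m))
    return rows

def rm_encode(message, r, m):
    """Encode a block of k bits as the F_2 sum of the selected rows."""
    rows = rm_encoding_matrix(r, m)
    assert len(message) == len(rows)
    word = tuple(0 for _ in range(2 ** m))
    for bit, row in zip(message, rows):
        if bit:
            word = tuple((a + b) % 2 for a, b in zip(word, row))
    return word

print(rm_encode((0, 1, 1, 0), 1, 3))                      # (00111100), as in the text
print(rm_encode((1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0), 2, 4)) # reproduces (0011100100000101)
```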

4. Decoding Reed Muller. Decoding Reed Muller encoded messages is more complex than encoding them. The theory behind encoding and decoding is based on the distance between vectors. The distance between any two vectors is the number of places in which the two vectors have different values. The minimum distance between any two codewords in R(r, m) is $2^{m-r}$. The basis for Reed Muller decoding is the assumption that the closest codeword in R(r, m) to the received message is the original encoded message. Thus, for $e$ errors to be corrected in the received message, the distance between any two codewords in R(r, m) must be greater than $2e$.

The decoding method used here is not very efficient, but it is straightforward to implement. It checks each row of the encoding matrix and uses majority logic to determine whether that row was used in forming the encoded message. Thus, it is possible to determine what the error-free encoded message was and what the original message was. This method of decoding is given by the following algorithm. Apply Steps 1 and 2 below to each row of the matrix, starting from the bottom and working upwards.

Step 1. Choose a row in the R(r, m) encoding matrix. Find the $2^{m-r}$ characteristic vectors for that row (this process is described below), and then take the dot product of each of those vectors with the encoded message.

Step 2. Take the majority of the values of the dot products and assign that value to the coefficient of the row.

Step 3. After doing Steps 1 and 2 for each row except the top row, working from the bottom of the matrix up, multiply each coefficient by its corresponding row and add the resulting vectors to form $M_y$. Add this result to the received encoded message. If the resulting vector has more ones than zeros, then the top row's coefficient is 1; otherwise it is 0. Adding the top row, multiplied by its coefficient, to $M_y$ gives the original encoded message, so we can identify the errors. The vector formed by the sequence of coefficients, starting from the top row of the encoding matrix and ending with the bottom row, is the original message.

To find the characteristic vectors of a row of the matrix, take the monomial associated with that row, and let $E$ be the set of all variables $x_i$ that appear in the encoding matrix but not in that monomial. The characteristic vectors are the vectors corresponding to the monomials in the $x_i$ and $\bar x_i$ such that, for every $x_i$ in $E$, exactly one of $x_i$ or $\bar x_i$ appears in each monomial. For example, the last row of the encoding matrix of R(2, 4) is associated with $x_3 x_4$, so its characteristic vectors correspond to the following combinations of $x_1$, $x_2$, $\bar x_1$, and $\bar x_2$:
$$x_1 x_2, \quad \bar x_1 x_2, \quad x_1 \bar x_2, \quad \bar x_1 \bar x_2.$$
These characteristic vectors have the property that their dot product with every row of R(r, m) is zero, except for the row to which they correspond.
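A rough, self-contained Python sketch of this majority-logic procedure, specialized to first-order codes R(1, m) for brevity (the function names are ours, not the paper's; ties in the majority vote are resolved to 0):

```python
# A sketch of the majority-logic decoder described above, for first-order codes R(1, m).
from itertools import product

def x_vector(i, m):
    block = 2 ** (m - i)
    return tuple(1 if (j // block) % 2 == 0 else 0 for j in range(2 ** m))

def complement(v):
    return tuple(1 - a for a in v)

def vec_mul(u, v):
    return tuple(a * b for a, b in zip(u, v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

def characteristic_vectors(i, m):
    """For the row x_i of R(1, m): products choosing x_j or its complement for every j != i."""
    others = [j for j in range(1, m + 1) if j != i]
    vectors = []
    for signs in product((0, 1), repeat=len(others)):
        v = tuple(1 for _ in range(2 ** m))
        for j, s in zip(others, signs):
            factor = x_vector(j, m) if s == 0 else complement(x_vector(j, m))
            v = vec_mul(v, factor)
        vectors.append(v)
    return vectors

def rm1_decode(received, m):
    """Recover the message coefficients (top row first) of an R(1, m) word by majority logic."""
    coeffs = {}
    for i in range(m, 0, -1):                        # bottom row upwards (Steps 1 and 2)
        votes = [dot(cv, received) for cv in characteristic_vectors(i, m)]
        coeffs[i] = 1 if 2 * sum(votes) > len(votes) else 0
    m_y = tuple(0 for _ in range(2 ** m))            # Step 3
    for i in range(1, m + 1):
        if coeffs[i]:
            m_y = tuple((a + b) % 2 for a, b in zip(m_y, x_vector(i, m)))
    residual = tuple((a + b) % 2 for a, b in zip(m_y, received))
    top = 1 if sum(residual) > 2 ** (m - 1) else 0   # more ones than zeros
    return (top,) + tuple(coeffs[i] for i in range(1, m + 1))

print(rm1_decode((1, 0, 1, 1, 1, 1, 0, 0), 3))       # (0, 1, 1, 0): the example of Section 5 below
```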
5. Example. If the original message is $m = (0110)$ and we use R(1, 3), then the encoded message is $M_c = (00111100)$. Because the minimum distance of R(1, 3) is $2^{3-1} = 4$, this code can correct one error. Let the encoded message after an error be $M_e = (10111100)$.

The characteristic vectors of the last row, $x_3 = (10101010)$, are $x_1 x_2$, $x_1 \bar x_2$, $\bar x_1 x_2$, and $\bar x_1 \bar x_2$. The vector associated with $x_1$ is (11110000), so $\bar x_1 = (00001111)$. The vector associated with $x_2$ is (11001100), so $\bar x_2 = (00110011)$. Therefore, we have $x_1 x_2 = (11000000)$, $x_1 \bar x_2 = (00110000)$, $\bar x_1 x_2 = (00001100)$, and $\bar x_1 \bar x_2 = (00000011)$. Taking the dot products of these vectors with $M_e$, we get
$$(11000000) \cdot (10111100) = 1, \quad (00110000) \cdot (10111100) = 0,$$
$$(00001100) \cdot (10111100) = 0, \quad (00000011) \cdot (10111100) = 0.$$

We can conclude that the coefficient of $x_3$ is 0.

Doing the same for the second-to-last row of the matrix, $x_2 = (11001100)$, we get the characteristic vectors $x_1 x_3$, $x_1 \bar x_3$, $\bar x_1 x_3$, and $\bar x_1 \bar x_3$. These vectors are (10100000), (01010000), (00001010), and (00000101), respectively. Taking the dot products of these vectors with $M_e$, we get
$$(10100000) \cdot (10111100) = 0, \quad (01010000) \cdot (10111100) = 1,$$
$$(00001010) \cdot (10111100) = 1, \quad (00000101) \cdot (10111100) = 1.$$
So we can conclude that the coefficient of $x_2$ is 1.

Doing the same for the second row of the matrix, $x_1 = (11110000)$, we get
$$(10001000) \cdot (10111100) = 0, \quad (00100010) \cdot (10111100) = 1,$$
$$(01000100) \cdot (10111100) = 1, \quad (00010001) \cdot (10111100) = 1.$$
We can conclude that the coefficient of $x_1$ is also 1.

If we add $0 \cdot (10101010)$, $1 \cdot (11001100)$, and $1 \cdot (11110000)$, we get $M_y = (00111100)$. Then the sum of $M_y$ and $M_e$ is
$$(00111100) + (10111100) = (10000000).$$
This vector has more zeros than ones, so the coefficient of the first row of the encoding matrix is 0. Thus we can put together the coefficients for the four rows of the matrix, 0, 1, 1, and 0, and see that the original message was (0110). We can also see that the error occurred in the first position, since the error-free encoded message is $M_c = (00111100)$.

References

[1] Arazi, Benjamin, A Commonsense Approach to the Theory of Error Correcting Codes, MIT Press, 1988.
[2] Hoffman, D. G., et al., Coding Theory: The Essentials, Marcel Dekker, 1991.
[3] Mann, Henry B., Error Correcting Codes, John Wiley and Sons, 1968.
[4] Purser, Michael, Introduction to Error-Correcting Codes, Artech House, Norwood, MA, 1995.
[5] Reed, Irving S., A Class of Multiple-Error-Correcting Codes and the Decoding Scheme, MIT Lincoln Laboratory, 1953.
[6] Roman, Steven, Coding and Information Theory, Springer-Verlag, 1992.
[7] van Lint, J. H., Introduction to Coding Theory, Springer-Verlag, 1982.