L. R. Welch

THE ORIGINAL VIEW OF REED-SOLOMON CODES

[Polynomial Codes over Certain Finite Fields, I. S. Reed and G. Solomon, Journal of SIAM, June 1960]

Parameters: Let GF(2^n) be the field with 2^n elements. The number of message symbols encoded into a codeword is M. The number of code symbols transmitted is N = 2^n. Message symbols are elements of GF(2^n). Code symbols are elements of GF(2^n).

The original view of Reed and Solomon is that the codeword symbols are the VALUES of a polynomial whose COEFFICIENTS are the message symbols.

MATHEMATICALLY: Let (x_1, x_2, ..., x_N) be an enumeration of the elements of the field. Let (m_1, m_2, ..., m_M) be a message where m_i ∈ GF(2^n). Define a polynomial by

P(Z) = m_1 + m_2 Z + m_3 Z^2 + ... + m_M Z^(M-1)

Then the codeword is (P(x_1), P(x_2), ..., P(x_N)).

The enumeration used by Reed and Solomon was as follows. Let α be a primitive element in GF(2^n). Then the enumeration is

(x_1, x_2, ..., x_N) = (0, α, α^2, ..., α^(N-2), 1)
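As a concrete illustration of this evaluation encoding, here is a small Python sketch. It uses the prime field GF(13), with 2 as a primitive element, as a stand-in for the paper's GF(2^n), since prime-field arithmetic is plain modular arithmetic; the message values are arbitrary sample data.

```python
p = 13        # field size, a prime stand-in for 2^n
alpha = 2     # primitive element of GF(13): its powers run through every nonzero element

def poly_eval(coeffs, x):
    """Evaluate m_1 + m_2*x + ... + m_M*x^(M-1) by Horner's rule, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# the Reed-Solomon enumeration (0, alpha, alpha^2, ..., alpha^(N-2), 1)
xs = [0] + [pow(alpha, i, p) for i in range(1, p - 1)] + [1]

message = [5, 1, 7]                             # M = 3 message symbols (sample data)
codeword = [poly_eval(message, x) for x in xs]  # N = 13 code symbols
```

Since α is primitive, the enumeration really does hit every field element exactly once, and the first code symbol is P(0) = m_1.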

Suppose the received word is (R(x_1), R(x_2), ..., R(x_N)). At the receiver, N equations can be formed, some of which are wrong if there are transmission errors:

R(0) = m_1
R(α) = m_1 + m_2 α + m_3 α^2 + ... + m_M α^(M-1)
R(α^2) = m_1 + m_2 α^2 + m_3 α^4 + ... + m_M α^(2(M-1))
...
R(1) = m_1 + m_2 + m_3 + ... + m_M

Any M of these equations have full rank and can be solved. By trying many sets of M equations, the correct answer will appear more often than any wrong answer, provided the number of errors is less than (2^n − M + 1)/2. Choosing M equations and solving those equations is equivalent to Lagrange interpolation.
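The majority argument can be run directly on a small example. The sketch below again uses GF(13) in place of GF(2^n) and hypothetical error positions; with 4 errors, fewer than (N − M + 1)/2 = 5.5, the true message collects the most interpolation votes.

```python
from itertools import combinations
from collections import Counter

p = 13               # prime field standing in for GF(2^n)
alpha = 2            # primitive element of GF(13)
M = 3                # message symbols

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def interpolate(pts):
    """Lagrange interpolation over GF(p); coefficients lowest degree first."""
    coeffs = [0] * len(pts)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul(basis, [(-xj) % p, 1])
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p    # Fermat inverse of denom
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

# encode, then corrupt 4 of the 13 symbols (hypothetical error pattern)
message = [5, 1, 7]
xs = [0] + [pow(alpha, i, p) for i in range(1, p - 1)] + [1]
received = [poly_eval(message, x) for x in xs]
for pos, err in [(1, 3), (4, 5), (7, 1), (10, 9)]:
    received[pos] = (received[pos] + err) % p

# try every M-subset of indices; the message interpolated most often wins
votes = Counter()
for idx in combinations(range(len(xs)), M):
    votes[tuple(interpolate([(xs[i], received[i]) for i in idx]))] += 1
decoded = list(votes.most_common(1)[0][0])
```

With 9 uncorrupted positions the true message gets C(9,3) = 84 votes, while any competing polynomial can agree with the received word in at most 6 positions, hence at most C(6,3) = 20 votes.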

THE CLASSICAL VIEW

Reed-Solomon codes are a subtype of BCH codes. I don't know who first made this observation, but it leads to a practical decoding algorithm developed by Wes Peterson.

Parameters: Let GF(2^n) be the field with 2^n elements. Let α be a primitive element in GF(2^n). The number of message symbols encoded into a codeword is M. The number of code symbols transmitted is N = 2^n − 1. Let G(Z) be the polynomial whose roots are (α, α^2, ..., α^(N-M)). Message symbols are elements of GF(2^n).

The BCH view is that the symbols of a codeword are the COEFFICIENTS of a polynomial in Z which is divisible by G.

MATHEMATICALLY: Let

G(Z) = ∏_{i=1}^{N-M} (Z − α^i)

Let (m_1, m_2, ..., m_M) be a message where m_i ∈ GF(2^n). Define a polynomial by

P(Z) = m_1 + m_2 Z + m_3 Z^2 + ... + m_M Z^(M-1)

and define

C(Z) = P(Z) G(Z) = Σ_{i=0}^{N-1} c_i Z^i

Then the codeword is (c_0, c_1, ..., c_{N-1}).

Alternatively, divide Z^(N-M) P(Z) by G(Z) to get

Z^(N-M) P(Z) = Q(Z) G(Z) + R(Z)

Then C(Z) = Z^(N-M) P(Z) − R(Z) is divisible by G(Z) and its coefficients form a systematic codeword.
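A sketch of this systematic encoder, again over the prime field GF(13) in place of GF(2^n) (so N = 12 stands in for 2^n − 1); alpha = 2 is primitive and the message is sample data. The check at the end confirms that C(α^i) = 0 for every root of G and that the top M coefficients carry the message unchanged.

```python
p = 13               # prime field standing in for GF(2^n)
alpha = 2            # primitive element
N = p - 1            # 12 transmitted symbols
M = 4                # message symbols

def ev(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_mod(a, g):
    """Remainder of a modulo monic g; coefficients lowest degree first."""
    a, dg = a[:], len(g) - 1
    for i in range(len(a) - 1, dg - 1, -1):
        f = a[i]                     # g is monic, so no inverse needed
        if f:
            for j in range(dg + 1):
                a[i - dg + j] = (a[i - dg + j] - f * g[j]) % p
    return a[:dg]

# G(Z) = (Z - alpha)(Z - alpha^2)...(Z - alpha^(N-M))
G = [1]
for i in range(1, N - M + 1):
    G = poly_mul(G, [(-pow(alpha, i, p)) % p, 1])

msg = [3, 1, 4, 1]                   # P(Z) coefficients, lowest degree first
shifted = [0] * (N - M) + msg        # Z^(N-M) * P(Z)
R = poly_mod(shifted, G)
C = [(shifted[i] - (R[i] if i < len(R) else 0)) % p for i in range(N)]
```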

DECODING AS PRESENTED IN THE REED-SOLOMON PAPER

Reed and Solomon described the following decoding procedure. Receive (r_1, r_2, ..., r_N). Select M indices in all possible ways. For each selection, find the P(Z) with deg(P) ≤ M − 1 and P(x_i) = r_i at these indices (Lagrange interpolation). The coefficients of P form a potential message. As all possible selections are made, the message which gives a codeword closest to the received word occurs more often than any other.

An alternative is to form a codeword corresponding to each potential message and stop when the result disagrees with the received word in at most (N − M)/2 places. If none is found, failure to decode occurs. These procedures are very inefficient. We will return to this later.

CLASSICAL DECODING [W. W. Peterson, IRE IT-6, Sept 1960]

Receive (r_1, r_2, ..., r_N) and form

R(Z) = Σ_{i=1}^{N} r_i Z^(i-1) = C(Z) + E(Z)

C(α^i) = 0 for 1 ≤ i ≤ N − M
S_i = R(α^i) = E(α^i) for 1 ≤ i ≤ N − M

Peterson observed that the S_i satisfy a linear recursion:

0 = Σ_{i=0}^{e} q_i S_{k+i} for 1 ≤ k ≤ N − M − e

where Q(Z) = Σ_{i=0}^{e} q_i Z^i has the error locations as its roots. This gives a simple system of linear equations to find the error locations. Another system solves for the error values.
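Peterson's observation can be checked numerically. The sketch below works over GF(13) rather than GF(2^n), with hypothetical error locations and values; it builds the syndromes S_i = Σ_j e_j X_j^i directly and verifies that the coefficients of Q(Z) = ∏_j (Z − X_j) annihilate every window of e + 1 consecutive syndromes, because Σ_i q_i S_{k+i} = Σ_j e_j X_j^k Q(X_j) = 0.

```python
p = 13
locs = [2, 5, 7]   # error locations X_j (hypothetical sample values)
vals = [3, 1, 9]   # error values e_j
e = len(locs)
num_syn = 8        # number of available syndromes, N - M in the text

# syndromes S_i = E(alpha^i), written as sum_j e_j * X_j^i
S = {i: sum(v * pow(X, i, p) for X, v in zip(locs, vals)) % p
     for i in range(1, num_syn + 1)}

# error-locator polynomial Q(Z) = prod_j (Z - X_j), coefficients q_0..q_e
q = [1]
for X in locs:
    new = [0] * (len(q) + 1)
    for i, c in enumerate(q):
        new[i] = (new[i] - X * c) % p    # multiply by the constant term -X
        new[i + 1] = (new[i + 1] + c) % p
    q = new

# the recursion: every length-(e+1) window of syndromes is annihilated by q
checks = [sum(q[i] * S[k + i] for i in range(e + 1)) % p
          for k in range(1, num_syn - e + 1)]
```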

Later, Berlekamp found a faster algorithm for finding the recursion satisfied by the S_i.

Let us return to Reed and Solomon's Original View and see if there is another decoding method. We begin in the same way Reed and Solomon did, by selecting a set of M indices, S, and finding P(Z) for which the degree of P is at most M − 1 and

P(x_i) = r_i for i ∈ S

The tool is Lagrange interpolation. We ask what is the effect of errors at the selected indices.

LAGRANGE INTERPOLATION

For each i ∈ S we need a polynomial, λ_i(Z), of degree M − 1 which has the value 1 at x_i and the value 0 at x_j for j ∈ S, j ≠ i. With these, the solution for P is

P(Z) = Σ_{i∈S} r_i λ_i(Z)

If r_i includes an error e_i, then e_i λ_i(Z) is added to the correct P(Z).

SYNDROMES

Let T(Z) be the message polynomial and e_k be the error at position k. Then r_k = T(x_k) + e_k. Given an index selection, S, the Lagrange interpolation polynomial evaluated at x_k will be

P(x_k) = Σ_{i∈S} r_i λ_i(x_k) = Σ_{i∈S} T(x_i) λ_i(x_k) + Σ_{i∈S} e_i λ_i(x_k)

Subtracting the interpolated values from the received values:

r_k − P(x_k) = T(x_k) − Σ_{i∈S} T(x_i) λ_i(x_k) + e_k − Σ_{i∈S} e_i λ_i(x_k)

Since T has degree at most M − 1, we have

T(x_k) = Σ_{i∈S} T(x_i) λ_i(x_k)

so

δ_k = r_k − P(x_k) = e_k − Σ_{i∈S} e_i λ_i(x_k) for k ∈ S^c

which is a new family of syndromes. Given δ_k for k ∈ S^c, and assuming the number of errors is within the R-S bound, we would like to solve for the e_j's.
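These new syndromes can be checked numerically. The sketch below works over GF(13) (only the field differs from the text); with an error-free received word every syndrome r_k − P(x_k) vanishes, and an error at a position outside S appears in its syndrome directly, since then the interpolated P is unaffected.

```python
p = 13
M = 4
xs = list(range(p))          # any enumeration of distinct field elements works here
T = [6, 2, 0, 11]            # message polynomial T(Z), lowest degree first (sample data)

def ev(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def interpolate(pts):
    """Lagrange interpolation over GF(p)."""
    coeffs = [0] * len(pts)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul(basis, [(-xj) % p, 1])
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

S = [0, 1, 2, 3]                              # selected index set, |S| = M
received = [ev(T, x) for x in xs]             # error-free received word
P = interpolate([(xs[i], received[i]) for i in S])
clean = {k: (received[k] - ev(P, xs[k])) % p for k in range(p) if k not in S}

received[7] = (received[7] + 5) % p           # inject one error at a position off S
dirty = {k: (received[k] - ev(P, xs[k])) % p for k in range(p) if k not in S}
```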

INTERPOLATION POLYNOMIALS

Given S, the λ's are constructed as follows. Let

G(Z) = ∏_{j∈S} (Z − x_j)

and let G'(Z) be its formal derivative. Then

λ_i(Z) = G(Z) / ((Z − x_i) G'(x_i))

It is clear that λ_i(x_j) = 0 for i, j ∈ S and j ≠ i. That λ_i(x_i) = 1 follows from the rule of L'Hospital, which is valid for ratios of polynomials over any field.
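This construction can be verified numerically, again over GF(13) with hypothetical sample points for S. The quotient G(Z)/(Z − x_i) is exact since x_i is a root of G, so polynomial synthetic division suffices.

```python
p = 13
S_pts = [1, 3, 4, 8]         # the x_j for j in S (hypothetical sample points)

def ev(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def derivative(poly):
    """Formal derivative: coefficient of Z^(i-1) is i * a_i."""
    return [(i * c) % p for i, c in enumerate(poly)][1:]

def div_linear(poly, r):
    """Exact quotient of poly by (Z - r) via synthetic division (r must be a root)."""
    q, acc = [], 0
    for c in reversed(poly):
        acc = (acc * r + c) % p
        q.append(acc)
    q.pop()                   # the final value is the remainder, zero here
    return list(reversed(q))

G = [1]
for xj in S_pts:
    G = poly_mul(G, [(-xj) % p, 1])
Gp = derivative(G)

def lam(i):
    """lambda_i(Z) = G(Z) / ((Z - x_i) * G'(x_i))"""
    inv = pow(ev(Gp, S_pts[i]), p - 2, p)
    return [c * inv % p for c in div_linear(G, S_pts[i])]
```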

The syndromes can now be expressed as

δ_k = e_k − Σ_{i∈S} e_i G(x_k) / ((x_k − x_i) G'(x_i))

or

δ_k / G(x_k) = e_k / G(x_k) − Σ_{i∈S} (e_i / G'(x_i)) · 1/(x_k − x_i)

This looks simpler if we define

δ*_k = δ_k / G(x_k) and e*_k = e_k / G(x_k) for k ∈ S^c

and

e*_i = e_i / G'(x_i) for i ∈ S

Then

δ*_k = e*_k − Σ_{i∈S} e*_i / (x_k − x_i)

Not all of the e's are non-zero. Let E be the set of indices for which e_i ≠ 0. Define

Q_S(Z) = ∏_{i∈E∩S} (Z − x_i) and Q_C(Z) = ∏_{i∈E∩S^c} (Z − x_i)

The roots of Q_S(Z) are those x_i for which i ∈ S and e_i ≠ 0, while the roots of Q_C(Z) are those x_i for which i ∈ S^c and e_i ≠ 0. The summation in the syndrome equations can now be put over a common denominator to give

δ*_k = e*_k − A(x_k) / Q_S(x_k)

where the degree of Q_S is the number of errors at selected indices and the degree of A is less than that.

For k ∈ S^c, either e*_k = 0 or Q_C(x_k) = 0, so the product e*_k Q_C(x_k) is 0 for k ∈ S^c. Multiplying the previous syndrome expression by Q(x_k) = Q_S(x_k) Q_C(x_k) gives

Q(x_k) δ*_k = −Q_C(x_k) A(x_k)

Defining P(Z) = −Q_C(Z) A(Z), this is

Q(x_k) δ*_k = P(x_k)

This expression has, as unknowns, the coefficients of Q and P, and provides a system of linear equations for their solution. At first glance this looks the same as the classical case, but it is not. The classical equations are statements about the relationship between COEFFICIENTS of polynomials, while the above expression is a relation between the VALUES of polynomials.

A FAST ALGORITHM FOR SOLVING THE WELCH-BERLEKAMP EQUATIONS

This algorithm was developed jointly by Elwyn Berlekamp, Tze Hua Liu, Po Tong, and Lloyd Welch. The key WB equations are

Q(x_k) δ*_k = P(x_k) for k ∈ S^c   (1)

where δ*_k are the reduced syndromes defined above. These equations will be used sequentially, in some order depending on auxiliary information such as symbol reliability. We will assume that the x_k's are renumbered so that the equations are

Q(x_k) δ*_k = P(x_k) for k = 1, ..., N − M

We introduce an auxiliary pair of polynomials, W(Z), V(Z), and recursively compute an integer and four polynomials as follows:

J_k; Q_k(Z), P_k(Z), W_k(Z), V_k(Z)

The pair (Q_k, P_k) will constitute a "minimal" solution to the first k equations, and the pair (W_k, V_k) will also solve the first k equations but, in a certain sense, will not be minimal.

Initialization:

J_0 = 0; Q_0(Z) = 1, P_0(Z) = 0, W_0(Z) = 1, V_0(Z) = 1

Since k = 0, there are no equations to verify.

When we have generated Q_k, P_k, W_k, V_k, we will have two solutions to the first k equations:

(Q_k, P_k) and (W_k, V_k)

However, the pair (W_k, V_k) will be "less suitable" than (Q_k, P_k) in a certain sense. But the pair will be useful in constructing (Q_{k+1}, P_{k+1}).

STEP k

At the beginning of step k we have J_{k-1} and Q_{k-1}(Z), P_{k-1}(Z), W_{k-1}(Z), V_{k-1}(Z). We form

d_k = Q_{k-1}(x_k) δ*_k − P_{k-1}(x_k)

IF d_k = 0:

J_k = J_{k-1} + 1
Q_k(Z) = Q_{k-1}(Z)
P_k(Z) = P_{k-1}(Z)
W_k(Z) = (Z − x_k) W_{k-1}(Z)
V_k(Z) = (Z − x_k) V_{k-1}(Z)

In this case (Q_{k-1}, P_{k-1}) already satisfies the k'th equation, and multiplication by (Z − x_k) forces the pair W_k(Z), V_k(Z) to satisfy the k'th equation.

IF d_k ≠ 0:

We form another quantity,

c_k = d_k^(-1) (W_{k-1}(x_k) δ*_k − V_{k-1}(x_k))

and set

Q_k(Z) = (Z − x_k) Q_{k-1}(Z)
P_k(Z) = (Z − x_k) P_{k-1}(Z)
W_k(Z) = W_{k-1}(Z) − c_k Q_{k-1}(Z)
V_k(Z) = V_{k-1}(Z) − c_k P_{k-1}(Z)

It is readily verified that the two pairs of polynomials satisfy the first k equations. HOWEVER, we are not done with this case. IF J_{k-1} = 0, then swap the two pairs,

Q_k(Z) ↔ W_k(Z)
P_k(Z) ↔ V_k(Z)

and set J_k = 0. OTHERWISE set J_k = J_{k-1} − 1 and do not swap pairs.
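The two cases above can be transcribed almost line for line. The sketch below runs over GF(13) (the steps themselves are field-independent) on hypothetical data points (x_k, δ*_k); it checks only the invariant stated in the text, namely that after step k both pairs satisfy the first k equations. Polynomials are coefficient lists, lowest degree first.

```python
p = 13

def ev(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

def shift_mul(poly, xk):
    """Multiply poly by (Z - x_k)."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - xk * c) % p
        out[i + 1] = (out[i + 1] + c) % p
    return out

def combine(a, ck, b):
    """a - ck * b, padding to a common length."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [(ai - ck * bi) % p for ai, bi in zip(a, b)]

def welch_berlekamp(points):
    """Run the recursion of the text on (x_k, delta*_k) pairs."""
    J, Q, P, W, V = 0, [1], [0], [1], [1]     # initialization from the text
    for xk, sk in points:
        d = (ev(Q, xk) * sk - ev(P, xk)) % p
        if d == 0:
            J += 1
            W, V = shift_mul(W, xk), shift_mul(V, xk)
        else:
            c = pow(d, p - 2, p) * (ev(W, xk) * sk - ev(V, xk)) % p
            Qn, Pn = shift_mul(Q, xk), shift_mul(P, xk)
            Wn, Vn = combine(W, c, Q), combine(V, c, P)
            if J == 0:
                Q, P, W, V = Wn, Vn, Qn, Pn   # swap the two pairs; J stays 0
            else:
                J -= 1
                Q, P, W, V = Qn, Pn, Wn, Vn
    return Q, P, W, V

pts = [(1, 5), (2, 0), (3, 7), (4, 11), (5, 2), (6, 6)]   # hypothetical syndromes
Q, P, W, V = welch_berlekamp(pts)
```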

A MATRIX DESCRIPTION

The four polynomials can be written as components of a matrix:

[ Q_k(Z)  P_k(Z) ]
[ W_k(Z)  V_k(Z) ]

In this form the k'th step can be written:

IF d_k = 0:

[ Q_k(Z)  P_k(Z) ]   [ 1      0      ] [ Q_{k-1}(Z)  P_{k-1}(Z) ]
[ W_k(Z)  V_k(Z) ] = [ 0      Z − x_k ] [ W_{k-1}(Z)  V_{k-1}(Z) ]

IF d_k ≠ 0 and J_{k-1} ≠ 0:

[ Q_k(Z)  P_k(Z) ]   [ Z − x_k  0 ] [ Q_{k-1}(Z)  P_{k-1}(Z) ]
[ W_k(Z)  V_k(Z) ] = [ −c_k     1 ] [ W_{k-1}(Z)  V_{k-1}(Z) ]

IF d_k ≠ 0 and J_{k-1} = 0:

[ Q_k(Z)  P_k(Z) ]   [ −c_k     1 ] [ Q_{k-1}(Z)  P_{k-1}(Z) ]
[ W_k(Z)  V_k(Z) ] = [ Z − x_k  0 ] [ W_{k-1}(Z)  V_{k-1}(Z) ]

The fact that each pair, (Q_k(Z), P_k(Z)) and (W_k(Z), V_k(Z)), satisfies the first k equations can be expressed as

[ Q_k(x_i)  P_k(x_i) ] [ δ*_i ]   [ 0 ]
[ W_k(x_i)  V_k(x_i) ] [ −1   ] = [ 0 ]   for i ≤ k

WHAT ABOUT MINIMALITY?

This is the difficult part of the theorem, and I will just outline the proof with a few comments. I will pattern this proof after that in Tze Hwa Liu's dissertation (1984).

First we are going to replace the decision criterion using J_k by something more intuitive. (It is equivalent.) Define the length of an ordered pair of polynomials by

L(Q(Z), P(Z)) = max(deg(Q), 1 + deg(P))

and define

L_k = min L(Q(Z), P(Z))

where the minimum is taken over all pairs of polynomials satisfying the first k equations.

THEOREM: The algorithm described above gives a sequence of pairs (Q_k(Z), P_k(Z)) for which

L_k = L(Q_k(Z), P_k(Z))

That is, (Q_k(Z), P_k(Z)) is minimal.

I will just list the lemmas, with a comment about some of them.

Lemma 1: L_k is monotone increasing.

Lemma 2: If 2L_k ≤ k, then there is a unique minimum pair satisfying the first k equations.

Lemma 3: If 2L_k ≤ k and d_{k+1} ≠ 0 when computed for a minimal pair at level k, then

L_{k+1} = L_k + 1

Lemma 4:

det [ Q_k(Z)  P_k(Z) ]
    [ W_k(Z)  V_k(Z) ] = ∏_{i=1}^{k} (Z − x_i)

(the determinant of a product is the product of determinants)

Lemmas 5, 6:

L(Q_k(Z), P_k(Z)) + L(W_k(Z), V_k(Z)) = k + 1

THE SWAP-PAIRS DECISION

The swap-pairs decision based on the value of J_k is really a comparison of L(Q_k(Z), P_k(Z)) and L(W_k(Z), V_k(Z)), picking (Q_k(Z), P_k(Z)) to be the pair of smaller length.

We restate the theorem:

THEOREM: The algorithm described above gives a sequence of pairs (Q_k(Z), P_k(Z)) for which

L_k = L(Q_k(Z), P_k(Z))

That is, (Q_k(Z), P_k(Z)) is minimal.