Coding Theory. Ruud Pellikaan MasterMath 2MMC30. Lecture 11.1 May


Coding Theory, Ruud Pellikaan, g.r.pellikaan@tue.nl, MasterMath 2MMC30, Lecture 11.1, May 12, 2016

Content lecture 11

In Lecture 8.2 we introduced the key equation. Now we introduce two algorithms which solve the key equation and thus decode cyclic codes efficiently. Interpolating with a bivariate polynomial gives list decoding.

1. Decoding by key equation
   - Algorithm of Euclid-Sugiyama
   - Algorithm of Berlekamp-Massey
2. List decoding
   - Johnson bound
   - Algorithm of Sudan

Key equation - 1

In Lecture 8.2 we have seen that the decoding of a BCH code with designed minimum distance δ is reduced to the problem of finding a pair of polynomials (σ(Z), ω(Z)) satisfying the following key equation for a given syndrome polynomial S(Z) = ∑_{i=1}^{δ−1} S_i Z^{i−1}:

σ(Z)S(Z) ≡ ω(Z) (mod Z^{δ−1})

such that deg(σ(Z)) ≤ t = ⌊(δ−1)/2⌋ and deg(ω(Z)) < deg(σ(Z)).

Here σ(Z) = ∑_{i=0}^{t} σ_i Z^i is the error-locator polynomial and ω(Z) = ∑_{i=0}^{t−1} ω_i Z^i is the error-evaluator polynomial. Note that σ_0 = 1 by definition.

Key equation - 2

Given the key equation, the Euclid-Sugiyama algorithm (or Sugiyama algorithm) finds the error-locator and error-evaluator polynomials by an iterative procedure. This algorithm is based on the well-known Euclidean algorithm, which we briefly review first. For a pair of univariate polynomials r_{−1}(Z) and r_0(Z), the Euclidean algorithm finds their greatest common divisor, denoted by gcd(r_{−1}(Z), r_0(Z)).

Algorithm of Euclid - 1

The Euclidean algorithm proceeds as follows:

r_{−1}(Z) = q_1(Z) r_0(Z) + r_1(Z),   deg(r_1(Z)) < deg(r_0(Z))
r_0(Z) = q_2(Z) r_1(Z) + r_2(Z),   deg(r_2(Z)) < deg(r_1(Z))
...
r_{s−2}(Z) = q_s(Z) r_{s−1}(Z) + r_s(Z),   deg(r_s(Z)) < deg(r_{s−1}(Z))
r_{s−1}(Z) = q_{s+1}(Z) r_s(Z).

Algorithm of Euclid - 2

In each iteration of the algorithm, the step r_{j−2}(Z) = q_j(Z) r_{j−1}(Z) + r_j(Z) with deg(r_j(Z)) < deg(r_{j−1}(Z)) is implemented by polynomial division: dividing r_{j−2}(Z) by r_{j−1}(Z) gives quotient q_j(Z) and remainder r_j(Z). The algorithm stops after it completes the s-th iteration, where s is the smallest j such that r_{j+1}(Z) = 0. It is easy to prove that r_s(Z) = gcd(r_{−1}(Z), r_0(Z)).
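The iteration above can be sketched in a few lines of code. The following is an illustrative sketch, not part of the lecture: the helper names and the choice of the small prime field F_7 are ours; polynomials are coefficient lists, lowest degree first.

```python
P = 7  # coefficient field F_7 (any field works; chosen small for the example)

def pdeg(f):
    # degree of f, with deg(0) = -1
    for d in range(len(f) - 1, -1, -1):
        if f[d] % P:
            return d
    return -1

def prem(f, g):
    # remainder of f divided by g (polynomial long division over F_P)
    f = [c % P for c in f]
    dg = pdeg(g)
    inv = pow(g[dg], P - 2, P)  # inverse of the leading coefficient of g
    while pdeg(f) >= dg:
        d = pdeg(f)
        c = f[d] * inv % P
        for i in range(dg + 1):
            f[i + d - dg] = (f[i + d - dg] - c * g[i]) % P
    return f

def poly_gcd(f, g):
    # iterate r_{j-2} = q_j r_{j-1} + r_j until the remainder vanishes
    while pdeg(g) >= 0:
        f, g = g, prem(f, g)
    lead_inv = pow(f[pdeg(f)], P - 2, P)
    return [c * lead_inv % P for c in f[:pdeg(f) + 1]]  # normalized to be monic

# gcd of (Z-1)(Z+1)(Z+2) and (Z-1)(Z+3) over F_7 is Z - 1 = Z + 6
print(poly_gcd([5, 6, 2, 1], [4, 2, 1]))  # [6, 1]
```

The gcd is only determined up to a nonzero scalar, hence the normalization to a monic polynomial at the end.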

Algorithm of Euclid-Sugiyama - 1

Input: r_{−1}(Z) = Z^{δ−1}, r_0(Z) = S(Z), U_{−1}(Z) = 0 and U_0(Z) = 1.

Proceed with the Euclidean algorithm for r_{−1}(Z) and r_0(Z) until an r_s(Z) is reached such that

deg(r_{s−1}(Z)) ≥ (δ−1)/2 and deg(r_s(Z)) ≤ (δ−3)/2.

In each step update U_j(Z) = q_j(Z) U_{j−1}(Z) + U_{j−2}(Z).

Output: the following pair of polynomials:

σ(Z) = ε U_s(Z)
ω(Z) = (−1)^s ε r_s(Z)

where ε is chosen such that σ_0 = σ(0) = 1.

Algorithm of Euclid-Sugiyama - 2

The error-locator and error-evaluator polynomials are given as σ(Z) = ε U_s(Z) and ω(Z) = (−1)^s ε r_s(Z). Note that the Euclid-Sugiyama algorithm does not have to run the Euclidean algorithm completely; it has a different stopping parameter s.

EXAMPLE - 1

Consider the code C as given in earlier examples. It is a narrow-sense BCH code of length 15 over F_16 of designed minimum distance δ = 5. Let r be the received word

r = (α^5, α^8, α^11, α^10, α^10, α^7, α^12, α^11, 1, α, α^12, α^14, α^12, α^2, 0)

Then S_1 = α^12, S_2 = α^7, S_3 = 0 and S_4 = α^2. So S(Z) = α^12 + α^7 Z + α^2 Z^3.

EXAMPLE - 2

j   r_{j−1}(Z)                 r_j(Z)                     U_{j−1}(Z)   U_j(Z)
0   Z^4                        α^2 Z^3 + α^7 Z + α^12     0            1
1   α^2 Z^3 + α^7 Z + α^12     α^5 Z^2 + α^10 Z           1            α^13 Z
2   α^5 Z^2 + α^10 Z           α^2 Z + α^12               α^13 Z       α^10 Z^2 + Z + 1

Thus the error-locator polynomial is σ(Z) = U_2(Z) = 1 + Z + α^10 Z^2 and the error-evaluator polynomial is ω(Z) = r_2(Z) = α^12 + α^2 Z.
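The table above can be reproduced with a short script. The sketch below is ours, not from the lecture: GF(16) is represented as integers 0..15 with α = 2 and reduction modulo x^4 + x + 1, and polynomials as coefficient lists, lowest degree first.

```python
def gmul(a, b):
    # multiplication in GF(16) = F_2[x]/(x^4 + x + 1), elements as 4-bit integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011  # reduce by x^4 + x + 1
        b >>= 1
    return r

def ginv(a):
    # a^{-1} = a^14, since a^15 = 1 for a != 0
    r = a
    for _ in range(13):
        r = gmul(r, a)
    return r

def pdeg(f):
    for d in range(len(f) - 1, -1, -1):
        if f[d]:
            return d
    return -1

def pdivmod(f, g):
    f, dg = f[:], pdeg(g)
    if pdeg(f) < dg:
        return [0], f
    q = [0] * (pdeg(f) - dg + 1)
    inv = ginv(g[dg])
    while pdeg(f) >= dg:
        d = pdeg(f)
        c = gmul(f[d], inv)
        q[d - dg] = c
        for i in range(dg + 1):
            f[i + d - dg] ^= gmul(c, g[i])
    return q, f

def padd(f, g):  # addition = coefficientwise XOR in characteristic 2
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) ^ (g[i] if i < len(g) else 0) for i in range(n)]

def pmul(f, g):
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] ^= gmul(a, b)
    return r

def sugiyama(S, delta):
    rprev, rcur = [0] * (delta - 1) + [1], S[:]  # r_{-1} = Z^{delta-1}, r_0 = S(Z)
    uprev, ucur = [0], [1]                       # U_{-1} = 0, U_0 = 1
    while pdeg(rcur) > (delta - 3) // 2:         # stop once deg(r_s) <= (delta-3)/2
        q, rem = pdivmod(rprev, rcur)
        rprev, rcur = rcur, rem
        uprev, ucur = ucur, padd(pmul(q, ucur), uprev)
    eps = ginv(ucur[0])  # scale so that sigma(0) = 1; (-1)^s = 1 in characteristic 2
    sigma = [gmul(eps, c) for c in ucur[:pdeg(ucur) + 1]]
    omega = [gmul(eps, c) for c in rcur[:pdeg(rcur) + 1]]
    return sigma, omega

# S(Z) = a^12 + a^7 Z + a^2 Z^3 with a = 2: a^12 = 15, a^7 = 11, a^2 = 4
sigma, omega = sugiyama([15, 11, 0, 4], 5)
print(sigma, omega)  # [1, 1, 7] = 1 + Z + a^10 Z^2, [15, 4] = a^12 + a^2 Z
```

The returned pair matches the table: σ(Z) = 1 + Z + α^10 Z^2 (α^10 = 7) and ω(Z) = α^12 + α^2 Z.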

Key equation

Suppose S(Z) = ∑_{i=1}^{δ−1} S_i Z^{i−1} is given. Consider the key equation

σ(Z)S(Z) ≡ ω(Z) (mod Z^{δ−1})

such that deg(σ(Z)) ≤ t = ⌊(δ−1)/2⌋ and deg(ω(Z)) ≤ deg(σ(Z)) − 1.

System of linear equations

It is easy to show that the problem of solving the key equation is equivalent to the problem of solving the following matrix equation with unknown (σ_1, ..., σ_t)^T:

| S_t       S_{t−1}   ...  S_1 | | σ_1 |   | S_{t+1} |
| S_{t+1}   S_t       ...  S_2 | | σ_2 |   | S_{t+2} |
|  ...                     ... | |  ⋮  | = |    ⋮    |
| S_{2t−1}  S_{2t−2}  ...  S_t | | σ_t |   | S_{2t}  |

Algorithm of Berlekamp-Massey - 1

The Berlekamp-Massey algorithm can solve this matrix equation by finding σ_1, ..., σ_t for the following recursion:

S_i = ∑_{j=1}^{t} σ_j S_{i−j},   i = t+1, ..., 2t
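For the running example (t = 2, syndromes S_1 = α^12, S_2 = α^7, S_3 = 0, S_4 = α^2), the matrix equation can also be solved directly by Gaussian elimination. A sketch with our own GF(16) helpers (α = 2, reduction modulo x^4 + x + 1; over characteristic 2 there are no minus signs):

```python
def gmul(a, b):
    # multiplication in GF(16) = F_2[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

def ginv(a):
    # a^{-1} = a^14, since a^15 = 1 for a != 0
    r = a
    for _ in range(13):
        r = gmul(r, a)
    return r

def solve(A, y):
    # Gauss-Jordan elimination over GF(16); subtraction is XOR in characteristic 2
    n = len(A)
    A = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = ginv(A[col][col])
        A[col] = [gmul(inv, x) for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [a ^ gmul(f, b) for a, b in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

S = [None, 15, 11, 0, 4]   # S_1..S_4 as integers: a^12, a^7, 0, a^2 (a = 2)
A = [[S[2], S[1]],         # | S_t      S_{t-1} |  with t = 2
     [S[3], S[2]]]         # | S_{t+1}  S_t     |
y = [S[3], S[4]]           # (S_{t+1}, S_{t+2})^T
print(solve(A, y))         # [1, 7]: sigma_1 = 1, sigma_2 = a^10
```

This reproduces σ(Z) = 1 + Z + α^10 Z^2; for large t the Berlekamp-Massey algorithm below does the same job in quadratic rather than cubic time.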

Algorithm of Berlekamp-Massey - 2

We should point out that the Berlekamp-Massey algorithm actually solves a more general problem: for a given sequence of length N,

E_0, E_1, E_2, ..., E_{N−1},

which we denote by E, it finds the recursion

E_i = ∑_{j=1}^{L} Λ_j E_{i−j},   i = L, ..., N−1,

for which L is smallest. If the matrix equation has no solution, the Berlekamp-Massey algorithm then finds a recursion with L > t.

Algorithm of Berlekamp-Massey - 3

To make it more convenient to present the Berlekamp-Massey algorithm and to prove its correctness, we denote

Λ(Z) = ∑_{i=0}^{L} Λ_i Z^i, with Λ_0 = 1.

The above recursion is denoted by (Λ(Z), L), and L = deg(Λ(Z)) is called the length of the recursion.

Algorithm of Berlekamp-Massey - 4

The Berlekamp-Massey algorithm is an iterative procedure for finding the shortest recursion for producing successive terms of the sequence E. The r-th iteration of the algorithm finds the shortest recursion (Λ^(r)(Z), L_r), where L_r = deg(Λ^(r)(Z)), for producing the first r terms of the sequence E, that is,

E_i = ∑_{j=1}^{L_r} Λ_j^(r) E_{i−j},   i = L_r, ..., r−1,

or equivalently, with Λ_0^(r) = 1,

∑_{j=0}^{L_r} Λ_j^(r) E_{i−j} = 0,   i = L_r, ..., r−1.

Algorithm of Berlekamp-Massey - 5

(Initialization) r ← 0, Λ(Z) ← 1, B(Z) ← 1, L ← 0, λ ← 1, and b ← 1.
(1) If r = N, stop. Otherwise, compute Δ = ∑_{j=0}^{L} Λ_j E_{r−j}
(2) If Δ = 0, then λ ← λ + 1, and go to (5)
(3) If Δ ≠ 0 and 2L > r, then
      Λ(Z) ← Λ(Z) − Δ b^{−1} Z^λ B(Z)
      λ ← λ + 1
    and go to (5)
(4) If Δ ≠ 0 and 2L ≤ r, then
      T(Z) ← Λ(Z) (temporary storage of Λ(Z))
      Λ(Z) ← Λ(Z) − Δ b^{−1} Z^λ B(Z)
      L ← r + 1 − L
      B(Z) ← T(Z)
      b ← Δ
      λ ← 1
(5) r ← r + 1 and return to (1)

For the proof of correctness we refer to the literature.

EXAMPLE - 3

Consider again the code C of the previous example. Let r be the received word

r = (α^5, α^8, α^11, α^10, α^10, α^7, α^12, α^11, 1, α, α^12, α^14, α^12, α^2, 0)

Then S_1 = α^12, S_2 = α^7, S_3 = 0, S_4 = α^2. Now let us compute the error-locator polynomial σ(Z) by using the Berlekamp-Massey algorithm. Letting E_i = S_{i+1} for i = 0, 1, 2, 3, we have as input of the algorithm the sequence

E = (E_0, E_1, E_2, E_3) = (α^12, α^7, 0, α^2)

EXAMPLE - 4

The intermediate and final results of the algorithm are given in the following table:

r   Δ      B(Z)          Λ(Z)                   L
0   α^12   1             1                      0
1   1      1             1 + α^12 Z             1
2   α^2    1             1 + α^10 Z             1
3   α^7    1 + α^10 Z    1 + α^10 Z + α^5 Z^2   2
4   0      1 + α^10 Z    1 + Z + α^10 Z^2       2

EXAMPLE - 5

The result of the last iteration of the Berlekamp-Massey algorithm, Λ(Z), is the error-locator polynomial. That is,

σ(Z) = σ_0 + σ_1 Z + σ_2 Z^2 = Λ(Z) = Λ_0 + Λ_1 Z + Λ_2 Z^2 = 1 + Z + α^10 Z^2

Substituting this into the key equation, we then get ω(Z) = α^12 + α^2 Z.
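Steps (1)-(5) translate almost line by line into code. The sketch below uses our own GF(16) representation (integers 0..15, α = 2, reduction modulo x^4 + x + 1); in characteristic 2 the subtraction in steps (3) and (4) is XOR. It reproduces Λ(Z) and L from the table of the example.

```python
def gmul(a, b):
    # multiplication in GF(16) = F_2[x]/(x^4 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

def ginv(a):
    # a^{-1} = a^14, since a^15 = 1 for a != 0
    r = a
    for _ in range(13):
        r = gmul(r, a)
    return r

def berlekamp_massey(E):
    Lam, B = [1], [1]          # Lambda(Z) and B(Z) as coefficient lists
    L, lam, b = 0, 1, 1
    for r in range(len(E)):
        # (1) discrepancy Delta = sum_{j=0}^{L} Lambda_j E_{r-j}
        delta = 0
        for j in range(min(L, len(Lam) - 1, r) + 1):
            delta ^= gmul(Lam[j], E[r - j])
        if delta == 0:                        # (2)
            lam += 1
            continue
        scale = gmul(delta, ginv(b))
        upd = [0] * lam + [gmul(scale, c) for c in B]   # Delta b^{-1} Z^lam B(Z)
        new = [(Lam[i] if i < len(Lam) else 0) ^ (upd[i] if i < len(upd) else 0)
               for i in range(max(len(Lam), len(upd)))]
        if 2 * L > r:                         # (3)
            Lam, lam = new, lam + 1
        else:                                 # (4): B gets the old Lambda (T)
            Lam, B, b, L, lam = new, Lam, delta, r + 1 - L, 1
    return Lam, L

E = [15, 11, 0, 4]          # (a^12, a^7, 0, a^2) with a = 2
print(berlekamp_massey(E))  # ([1, 1, 7], 2), i.e. sigma(Z) = 1 + Z + a^10 Z^2
```

The intermediate values of Δ, B(Z) and Λ(Z) agree with the rows r = 0, ..., 3 of the table.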

List-decoding - 1

A decoding algorithm is efficient if its complexity is bounded above by a polynomial in the code length. Brute-force decoding is not efficient, because for a received word it may need to compare q^k codewords to find the closest codeword. The idea behind list decoding is that instead of returning a unique codeword, the decoder returns a small list of codewords. A list-decoding algorithm is efficient if both the complexity and the size of the output list of the algorithm are bounded above by polynomials in the code length. List decoding was first introduced by Elias and Wozencraft in the 1950s.

List-decoding - 2

Suppose C is a q-ary [n, k, d] code and let t ≤ n be a positive integer. For any received word r = (r_1, ..., r_n) ∈ F_q^n, we refer to any codeword c in C satisfying d(c, r) ≤ t as a t-consistent codeword. Let l be a positive integer less than or equal to q^k. The code C is called (t, l)-decodable if for any word r ∈ F_q^n the number of t-consistent codewords is at most l. If for any received word a list decoder can find all the t-consistent codewords, and the output list has at most l codewords, then the decoder is called a (t, l)-list decoder.
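For a small code, (t, l)-decodability can be checked by brute force: enumerate all received words and count t-consistent codewords. A sketch for the binary [7,4,3] Hamming code (the generator matrix below is one standard choice, not taken from the lecture):

```python
# rows of a generator matrix of the [7,4,3] binary Hamming code, as 7-bit integers
G = [0b1000011, 0b0100101, 0b0010110, 0b0001111]

codewords = [0]
for row in G:                  # span of the rows: all 2^4 = 16 codewords
    codewords += [c ^ row for c in codewords]

def max_list_size(t):
    # the smallest l such that the code is (t, l)-decodable
    return max(sum(1 for c in codewords if bin(r ^ c).count("1") <= t)
               for r in range(2 ** 7))

print(max_list_size(1))  # 1: at t = (d-1)/2 every word has at most one t-consistent codeword
print(max_list_size(2))  # 4: beyond (d-1)/2 the decoder must return a list
```

So this code is (1, 1)-decodable but only (2, 4)-decodable: one step beyond the classical radius, a decoder may have to output up to four codewords.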

List-decoding - 3

In 1997 Sudan proposed the first efficient list-decoding algorithm for Reed-Solomon codes. Later Sudan's list-decoding algorithm was generalized to decoding algebraic-geometric codes and Reed-Muller codes.

Error-correcting capacity

Suppose a decoding algorithm can find all the t-consistent codewords for any received word. We call t the error-correcting capacity or decoding radius of the decoding algorithm. As we have seen, for any [n, k, d] code, if t ≤ ⌊(d−1)/2⌋ then there is only one t-consistent codeword for any received word. In other words, any [n, k, d] code is (⌊(d−1)/2⌋, 1)-decodable. The decoding algorithms in the previous sections return a unique codeword for any received word, and they achieve an error-correcting capability less than or equal to ⌊(d−1)/2⌋. List decoding achieves a decoding radius greater than ⌊(d−1)/2⌋, while the size of the output list must remain bounded by a polynomial in n.

Johnson bound

It is natural to ask the following question for a class of codes C: for an [n, k, d] F_q-linear code C in C, what is the maximal value of t such that C is (t, l)-decodable for an l which is bounded above by a polynomial in n? In the following we give a lower bound on this maximal t. It is called the Johnson bound.

PROPOSITION (Johnson bound)

Let 0 < β < 1 and 0 < γ < 1. Let C ⊆ F_q^n be a linear code of minimum distance d = (1 − 1/q)(1 − β)n. Let t = (1 − 1/q)(1 − γ)n. Then for any word r ∈ F_q^n,

|B_t(r) ∩ C| ≤ min{ n(q − 1), (1 − β)/(γ^2 − β) }   when γ > √β,
|B_t(r) ∩ C| ≤ 2n(q − 1) − 1                        when γ = √β,

where B_t(r) = { x ∈ F_q^n : d(x, r) ≤ t } is the Hamming ball of radius t around r. For the proof we refer to the literature.

THEOREM (Johnson bound)

Any linear code C ⊆ F_q^n of relative minimum distance δ = d/n is (t, l(n))-decodable with l(n) bounded above by a linear function in n, provided that

t/n ≤ (1 − 1/q)(1 − √(1 − q δ/(q − 1)))

PROOF (Johnson bound)

For any received word r ∈ F_q^n, the set of t-consistent codewords is {c ∈ C : d(c, r) ≤ t} = B_t(r) ∩ C. Let β be a positive real number with β < 1, and denote d = (1 − 1/q)(1 − β)n. Let t = (1 − 1/q)(1 − γ)n for some 0 < γ < 1. Suppose

t/n ≤ (1 − 1/q)(1 − √(1 − q δ/(q − 1))).

Then γ ≥ √(1 − (q/(q − 1)) · d/n) = √β. By the previous Proposition, the number of t-consistent codewords l(n), which is |B_t(r) ∩ C|, is bounded above by a polynomial in n; here q is viewed as a constant.

REMARK - 1

The classical error-correcting capability is t = ⌊(d − 1)/2⌋. For a linear [n, k] code of minimum distance d we have d ≤ n − k + 1; note that for Reed-Solomon codes d = n − k + 1. Thus the normalized capability τ = t/n satisfies

τ = t/n ≤ (1/2) · (n − k)/n ≈ (1 − R)/2

where R is the code rate.

REMARK - 2

Let us compare this with the Johnson bound. From the Theorem and d ≤ n − k + 1, the Johnson bound is

(1 − 1/q)(1 − √(1 − q δ/(q − 1))) ≈ 1 − √(1 − δ) ≥ 1 − √(1 − (1 − k/n + 1/n)) ≈ 1 − √R

for large n and large q. Since 1 − √R ≥ (1 − R)/2 for 0 ≤ R ≤ 1, list decoding up to the Johnson radius corrects more errors than classical unique decoding at every rate.
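The comparison in these remarks is easy to tabulate. A sketch (the function names are ours) computing both normalized radii for an MDS code of rate R:

```python
import math

def classical_radius(R):
    # unique-decoding radius: tau = (1 - R) / 2, using d/n ~ 1 - R for MDS codes
    return (1 - R) / 2

def johnson_radius(R, q):
    # (1 - 1/q)(1 - sqrt(1 - q*delta/(q-1))) with delta = 1 - R;
    # requires delta <= 1 - 1/q so that the square root is real
    delta = 1 - R
    return (1 - 1 / q) * (1 - math.sqrt(1 - q * delta / (q - 1)))

for R in (0.1, 0.25, 0.5, 0.75):
    print(R, classical_radius(R), johnson_radius(R, 256), 1 - math.sqrt(R))
```

For every rate the Johnson radius exceeds (1 − R)/2, and as q grows it approaches the limit 1 − √R quoted in the remark.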

[Figure of Johnson bound]