Efficient algorithms for decoding Reed-Solomon codes with erasures

Todd Mateer

Abstract: In this paper, we present a new algorithm for decoding Reed-Solomon codes with both errors and erasures. The algorithm combines an efficient method for solving the Key Equation with a technique which separates the error locator polynomial from the erasure locator polynomial. The new algorithm is compared to two other efficient Reed-Solomon decoding algorithms and shown to be significantly faster for errors-and-erasures decoding. Applications to BCH decoding are also provided.

Index Terms: Reed-Solomon Codes, BCH Codes, Berlekamp-Massey Algorithm, Euclidean Algorithm

REED-SOLOMON codes are powerful techniques for correcting multiple errors introduced when a message is transmitted in a noisy environment. These codes are very popular and can be found in compact disc players and in NASA satellites used for deep-space exploration. The power of these codes resides in algebraic properties of finite fields which allow several errors to be corrected in each codeword. Each codeword in the standard (n, k, d) Reed-Solomon code is a multiple of the generator polynomial

    g(x) = (x − α) (x − α^2) ··· (x − α^{n−k})    (1)

defined over F[x], where F is a finite field with primitive root of unity α. The code has minimum distance d = n − k + 1, which means that the code is capable of correcting up to t = ⌊(n − k)/2⌋ errors. This paper will first review several observations made at the 2011 Canadian Information Theory Workshop regarding the equivalence of the Extended Euclidean Algorithm and the Berlekamp-Massey Algorithm for solving the Reed-Solomon decoding Key Equation. Next, we will present a number of additional simplifications that can be made to the algorithm given at the conference which improve its performance. Finally, we will apply the resulting Key Equation solver to errors-and-erasures decoding of Reed-Solomon codes.
I. EQUIVALENCE OF THE BERLEKAMP-MASSEY AND EUCLIDEAN ALGORITHMS

In [3], Dornstetter first demonstrated that the Berlekamp-Massey [8] and Euclidean [12] algorithms are equivalent methods for solving the so-called Key Equation used in Reed-Solomon decoding. Heydt [7] provided additional insights regarding this equivalence in his 2000 paper. At the 2011 Canadian Information Theory Workshop, the current author presented his own perspective on this relationship [10]. In this section, we summarize some of the key observations made at the workshop.

(Todd Mateer is with the Mathematics Division, Howard Community College, Little Patuxent Parkway, Columbia, MD USA; tmateer@howardcc.edu. Manuscript received XXXX; revised XXXXX.)

First, the Extended Euclidean Algorithm processes the syndrome polynomial from high-degree terms to low-degree terms, while the Berlekamp-Massey algorithm processes the syndrome polynomial from low-degree terms to high-degree terms. In order for the two algorithms to produce polynomials with the same coefficients, we must use

    S(x) = Σ_{j=0}^{2t−1} r(α^{j+1}) x^j    (2)

as the syndrome polynomial for the Berlekamp-Massey algorithm and

    Ŝ(x) = Σ_{j=0}^{2t−1} r(α^{2t−j}) x^j    (3)

as the syndrome polynomial for the Extended Euclidean Algorithm. In these formulas, r(x) is the polynomial representation of the received vector of the message transmission. As a consequence of the fact that Ŝ(x) is the reversal of S(x),¹ every intermediate result of the Extended Euclidean Algorithm will be the reversal of the corresponding result in the Berlekamp-Massey algorithm.

Second, while it is traditional to let r_{−1}(x) = a(x) and r_0(x) = b(x) in the Extended Euclidean Algorithm, it is often more advantageous to set r_{−1}(x) = a(x) + b(x) for algebraic decoding, particularly for BCH codes. In this case, we would alter the initialization of the algorithm so that v_{−1}(x) = 1.
Third, we can multiply r_{i−2}(x) = u_{i−2}(x) a(x) + v_{i−2}(x) b(x) by any nonzero constant C to obtain

    C r_{i−2}(x) = C u_{i−2}(x) a(x) + C v_{i−2}(x) b(x).    (4)

The remainder polynomials for the various choices of C will have different leading coefficients, but will all produce the same result when normalized. If r_{i−2}(x) is modified by C, then the corresponding u_i(x) and v_i(x) polynomials are given by

    u_i(x) = C u_{i−2}(x) − q_i(x) u_{i−1}(x),    (5)
    v_i(x) = C v_{i−2}(x) − q_i(x) v_{i−1}(x).    (6)

¹ In the case of the syndrome polynomials, one must be careful to define reversal in terms of a polynomial of degree 2t − 1. By this definition, S_j = Ŝ_{2t−1−j} for each j in 0 ≤ j ≤ 2t − 1. This does not always coincide with the definition of reciprocal polynomials.

Algorithm 1: Improved Extended Euclidean Algorithm for solving the Key Equation
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and an integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x) x^{2t} + v(x) S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign T(x) = v_{−1}(x) = 1, δ = 2t, D_{−1} = 1, i = 0, v_0(x) = 1, L_i = 0, K = 0. If t is equal to 0 (no error correction capability), then go to step 12.
1. Assign K equal to K + 1. Assign m equal to 2t + L_i − K.
2. Compute the coefficient of degree m in r_i(x) = v_i(x) S(x), i.e. D_i = Σ_{j=0}^{L_i} (v_i)_j S_{m−j}, where S_{m−j} is the degree m − j coefficient of S(x).
3. If D_i = 0, then go to step 11.
4. If m < δ then
5. Assign i equal to i + 1. Assign v_i(x) = C v_{i−2}(x) − C (D_{i−2}/D_{i−1}) x^{δ−m} v_{i−1}(x) for arbitrary C. NOTE: If C = D_{i−1}/D_{i−2}, then use v_i(x) = D_{i−1} T(x) − x^{δ−m} v_{i−1}(x).
6. Assign T(x) equal to v_{i−1}(x)/D_{i−1}.
7. Assign δ equal to m and L_i equal to the degree of v_i, i.e. K − L_{i−1}.
8. else
9. Assign v_i(x) equal to v_i(x) − D_i T(x) x^{m−δ}.
10. end if
11. If K < 2t and m ≥ t then go to step 1.
12. Return v(x) = v_i(x).

Fourth, we can compute r_i(x) in the Extended Euclidean Algorithm using an iterative procedure similar to the Berlekamp-Massey algorithm. At iteration i of the Extended Euclidean Algorithm, let the leading terms of r_{i−2}(x) and r_{i−1}(x) be denoted as D_{i−2} x^δ and D_{i−1} x^m respectively. The symbol D_i was selected as the variable to denote the leading coefficient of r_i(x) because it corresponds to the discrepancy of the Berlekamp-Massey algorithm. If r_{i−2}(x) is multiplied by a constant C, then the leading term of q_i(x) is given by

    q_i^{(1)}(x) = (C D_{i−2} / D_{i−1}) x^{δ−m}.    (7)

Here, the notation q_i^{(1)}(x) will be used to denote this as our first guess for q_i(x). The polynomial q_i^{(1)}(x) can be used to determine our first guess for r_i(x) through the formula

    r_i^{(1)}(x) = C r_{i−2}(x) − q_i^{(1)}(x) r_{i−1}(x).    (8)

If q_i(x) = q_i^{(1)}(x), then r_i^{(1)}(x) will have degree less than deg(r_{i−1}) and we can move on to division step i + 1 of the Extended Euclidean Algorithm.
Otherwise, the leading coefficient of r_i^{(1)}(x) is a discrepancy that we can use to adjust q_i(x). Suppose that we are given guess γ − 1 for q_i(x) and r_i(x), and we wish to determine guess γ for these polynomials. If the leading terms of r_i^{(γ−1)}(x) and r_{i−1}(x) are D_i x^m and D_{i−1} x^δ respectively, then observe that

    (D_i / D_{i−1}) x^{m−δ} r_{i−1}(x)    (9)

is an expression that matches the leading term of r_i^{(γ−1)}(x). Hence, we can assign

    q_i^{(γ)}(x) = q_i^{(γ−1)}(x) + (D_i / D_{i−1}) x^{m−δ},    (10)
    r_i^{(γ)}(x) = C r_{i−2}(x) − q_i^{(γ)}(x) r_{i−1}(x)    (11)

to obtain a polynomial r_i^{(γ)}(x) with degree less than deg(r_i^{(γ−1)}). Thus, after a finite number of iterations we will obtain a quotient and remainder that equal q_i(x) and r_i(x).

The fifth observation is that it is only necessary to compute the leading terms of the intermediate r_i(x) polynomials in order to obtain the outputs u(x) and v(x). One can easily verify this claim by substituting r_{i−2}(x) = u_{i−2}(x) a(x) + v_{i−2}(x) b(x) and r_{i−1}(x) = u_{i−1}(x) a(x) + v_{i−1}(x) b(x) into the discussion given in the previous two paragraphs.

Observations four and five above can be applied to any application of the Extended Euclidean Algorithm. However, for algebraic decoding we restrict ourselves to the case where a(x) = x^{2t}. Furthermore, the only results that need to be computed to determine the recursion polynomial v(x) are the intermediate v_i(x) polynomials. These facts can be used to greatly simplify the Extended Euclidean Algorithm calculations. For any i ≥ 1, the polynomial r_i(x) will have degree less than 2t when a(x) = x^{2t}. Hence if b(x) = Ŝ(x), then

    r_i(x) = v_i(x) Ŝ(x) mod x^{2t}.    (12)

Furthermore, if the degree of v_i(x) is L_i, then the coefficient of degree m in r_i(x) is given by the convolution formula

    Σ_{j=0}^{L_i} (v_i)_j Ŝ_{m−j}.    (13)

Here, (v_i)_j denotes the coefficient of degree j in v_i(x) and Ŝ_{m−j} denotes the coefficient of degree m − j in Ŝ(x).

Algorithm 2: Simplified algorithm for solving the Key Equation
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and an integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x) x^{2t} + v(x) S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign K = 0, v^{(0)}(x) = 1, L = 0, T(x) = 1, ψ = 1. If t is equal to 0 (no error correction capability), then go to step 12.
1. Assign K equal to K + 1.
2. Assign D_K = S_{2t−K} + Σ_{j=0}^{L−1} (v^{(K−1)})_j S_{2t+L−K−j}.
3. If D_K = 0, then v^{(K)}(x) = v^{(K−1)}(x) and go to step 11.
4. If 2L < K then
5. Assign v^{(K)}(x) = (ψ D_K) T(x) − x^{K−2L} v^{(K−1)}(x).
6. Assign T(x) equal to v^{(K−1)}(x) and ψ = (D_K)^{−1}.
7. Assign L equal to K − L.
8. else
9. Assign v^{(K)}(x) = v^{(K−1)}(x) − (ψ D_K) T(x) x^{2L−K}.
10. end if
11. If K < t + L then go to step 1.
12. Return v(x) = v^{(K)}(x).

We are now ready to present an improved version of the Extended Euclidean Algorithm which can produce the same intermediate results as the Berlekamp-Massey algorithm. Each iteration of the algorithm will be given by computing the leading coefficient of r_i(x) from v_i(x) and S(x) using (13), where m is the desired degree of the leading coefficient. If the coefficient is zero, then we continue to decrement m until a nonzero coefficient is encountered. Once the leading coefficient has been found, we compare the degree of r_i(x) to the degree of r_{i−1}(x), which is stored in the variable δ. If m < δ, then we can proceed to the next division step of the Euclidean algorithm. Increment i and compute our first guess for v_i(x) using formulas (6) and (7), where C is any nonzero constant. We will check to see how good our guess was on the next iteration of the algorithm. In the meantime, it is also useful to save off the normalized v_{i−1}(x) polynomial in the temporary variable T(x). Since we have obtained a new v_i(x), we also save off the degree of the last remainder polynomial obtained in the variable δ, as well as the degree of v_i(x) in L_i.
It turns out that this is simply equal to K − L_{i−1}. If m ≥ δ, then r_i(x) has degree greater than or equal to that of r_{i−1}(x). We need to adjust r_i(x) before proceeding to the next division step of the algorithm. By repeating the analysis used to produce (11), an updated guess for v_i(x) is given by

    v_i^{(γ)}(x) = C v_{i−2}(x) − q_i^{(γ)}(x) v_{i−1}(x)    (14)

where C is the constant chosen to produce v_i^{(1)}(x). However, we only need to update v_i(x) based on the new part of q_i(x). Hence, (14) simplifies to

    v_i^{(γ)}(x) = v_i^{(γ−1)}(x) − D_i T_{i−1}(x) x^{m−δ}    (15)

when T_{i−1}(x) = v_{i−1}(x)/D_{i−1} is also substituted into the formula. We will check to see how good this guess was on the next iteration of the algorithm.

Algorithm 1 implements the improved Extended Euclidean Algorithm when C = 1 is assigned during every iteration. It can also implement the Berlekamp-Massey algorithm by assigning C = D_{i−1}/D_{i−2} on each iteration and initializing the algorithm with v_{−1}(x) = 1, v_0(x) = 1. This enforces the property of the Berlekamp-Massey algorithm that the generating polynomial of the shift register associated with v_i(x) always has a constant term of 1. When the polynomial is reversed to translate to the Extended Euclidean Algorithm, this implies that every v_i(x) polynomial will be monic, and (6) simplifies to

    v_i^{(1)}(x) = D_{i−1} T(x) − x^{δ−m} v_{i−1}(x).    (16)

If we make the substitutions m = 2t + L_i − K, δ = 2t + L_{i−1} − K_0, and L_i = K_0 − L_{i−1} in Step 4 of Algorithm 1, we get the condition 2L_i < K, which corresponds to a similar condition which appears in the Berlekamp-Massey algorithm. If the syndrome polynomial S(x) is set to be Ŝ(x) (formula (3)), then the v_i(x) polynomials of Algorithm 1 will be the reversal of the Λ^{(K)}(x) polynomials found in the version of the Berlekamp-Massey algorithm presented in the conference paper, and the T(x) and discrepancy values of the two algorithms should similarly correspond.
Alternatively, one can call Algorithm 1 with syndrome polynomial S(x) (formula (2)), and the algorithm will produce Λ(x) (the Berlekamp-Massey error-locator polynomial) as the output.

II. IMPROVEMENTS TO THE NEW ALGORITHM

Algorithm 1 can be cleaned up to produce a more efficient algorithm. In particular, we can eliminate the variables i, δ, and m, making every computation in the algorithm a function of K and the degree of the recursion polynomial at iteration K. We can also omit the normalization of the recursion polynomial in Step 6 and save several multiplications. In order for the algorithm to still produce the correct result, we store the inverse of the leading coefficient of the recursion polynomial in the variable ψ and use this value in Steps 5 and 9 of subsequent iterations. Step 11 can be simplified into the single inequality K < t + L for return to Step 1. It appears that the assignment C = D_{i−1}/D_{i−2} at each iteration is slightly more efficient than the assignment C = 1. Algorithm 2 presents the result of making these modifications to Algorithm 1. Each line of Algorithm 2 matches the same computation performed in Algorithm 1.

III. AN INVERSE-FREE KEY EQUATION SOLVER

In [4], Eastman demonstrated that it is possible to solve the Key Equation without computing any inverses. By applying the techniques given in the conference paper to this approach, we obtain Algorithm 3. Observe that the recursion polynomials are no longer monic.

Algorithm 3: An Inverse-Free Key Equation Solver
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and an integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x) x^{2t} + v(x) S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign K = 0, v^{(0)}(x) = 1, L = 0, T(x) = 1, γ = 1. If t is equal to 0 (no error correction capability), then go to step 12.
1. Assign K equal to K + 1.
2. Assign D_K = Σ_{j=0}^{L} (v^{(K−1)})_j S_{2t+L−K−j}.
3. If D_K = 0, then v^{(K)}(x) = v^{(K−1)}(x) and go to step 11.
4. If 2L < K then
5. Assign v^{(K)}(x) = D_K T(x) − γ x^{K−2L} v^{(K−1)}(x).
6. Assign T(x) equal to v^{(K−1)}(x) and γ = D_K.
7. Assign L equal to K − L.
8. else
9. Assign v^{(K)}(x) = γ v^{(K−1)}(x) − D_K T(x) x^{2L−K}.
10. end if
11. If K < t + L then go to step 1.
12. Return v(x) = v^{(K)}(x).

The tradeoff for avoiding the computation of the inverse of a finite field element is the multiplication of the recursion polynomial by a constant. If one is solving the Key Equation in a highly parallel environment where the coefficients of this polynomial can be distributed to many computation modules, then this tradeoff is advantageous. However, if space permits the storage of a lookup table for the inverse of each element of a finite field, then Algorithm 2 may be the better approach.
The decision of whether Algorithm 2 or Algorithm 3 is better depends on the computing environment of an individual's application. In some environments, the computation of an inverse is expensive and should be avoided. If the inverse can be determined by a lookup table, then Algorithm 2 is more efficient. The reader can decide whether Algorithm 2 or 3 is more advantageous for his or her computing environment.

IV. EFFICIENT IMPLEMENTATION OF THE SIMPLIFIED ALGORITHM

Through the use of pointers, we can implement the shifts and assignments involving the recursion polynomials at no cost. As part of the initialization of the algorithm, declare two arrays of size 2t = n − k. The pointer V will point to one of the arrays, which is initialized to v^{(0)}(x), and T will point to the other array, which is initialized with v^{(−1)}(x). Shifts are implemented by moving the pointer within the array, while the assignment of v^{(K−1)}(x) to T(x) in Algorithm 2 is implemented by swapping the roles of the two pointers. We can also generalize the algorithm to start at any step K if v^{(K)}(x) and v^{(K−1)}(x) are known. Pseudocode for this improved algorithm is given in Algorithm 4. To match Algorithm 2, Algorithm 4 should be called with e = 0, S′(x) = Ŝ(x), P(x) = 1 (so that L = 0), K = 0, Q = t, and INV = 1. To match Algorithm 3, use the same parameters with the exception that INV should be set to 0. The role of e and the optional instructions will be discussed in later sections.

V. NEW ALGORITHM FOR DECODING REED-SOLOMON CODES WITH ERASURES

We will now apply the new Key Equation solvers to the decoding of Reed-Solomon codes with erasures. An erasure is a position where we suspect that a symbol was received incorrectly. Here, we regard an error as a mistake for which we know neither the position nor the value. The t errors will be represented by the roots of W_1(x), and the e erasures at known locations {ɛ_1, ɛ_2, ..., ɛ_e} will be represented by the roots of W_2(x).
In other words,

    W(x) = W_1(x) W_2(x)    (17)

where

    W_1(x) = (x − α^{i_1}) (x − α^{i_2}) ··· (x − α^{i_t}),    (18)
    W_2(x) = (x − α^{ɛ_1}) (x − α^{ɛ_2}) ··· (x − α^{ɛ_e}).    (19)

Since the locations of the erasures are known, it is possible to compute W_2(x) at the beginning of the decoding process. When erasures are present, it is possible to correct any number of errors and erasures satisfying 2t + e ≤ n − k. In this case, observe that deg(W_1) = t ≤ (n − k − e)/2, deg(W_2) = e, and deg(W) = t + e ≤ (n − k + e)/2.

Algorithm 4: Efficient implementation of the Key Equation solver
Input: The (possibly modified) syndrome polynomial S′(x) ∈ F[x] for a finite field F; an initialization polynomial P(x) and an optional second initialization polynomial Υ(x); a starting step value K, a stopping criterion Q, and integers t, e ≥ 0; an inverse flag (INV) equal to 0 or 1.
Output: The polynomial v(x) such that r(x) = u(x) x^{2t+e} + v(x) S′(x) for some polynomials u(x) and r(x) where deg(r) < t. [Optional: and the polynomial Ω(x).]
0. Allocate two arrays A and B, each of size 2t + e and initialized to all 0. [Optional: Allocate two arrays Y and Z, each of size 2t + e and initialized to all 0.] Set L to the degree of P(x); set L_T = L. Copy A[i] = P_i (the degree-i coefficient of P) and B[i] = P_i for each i in 0 ≤ i ≤ L. [Opt: Copy Y[i] = Υ_i and Z[i] = Υ_i for each i in 0 ≤ i < 2t + e.] Set pointer V to the starting address of A and T to the starting address of B. [Opt: Set pointer Ω to the starting address of Y and Φ to the starting address of Z.] Assign ψ := 1 and γ := 1. If (K − L ≥ Q) then go to step 12.
1. Assign K := K + 1.
2. Assign D := Σ_{j=0}^{L} V[j] · S′[2t + e + L − K − j]. NOTE: S′[i] is the degree-i coefficient of S′(x) for all i ≥ 0.
3. If D = 0, then go to step 11.
4. Set C := ψ · D. If 2L < K then
5. Assign T[j] := C · T[j] for each j in 0 ≤ j ≤ L_T. [Opt: Assign Φ[j] := C · Φ[j] for each j in 0 ≤ j < 2t + e.] Then assign T[j + K − 2L] := T[j + K − 2L] − γ · V[j] for each j in 0 ≤ j ≤ L. [Opt: and assign Φ[j + K − 2L] := Φ[j + K − 2L] − γ · Ω[j] for each j in 0 ≤ j < 2t + e + 2L − K.]
6. Swap pointers T and V. [Opt: Swap pointers Φ and Ω.] Assign L_T = L. If INV = 0, assign γ := D; if INV = 1, assign ψ := D^{−1}.
7. Assign L := K − L.
8. else
9. If INV = 0: Assign V[j] := γ · V[j] for each j in 0 ≤ j ≤ L. [Opt: If INV = 0: Assign Ω[j] := γ · Ω[j] for each j in 0 ≤ j < 2t + e.] Assign V[j + 2L − K] := V[j + 2L − K] − C · T[j] for each j in 0 ≤ j ≤ L_T. [Opt: and Ω[j + 2L − K] := Ω[j + 2L − K] − C · Φ[j] for each j in 0 ≤ j < 2t + e − 2L + K.]
10. end if
11. If (K − L < Q) then go to step 1.
12. Return v(x) = {V[0], V[1], ..., V[L]} [opt: and Ω(x) = {Ω[0], Ω[1], ..., Ω[2t + e − 1]}].

The Key Equation for Reed-Solomon codes with erasures can be expressed by either

    W_1(x) W_2(x) Ŝ(x) ≡ Ω(x) mod x^{n−k}    (20)

or

    Λ_1(x) Λ_2(x) S(x) ≡ Ω(x) mod x^{n−k},    (21)

where Λ_2(x) is the reversal of (19), Ŝ(x) is given by (3), and S(x) is given by (2). Observe that n − k = 2t + e. In [11], it was shown that Key Equation (20) can be solved by initializing the Euclidean algorithm with the inputs x^{2t+e} and

    Ĥ(x) = W_2(x) Ŝ(x) mod x^{2t+e}    (22)

while stopping the algorithm when we observe a remainder of degree less than t + e. The recursion polynomial W_1(x) is given by the value of v(x) at this point. An improved version of Forney's formula [5] based on L'Hôpital's rule is also presented in [11] to recover the error magnitudes. Although most Reed-Solomon codes use generator polynomial (1), it is possible to form a Reed-Solomon code with generator polynomial

    g(x) = (x − α^b) (x − α^{b+1}) ··· (x − α^{b+n−k−1})    (23)

for any b. The standard codes use b = 1, while the nonstandard codes use other values of b, particularly values near n/2. Details of the adjustments needed in the decoding algorithm to handle the nonstandard codes are also provided in [11]. In the case of standard Reed-Solomon codes (b = 1), Forney's formula is slightly simpler when we instead compute the solution to Key Equation (21). This equation can be solved using a technique similar to that given in [11]. In particular, we initialize the Euclidean algorithm with the inputs x^{2t+e} and

    H(x) = Λ_2(x) S(x) mod x^{2t+e},    (24)

again stopping the algorithm when we observe a remainder of degree less than t + e.

Algorithm 5: New algorithm for decoding a systematic Reed-Solomon code with erasures
Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ɛ_1, ɛ_2, ..., ɛ_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.
Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded with the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".
0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome S(x) = S_{n−k−1} x^{n−k−1} + ··· + S_1 x + S_0, where S_i = r(α^{b+i}).
2. Compute Λ_2(x) := (α^{ɛ_1} x − 1)(α^{ɛ_2} x − 1) ··· (α^{ɛ_e} x − 1). NOTE: If e = 0, then Λ_2(x) := 1.
3. Compute H(x) = S(x) Λ_2(x) mod x^{2t+e} (ignore coefficients of degree 2t + e and higher).
4. Set S′(x) = H(x), P(x) = 1, (opt: Υ(x) = H(x)), K := 0, and Q := t.
5. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
6. Assign Λ_1(x) := V[L] x^L + V[L−1] x^{L−1} + ··· + V[1] x + V[0].
7. Determine the values {i_1, i_2, ..., i_τ} such that Λ_1(α^{i_j}) = 0 for each 1 ≤ j ≤ τ. If τ < L, then return "Decoding Failure".
8. If (τ is equal to L) then
9. Compute Λ_1′(x) and Λ_2′(x), the formal derivatives of Λ_1(x) and Λ_2(x) respectively.
10. Compute Ω(x) = Λ_1(x) H(x) mod x^{2t+e} (or add the optional code of Algorithm 4).
11. Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ_1′(α^{i_j}) Λ_2(α^{i_j})).
12. For each 1 ≤ j ≤ e, change c_{ɛ_j} = r_{ɛ_j} + Ω(α^{ɛ_j}) / ((α^{ɛ_j})^{1−b} Λ_1(α^{ɛ_j}) Λ_2′(α^{ɛ_j})).
13. End if
14. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
15. Return m(x).
When F is a finite field of characteristic 2, Forney's formula becomes

    E(α^{i_j}) = Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ_2(α^{i_j}) Λ_1′(α^{i_j}))    (25)

where {i_1, i_2, ..., i_τ} are the error locations given by the reciprocals of the roots of Λ_1(x), and

    E(α^{ɛ_j}) = Ω(α^{ɛ_j}) / ((α^{ɛ_j})^{1−b} Λ_2′(α^{ɛ_j}) Λ_1(α^{ɛ_j})),    (26)

where {ɛ_1, ɛ_2, ..., ɛ_e} are the known erasure locations.

We can repeat the analysis given in this paper to improve the algorithm given in [11] into one that is as efficient as the Berlekamp-Massey algorithm for correcting errors and erasures. The resulting decoder is given in Algorithm 5. The algorithm can be used to decode Reed-Solomon codes without erasures by simply setting e = 0. This essentially ignores those parts of the algorithm that involve Λ_2(x) and the erasure correction. If the reader is programming on an architecture where some degree of parallelism exists (e.g. a VLSI implementation), then Steps 2 and 3 can be computed in parallel. In this case, H(x) is initialized to S(x) and the binomial multiplications used to build Λ_2(x) are mirrored to build H(x). If this parallelism is not present, then the standard convolution formula should be used to compute H(x). Similarly, parallel computing environments should benefit from including the optional code in Algorithm 4 to construct Ω(x). Otherwise, the standard convolution formula should be used to construct Ω(x) in Step 10 of Algorithm 5.

VI. OTHER EFFICIENT REED-SOLOMON DECODING ALGORITHMS

Alternatively, we can initialize the Euclidean algorithm with x^{2t+e} and Ŝ(x), with the recursion polynomial initialized to W_2(x). In this case, the output of the Key Equation solver is W(x) rather than W_1(x). This technique has the advantage of avoiding an expensive polynomial multiplication to compute Ω(x), at the cost of longer recursion polynomials when solving the Key Equation.
When the Key Equation is solved, we can reverse the polynomials to obtain the version of Forney's formula that is advantageous in the standard b = 1 case. By applying the analysis given in this paper to an algorithm given by Blahut in [2], we obtain Algorithm 6.

Algorithm 6: Blahut algorithm for Reed-Solomon decoding (modified to use syndrome Ŝ(x))
Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ɛ_1, ɛ_2, ..., ɛ_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.
Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded with the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".
0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome Ŝ(x) = Ŝ_{n−k−1} x^{n−k−1} + ··· + Ŝ_1 x + Ŝ_0, where Ŝ_j = r(α^{n−k−j+b−1}).
2. Compute W_2(x) := (x − α^{ɛ_1})(x − α^{ɛ_2}) ··· (x − α^{ɛ_e}). NOTE: If e = 0, then W_2(x) := 1.
3. Set S′ to point to the degree-e coefficient of Ŝ(x), so that S′[i] will be the degree i + e coefficient of Ŝ(x) for all i ≥ 0.
4. Set P(x) := W_2(x).
5. Set K := 2e and Q := t + e.²
6. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
7. Assign Λ(x) := V[0] x^L + V[1] x^{L−1} + ··· + V[L−1] x + V[L].
8. Determine the positions {i_1, i_2, ..., i_τ} such that Λ(α^{i_j}) = 0 and i_j ∉ {ɛ_1, ɛ_2, ..., ɛ_e} for each 1 ≤ j ≤ τ. NOTE: The roots of Λ(x) include both errors and erasures. If τ + e < L, then return "Decoding Failure".
9. If (τ + e is equal to L) then
10. Compute Λ′(x), the formal derivative of Λ(x).
11. Compute S(x) = Ŝ_0 x^{n−k−1} + Ŝ_1 x^{n−k−2} + ··· + Ŝ_{n−k−2} x + Ŝ_{n−k−1}.
12. Compute Ω(x) = Λ(x) S(x) mod x^{n−k}.
13. Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ′(α^{i_j})).
14. For each 1 ≤ j ≤ e, change c_{ɛ_j} = r_{ɛ_j} + Ω(α^{ɛ_j}) / ((α^{ɛ_j})^{1−b} Λ′(α^{ɛ_j})).
15. End if
16. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
17. Return m(x).
Since a clever programmer can implement polynomial reversals at no cost by transforming loop indices, these reversals do not slow down the performance of Algorithm 6. Note that the optional code used to compute Ω(x) cannot be used with Algorithm 6 because H(x) is not explicitly computed in the algorithm.

Finally, consider an algorithm introduced by Truong, Jeng, and Cheng [13]. The present author has modified this algorithm so that Algorithm 4 is used to solve the Key Equation. The result is given as Algorithm 7. Several main features of the Truong-Jeng-Cheng algorithm are: Λ_2(x) is constructed in parallel with H(x) (the "decode flag = 0" work of their algorithm); Λ(x) is constructed in parallel with Ω(x) (the "decode flag = 1" work of their algorithm); and the use of the inverse-free algorithm (essentially Algorithm 3). To match these computations, Steps 2 and 3 should be computed in parallel if possible, the flag INV should be set to 0, and the optional code of Algorithm 4 should be turned on.

² The stopping condition Q in Algorithm 6 is e more than the stopping condition of Algorithm 5 because the initial values for K differ in the two algorithms.

It should be noted that the t + e multiplications saved by the simplified Forney's formula are not significant when put in perspective with the total number of multiplications in the decoding process. Also, when b ≠ 1, the simplified Forney's formula cannot be used. In these cases, Algorithms 5-7 can be simplified to work exclusively with the polynomials Ŝ(x) and W(x). Formulas used to correct the errors and erasures in this case can be found in [11].

VII. COMPARISON OF THE THREE ALGORITHMS

It can be shown that the computational complexities of Algorithms 5, 6, and 7 are all the same. However, there are several key differences among the three approaches which affect each algorithm's running time. First, Algorithms 5 and 7 use Λ_2(x) as the erasure locator polynomial, whereas Algorithm 6 uses its reverse.
This feature is not important because any of the algorithms can be easily modified to use either of the erasure locator polynomials. Second, Algorithms 5 and 7 allow for the erasure locator polynomial and H(x) to be computed in parallel, whereas Algorithm 6 does not use H(x).

Algorithm 7: Truong-Jeng-Cheng algorithm for decoding a systematic Reed-Solomon code with erasures
Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ɛ_1, ɛ_2, ..., ɛ_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.
Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded with the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".
0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome S(x) = S_{n−k−1} x^{n−k−1} + ··· + S_1 x + S_0, where S_i = r(α^{b+i}).
2. Compute Λ_2(x) := (α^{ɛ_1} x − 1)(α^{ɛ_2} x − 1) ··· (α^{ɛ_e} x − 1). NOTE: If e = 0, then Λ_2(x) := 1.
3. Compute H(x) = S(x) Λ_2(x) mod x^{2t+e} (ignore coefficients of degree 2t + e and higher).
4. Set S′ to point to the degree-e coefficient of Ŝ(x), so that S′[i] will be the degree i + e coefficient of Ŝ(x) for all i ≥ 0.
5. Set P(x) = 1, (opt: Υ(x) = H(x)), K := 2e, and Q := t + e.
6. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
7. Assign Λ(x) := V[0] x^L + V[1] x^{L−1} + ··· + V[L−1] x + V[L].
8. Determine the positions {i_1, i_2, ..., i_τ} such that Λ(α^{i_j}) = 0 and i_j ∉ {ɛ_1, ɛ_2, ..., ɛ_e} for each 1 ≤ j ≤ τ. NOTE: The roots of Λ(x) include both errors and erasures. If τ + e < L, then return "Decoding Failure".
9. If (τ + e is equal to L) then
10. Compute Λ′(x), the formal derivative of Λ(x).
12. Compute Ω(x) = Λ(x) S(x) mod x^{n−k} (or use the optional code of Algorithm 4).
13. Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ′(α^{i_j})).
14. For each 1 ≤ j ≤ e, change c_{ɛ_j} = r_{ɛ_j} + Ω(α^{ɛ_j}) / ((α^{ɛ_j})^{1−b} Λ′(α^{ɛ_j})).
15. End if
16. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
17. Return m(x).
As a consequence of this property, Ω(x) can be computed as a byproduct of the Key Equation solver in Algorithms 5 and 7, but not in Algorithm 6. This difference may be important for certain computer architectures. Finally, Algorithms 6 and 7 compute the error-erasure locator as one result, whereas Algorithm 5 separates the error locator polynomial from the erasure locator polynomial. We will discuss the consequences of this difference later in this section.

Each of the three algorithms was implemented in the C programming language, and timing results were computed using a (255, 239) code for various numbers of errors and erasures. These timing results are summarized in Table 1 and are based on 10,000,000 decoding trials. To provide the fairest comparison between Algorithms 5 and 7, all three algorithms used the inverse-free Key Equation solver and computed Ω(x) as part of the Key Equation solver whenever possible. From the timing results, we see that the algorithms all perform about the same in the errors-only case and the erasures-only case, with the best performance achieved by Algorithm 6. However, this is due to the fact that Ω(x) is computed using the convolution formula rather than as a byproduct of the Key Equation solver. While the convolution formula requires slightly fewer multiplications, computing the error locator polynomial at the same time as Ω(x) is advantageous in a parallel computing environment (e.g. a VLSI implementation). If this is not important, then Algorithms 5 and 7 can be modified to use the convolution formula and achieve timing results similar to those of Algorithm 6 in these cases. Algorithm 5 significantly outperforms the other two algorithms in the errors-and-erasures cases. This is because Algorithms 6 and 7 compute an errors-and-erasures polynomial as the output of the Key Equation solver. One must evaluate this polynomial at each nonzero element of the finite field to determine the error locations.³
In contrast, Algorithm 5 evaluates a lower-degree polynomial, since the erasure locations are already known.³ It turns out that the determination of the error locations is one of the most computationally expensive steps of the entire decoding process. The timing results for this step can be significantly improved for certain finite fields by using a Fast Fourier Transform (FFT) algorithm described

³ Observe that in the erasures-only case, there are no errors, and it is not necessary to evaluate the errors-and-erasures polynomial.

in [6] and [9]. However, Algorithm 5 will still significantly outperform the other two algorithms, because the running time of the FFT is proportional to the polynomial length and Algorithm 5 uses a smaller-degree polynomial than the other two algorithms.

Table 1: Timing results of Algorithms 5, 6, and 7 on the (255, 239) Reed-Solomon code

                         Algorithm 5    Algorithm 6    Algorithm 7
  8 errors,  0 erasures  microseconds   microseconds   microseconds
  4 errors,  8 erasures  microseconds   microseconds   microseconds
  1 error,  14 erasures  microseconds   microseconds   microseconds
  0 errors, 16 erasures  microseconds   microseconds   microseconds

VIII. IMPROVEMENTS FOR BCH DECODING

It should also be mentioned that the Berlekamp-Massey algorithm can be improved for the case of BCH codes with no erasures that have the property that S_{2j} = (S_j)^2 for all j (see [2], [1], [14]), where S_j = e(α^j) for all j ≥ 0. When c(α^j) = 0, then S_j = r(α^j). Standard BCH codes (b = 1) are examples of codes with this property. Most presentations of these improved decoding techniques require a complete rewrite of the algorithm. The only corresponding adjustments needed for Algorithms 2-4 are: (1) K should be initialized to −1; (2) K should be incremented by 2 in Step 1; and (3) the inequality in Step 11 should be changed to K < 2t − 1, since the iteration when K = 2t will not modify v(x). It is also possible to similarly adapt Algorithm 6 for efficient BCH decoding by simply changing line 17 to increment by 2. However, this improvement cannot be applied to Algorithms 5 and 7, since they use Ŝ(x) for the syndrome polynomial and process the syndrome coefficients in reverse order. If one adapts Algorithms 5 and 7 to use S(x), then the result is equivalent to Algorithm 6 in the BCH code case. For BCH codes with a small number of errors, it is possible to treat the coefficients of the syndrome polynomial as indeterminates in Algorithm 6 and obtain explicit formulas for the error locator polynomials.
In the case of standard BCH codes, the formulas

W^(1)(x) = x + S_1                                                            (27)
W^(2)(x) = x^2 + S_1 x + (S_3 + S_1^3)/S_1                                    (28)
W^(3)(x) = x^3 + S_1 x^2 + ((S_1^2 S_3 + S_5)/(S_1^3 + S_3)) x
           + (S_1^3 + S_3) + S_1 (S_1^2 S_3 + S_5)/(S_1^3 + S_3)              (29)

can be used to locate errors in codes with up to three errors. Observe that these formulas coincide with the reversal of Peterson's direct-solution algorithm formulas given in [14].

IX. CONCLUDING REMARKS

In this paper, we presented a simplification of the algorithms given in [3], [7] and demonstrated that the Berlekamp-Massey algorithm is equivalent to the Extended Euclidean Algorithm. We then showed how the resulting efficient Key Equation solvers can be used as components in efficient algorithms for decoding Reed-Solomon codes, both with and without erasures. The algorithms have been presented so as to allow the reader several options for Reed-Solomon decoding. First, the choice of an inverse-free or more standard Key Equation solver is allowed in Algorithm 4. Second, the reader may choose to compute the error locator and Ω(x) simultaneously or to compute Ω(x) separately using the standard convolution formula. For parallel computing environments (e.g., a VLSI implementation), it is more advantageous to select the first of the two options. For implementation on a standard workstation, the second of the two options should be selected. The algorithms can also be adapted to work with BCH codes, and a different set of tradeoffs might be made in this case. We ultimately considered three efficient Reed-Solomon decoding algorithms. All three algorithms performed about the same in the errors-only and erasures-only cases. However, the new decoding method summarized in Algorithm 5 was demonstrated to be much more efficient than the other two algorithms in the errors-and-erasures case.
This is a consequence of the fact that smaller-degree polynomials are used to find the error locations in the Reed-Solomon decoding process.

REFERENCES

[1] E. R. Berlekamp. Algebraic Coding Theory, McGraw-Hill (1968).
[2] Richard E. Blahut. Algebraic Codes for Data Transmission, Cambridge (2002).
[3] J. L. Dornstetter. On the equivalence between Berlekamp's and Euclid's algorithms. IEEE Trans. Inform. Theory, IT-33.
[4] W. L. Eastman. Euclideanization of the Berlekamp-Massey algorithm. Proceedings of the Tactical Communication Conference, Vol. 1, 1988.
[5] G. D. Forney. On decoding BCH codes. IEEE Trans. Inf. Theory, IT-11.
[6] Shuhong Gao and Todd D. Mateer. Additive fast Fourier transforms over finite fields. IEEE Transactions on Information Theory, vol. 56, no. 12.
[7] Agnes Heydtmann and Jorn Jensen. On the equivalence of the Berlekamp-Massey and the Euclidean algorithms for decoding. IEEE Trans. Inform. Theory, 46(7).
[8] J. L. Massey. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory, IT-15.
[9] Todd D. Mateer. Fast Fourier Transform Algorithms with Applications. PhD Dissertation. Available at:
[10] Todd D. Mateer. On the equivalence of the Berlekamp-Massey and Euclidean algorithms for algebraic decoding. Proceedings of the 2011 Canadian Workshop on Information Theory (IEEE), Kelowna, British Columbia, Canada, May 2011.
[11] Todd D. Mateer. Simple algorithms for decoding Reed-Solomon codes. To appear in Designs, Codes, and Cryptography. Published online by Springer on July 18, 2012.

[12] Y. Sugiyama, M. Kasahara, S. Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27: 87-99.
[13] T.-K. Truong, J.-H. Jeng, and T. C. Cheng. A new decoding algorithm for correcting both erasures and errors of Reed-Solomon codes. IEEE Trans. Communications, 51(3).
[14] Stephen B. Wicker. Error Control Systems for Digital Communication and Storage, Prentice Hall (1995).

Dr. Todd Mateer received his PhD in Mathematical Sciences from Clemson University in 2008 under the direction of Dr. Shuhong Gao. His dissertation discusses Fast Fourier Transform algorithms and their applications in signal analysis, computer algebra, and coding theory. He was the first student to earn two undergraduate degrees from Grove City College, where he received both a B.S.E.E. degree and a B.S. degree in Mathematics / Computer Science. In 1999, he earned a master's degree from Clemson University under the direction of Dr. Joel Brawley, where he conducted a mathematical analysis of video poker in South Carolina and mathematically proved that one can profit from certain casino games such as video poker over a long period of time with the appropriate strategy. In 2001, he joined Howard Community College, where he currently serves as Master Adjunct Instructor. During the summers, Dr. Mateer teaches elementary classical cryptography, the mathematics of casino games, and the drawbacks of gambling at the Math and Related Sciences camps held at the University of Maryland Eastern Shore. He also does work for the Department of Defense, has four children, and is an amateur magician. His magic tricks teach basic concepts of coding theory and computer science.


More information

McBits: Fast code-based cryptography

McBits: Fast code-based cryptography McBits: Fast code-based cryptography Peter Schwabe Radboud University Nijmegen, The Netherlands Joint work with Daniel Bernstein, Tung Chou December 17, 2013 IMA International Conference on Cryptography

More information

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UFD. Therefore

More information

Fast Polynomial Multiplication

Fast Polynomial Multiplication Fast Polynomial Multiplication Marc Moreno Maza CS 9652, October 4, 2017 Plan Primitive roots of unity The discrete Fourier transform Convolution of polynomials The fast Fourier transform Fast convolution

More information

Remainders. We learned how to multiply and divide in elementary

Remainders. We learned how to multiply and divide in elementary Remainders We learned how to multiply and divide in elementary school. As adults we perform division mostly by pressing the key on a calculator. This key supplies the quotient. In numerical analysis and

More information

Generator Matrix. Theorem 6: If the generator polynomial g(x) of C has degree n-k then C is an [n,k]-cyclic code. If g(x) = a 0. a 1 a n k 1.

Generator Matrix. Theorem 6: If the generator polynomial g(x) of C has degree n-k then C is an [n,k]-cyclic code. If g(x) = a 0. a 1 a n k 1. Cyclic Codes II Generator Matrix We would now like to consider how the ideas we have previously discussed for linear codes are interpreted in this polynomial version of cyclic codes. Theorem 6: If the

More information

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Agenda NAND ECC History Soft Information What is soft information How do we obtain

More information

A Simple Left-to-Right Algorithm for Minimal Weight Signed Radix-r Representations

A Simple Left-to-Right Algorithm for Minimal Weight Signed Radix-r Representations A Simple Left-to-Right Algorithm for Minimal Weight Signed Radix-r Representations James A. Muir School of Computer Science Carleton University, Ottawa, Canada http://www.scs.carleton.ca/ jamuir 23 October

More information

Chapter 7 Reed Solomon Codes and Binary Transmission

Chapter 7 Reed Solomon Codes and Binary Transmission Chapter 7 Reed Solomon Codes and Binary Transmission 7.1 Introduction Reed Solomon codes named after Reed and Solomon [9] following their publication in 1960 have been used together with hard decision

More information

Skew cyclic codes: Hamming distance and decoding algorithms 1

Skew cyclic codes: Hamming distance and decoding algorithms 1 Skew cyclic codes: Hamming distance and decoding algorithms 1 J. Gómez-Torrecillas, F. J. Lobillo, G. Navarro Department of Algebra and CITIC, University of Granada Department of Computer Sciences and

More information

Polynomial Codes over Certain Finite Fields

Polynomial Codes over Certain Finite Fields Polynomial Codes over Certain Finite Fields A paper by: Irving Reed and Gustave Solomon presented by Kim Hamilton March 31, 2000 Significance of this paper: Introduced ideas that form the core of current

More information

Iterative Encoding of Low-Density Parity-Check Codes

Iterative Encoding of Low-Density Parity-Check Codes Iterative Encoding of Low-Density Parity-Check Codes David Haley, Alex Grant and John Buetefuer Institute for Telecommunications Research University of South Australia Mawson Lakes Blvd Mawson Lakes SA

More information

An Approach to Hensel s Lemma

An Approach to Hensel s Lemma Irish Math. Soc. Bulletin 47 (2001), 15 21 15 An Approach to Hensel s Lemma gary mcguire Abstract. Hensel s Lemma is an important tool in many ways. One application is in factoring polynomials over Z.

More information

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps 2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical

More information

MATH 431 PART 2: POLYNOMIAL RINGS AND FACTORIZATION

MATH 431 PART 2: POLYNOMIAL RINGS AND FACTORIZATION MATH 431 PART 2: POLYNOMIAL RINGS AND FACTORIZATION 1. Polynomial rings (review) Definition 1. A polynomial f(x) with coefficients in a ring R is n f(x) = a i x i = a 0 + a 1 x + a 2 x 2 + + a n x n i=0

More information

Cyclic Codes. Saravanan Vijayakumaran August 26, Department of Electrical Engineering Indian Institute of Technology Bombay

Cyclic Codes. Saravanan Vijayakumaran August 26, Department of Electrical Engineering Indian Institute of Technology Bombay 1 / 25 Cyclic Codes Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay August 26, 2014 2 / 25 Cyclic Codes Definition A cyclic shift

More information

The Berlekamp algorithm

The Berlekamp algorithm The Berlekamp algorithm John Kerl University of Arizona Department of Mathematics 29 Integration Workshop August 6, 29 Abstract Integer factorization is a Hard Problem. Some cryptosystems, such as RSA,

More information

Codes used in Cryptography

Codes used in Cryptography Prasad Krishnan Signal Processing and Communications Research Center, International Institute of Information Technology, Hyderabad March 29, 2016 Outline Coding Theory and Cryptography Linear Codes Codes

More information

5.0 BCH and Reed-Solomon Codes 5.1 Introduction

5.0 BCH and Reed-Solomon Codes 5.1 Introduction 5.0 BCH and Reed-Solomon Codes 5.1 Introduction A. Hocquenghem (1959), Codes correcteur d erreurs; Bose and Ray-Chaudhuri (1960), Error Correcting Binary Group Codes; First general family of algebraic

More information

An introduction to linear and cyclic codes

An introduction to linear and cyclic codes An introduction to linear and cyclic codes Daniel Augot 1, Emanuele Betti 2, and Emmanuela Orsini 3 1 INRIA Paris-Rocquencourt DanielAugot@inriafr 2 Department of Mathematics, University of Florence betti@mathunifiit

More information

Error-Correcting Codes

Error-Correcting Codes Error-Correcting Codes HMC Algebraic Geometry Final Project Dmitri Skjorshammer December 14, 2010 1 Introduction Transmission of information takes place over noisy signals. This is the case in satellite

More information

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435

PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu

More information

: Error Correcting Codes. November 2017 Lecture 2

: Error Correcting Codes. November 2017 Lecture 2 03683072: Error Correcting Codes. November 2017 Lecture 2 Polynomial Codes and Cyclic Codes Amnon Ta-Shma and Dean Doron 1 Polynomial Codes Fix a finite field F q. For the purpose of constructing polynomial

More information

Error Detection & Correction

Error Detection & Correction Error Detection & Correction Error detection & correction noisy channels techniques in networking error detection error detection capability retransmition error correction reconstruction checksums redundancy

More information

Algebra for error control codes

Algebra for error control codes Algebra for error control codes EE 387, Notes 5, Handout #7 EE 387 concentrates on block codes that are linear: Codewords components are linear combinations of message symbols. g 11 g 12 g 1n g 21 g 22

More information

GF(2 m ) arithmetic: summary

GF(2 m ) arithmetic: summary GF(2 m ) arithmetic: summary EE 387, Notes 18, Handout #32 Addition/subtraction: bitwise XOR (m gates/ops) Multiplication: bit serial (shift and add) bit parallel (combinational) subfield representation

More information

EE 229B ERROR CONTROL CODING Spring 2005

EE 229B ERROR CONTROL CODING Spring 2005 EE 9B ERROR CONTROL CODING Spring 005 Solutions for Homework 1. (Weights of codewords in a cyclic code) Let g(x) be the generator polynomial of a binary cyclic code of length n. (a) Show that if g(x) has

More information