Efficient algorithms for decoding Reed-Solomon codes with erasures

Todd Mateer

Abstract—In this paper, we present a new algorithm for decoding Reed-Solomon codes with both errors and erasures. The algorithm combines an efficient method for solving the Key Equation and a technique which separates the error locator polynomial from the erasure locator polynomial. The new algorithm is compared to two other efficient Reed-Solomon decoding algorithms and shown to be significantly faster for errors-and-erasures decoding. Applications to BCH decoding are also provided.

Index Terms—Reed-Solomon Codes, BCH Codes, Berlekamp-Massey Algorithm, Euclidean Algorithm

REED-SOLOMON codes are powerful techniques for correcting multiple errors introduced when a message is transmitted in a noisy environment. These codes are very popular and can be found in compact disc players and in NASA satellites used for deep-space exploration. The power of these codes resides in the algebraic properties of finite fields, which allow several errors to be corrected in each codeword. Each codeword in the standard (n, k, d) Reed-Solomon code is a multiple of the generator polynomial

    g(x) = (x − α)(x − α^2)⋯(x − α^{n−k})    (1)

defined over F[x], where F is a finite field with primitive root of unity α. The code has minimum distance d = n − k + 1, which means that the code is capable of correcting up to t = ⌊(n − k)/2⌋ errors.

This paper will first review several observations made at the 2011 Canadian Information Theory Workshop regarding the equivalence of the Extended Euclidean Algorithm and the Berlekamp-Massey Algorithm for solving the Reed-Solomon decoding Key Equation. Next, we will present a number of additional simplifications that can be made to the algorithm given at the conference which improve its performance. Finally, we will apply the resulting Key Equation solver to errors-and-erasures decoding of Reed-Solomon codes.
I. EQUIVALENCE OF THE BERLEKAMP-MASSEY AND EUCLIDEAN ALGORITHMS

In [3], Dornstetter first demonstrated that the Berlekamp-Massey [8] and Euclidean [12] algorithms are equivalent methods for solving the so-called Key Equation used in Reed-Solomon decoding. Heydt [7] provided additional insights regarding this equivalence in his 2000 paper. At the 2011 Canadian Information Theory Workshop, the current author presented his own perspective on this relationship [10]. In this section, we summarize some of the key observations made at the workshop.

(Todd Mateer is with the Mathematics Division, Howard Community College, 10901 Little Patuxent Parkway, Columbia, MD 21044 USA, email: tmateer@howardcc.edu. Manuscript received XXXX; revised XXXXX.)

First, the Extended Euclidean Algorithm processes the syndrome polynomial from high degree terms to low degree terms, while the Berlekamp-Massey algorithm processes the syndrome polynomial from low degree terms to high degree terms. In order for the two algorithms to produce polynomials with the same coefficients, we must use

    S(x) = Σ_{j=0}^{2t−1} r(α^{j+1}) x^j    (2)

as the syndrome polynomial for the Berlekamp-Massey algorithm and

    Ŝ(x) = Σ_{j=0}^{2t−1} r(α^{2t−j}) x^j    (3)

as the syndrome polynomial for the Extended Euclidean Algorithm. In these formulas, r(x) is the polynomial representation of the received vector of the message transmission. As a consequence of the fact that Ŝ(x) is the reversal of S(x),¹ every intermediate result of the Extended Euclidean Algorithm will be the reversal of the corresponding result in the Berlekamp-Massey algorithm.

Second, while it is traditional to let r_{−1}(x) = a(x) and r_0(x) = b(x) in the Extended Euclidean Algorithm, it is often more advantageous to set r_{−1}(x) = a(x) + b(x) for algebraic decoding, particularly for BCH codes. In this case, we would alter the initialization of the algorithm to v_{−1}(x) = 1.
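The relationship between (1), (2), and (3) can be checked on a toy example. The sketch below is not from the paper: it assumes a hypothetical (6, 2, 5) Reed-Solomon code over the small prime field F_7 with primitive element α = 3 (the paper's codes live in GF(2^m); a prime field keeps the arithmetic transparent while the formulas carry over unchanged).

```python
# Worked instance of (1)-(3) over F_7 with alpha = 3 and an (n, k) = (6, 2) code.
p, alpha, n, k = 7, 3, 6, 2
t = (n - k) // 2

def pmul(a, b):
    """Multiply two polynomials (coefficient lists, low degree first) over F_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def pe(c, x):
    """Evaluate the polynomial c at x over F_p."""
    return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p

# generator polynomial (1): g(x) = (x - alpha)(x - alpha^2)...(x - alpha^(n-k))
g = [1]
for i in range(1, n - k + 1):
    g = pmul(g, [(p - pow(alpha, i, p)) % p, 1])   # factor (x - alpha^i)

# g(x) itself is a codeword; corrupt one position to get a received word r(x)
c = g + [0] * (n - len(g))
r = list(c)
r[2] = (r[2] + 5) % p            # single error of value 5 at position 2

# syndromes (2) and (3)
S    = [pe(r, pow(alpha, j + 1, p)) for j in range(2 * t)]
Shat = [pe(r, pow(alpha, 2 * t - j, p)) for j in range(2 * t)]
assert Shat == S[::-1]           # S-hat is the reversal of S, as claimed
print(g, S)
```

The final assertion is exactly the footnote's identity S_j = Ŝ_{2t−1−j} for 0 ≤ j ≤ 2t − 1.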
Third, we can multiply r_{i−2}(x) = u_{i−2}(x)·a(x) + v_{i−2}(x)·b(x) by any nonzero constant C to obtain

    C·r_{i−2}(x) = C·u_{i−2}(x)·a(x) + C·v_{i−2}(x)·b(x).    (4)

The remainder polynomials for the various choices of C will have different leading coefficients, but will all produce the same result when normalized. If r_{i−2}(x) is modified by C, then the corresponding u_i(x) and v_i(x) polynomials are given by

    u_i(x) = C·u_{i−2}(x) − q_i(x)·u_{i−1}(x),    (5)
    v_i(x) = C·v_{i−2}(x) − q_i(x)·v_{i−1}(x).    (6)

¹ In the case of the syndrome polynomials, one must be careful to define reversal in terms of a polynomial of degree 2t − 1. By this definition, S_j = Ŝ_{2t−1−j} for each j in 0 ≤ j ≤ 2t − 1. This does not always coincide with the definition of reciprocal polynomials.

Algorithm 1: Improved Extended Euclidean Algorithm for solving the Key Equation
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x)·x^{2t} + v(x)·S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign T(x) = v_{−1}(x) = 1, δ = 2t, D_{−1} = 1, i = 0, v_0(x) = 1, L_i = 0, K = 0. If t = 0 (no error correction capability), then go to step 12.
1. Assign K := K + 1. Assign m := 2t + L_i − K.
2. Compute the coefficient of degree m in r_i(x) = v_i(x)·S(x), i.e. D_i = Σ_{j=0}^{L_i} (v_i)_j · S_{m−j}, where S_{m−j} is the degree m−j coefficient of S(x).
3. If D_i = 0, then go to step 11.
4. If m < δ then
5.   Assign i := i + 1. Assign v_i(x) = C·v_{i−2}(x) − C·(D_{i−2}/D_{i−1})·x^{δ−m}·v_{i−1}(x) for arbitrary C. NOTE: If C = D_{i−1}/D_{i−2}, then use v_i(x) = D_{i−1}·T(x) − x^{δ−m}·v_{i−1}(x).
6.   Assign T(x) := v_{i−1}(x)/D_{i−1}.
7.   Assign δ := m and L_i := the degree of v_i, i.e. K − L_{i−1}.
8. else
9.   Assign v_i(x) := v_i(x) − D_i·T(x)·x^{m−δ}.
10. end if
11. If K < 2t and m ≥ t, then go to step 1.
12. Return v(x) = v_i(x).

Fourth, we can compute r_i(x) in the Extended Euclidean Algorithm using an iterative procedure similar to the Berlekamp-Massey algorithm. At iteration i of the Extended Euclidean Algorithm, let the leading terms of r_{i−2}(x) and r_{i−1}(x) be denoted by D_{i−2}·x^δ and D_{i−1}·x^m respectively. The symbol D_i was selected as the variable to denote the leading coefficient of r_i(x) because it corresponds to the discrepancy of the Berlekamp-Massey algorithm. If r_{i−2}(x) is multiplied by a constant C, then the leading term of q_i(x) is given by

    q_i^{(1)}(x) = (C·D_{i−2}/D_{i−1})·x^{δ−m}.    (7)

Here, the notation q_i^{(1)}(x) will be used to denote this as our first guess for q_i(x). The polynomial q_i^{(1)}(x) can be used to determine our first guess for r_i(x) through the formula

    r_i^{(1)}(x) = C·r_{i−2}(x) − q_i^{(1)}(x)·r_{i−1}(x).    (8)

If q_i(x) = q_i^{(1)}(x), then r_i^{(1)}(x) will have degree less than deg(r_{i−1}) and we can move on to division step i + 1 of the Extended Euclidean Algorithm.
Otherwise, the leading coefficient of r_i^{(1)}(x) is a discrepancy that we can use to adjust q_i(x). Suppose that we are given guess γ − 1 for q_i(x) and r_i(x), and we wish to determine guess γ for these polynomials. If the leading terms of r_i^{(γ−1)}(x) and r_{i−1}(x) are D_i·x^m and D_{i−1}·x^δ respectively, then observe that

    (D_i/D_{i−1})·x^{m−δ}·r_{i−1}(x)    (9)

is an expression that matches the leading term of r_i^{(γ−1)}(x). Hence, we can assign

    q_i^{(γ)}(x) = q_i^{(γ−1)}(x) + (D_i/D_{i−1})·x^{m−δ},    (10)
    r_i^{(γ)}(x) = C·r_{i−2}(x) − q_i^{(γ)}(x)·r_{i−1}(x)    (11)

to obtain a polynomial r_i^{(γ)}(x) with degree less than deg(r_i^{(γ−1)}). Thus, after a finite number of iterations we will obtain a quotient and remainder equal to q_i(x) and r_i(x).

The fifth observation is that it is only necessary to compute the leading terms of the intermediate r_i(x) polynomials in order to obtain the outputs u(x) and v(x). One can easily verify this claim by substituting

    r_{i−2}(x) = u_{i−2}(x)·a(x) + v_{i−2}(x)·b(x),
    r_{i−1}(x) = u_{i−1}(x)·a(x) + v_{i−1}(x)·b(x)

into the discussion given in the previous two paragraphs.

Observations four and five above can be applied to any application of the Extended Euclidean Algorithm. However, for algebraic decoding we restrict ourselves to the case where a(x) = x^{2t}. Furthermore, the only results that need to be computed to determine the recursion polynomial v(x) are the intermediate v_i(x) polynomials. These facts can be used to greatly simplify the Extended Euclidean Algorithm calculations. For any i ≥ 1, the polynomial r_i(x) will have degree less than 2t when a(x) = x^{2t}. Hence, if b(x) = Ŝ(x), then

    r_i(x) = v_i(x)·Ŝ(x) mod x^{2t}.    (12)

Furthermore, if the degree of v_i(x) is L_i, then the coefficient of degree m in r_i(x) is given by the convolution formula

    Σ_{j=0}^{L_i} (v_i)_j · Ŝ_{m−j}.    (13)

Here, (v_i)_j denotes the coefficient of degree j in v_i(x) and Ŝ_{m−j} denotes the coefficient of degree m−j in Ŝ(x).
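Formula (13) says that one coefficient of the product v_i(x)·Ŝ(x) costs only a length-(L_i + 1) convolution rather than a full polynomial multiplication. A small numeric check, with hypothetical v and syndrome values over F_7:

```python
# Check of (13): the degree-m coefficient of v(x)*S(x) equals a short convolution.
p = 7
v = [2, 0, 3]        # hypothetical v(x) = 2 + 3x^2 over F_7, so L = 2
S = [3, 6, 5, 3]     # hypothetical syndrome coefficients, low degree first
m = 3                # target coefficient degree

# full product v(x)*S(x), the expensive way
full = [0] * (len(v) + len(S) - 1)
for i, vi in enumerate(v):
    for j, sj in enumerate(S):
        full[i + j] = (full[i + j] + vi * sj) % p

# convolution formula (13): only L+1 multiplications for the one coefficient
conv = sum(v[j] * S[m - j] for j in range(len(v)) if 0 <= m - j < len(S)) % p

assert conv == full[m]
print(conv)
```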

Algorithm 2: Simplified algorithm for solving the Key Equation
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x)·x^{2t} + v(x)·S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign K = 0, v^{(0)}(x) = 1, L = 0, T(x) = 1, ψ = 1. If t = 0 (no error correction capability), then go to step 12.
1. Assign K := K + 1.
2. Assign D_K = S_{2t−K} + Σ_{j=0}^{L−1} (v^{(K−1)})_j · S_{2t+L−K−j}.
3. If D_K = 0, then v^{(K)}(x) = v^{(K−1)}(x); go to step 11.
4. If 2L < K then
5.   Assign v^{(K)}(x) = (ψ·D_K)·T(x) − x^{K−2L}·v^{(K−1)}(x).
6.   Assign T(x) := v^{(K−1)}(x) and ψ := (D_K)^{−1}.
7.   Assign L := K − L.
8. else
9.   Assign v^{(K)}(x) = v^{(K−1)}(x) − (ψ·D_K)·T(x)·x^{2L−K}.
10. end if
11. If K ≤ t + L, then go to step 1.
12. Return v(x) = v^{(K)}(x).

We are now ready to present an improved version of the Extended Euclidean Algorithm which can produce the same intermediate results as the Berlekamp-Massey algorithm. Each iteration of the algorithm begins by computing the leading coefficient of r_i(x) from v_i(x) and S(x) using (13), where m is the desired degree of the leading coefficient. If the coefficient is zero, then we continue to decrement m until a nonzero coefficient is encountered. Once the leading coefficient has been found, we compare the degree of r_i(x) to the degree of r_{i−1}(x), which is stored in variable δ. If m < δ, then we can proceed to the next division step of the Euclidean algorithm. Increment i and compute our first guess for v_i(x) using formulas (6) and (7), where C is any nonzero constant. We will check how good our guess was on the next iteration of the algorithm. In the meantime, it is also useful to save the normalized v_{i−1}(x) polynomial in the temporary variable T(x). Since we have obtained a new v_i(x), we also save the degree of the last remainder polynomial obtained in the variable δ, as well as the degree of v_i(x) in L_i.
It turns out that this is simply equal to K − L_{i−1}.

If m ≥ δ, then r_i(x) has degree greater than or equal to that of r_{i−1}(x). We need to adjust r_i(x) before proceeding to the next division step of the algorithm. By repeating the analysis used to produce (11), an updated guess for v_i(x) is given by

    v_i^{(γ)}(x) = C·v_{i−2}(x) − q_i^{(γ)}(x)·v_{i−1}(x),    (14)

where C is the constant chosen to produce v_i^{(1)}(x). However, we only need to update v_i(x) based on the new part of q_i(x). Hence, (14) simplifies to

    v_i^{(γ)}(x) = v_i^{(γ−1)}(x) − D_i·T_{i−1}(x)·x^{m−δ}    (15)

when T_{i−1}(x) = v_{i−1}(x)/D_{i−1} is also substituted into the formula. We will check how good this guess was on the next iteration of the algorithm.

Algorithm 1 implements the improved Extended Euclidean Algorithm when C = 1 is assigned during every iteration. It can also implement the Berlekamp-Massey algorithm by assigning C = D_{i−2}/D_{i−1} on each iteration and initializing the algorithm with v_{−1}(x) = 1, v_0(x) = 1. This enforces the property of the Berlekamp-Massey algorithm that the generating polynomial of the shift register associated with v_i(x) always has a constant term of 1. When the polynomial is reversed to translate to the Extended Euclidean Algorithm, this implies that every v_i(x) polynomial will be monic, and (6) simplifies to

    v_i^{(1)}(x) = D_{i−1}·T(x) − x^{δ−m}·v_{i−1}(x).    (16)

If we make the substitutions m = 2t + L_i − K, δ = 2t + L_{i−1} − K_0, and L_i = K_0 − L_{i−1} in Step 4 of Algorithm 1, we get the condition 2L_i < K, which corresponds to a similar condition appearing in the Berlekamp-Massey algorithm.

If the syndrome polynomial S(x) is set to Ŝ(x) (formula (3)), then the v_i(x) polynomials of Algorithm 1 will be the reversals of the Λ^{(K)}(x) polynomials found in the version of the Berlekamp-Massey algorithm presented in the conference paper, and the T(x) and discrepancy values of the two algorithms should similarly correspond.
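The iteration just described can be sketched compactly. The code below is not the paper's implementation: it follows an Algorithm 2-style update (discrepancy, stored T(x), inverse ψ) but works over the prime field F_7, so the recursion polynomials are not monic and the discrepancy must be computed with the full convolution Σ_{j=0}^{L} (v)_j S_{2t+L−K−j} instead of the monic split in the listing. The syndrome [3, 6, 5, 3] is a hypothetical example corresponding to a single error of value 5 at position 2 of a (6, 2, 5) code with α = 3.

```python
# Sketch of the simplified Key Equation iteration over F_7 (paper assumes GF(2^m)).
p = 7

def psub(a, b):
    """Subtract polynomial b from a (coefficient lists, low degree first) mod p."""
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def solve_key_equation(S, t):
    K, L = 0, 0
    v, T, psi = [1], [1], 1          # v^(0)(x) = 1, T(x) = 1, psi = 1
    while t > 0 and K <= t + L:      # step 11's loop test (and step 0's t = 0 exit)
        K += 1
        # step 2: discrepancy = coefficient of degree 2t+L-K in v(x)*S(x)
        D = sum(v[j] * S[2*t + L - K - j] for j in range(len(v))
                if 0 <= 2*t + L - K - j < len(S)) % p
        if D:
            c = psi * D % p
            if 2 * L < K:
                # step 5: v <- (psi*D)*T(x) - x^(K-2L)*v(x), then steps 6-7
                new_v = psub([c * x % p for x in T], [0] * (K - 2*L) + v)
                T, psi, L = v, pow(D, p - 2, p), K - L   # Fermat inverse in F_p
                v = new_v
            else:
                # step 9: v <- v - (psi*D)*x^(2L-K)*T(x)
                v = psub(v, [0] * (2*L - K) + [c * x % p for x in T])
    return v

v = solve_key_equation([3, 6, 5, 3], 2)
print(v)
# v is a scalar multiple of the error locator; it vanishes at 4 = 3^(-2) mod 7,
# locating the single error at position 2
assert sum(v[j] * pow(4, j, p) for j in range(len(v))) % p == 0
```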
Alternatively, one can call Algorithm 1 with syndrome polynomial S(x) (formula (2)), and the algorithm will produce Λ(x) (the Berlekamp-Massey error-locator polynomial) as the output.

II. IMPROVEMENTS TO THE NEW ALGORITHM

Algorithm 1 can be cleaned up to produce a more efficient algorithm. In particular, we can eliminate the variables i, δ, and m, making every computation in the algorithm a function of both K and the degree of the recursion polynomial at iteration K. We can also omit the normalization of the recursion polynomial in step 6 and save several multiplications. In order for the algorithm to still produce the correct result, we store the inverse of the leading coefficient of the recursion polynomial in the variable ψ and use this value in steps 5 and 9 of subsequent iterations. Step 11 can be simplified into the

Algorithm 3: An Inverse-Free Key Equation Solver
Input: The syndrome polynomial S(x) ∈ F[x] for some finite field F and integer t ≥ 0.
Output: The polynomial v(x) such that r(x) = u(x)·x^{2t} + v(x)·S(x) for some polynomials u(x) and r(x) where deg(r) < t.
0. Assign K = 0, v^{(0)}(x) = 1, L = 0, T(x) = 1, γ = 1. If t = 0 (no error correction capability), then go to step 12.
1. Assign K := K + 1.
2. Assign D_K = Σ_{j=0}^{L} (v^{(K−1)})_j · S_{2t+L−K−j}.
3. If D_K = 0, then v^{(K)}(x) = v^{(K−1)}(x); go to step 11.
4. If 2L < K then
5.   Assign v^{(K)}(x) = D_K·T(x) − γ·x^{K−2L}·v^{(K−1)}(x).
6.   Assign T(x) := v^{(K−1)}(x) and γ := D_K.
7.   Assign L := K − L.
8. else
9.   Assign v^{(K)}(x) = γ·v^{(K−1)}(x) − D_K·T(x)·x^{2L−K}.
10. end if
11. If K ≤ t + L, then go to step 1.
12. Return v(x) = v^{(K)}(x).

single inequality K ≤ t + L for return to Step 1. It appears that the assignment of C = D_{i−2}/D_{i−1} at each iteration is slightly more efficient than the assignment C = 1. Algorithm 2 presents the result of making these modifications to Algorithm 1. Each line of Algorithm 2 matches the same computation performed in Algorithm 1.

III. AN INVERSE-FREE KEY EQUATION SOLVER

In [4], Eastman demonstrated that it is possible to solve the Key Equation without computing any inverses. By applying the techniques given in the conference paper to this approach, we obtain Algorithm 3. Observe that the recursion polynomials are no longer monic. The tradeoff for avoiding the computation of the inverse of a finite field element is the multiplication of the recursion polynomial by a constant. If one is solving the Key Equation in a highly parallel environment where the coefficients of this polynomial can be distributed to many computation modules, then this tradeoff is advantageous. However, if space permits the storage of a lookup table for the inverse of each element of a finite field, then Algorithm 2 may be the better approach.
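A sketch of the inverse-free variant, under the same simplifying assumptions as before (prime field F_7 rather than the paper's GF(2^m), full-convolution discrepancy, hypothetical single-error syndrome): the last nonzero discrepancy γ scales the recursion polynomial instead of being inverted.

```python
# Inverse-free Key Equation solver in the style of Algorithm 3, over F_7.
p = 7

def psub(a, b):
    """Subtract polynomial b from a (coefficient lists, low degree first) mod p."""
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def solve_key_equation_inverse_free(S, t):
    K, L = 0, 0
    v, T, gamma = [1], [1], 1
    while t > 0 and K <= t + L:
        K += 1
        D = sum(v[j] * S[2*t + L - K - j] for j in range(len(v))
                if 0 <= 2*t + L - K - j < len(S)) % p
        if D:
            if 2 * L < K:
                # step 5: v <- D*T(x) - gamma*x^(K-2L)*v(x); no inverse computed
                new_v = psub([D * x % p for x in T],
                             [0] * (K - 2*L) + [gamma * x % p for x in v])
                T, gamma, L = v, D, K - L
                v = new_v
            else:
                # step 9: v <- gamma*v(x) - D*x^(2L-K)*T(x)
                v = psub([gamma * x % p for x in v],
                         [0] * (2*L - K) + [D * x % p for x in T])
    return v

v = solve_key_equation_inverse_free([3, 6, 5, 3], 2)
print(v)
# only a scalar multiple of the locator, but the roots are unchanged:
# it still vanishes at 4 = 3^(-2) mod 7, locating the error at position 2
assert sum(v[j] * pow(4, j, p) for j in range(len(v))) % p == 0
```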
The decision of whether Algorithm 2 or Algorithm 3 is better thus depends on the computing environment of an individual's application: in some environments the computation of an inverse is expensive and should be avoided, while in others the inverse can be determined by a lookup table, making Algorithm 2 more efficient.

IV. EFFICIENT IMPLEMENTATION OF THE SIMPLIFIED ALGORITHM

Through the use of pointers, we can implement the shifts and assignments involving the recursion polynomials at no cost. As part of the initialization of the algorithm, declare two arrays of size 2t = n − k. The pointer V will point to one of the arrays, which is initialized to v^{(0)}(x), and T will point to the other array, which is initialized with v^{(−1)}(x). Shifts are implemented by moving the pointer within the array, while the assignment of v^{(K−1)}(x) to T(x) in Algorithm 2 is implemented by swapping the roles of the two pointers. We can also generalize the algorithm to start at any step K if v^{(K)}(x) and v^{(K−1)}(x) are known. Pseudocode for this improved algorithm is given in Algorithm 4.

To match Algorithm 2, Algorithm 4 should be called with e = 0, S′(x) = Ŝ(x), P(x) = 1 (so that L = 0), K = 0, Q = t, and INV = 1. To match Algorithm 3, use the same parameters with the exception that INV should be set to 0. The role of e and the optional instructions will be discussed in later sections.

V. NEW ALGORITHM FOR DECODING REED-SOLOMON CODES WITH ERASURES

We will now apply the new Key Equation solvers to the decoding of Reed-Solomon codes with erasures. An erasure is a position where we suspect that a symbol was received incorrectly. Here, we regard an error as a mistake where we know neither its position nor its value. The t errors will be represented by the roots of W_1(x), and the e erasures at known locations {ε_1, ε_2, ..., ε_e} will be represented by the roots of W_2(x).
In other words,

    W(x) = W_1(x) · W_2(x),    (17)

where

    W_1(x) = (x − α^{i_1})(x − α^{i_2})⋯(x − α^{i_t}),    (18)
    W_2(x) = (x − α^{ε_1})(x − α^{ε_2})⋯(x − α^{ε_e}).    (19)

Since the locations of the erasures are known, it is possible to compute W_2(x) at the beginning of the decoding process. When erasures are present, it is possible to correct any number of errors and erasures that satisfy 2t + e ≤ n − k. In this case, observe that deg(W_1) = t ≤ ⌊(n − k − e)/2⌋, deg(W_2) = e, and deg(W) = t + e ≤ ⌊(n − k + e)/2⌋.
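The erasure locator (19) and its reversed form Λ_2(x) = Π(α^{ε_j}·x − 1), used by the decoding algorithms below, can be built directly from the known erasure positions. A sketch with a hypothetical erasure set over F_7, α = 3; note that Λ_2 equals the coefficient reversal of W_2 only up to the sign (−1)^e, which vanishes in the paper's characteristic-2 fields and, as here, whenever e is even.

```python
# Building W_2(x) and its reversed form Lambda_2(x) over F_7 with alpha = 3.
p, alpha, n, k = 7, 3, 6, 2
erasures = [0, 2]            # hypothetical known erasure positions
e = len(erasures)
t = (n - k - e) // 2         # remaining error-correction capability

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

W2, L2 = [1], [1]
for eps in erasures:
    a = pow(alpha, eps, p)
    W2 = pmul(W2, [(p - a) % p, 1])   # factor (x - alpha^eps)
    L2 = pmul(L2, [p - 1, a])         # factor (alpha^eps * x - 1)

assert 2 * t + e <= n - k             # the correction bound from the text
assert L2 == W2[::-1]                 # reversal (sign-free since e is even)
print(W2, L2)
```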

Algorithm 4: Efficient implementation of the Key Equation solver
Input: The (possibly modified) syndrome polynomial S′(x) ∈ F[x] for finite field F; initialization polynomial P(x) and optional second initialization polynomial Υ(x); starting step value K, stopping criterion Q, and integers t, e ≥ 0; inverse flag (INV) equal to 0 or 1.
Output: The polynomial v(x) such that r(x) = u(x)·x^{2t+e} + v(x)·S′(x) for some polynomials u(x) and r(x) where deg(r) < t. [Optional: and polynomial Ω(x).]
0. Allocate two arrays A and B, each of size 2t + e and initialized to all 0. [Optional: Allocate two arrays Y and Z, each of size 2t + e and initialized to all 0.]
   Set L to the degree of P(x); set L_T = L.
   Copy A[i] = P_i (the degree i coefficient of P) and B[i] = P_i for each i in 0 ≤ i ≤ L. [Optional: Copy Y[i] = Υ_i and Z[i] = Υ_i for each i in 0 ≤ i < 2t + e.]
   Set pointer V to the starting address of A and T to the starting address of B. [Optional: Set pointer Ω to the starting address of Y and Φ to the starting address of Z.]
   Assign ψ := 1 and γ := 1. If K − L ≥ Q, then go to step 12.
1. Assign K := K + 1.
2. Assign D := Σ_{j=0}^{L} V[j] · S′[2t + e + L − K − j]. NOTE: S′[i] is the degree i coefficient of S′(x) for all i ≥ 0.
3. If D = 0, then go to step 11.
4. Set C := ψ·D. If 2L < K then
5.   Assign T[j] := C·T[j] for each j in 0 ≤ j ≤ L_T. [Optional: Assign Φ[j] := C·Φ[j] for each j in 0 ≤ j < 2t + e.]
     Then assign T[j + K − 2L] := T[j + K − 2L] − γ·V[j] for each j in 0 ≤ j ≤ L. [Optional: and assign Φ[j + K − 2L] := Φ[j + K − 2L] − γ·Ω[j] for each j in 0 ≤ j < 2t + e + 2L − K.]
6.   Swap pointers T and V. [Optional: Swap pointers Φ and Ω.] Assign L_T = L. If INV = 0, assign γ := D; if INV = 1, assign ψ := D^{−1}.
7.   Assign L := K − L.
8. else
9.   If INV = 0, assign V[j] := γ·V[j] for each j in 0 ≤ j ≤ L. [Optional: If INV = 0, assign Ω[j] := γ·Ω[j] for each j in 0 ≤ j < 2t + e.]
     Assign V[j + 2L − K] := V[j + 2L − K] − C·T[j] for each j in 0 ≤ j ≤ L_T. [Optional: and Ω[j + 2L − K] := Ω[j + 2L − K] − C·Φ[j] for each j in 0 ≤ j < 2t + e − 2L + K.]
10. end if
11. If K − L < Q, then go to step 1.
12.
Return v(x) = {V[0], V[1], ..., V[L]}. [Optional: and Ω(x) = {Ω[0], Ω[1], ..., Ω[2t + e]}.]

The Key Equation for Reed-Solomon codes with erasures can be expressed by either

    W_1(x) · W_2(x) · Ŝ(x) ≡ Ω(x) mod x^{n−k}    (20)

or

    Λ_1(x) · Λ_2(x) · S(x) ≡ Ω(x) mod x^{n−k},    (21)

where Λ_2(x) is the reversal of (19), Ŝ(x) is given by (3), and S(x) is given by (2). Observe that n − k = 2t + e.

In [11], it was shown that the Key Equation (20) can be solved by initializing the Euclidean algorithm with the inputs x^{2t+e} and

    Ĥ(x) = W_2(x) · Ŝ(x) mod x^{2t+e}    (22)

while stopping the algorithm when we observe a remainder of degree less than t + e. The recursion polynomial W_1(x) is given by the value of v(x) at this point. An improved version of Forney's formula [5] based on L'Hôpital's rule is also presented in [11] to recover the error magnitudes.

Although most Reed-Solomon codes use generator polynomial (1), it is possible to form a Reed-Solomon code with generator polynomial

    g(x) = (x − α^b)(x − α^{b+1})⋯(x − α^{b+n−k−1})    (23)

for any b. The standard codes use b = 1, while the nonstandard codes use other values of b, particularly values near n/2. Details of the necessary adjustments to the decoding algorithm to handle nonstandard codes are also provided in [11].

In the case of standard Reed-Solomon codes (b = 1), Forney's formula is slightly simpler when we instead compute the solution to Key Equation (21). This equation can be solved using a technique similar to that given in [11]. In particular, we initialize the Euclidean algorithm with the inputs x^{2t+e} and

    H(x) = Λ_2(x) · S(x) mod x^{2t+e},    (24)

Algorithm 5: New algorithm for decoding a systematic Reed-Solomon code with erasures
Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ε_1, ε_2, ..., ε_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.
Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded with the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".
0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome S(x) = S_{n−k−1} x^{n−k−1} + ⋯ + S_1 x + S_0, where S_i = r(α^{b+i}).
2. Compute Λ_2(x) := (α^{ε_1} x − 1)(α^{ε_2} x − 1)⋯(α^{ε_e} x − 1). NOTE: If e = 0, then Λ_2(x) := 1.
3. Compute H(x) = (S(x)·Λ_2(x)) mod x^{2t+e} (ignore coefficients of degree 2t + e and higher).
4. Set S′(x) = H(x), P(x) = 1, (optional: Υ(x) = H(x)), K := 0, and Q := t.
5. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
6. Assign Λ_1(x) := V[L] x^L + V[L−1] x^{L−1} + ⋯ + V[1] x + V[0].
7. Determine the values {i_1, i_2, ..., i_τ} such that Λ_1(α^{−i_j}) = 0 for each 1 ≤ j ≤ τ. If τ ≠ L, then return "Decoding Failure".
8. If τ is equal to L then
9.   Compute Λ_1′(x) and Λ_2′(x), the formal derivatives of Λ_1(x) and Λ_2(x) respectively.
10.  Compute Ω(x) = Λ_1(x)·H(x) mod x^{2t+e} (or add the optional code of Algorithm 4).
11.  Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{−i_j}) / ((α^{i_j})^{1−b}·Λ_1′(α^{−i_j})·Λ_2(α^{−i_j})).
12.  For each 1 ≤ j ≤ e, change c_{ε_j} = r_{ε_j} + Ω(α^{−ε_j}) / ((α^{ε_j})^{1−b}·Λ_1(α^{−ε_j})·Λ_2′(α^{−ε_j})).
13. End if
14. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
15. Return m(x).

again stopping the algorithm when we observe a remainder of degree less than t + e.
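The flow of Algorithm 5 can be sketched end to end. The code below is a sketch under several simplifying assumptions, not the paper's implementation: a toy (6, 2, 5) code over the prime field F_7 with α = 3 (the paper assumes characteristic 2, so the additions in steps 11-12 become subtractions here and an explicit minus sign appears in Forney's formula); b = 1, so the (α^{i_j})^{1−b} factor disappears; root finding is by brute force instead of a Chien search; and the sketch returns the corrected codeword rather than extracting a systematic message.

```python
# End-to-end errors-and-erasures decoding in the style of Algorithm 5, over F_7.
p, alpha, n, k, b = 7, 3, 6, 2, 1
nk = n - k

def inv(a):   return pow(a, p - 2, p)                     # Fermat inverse in F_p
def pe(c, x): return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p
def deriv(c): return [(i * ci) % p for i, ci in enumerate(c)][1:]
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out
def psub(a, b):
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def key_solve(H, t, two_te):
    """Algorithm 2-style iteration on the modified syndrome H(x), stop at Q = t."""
    K, L, v, T, psi = 0, 0, [1], [1], 1
    while K - L < t:
        K += 1
        D = sum(v[j] * H[two_te + L - K - j] for j in range(len(v))
                if 0 <= two_te + L - K - j < len(H)) % p
        if D:
            c = psi * D % p
            if 2 * L < K:
                new_v = psub([c * x % p for x in T], [0] * (K - 2*L) + v)
                T, psi, L = v, inv(D), K - L
                v = new_v
            else:
                v = psub(v, [0] * (2*L - K) + [c * x % p for x in T])
    return v

def decode(r, erasures):
    e = len(erasures)
    t = (nk - e) // 2
    two_te = 2 * t + e
    S = [pe(r, pow(alpha, b + i, p)) for i in range(nk)]        # step 1
    L2 = [1]                                                    # step 2
    for eps in erasures:
        L2 = pmul(L2, [p - 1, pow(alpha, eps, p)])              # (alpha^eps*x - 1)
    H = pmul(S, L2)[:two_te]                                    # step 3
    v = key_solve(H, t, two_te)                                 # steps 4-6: Lambda_1 up to a scalar
    errs = [i for i in range(n)                                 # step 7: brute-force root search
            if i not in erasures and pe(v, inv(pow(alpha, i, p))) == 0]
    Om, dv, dL2 = pmul(v, H)[:two_te], deriv(v), deriv(L2)      # steps 9-10
    c = list(r)
    for i in errs:                                              # step 11 (odd-characteristic signs)
        X = inv(pow(alpha, i, p))
        c[i] = (c[i] - (p - pe(Om, X)) * inv(pe(L2, X) * pe(dv, X) % p)) % p
    for i in erasures:                                          # step 12
        X = inv(pow(alpha, i, p))
        c[i] = (c[i] - (p - pe(Om, X)) * inv(pe(v, X) * pe(dL2, X) % p)) % p
    return c

# received word: codeword [4,2,3,6,1,0] with position 0 erased and an error at position 2
print(decode([0, 2, 1, 6, 1, 0], [0]))
```

Because Ω(x) and Λ_1(x) are scaled by the same unknown constant when the solver skips normalization, the scalar cancels in Forney's formula and the corrected codeword comes out right anyway.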
When F is a finite field of characteristic 2, Forney's formula becomes

    E(α^{i_j}) = Ω(α^{−i_j}) / ((α^{i_j})^{1−b} · Λ_2(α^{−i_j}) · Λ_1′(α^{−i_j})),    (25)

where {i_1, i_2, ..., i_τ} are the error locations given by the reciprocals of the roots of Λ_1(x), and

    E(α^{ε_j}) = Ω(α^{−ε_j}) / ((α^{ε_j})^{1−b} · Λ_2′(α^{−ε_j}) · Λ_1(α^{−ε_j})),    (26)

where {ε_1, ε_2, ..., ε_e} are the known erasure locations.

We can repeat the analysis given in this paper to improve the algorithm given in [11] into one that is as efficient as the Berlekamp-Massey algorithm for correcting errors and erasures. The resulting decoder is given in Algorithm 5. The algorithm can be used to decode Reed-Solomon codes without erasures by simply setting e = 0; this essentially ignores those parts of the algorithm that involve Λ_2(x) and the erasure correction.

If the reader is programming on an architecture where some degree of parallelism exists (e.g. a VLSI implementation), then Steps 2 and 3 can be computed in parallel. In this case, H(x) is initialized to S(x) and the binomial multiplications used to build Λ_2(x) are mirrored to build H(x). If this parallelism is not present, then the standard convolution formula should be used to compute H(x). Similarly, parallel computing environments should benefit from including the optional code in Algorithm 4 to construct Ω(x). Otherwise, the standard convolution formula should be used to construct Ω(x) in Step 10 of Algorithm 5.

VI. OTHER EFFICIENT REED-SOLOMON DECODING ALGORITHMS

Alternatively, we can initialize the Euclidean algorithm with x^{2t+e} and Ŝ(x), with the recursion polynomial initialized to W_2(x). In this case, the output of the Key Equation solver is W(x) rather than W_1(x). This technique has the advantage of avoiding an expensive polynomial multiplication to compute Ω(x), at the cost of longer recursion polynomials when solving the Key Equation.
When the Key Equation is solved, we can reverse the polynomials to obtain the version of Forney's formula that is advantageous in the standard b = 1 case. By applying the analysis given in this paper to an algorithm given by Blahut in [2], we obtain Algorithm 6.

Algorithm 6: Blahut algorithm for Reed-Solomon decoding (modified to use syndrome Ŝ(x))

Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ε₁, ε₂, ..., ε_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.

Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded as the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".

0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome Ŝ(x) = Ŝ_{n−k−1} x^{n−k−1} + ⋯ + Ŝ₁x + Ŝ₀ where Ŝ_j = r(α^{n−k−j+b−1}).
2. Compute W₂(x) := (x − α^{ε₁})(x − α^{ε₂}) ⋯ (x − α^{ε_e}). NOTE: If e = 0, then W₂(x) := 1.
3. Set S′ to point to the degree-e coefficient of Ŝ(x), so that S′[i] is the degree-(i + e) coefficient of Ŝ(x) for all i ≥ 0.
4. Set P(x) := W₂(x).
5. Set K := 2e and Q := t + e.
6. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
7. Assign Λ(x) := V[0]x^L + V[1]x^{L−1} + ⋯ + V[L−1]x + V[L].
8. Determine the positions {i₁, i₂, ..., i_τ} such that Λ(α^{i_j}) = 0 and i_j ∉ {ε₁, ε₂, ..., ε_e} for each 1 ≤ j ≤ τ. NOTE: The roots of Λ(x) include both errors and erasures. If τ + e ≠ L, then return "Decoding Failure".
9. If (τ + e is equal to L) then
10. Compute Λ′(x), the formal derivative of Λ(x).
11. Compute S(x) = Ŝ₀x^{n−k−1} + Ŝ₁x^{n−k−2} + ⋯ + Ŝ_{n−k−2}x + Ŝ_{n−k−1}.
12. Compute Ω(x) = Λ(x) · S(x) mod x^{n−k}.
13. Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ′(α^{i_j})).
14. For each 1 ≤ j ≤ e, change c_{ε_j} = r_{ε_j} + Ω(α^{ε_j}) / ((α^{ε_j})^{1−b} Λ′(α^{ε_j})).
15. End if
16. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
17. Return m(x).
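Step 12 of Algorithm 6 is a plain truncated convolution: only the coefficients of Λ(x)·S(x) of degree below n − k are ever needed, so terms past the cutoff are never computed. A sketch of this product over GF(2^8) (the modulus 0x11D and the lowest-degree-first coefficient convention are illustrative assumptions, not choices made by the paper):

```python
def gf_mul(a, b, mod=0x11D):
    """Multiply two GF(2^8) elements (carry-less product reduced by mod)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= mod
    return r

def poly_mul_mod(p, q, trunc):
    """Coefficients of degree < trunc of p(x)*q(x), lowest degree first.
    This is the 'standard convolution formula' with the high terms skipped."""
    out = [0] * min(trunc, max(len(p) + len(q) - 1, 1))
    for i, pi in enumerate(p):
        if pi == 0:
            continue
        for j, qj in enumerate(q):
            if i + j >= trunc:
                break                      # never compute discarded terms
            out[i + j] ^= gf_mul(pi, qj)
    return out

# (1 + x)(1 + x + x^2) = 1 + x^3; truncated at degree < 2 this is just 1
assert poly_mul_mod([1, 1], [1, 1, 1], 2) == [1, 0]
```

One would call `poly_mul_mod(Lam, S, n - k)` for Ω(x) in Step 12; the same routine serves for H(x) = (S(x)·Λ₂(x)) mod x^{2t+e} when that product is not built in parallel.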
Since a clever programmer can implement polynomial reversals at no cost by transforming loop indices, these reversals do not slow down the performance of Algorithm 6. (The stopping condition Q in Algorithm 6 is e more than the stopping condition of Algorithm 5 because the initial values for K differ in the two algorithms.) Note that the optional code used to compute Ω(x) cannot be used with Algorithm 6 because H(x) is not explicitly computed in the algorithm.

Finally, consider an algorithm introduced by Truong, Jeng, and Cheng [13]. The present author has modified this algorithm so that Algorithm 4 is used to solve the Key Equation. The result is given as Algorithm 7. Several main features of the Truong-Jeng-Cheng algorithm are: Λ₂(x) is constructed in parallel with H(x) (the decode_flag = 0 work of their algorithm); Λ(x) is constructed in parallel with Ω(x) (the decode_flag = 1 work of their algorithm); and the use of the inverse-free algorithm (essentially Algorithm 3). To match these computations, Steps 2 and 3 should be computed in parallel if possible, the flag INV should be set to 0, and the optional code of Algorithm 4 should be turned on.

It should be noted that the t + e multiplications saved by the simplified Forney's formula are not significant when put in perspective with the total number of multiplications in the decoding process. Also, when b ≠ 1, the simplified Forney's formula cannot be used. In these cases, Algorithms 5-7 can be simplified to work exclusively with the polynomials Ŝ(x) and W(x). Formulas used to correct the errors and erasures in this case can be found in [11].

VII. COMPARISON OF THE THREE ALGORITHMS

It can be shown that the computational complexity of Algorithms 5, 6, and 7 is the same. However, there are several key differences among the three approaches which affect each algorithm's running time. First, Algorithms 5 and 7 use Λ₂(x) as the erasure locator polynomial whereas Algorithm 6 uses its reverse.
This feature is not important because any of the algorithms can easily be modified to use either form of the erasure locator polynomial. Second, Algorithms 5 and 7 allow the erasure locator polynomial and H(x) to be computed in parallel, whereas Algorithm 6 does not use H(x). As a consequence of this property, Ω(x) can be computed as a byproduct of the Key Equation solver in Algorithms 5 and 7, but not in Algorithm 6. This difference may be important for certain computer architectures. Finally, Algorithms 6 and 7 compute the error-erasure locator as one result whereas Algorithm 5 separates the error locator polynomial from the erasure locator polynomial. We will discuss the consequences of this difference later in this section.

Algorithm 7: Truong-Jeng-Cheng algorithm for decoding systematic Reed-Solomon codes with erasures

Input: The polynomial r(x) ∈ F[x] of degree less than n which represents the received vector of an (n, k, d) Reed-Solomon codeword transmitted through a noisy environment, where d = n − k + 1; the set {ε₁, ε₂, ..., ε_e} of erasure positions in the received vector; an integer b. Here, F is a finite field of characteristic 2.

Output: Either (1) a message polynomial m(x) ∈ F[x] of degree less than k which can be encoded as the Reed-Solomon codeword c(x) ∈ F[x], where c(x) and r(x) differ in no more than t + e positions (t is the error capacity, e is the number of erasures, and 2t + e ≤ n − k), or (2) "Decoding Failure".

0. Set t = ⌊(n − k − e)/2⌋.
1. Compute the syndrome S(x) = S_{n−k−1}x^{n−k−1} + ⋯ + S₁x + S₀ where S_i = r(α^{b+i}).
2. Compute Λ₂(x) := (α^{ε₁}x − 1)(α^{ε₂}x − 1) ⋯ (α^{ε_e}x − 1). NOTE: If e = 0, then Λ₂(x) := 1.
3. Compute H(x) = (S(x) · Λ₂(x)) mod x^{2t+e} (ignore coefficients of degree 2t + e and higher).
4. Set S′ to point to the degree-e coefficient of Ŝ(x), so that S′[i] is the degree-(i + e) coefficient of Ŝ(x) for all i ≥ 0.
5. Set P(x) := 1 (opt: Υ(x) := H(x)), K := 2e, and Q := t + e.
6. Call Algorithm 4 to solve the Key Equation with solution {V[0], V[1], ..., V[L]}.
7. Assign Λ(x) := V[0]x^L + V[1]x^{L−1} + ⋯ + V[L−1]x + V[L].
8. Determine the positions {i₁, i₂, ..., i_τ} such that Λ(α^{i_j}) = 0 and i_j ∉ {ε₁, ε₂, ..., ε_e} for each 1 ≤ j ≤ τ. NOTE: The roots of Λ(x) include both errors and erasures. If τ + e ≠ L, then return "Decoding Failure".
9. If (τ + e is equal to L) then
10. Compute Λ′(x), the formal derivative of Λ(x).
12. Compute Ω(x) = Λ(x) · S(x) mod x^{n−k} (or use the optional code of Algorithm 4).
13. Let c(x) = r(x). For each 1 ≤ j ≤ τ, change c_{i_j} = r_{i_j} + Ω(α^{i_j}) / ((α^{i_j})^{1−b} Λ′(α^{i_j})).
14. For each 1 ≤ j ≤ e, change c_{ε_j} = r_{ε_j} + Ω(α^{ε_j}) / ((α^{ε_j})^{1−b} Λ′(α^{ε_j})).
15. End if
16. Extract m(x) from the coefficients of c(x) of degree n − k and higher.
17. Return m(x).

Each of the three algorithms was implemented in the C programming language, and timing results were computed using a (255, 239) code for various numbers of errors and erasures. These timing results are summarized in Table 1 and are based on 10,000,000 decoding trials. To provide the fairest comparison between Algorithms 5 and 7, all three algorithms used the inverse-free Key Equation solver and computed Ω(x) as part of the Key Equation solver whenever possible.

From the timing results, we see that the algorithms all perform about the same in the errors-only case and the erasures-only case, with the best performance achieved by Algorithm 6. However, this is because Ω(x) is computed using the convolution formula rather than as a byproduct of the Key Equation solver. While the convolution formula requires slightly fewer multiplications, computing the error locator polynomial at the same time as Ω(x) is advantageous in a parallel computing environment (e.g. a VLSI implementation). If this is not important, then Algorithms 5 and 7 can be modified to use the convolution formula and achieve timing results similar to those of Algorithm 6 in these cases.

Algorithm 5 significantly outperforms the other two algorithms in the errors-and-erasures cases. This is because Algorithms 6 and 7 compute an errors-and-erasures locator polynomial as the output of the Key Equation solver. One must evaluate this polynomial at each nonzero element of the finite field to determine the error locations.
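The exhaustive evaluation just described, testing the locator at every nonzero field element, can be sketched as follows. This is an illustrative sketch only: GF(2^8) with the modulus 0x11D and generator α = 2 are assumptions, and a production decoder would use a Chien search or the FFT techniques of [6], [9] rather than a naive Horner loop.

```python
def gf_mul(a, b, mod=0x11D):
    """GF(2^8) multiplication via shift-and-xor reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= mod
    return r

def poly_eval(p, x):
    """Horner evaluation; p is lowest degree first."""
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

def find_positions(lam, alpha=2, n=255):
    """All i in [0, n) with lam(alpha^i) = 0, as in Step 8 of Algorithms 6 and 7."""
    hits, x = [], 1            # x tracks alpha^i
    for i in range(n):
        if poly_eval(lam, x) == 0:
            hits.append(i)
        x = gf_mul(x, alpha)
    return hits

# Locator with roots alpha^3 = 8 and alpha^7 = 128: (x + alpha^3)(x + alpha^7)
lam = [gf_mul(8, 128), 8 ^ 128, 1]
assert find_positions(lam) == [3, 7]
```

The loop above runs over all n nonzero elements regardless of the locator's degree, which is why evaluating the lower-degree error locator of Algorithm 5 is cheaper per evaluation than evaluating the combined errors-and-erasures locator.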
In contrast, Algorithm 5 evaluates a lower degree polynomial since the erasure locations are already known. (In the erasures-only case, there are no errors and it is not necessary to evaluate the errors-and-erasures polynomial at all.) It turns out that the determination of the error locations is one of the most computationally expensive steps of the entire decoding process. The timing results for this step can be significantly improved for certain finite fields by using a Fast Fourier Transform (FFT) algorithm described

in [6] and [9]. However, Algorithm 5 will still significantly outperform the other two algorithms because the running time of the FFT is proportional to the polynomial length and Algorithm 5 uses a smaller degree polynomial than the other two algorithms.

Table 1: Timing results of Algorithms 5, 6, and 7 on the (255, 239) Reed-Solomon code

                          Algorithm 5          Algorithm 6          Algorithm 7
  8 errors,  0 erasures   70.07 microseconds   68.49 microseconds   70.13 microseconds
  4 errors,  8 erasures   59.73 microseconds   82.45 microseconds   83.42 microseconds
  1 error,  14 erasures   54.98 microseconds   92.87 microseconds   93.15 microseconds
  0 errors, 16 erasures   54.14 microseconds   53.47 microseconds   53.71 microseconds

VIII. IMPROVEMENTS FOR BCH DECODING

It should also be mentioned that the Berlekamp-Massey algorithm can be improved for BCH codes with no erasures that have the property S_{2j} = (S_j)² for all j (see [1], [2], [14]), where S_j = e(α^j) for all j ≥ 0. When c(α^j) = 0, then S_j = r(α^j). Standard BCH codes (b = 1) are examples of codes with this property. Most presentations of these improved decoding techniques require a complete rewrite of the algorithm. The only corresponding adjustments needed for Algorithms 2-4 are: (1) K should be initialized to −1; (2) K should be incremented by 2 in Step 1; and (3) the inequality in Step 11 should be changed to K < 2t − 1, since the iteration when K = 2t will not modify v(x). It is also possible to similarly adapt Algorithm 6 for efficient BCH decoding by simply changing line 17 to increment by 2. However, this improvement cannot be applied to Algorithms 5 and 7 since they use Ŝ(x) for the syndrome polynomial and process the syndrome coefficients in reverse order. If one adapts Algorithms 5 and 7 to use S(x), then the result is equivalent to Algorithm 6 in the BCH code case.
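The property S_{2j} = (S_j)² is simply the Frobenius map of a characteristic-2 field applied to S_j = e(α^j): squaring e(α^j) squares each term and annihilates the cross terms, yielding e(α^{2j}), so half of the BCH syndromes come for free. For a binary error pattern this is easy to check numerically. In the sketch below, GF(2^8) with modulus 0x11D, generator α = 2, and the particular error positions are all made-up illustrative values.

```python
def gf_mul(a, b, mod=0x11D):
    """GF(2^8) multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= mod
    return r

def gf_pow(a, n):
    """Square-and-multiply exponentiation in GF(2^8)."""
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

def syndrome(positions, j, alpha=2):
    """S_j = e(alpha^j) for a binary error polynomial with the given support."""
    s = 0
    for p in positions:
        s ^= gf_pow(alpha, j * p)
    return s

errs = [5, 42, 97]                      # hypothetical binary error locations
for j in (1, 2, 3):
    sj = syndrome(errs, j)
    assert syndrome(errs, 2 * j) == gf_mul(sj, sj)   # S_2j = (S_j)^2
```

This is why, after the adjustments above, the Key Equation solver only needs to process the odd-indexed syndromes explicitly.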
For BCH codes with a small number of errors, it is possible to treat the coefficients of the syndrome polynomial as indeterminates in Algorithm 6 and obtain explicit formulas for the error locator polynomials. In the case of standard BCH codes, the formulas
$$W^{(1)}(x) = x + S_1 \tag{27}$$
$$W^{(2)}(x) = x^2 + S_1 x + \frac{S_3 + S_1^3}{S_1} \tag{28}$$
$$W^{(3)}(x) = x^3 + S_1 x^2 + \frac{S_1^2 S_3 + S_5}{S_1^3 + S_3}\,x + (S_1^3 + S_3) + S_1 \cdot \frac{S_1^2 S_3 + S_5}{S_1^3 + S_3} \tag{29}$$
can be used to locate errors in codes with up to three errors. Observe that these formulas coincide with the reversals of the formulas in Peterson's direct-solution algorithm given in [14].

IX. CONCLUDING REMARKS

In this paper, we presented a simplification of the algorithms given in [3], [7] and demonstrated that the Berlekamp-Massey algorithm is equivalent to the Extended Euclidean Algorithm. We then showed how the resulting efficient Key Equation solver can be used as a component in several efficient algorithms for decoding Reed-Solomon codes, both with and without erasures. The algorithms have been presented in a way that gives the reader several options for Reed-Solomon decoding. First, the choice of an inverse-free or more standard Key Equation solver is allowed in Algorithm 4. Second, the reader may choose to compute the error locator and Ω(x) simultaneously, or to compute Ω(x) separately using the standard convolution formula. For parallel computing environments (e.g. VLSI implementations), it is more advantageous to select the first of the two options. For implementation on a standard workstation, the second option should be selected. The algorithms can also be adapted to work with BCH codes, and a different set of tradeoffs might be made in this case. We ultimately considered three efficient Reed-Solomon decoding algorithms. All three algorithms performed about the same in the errors-only case and the erasures-only case.
However, the new decoding method summarized in Algorithm 5 was demonstrated to be much more efficient than the other two algorithms in the errors-and-erasures case. This is a consequence of the fact that smaller degree polynomials are used to find the error locations in the Reed-Solomon decoding process.

REFERENCES

[1] E.R. Berlekamp. Algebraic Coding Theory, McGraw-Hill (1968).
[2] Richard E. Blahut. Algebraic Codes for Data Transmission, Cambridge (2002).
[3] J.-L. Dornstetter. On the equivalence between Berlekamp's and Euclid's algorithms. IEEE Trans. Inform. Theory, IT-33: 428-431, 1987.
[4] W.L. Eastman. Euclideanization of the Berlekamp-Massey algorithm. Proceedings of the Tactical Communications Conference, Vol. 1, 1988, pp. 295-303.
[5] G.D. Forney. On decoding BCH codes. IEEE Trans. Inform. Theory, IT-11: 549-557, 1965.
[6] Shuhong Gao and Todd D. Mateer. Additive fast Fourier transforms over finite fields. IEEE Trans. Inform. Theory, 56(12): 6265-6272, 2010.
[7] Agnes Heydtmann and Jorn Jensen. On the equivalence of the Berlekamp-Massey and the Euclidean algorithms for decoding. IEEE Trans. Inform. Theory, 46(7): 2614-2624, 2000.
[8] J.L. Massey. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory, IT-15: 122-127, 1969.
[9] Todd D. Mateer. Fast Fourier Transform Algorithms with Applications. PhD Dissertation. Available at: http://cr.yp.to/f2mult/mateerthesis.pdf.
[10] Todd D. Mateer. On the equivalence of the Berlekamp-Massey and Euclidean algorithms for algebraic decoding. Proceedings of the 2011 Canadian Workshop on Information Theory (IEEE), Kelowna, British Columbia, Canada, May 2011.
[11] Todd D. Mateer. Simple algorithms for decoding Reed-Solomon codes. To appear in Designs, Codes, and Cryptography. Published online by Springer on July 18, 2012.

[12] Y. Sugiyama, M. Kasahara, S. Hirasawa, and T. Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27: 87-99, 1975.
[13] T.-K. Truong, J.-H. Jeng, and T.C. Cheng. A new decoding algorithm for correcting both erasures and errors of Reed-Solomon codes. IEEE Trans. Communications, 51(3): 381-388, 2003.
[14] Stephen B. Wicker. Error Control Systems for Digital Communication and Storage, Prentice Hall (1995).

Dr. Todd Mateer received his PhD in Mathematical Sciences from Clemson University in 2008 under the direction of Dr. Shuhong Gao. His dissertation discusses Fast Fourier Transform algorithms and their applications in signal analysis, computer algebra, and coding theory. He was the first student to earn two undergraduate degrees from Grove City College, where he received both a B.S.E.E. degree and a B.S. degree in Mathematics / Computer Science. In 1999, he earned a Master's degree from Clemson University under the direction of Dr. Joel Brawley, where he conducted a mathematical analysis of video poker in South Carolina and mathematically proved that one can profit from certain casino games such as video poker over a long period of time with the appropriate strategy. In 2001, he joined Howard Community College, where he currently serves as Master Adjunct Instructor. During the summers, Dr. Mateer teaches elementary classical cryptography, the mathematics of casino games, and the drawbacks of gambling at the Math and Related Sciences camps held at the University of Maryland Eastern Shore. He also does work for the Department of Defense, has four children, and is an amateur magician. His magic tricks teach basic concepts of coding theory and computer science.