General error locator polynomials for nth-root codes


Marta Giorgetti (1) and Massimiliano Sala (2)

(1) Department of Mathematics, University of Milano, Italy
(2) Boole Centre for Research in Informatics, UCC Cork, Ireland

Abstract. All interesting linear codes (i.e., those with d ≥ 2) form a class, called nth-root codes. We investigate the decoding of an interesting subclass, proving the existence of general error locator polynomials.

1 Introduction

We introduce a class of linear codes, called n-th root codes, that essentially includes all linear codes (as soon as their minimum distance is at least two). These codes are defined by means of their parity-check matrix, whose expression generalizes the one for cyclic codes. In [8] the notion of general error locator polynomials for correctable linear codes was introduced. These are multivariate polynomials whose specializations (to a correctable syndrome s) give the error locations corresponding to s. We exhibit a subclass of linear codes for which general error locator polynomials do exist. To do so, we construct an ideal involving the polynomials defining the parity-check matrix of a (proper maximal zerofree) n-th root code. We investigate properties of this ideal and show that its totally reduced Gröbner basis contains a (unique) general error locator polynomial. In the same spirit, we show the existence of general locator polynomials of type ν, handling both errors and erasures, for such codes.

2 Preliminaries

We denote by F_q the finite field with q elements, where q is a power of a prime, and by n a natural number such that q and n are relatively prime. Let k, N ∈ N be such that 1 ≤ k ≤ N ≤ n + 1. We refer to the vector space of dimension N over F_q as (F_q)^N. The zeros of the polynomial x^n - 1, which are called n-th roots of unity, lie in an extension field F_{q^m} and in no smaller field. We denote the set of all these roots by R_n. From now on, q, n, k, N and m are understood. All the following statements and definitions can be found in [5] and [4], unless otherwise stated.
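As a small illustration of these preliminaries (ours, not part of the paper), the following Python sketch computes the extension degree m as the multiplicative order of q modulo n and enumerates R_n inside F_{q^m}, for q = 2 and n = 5. The bit-level representation of F_16 and the helper functions are our own choices; the field is realized with the primitive polynomial z^4 + z + 1, the same field that reappears in Example 3 below.

    # Illustrative sketch (not from the paper): q = 2, n = 5.
    # GF(16) elements are 4-bit integers; gamma is the class of z,
    # a primitive element with minimal polynomial z^4 + z + 1.
    MODULUS = 0b10011  # z^4 + z + 1

    def gf16_mul(a, b):
        """Multiply in GF(16): carry-less product reduced mod z^4 + z + 1."""
        p = 0
        while b:
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0b10000:
                a ^= MODULUS
        return p

    def gf16_pow(a, e):
        r = 1
        for _ in range(e):
            r = gf16_mul(r, a)
        return r

    q, n = 2, 5
    m = 1
    while (q**m - 1) % n != 0:   # smallest m with n | q^m - 1
        m += 1
    print("m =", m)              # 4: the 5th roots of unity live in F_16 and in no smaller field

    gamma = 0b0010                              # gamma = z
    alpha = gf16_pow(gamma, (q**m - 1) // n)    # alpha = gamma^3, a primitive n-th root of unity
    R_n = [gf16_pow(alpha, i) for i in range(1, n + 1)]
    assert len(set(R_n)) == n and gf16_pow(alpha, n) == 1
    print("R_5 =", R_n)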

2.1 General error locator polynomial

Let C be an [N, k, d] code over F_q, let t be its correction capability and let H be a parity-check matrix over F_{q^m}. Let d ≥ 3. The syndromes lie in (F_{q^m})^{N-k} and form a vector space of dimension (N - k) over F_q. Let α be a primitive N-th root of unity in F_{q^m}, so that n = N. Let r = N - k.

Definition 1 ([8]). Let L_C be a polynomial in F_q[X, z], where X = (x_1, ..., x_r). Then L_C is a general error locator polynomial of C if:
1. L_C(X, z) = z^t + a_{t-1} z^{t-1} + ... + a_0, with a_j ∈ F_q[X] for 0 ≤ j ≤ t - 1, that is, L_C is a monic polynomial of degree t with respect to the variable z and its coefficients lie in F_q[X];
2. given a syndrome s = (s_1, ..., s_r) ∈ (F_{q^m})^{N-k}, corresponding to an error vector of weight µ ≤ t and error locations {k_1, ..., k_µ}, if we evaluate the X variables at s, then the roots of L_C(s, z) are {α^{k_1}, ..., α^{k_µ}, 0, ..., 0}, where 0 occurs t - µ times.

Definition 2 ([8]). Let L be a polynomial in F_q[X, W, z], with X = (x_1, ..., x_r) and W = (w_ν, ..., w_1), where ν ≥ 1 is the number of erasures that occurred. Then L is a general error locator polynomial of type ν of C if:
1. L(X, W, z) = z^τ + a_{τ-1} z^{τ-1} + ... + a_0, with a_j ∈ F_q[X, W] for any 0 ≤ j ≤ τ - 1, that is, L is a monic polynomial of degree τ in the variable z with coefficients in F_q[X, W];
2. for any syndrome s = (s_1, ..., s_r) and any erasure location vector w = (w_1, ..., w_ν), corresponding to an error of weight µ ≤ τ and error locations {k_1, ..., k_µ}, if we evaluate the X variables at s and the W variables at w, then the roots of L(s, w, z) are {α^{k_1}, ..., α^{k_µ}, 0, ..., 0}, where 0 occurs τ - µ times.

If such an L exists for a given code C, then we denote the polynomial by L^ν_C, and we denote by L^0_C the polynomial L_C. For a code C, the possession of a general locator polynomial L^ν_C of type ν for all 0 ≤ ν < d might be a stronger condition than the possession of a general error locator polynomial L_C, but in [8] the authors prove that any cyclic code admits a general locator polynomial of type ν for 0 ≤ ν < d.
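As a purely illustrative reading of Definition 1 (our own example, not taken from [8]), suppose t = 2 and let s be the syndrome of an error of weight µ = 1 at position k_1. Condition 2 forces the specialization L_C(s, z) to have roots {α^{k_1}, 0}, i.e.

    L_C(s, z) = z^2 + a_1(s) z + a_0(s) = (z - α^{k_1})(z - 0) = z^2 - α^{k_1} z,

so that a_0(s) = 0 and a_1(s) = -α^{k_1}. A decoder therefore reads the number of errors off the number of nonzero roots, and the error positions off the exponents of those roots: here the single nonzero root α^{k_1} reveals the position k_1.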

2.2 Definition and first properties of nth-root codes

Definition 3. Let L ⊆ R_n ∪ {0}, L = {l_1, ..., l_N}, and let P = {g_1(x), ..., g_r(x)} ⊆ F_{q^m}[x] be such that for every i = 1, ..., N there is at least one j = 1, ..., r with g_j(l_i) ≠ 0. We denote by C = Ω(q, n, q^m, L, P) the linear code over F_q having

    H = ( g_1(l_1)  ...  g_1(l_N) )     ( g_1(L) )
        ( g_2(l_1)  ...  g_2(l_N) )  =  ( g_2(L) )
        (    ...            ...   )     (  ...  )
        ( g_r(l_1)  ...  g_r(l_N) )     ( g_r(L) )

as its parity-check matrix. We say that C is an nth-root code.

Remark 1. The code C = Ω(q, n, q^m, L, P) is linear over F_q, its length is N = |L| and its distance d is at least 2, because no column of H consists only of zeros. If 0 ∈ L we assume l_N = 0 (any re-ordering of L gives an equivalent code).

Definition 4. Let C = Ω(q, n, q^m, L, P) be an nth-root code. If R_n \ L = ∅, we say that C is maximal. If P ⊆ F_q[x], we say that C is proper. If 0 ∉ L, we say that C is zerofree, and non-zerofree otherwise.

Since any function from F_{q^m} to itself can be expressed as a polynomial, we can also accept in P rational functions of type f/g, with f, g ∈ F_{q^m}[x] and g(c) ≠ 0 for every c ∈ F_{q^m}. We do so from now on, without further comment.

Example 1. Let q = 2, n = 7, q^m = 8, L = F_{2^3} = ⟨β⟩ ∪ {0}, where the minimal polynomial of β is z^3 + z + 1, and let P = { g_1(x) = x^2 + x + 1, g_2(x) = x / (x^2 + x + 1) }. The seven 7th roots of unity are exactly the nonzero elements of F_8, so that R_7 ∪ {0} = F_8. The nth-root code C = Ω(2, 7, 8, F_8, {g_1, g_2}) is non-zerofree (0 ∈ L) and maximal, and it is easy to see that C is an [8, 2, 5] code.

Remark 2. Different values of n can be used to define the same nth-root code. For example, to define an nth-root code of length N = 5, we can use the five 5th roots of unity or five of the 7th roots of unity.

Proposition 1. Let C be a linear code over F_q of length N with d ≥ 2. Then C is an nth-root code for any n ≥ N - 1 such that (n, q) = 1. In particular:
1. if n = N, then C can be presented as a maximal zerofree nth-root code;
2. if n = N - 1, then C is maximal non-zerofree.

Corollary 1. Let C be a linear code. Then C is an nth-root code if and only if d ≥ 2.
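To illustrate Definition 3 and Remark 1, the following Python sketch (ours, not part of the paper) builds the parity-check matrix H = (g_j(l_i)) of the code of Example 1 and checks that no column of H is zero, so that d ≥ 2. The 3-bit representation of F_8 and the helper functions are our own choices; β is the class of z with z^3 + z + 1 = 0, and the inverse needed for g_2 = x/(x^2 + x + 1) is computed as a^6 = a^(-1) for a ≠ 0.

    # Sketch for Example 1 (q = 2, n = 7).  Not part of the original paper.
    MODULUS = 0b1011  # z^3 + z + 1

    def gf8_mul(a, b):
        p = 0
        while b:
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0b1000:
                a ^= MODULUS
        return p

    def gf8_pow(a, e):
        r = 1
        for _ in range(e):
            r = gf8_mul(r, a)
        return r

    def gf8_inv(a):
        assert a != 0
        return gf8_pow(a, 6)            # a^7 = 1 for a != 0, hence a^6 = a^(-1)

    def g1(x):                          # g_1(x) = x^2 + x + 1
        return gf8_mul(x, x) ^ x ^ 1

    def g2(x):                          # g_2(x) = x / (x^2 + x + 1)
        return gf8_mul(x, gf8_inv(g1(x)))   # g1 never vanishes on F_8

    L = list(range(8))                  # L = F_8, so the code is non-zerofree
    H = [[g1(l) for l in L],
         [g2(l) for l in L]]
    # Remark 1: no column of H is all zero, hence d >= 2.
    assert all(any(H[j][i] != 0 for j in range(2)) for i in range(8))
    print(H)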

3 General error locator polynomial

3.1 Application of the Gianni-Kalkbrener Theorem

Let K be a (not necessarily finite) field. Assume G is a Gröbner basis of a 0-dimensional ideal J ⊆ K[S, A, T], where S = (s_1, ..., s_H), A = (a_1, ..., a_L) and T = (t_1, ..., t_M), with respect to an order with S < A < T and with the A variables lexicographically ordered by a_1 > a_2 > ... > a_L. Then the elements of the set G ∩ (K[S, A] \ K[S]) can be collected into blocks {G_i}_{1 ≤ i ≤ L}:

    G_1 = { g_{1,1}(S, a_L, ..., a_1), ..., g_{1,l_1}(S, a_L, ..., a_1) },
    G_2 = { g_{2,1}(S, a_L, ..., a_2), ..., g_{2,l_2}(S, a_L, ..., a_2) },
    ...
    G_L = { g_{L,1}(S, a_L), ..., g_{L,l_L}(S, a_L) },

in such a way that:
- for any i, G_i ⊆ K[S, a_L, ..., a_{i+1}][a_i] \ K[S, a_L, ..., a_{i+1}];
- the ideal generated by ∪_{j>i} G_j is actually the i-th elimination ideal J_i, J_i = J ∩ K[S, a_L, ..., a_i].

The Gianni-Kalkbrener Theorem [3] ensures that G_i ≠ ∅ for 1 ≤ i ≤ L. Clearly any G_i, 1 ≤ i ≤ L, can be decomposed into blocks of polynomials according to their degree with respect to the variable a_i: G_i = ∪_{δ=1}^{i} G_{iδ}, but some G_{iδ} could be empty. In this way, if g ∈ G_{iδ}, we have

    g ∈ K[S, a_L, ..., a_{i+1}][a_i] \ K[S, a_L, ..., a_{i+1}],   deg_{a_i}(g) = δ,

i.e. g = b a_i^δ + ..., where b = Lp(g) ∈ K[S, a_L, ..., a_{i+1}]. Let N_{iδ} be the number of elements of G_{iδ}. We name the elements of the set G_{iδ} = {g_{iδj}, 1 ≤ j ≤ N_{iδ}} according to their order: h < j ⟺ Lt(g_{iδh}) < Lt(g_{iδj}).

Remark 3. We can summarize our description as follows. Given any two polynomials g_{lDh} ∈ G_{lD} and g_{iδj} ∈ G_{iδ}, we have

    g_{lDh} < g_{iδj}  ⟺  Lt(g_{lDh}) < Lt(g_{iδj})  ⟺  l > i,  or  (l = i and D < δ),  or  (l = i, D = δ and h < j).

Since J is 0-dimensional, we can clearly decompose the varieties of its elimination ideals as follows. Let J_S = J ∩ K[S], J_{S ∪ {a_L}} = J ∩ K[S, a_L], ..., J_{S ∪ {a_L, ..., a_1}} = J ∩ K[S, a_L, ..., a_1] = J ∩ K[S, A]. We have:

1) V(J_S) = ∪_{j=1}^{λ(L)} Σ^L_j, with
   Σ^L_j = { (s_1, ..., s_H) ∈ V(J_S) | there are exactly j values ā^(1)_L, ..., ā^(j)_L such that (s_1, ..., s_H, ā^(i)_L) ∈ V(J_{S ∪ {a_L}}), 1 ≤ i ≤ j };

2) V(J_{S ∪ {a_L}}) = ∪_{j=1}^{λ(L-1)} Σ^{L-1}_j, with
   Σ^{L-1}_j = { (s_1, ..., s_H, a_L) ∈ V(J_{S ∪ {a_L}}) | there are exactly j values ā^(1)_{L-1}, ..., ā^(j)_{L-1} such that (s_1, ..., s_H, a_L, ā^(i)_{L-1}) ∈ V(J_{S ∪ {a_L, a_{L-1}}}), 1 ≤ i ≤ j };

3) V(J_{S ∪ {a_L, ..., a_h}}) = ∪_{j=1}^{λ(h-1)} Σ^{h-1}_j, for 2 ≤ h ≤ L, with
   Σ^{h-1}_j = { (s_1, ..., s_H, a_L, ..., a_h) ∈ V(J_{S ∪ {a_L, ..., a_h}}) | there are exactly j values ā^(1)_{h-1}, ..., ā^(j)_{h-1} such that (s_1, ..., s_H, a_L, ..., a_h, ā^(i)_{h-1}) ∈ V(J_{S ∪ {a_L, ..., a_{h-1}}}), 1 ≤ i ≤ j }.        (1)

Note that, for a general 0-dimensional ideal J, nothing can be said about λ(h), except that λ(h) ≥ 1 for any 2 ≤ h ≤ L. We now introduce a class of ideals that is very useful in our context.

Definition 5. With the above notation, we say that J is stratified if:
1. λ(h) = h for 1 ≤ h ≤ L, and
2. Σ^h_j ≠ ∅ for 1 ≤ h ≤ L, 1 ≤ j ≤ h.

Example 2. Let S = {s_1}, A = {a_1, a_2} (so that L = 2) and T = {t_1}, with S < A < T and a_1 > a_2. Let K = C and let J be the ideal of C[s_1, a_1, a_2, t_1] generated by

    { s_1^2 - s_1,  a_2 - 3,  a_1 s_1 - 2 s_1,  a_1^2 + a_1 s_1 - 3 a_1 - 2 s_1 + 2,  t_1 }.

The variety of J is V(J) = {(0, 1, 3, 0), (0, 2, 3, 0), (1, 2, 3, 0)}. Let J_S = J ∩ C[S] = ⟨s_1^2 - s_1⟩; then V(J_S) = ∪_{j=1}^{λ(2)} Σ^2_j = {0, 1}. Clearly {1} = Σ^2_1 and {0} = Σ^2_2, which means λ(2) = 2, satisfying condition 1 in Definition 5 for h = 1, 2. The variety V(J_{S ∪ {a_2}}) = ∪_{j=1}^{λ(1)} Σ^1_j = {(0, 1), (0, 2), (1, 2)}. Clearly {(0, 1), (0, 2), (1, 2)} = Σ^1_1, which means λ(L-1) = λ(1) = 1, satisfying condition 1; moreover, all the Σ^i_j, 1 ≤ j ≤ i ≤ 2, are non-empty, so that the ideal J is stratified. See Figure 1 (A) and (B).

[Fig. 1. Varieties in a stratified case (panels (A) and (B)).]
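The data of Example 2 can be double-checked mechanically. The following sympy sketch (ours, not part of the paper) verifies that the three listed points lie in V(J) and that the elimination ideal J ∩ C[s_1] is generated by s_1^2 - s_1, using a lex Gröbner basis with t_1 > a_1 > a_2 > s_1, an order of the required shape in which the S variable is smallest. The symbol names are our own.

    # Sketch verifying Example 2 with sympy (not part of the paper).
    from sympy import symbols, groebner

    s1, a1, a2, t1 = symbols('s1 a1 a2 t1')
    gens = [s1**2 - s1,
            a2 - 3,
            a1*s1 - 2*s1,
            a1**2 + a1*s1 - 3*a1 - 2*s1 + 2,
            t1]

    # The three points of V(J), written as (s1, a1, a2, t1).
    points = [(0, 1, 3, 0), (0, 2, 3, 0), (1, 2, 3, 0)]
    for pt in points:
        subs = dict(zip((s1, a1, a2, t1), pt))
        assert all(g.subs(subs) == 0 for g in gens)

    # Lex Groebner basis with t1 > a1 > a2 > s1: an elimination order
    # in which the "syndrome" variable s1 is the smallest.
    G = groebner(gens, t1, a1, a2, s1, order='lex')

    # The part of G involving only s1 generates J with the other variables
    # eliminated, i.e. J ∩ C[s1] = <s1^2 - s1>.
    only_s1 = [g for g in G.exprs if g.free_symbols <= {s1}]
    print(only_s1)   # expected: [s1**2 - s1]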

Theorem 1. Let J be a radical stratified ideal. Then, for 1 ≤ i ≤ L, G_i = ∪_{δ=1}^{i} G_{iδ}, with G_{iδ} ≠ ∅ for 1 ≤ δ ≤ i and 1 ≤ i ≤ L. Moreover, for 1 ≤ i ≤ L, G_{ii} = {g_{ii1}}, i.e. only one polynomial exists in G_i with degree i w.r.t. a_i, and for 1 ≤ i ≤ L we have Lp(g_{ii1}) = 1 and Lt(g_{ii1}) = a_i^i.

3.2 Ideals for the decoding of nth-root codes

Definition 6. Let C = Ω(q, n, q^m, L, P) be a zerofree maximal nth-root code with correction capability t. We denote by J_{C,t} the following ideal of F_{q^m}[x_1, ..., x_r, z_t, ..., z_1, y_1, ..., y_t]:

    J_{C,t} = ⟨ { Σ_{h=1}^{t} y_h g_s(z_h) - x_s }_{1 ≤ s ≤ r},  { y_j^{q-1} - 1 }_{1 ≤ j ≤ t},
                { z_i z_j p(z_i, z_j) }_{i ≠ j, 1 ≤ i,j ≤ t},  { z_j^{n+1} - z_j }_{1 ≤ j ≤ t} ⟩,        (2)

where p(x, y) = Σ_{h=0}^{n-1} x^h y^{n-1-h}. We denote by G_{C,t} the totally reduced Gröbner basis of J_{C,t} w.r.t. >. Note that the variables x_1, ..., x_r represent correctable syndromes, z_1, ..., z_t error locations and y_1, ..., y_t error values.

Lemma 1. The ideal J_{C,t} is radical and stratified.

Applying Theorem 1 to J_{C,t} (together with Lemma 1), we have the following proposition.

Proposition 2. In the Gröbner basis G_{C,t} there exists a unique polynomial of the form

    g = z_t^t + a_{t-1} z_t^{t-1} + ... + a_0,   with a_i ∈ F_{q^m}[X].

We now state the main result of this paper.

Theorem 2. If the code C is a proper maximal zerofree nth-root code with correction capability t, then C possesses a general error locator polynomial.

Since cyclic codes are proper maximal zerofree nth-root codes, we obtain, as a special case of Theorem 2, that cyclic codes have general error locator polynomials (Theorem 6.9 in [8]). From now on we often shorten "general error locator polynomial" to "OS polynomial". In the next examples we show two methods to compute OS polynomials. The former is suggested by Proposition 2. In the latter we assume we know that a general error locator polynomial exists for the code and hence we apply Definition 1 directly.

Example 3. Let G and H be the following binary matrices:

    G = ( 1 1 1 0 0 )        H = ( 1 0 1 0 1 )
        ( 0 0 1 1 1 ),           ( 0 1 1 0 1 )
                                 ( 0 0 0 1 1 ).

Let C be the [5, 2, 3] linear code over F_2 with G as a generator matrix and H as a parity-check matrix. Note that t = 1. Let γ be a primitive element of F_16 with minimal polynomial z^4 + z + 1. Then C is the zerofree maximal nth-root code Ω(2, 5, 2^4, R_5, P), where

    P = { g_1(x) = γ^4 x^4 + γ^8 x^3 + γ^2 x^2 + γ x + 1,
          g_2(x) = γ^10 x^4 + γ^5 x^3 + γ^5 x^2 + γ^10 x + 1,
          g_3(x) = γ^11 x^4 + γ^7 x^3 + γ^13 x^2 + γ^14 x }.

We construct the ideal J_{C,t} ⊆ F_16[x_1, x_2, x_3, z_1] = F_16[X, Z] as follows:

    J_{C,1} = ⟨ { g_h(z_1) - x_h }_{1 ≤ h ≤ 3},  z_1^n - 1 ⟩.

If we calculate the Gröbner basis G_{C,t} = G_X ∪ G_{X,z_1} w.r.t. the lexicographical order induced by x_1 < x_2 < x_3 < z_1, we obtain

    G_X = { x_3^2 + x_3,  x_2^2 + x_2,  x_1 x_3 + x_2 x_3,
            x_1 x_2 + x_1 + x_2 x_3 + x_2 + x_3 + 1,  x_1^2 + x_1 }

and

    G_{X,z_1} = { g_{111} = z_1 + (γ^2 + γ) x_1 + (γ^3 + γ) x_2 x_3 + γ x_2 + x_3 + (γ^3 + γ^2 + γ) }.

In G_{X,z_1} there is only one polynomial in z_1, of degree 1, as we expected, namely g_{111}, and it must be an OS polynomial for C thanks to Proposition 2.

Example 4. Let C be the code of Example 3. Another way to compute an OS polynomial is to present the code C through the one-row parity-check matrix H' = (γ^6, γ^2, γ^3, γ^14, 1), so that C = Ω(2, 5, 2^4, R_5, P'), where P' = { γ^12 x^4 + γ^11 x^3 + x^2 + γ^14 x + γ^3 }. If we calculate the Gröbner basis G' w.r.t. the lexicographical order induced by x_1 < z_1, its elements are

    G'_{x_1} = { x_1^5 + γ^3 x_1^4 + (γ^3 + γ) x_1^2 + γ^2 x_1 + (γ^2 + γ + 1) },
    G'_{x_1, z_1} = { z_1 + x_1^3 }.

There is only one polynomial in z_1, of degree 1, as we expected, and it is another OS polynomial for C.

Example 5. Another way to compute OS polynomials for a code is to suppose that those polynomials exist. Let C be the code studied in Example 3. We assume that its parity-check matrix is a single row, H = (e_1, e_2, e_3, e_4, e_5). We search for an OS polynomial z + f(x) (the degree t in z is 1). It must satisfy the following conditions: f(e_i) = α^i for 1 ≤ i ≤ 5, and f(0) = 0. The polynomial f(x) has degree at most 5 with coefficients b_i in F_2, so that we can write f(x) = b_5 x^5 + b_4 x^4 + b_3 x^3 + b_2 x^2 + b_1 x (the condition f(0) = 0 gives b_0 = 0). We compute a Gröbner basis of the ideal J ⊆ F_16[b_1, b_2, b_3, b_4, b_5, e_2, e_3, e_5] given by

    J = ⟨ e_1 + e_2 + e_3,  e_3 + e_4 + e_5,  { e_i^15 + 1 }_{1 ≤ i ≤ 5},  { b_i^2 + b_i }_{1 ≤ i ≤ 5},
          f(e_1) + γ^3,  f(e_2) + γ^6,  f(e_3) + γ^9,  f(e_4) + γ^12,  f(e_5) + γ^15 ⟩,

where the relations e_1 = e_2 + e_3 and e_4 = e_3 + e_5 follow from the matrix G. We obtain e_1 = γ^6, e_2 = γ^2, e_3 = γ^3, e_4 = γ^14, e_5 = 1, so that the parity-check matrix is H = (γ^6, γ^2, γ^3, γ^14, 1) and the OS polynomial is f(x) = x^3. We note that it is the same as in Example 4.
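The OS polynomial f(x) = x^3 of Examples 4 and 5 can be verified by direct computation in F_16. The following Python sketch (ours, not part of the paper) checks that f(e_i) = α^i for the one-row parity check H = (γ^6, γ^2, γ^3, γ^14, 1) and that this row really is the evaluation of the polynomial of P' at the roots of unity, ordered as l_i = α^i (an ordering consistent with the values found in Example 5). The bit-level field representation and the helper names are our own.

    # Verification sketch for Examples 4 and 5 (ours, not part of the paper).
    # F_16 as 4-bit integers; gamma is the class of z, with z^4 + z + 1 = 0.
    MODULUS = 0b10011

    def mul(a, b):
        p = 0
        while b:
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0b10000:
                a ^= MODULUS
        return p

    def power(a, e):
        r = 1
        for _ in range(e):
            r = mul(r, a)
        return r

    gamma = 0b0010
    gpow = lambda e: power(gamma, e)
    alpha = gpow(3)                                  # a primitive 5th root of unity
    H = [gpow(6), gpow(2), gpow(3), gpow(14), 1]     # the one-row parity check of Example 4

    def gprime(x):
        # The single polynomial of P' in Example 4:
        # gamma^12 x^4 + gamma^11 x^3 + x^2 + gamma^14 x + gamma^3.
        return (mul(gpow(12), power(x, 4)) ^ mul(gpow(11), power(x, 3))
                ^ power(x, 2) ^ mul(gpow(14), x) ^ gpow(3))

    for i in range(1, 6):
        # With the roots of unity ordered as l_i = alpha^i, the entries of H
        # are exactly gprime(l_i), as required by the nth-root presentation.
        assert H[i - 1] == gprime(power(alpha, i))
        # f(x) = x^3 is an OS polynomial: the syndrome of a single error in
        # position i is e_i, and f(e_i) = alpha^i recovers the error location.
        assert power(H[i - 1], 3) == power(alpha, i)
    print("f(x) = x^3 locates every single error for this presentation of C")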

Remark 4. The previous example is interesting because we have simultaneously computed for C an nth-root presentation and a general error locator polynomial. The nice shape of the general error locator polynomial reveals an unexpected structure in this code. If the approach presented in Example 5 fails for a code C, that is, if V(J) = ∅, then it means that C does not possess an OS polynomial for any nth-root presentation in which H consists of a single row. However, it could be that C possesses an OS polynomial for an H with up to N - k rows. We think that it is obvious how this may be checked with a similar commutative algebra approach, and so we do not detail it.

Example 6. Consider the nth-root code of Example 1, shortened in position 0. It is a classical Goppa code with g(x) = x^2 + x + 1 and L = F_8 \ {0}. An OS polynomial for this code is

    L = z_2^2
        + z_2 ( x_1^5 x_2^2 + x_1^5 + x_1^3 x_2^2 + x_1^3 + x_1^2 x_2^2 + x_1^2 x_2 + x_1 x_2^5 + x_1 x_2^4
                + x_1 x_2^3 + x_1 x_2^2 + x_1 x_2 + x_1 + x_2^7 + x_2^4 + x_2^3 + x_2^2 + 1 )
        + x_1^5 x_2^2 + x_1^5 x_2 + x_1^5 + x_1^4 x_2^2 + x_1^3 x_2^3 + x_1^2 x_2 + x_1^2
        + x_1 x_2^6 + x_1 x_2 + x_1 + x_2^7 + x_2^6.

3.3 Extended syndrome variety

We extend the previous results to the case in which erasures also occur. Let τ be a natural number corresponding to the number of errors and ν a natural number corresponding to the number of erasures, such that 2τ + ν < d. We have to find solutions of equations of the type

    s_j = Σ_{l=1}^{τ} a_l g_j(α^{k_l}) + Σ_{l=1}^{ν} c_l g_j(α^{h_l}),   j = 1, ..., r,        (3)

where the {k_l}, {a_l} and {c_l} are unknown and the {s_j}, {h_l} are known. We introduce variables W = (w_ν, ..., w_1) and U = (u_1, ..., u_ν), where the {w_h} stand for the erasure locations (the α^{h_l}) and the {u_h} stand for the erasure values c_l (h = 1, ..., ν). When the word v(x) is received, the number ν of erasures and their positions {w_h} are known. We rewrite equations (3) in terms of X, Y, Z, W and U, where the {x_j} stand for the syndromes (j = 1, ..., r), as:

    J_{C,τ,ν} = ⟨ { Σ_{l=1}^{τ} y_l g_j(z_l) + Σ_{l=1}^{ν} u_l g_j(w_l) - x_j }_{j=1,...,r},
                  { z_i^{n+1} - z_i }_{i=1,...,τ},  { y_i^{q-1} - 1 }_{i=1,...,τ},
                  { u_h^q - u_h }_{h=1,...,ν},  { w_h^n - 1 }_{h=1,...,ν},  { x_j^{q^m} - x_j }_{j=1,...,r},
                  { p(w_h, w_k) }_{h ≠ k, h,k=1,...,ν},  { z_i p(z_i, w_h) }_{i=1,...,τ, h=1,...,ν},
                  { z_i z_j p(z_i, z_j) }_{i ≠ j, i,j=1,...,τ} ⟩.

We observe that:
- the polynomials Σ_{l=1}^{τ} y_l g_j(z_l) + Σ_{l=1}^{ν} u_l g_j(w_l) - x_j characterize the nth-root code;
- the polynomials z_i^{n+1} - z_i ensure that the z_i are nth roots of unity or 0;
- the polynomials y_i^{q-1} - 1, w_h^n - 1 and u_h^q - u_h ensure that y_i ∈ F_q \ {0}, that w_h is an nth root of unity, and that u_h ∈ F_q;
- the polynomials z_i p(z_i, w_h) ensure that an error cannot occur in a position corresponding to an erasure;
- the polynomials p(w_h, w_k) ensure that any two erasure locations are distinct;
- the polynomials z_i z_j p(z_i, z_j) ensure that any two error locations are distinct (see the sketch below).

The ideal J_{C,τ,ν} depends only on the code C and on ν.
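The role of p(x, y) = Σ_{h=0}^{n-1} x^h y^{n-1-h} in the last three items can be checked directly: since (x - y) p(x, y) = x^n - y^n, two distinct nth roots of unity are always a zero of p, while p(x, x) = n x^{n-1} is nonzero for x ≠ 0 whenever gcd(n, q) = 1, so z_i z_j p(z_i, z_j) = 0 forces the two locations either to be distinct or to involve a zero (i.e., unused) location. The small sympy sketch below (ours, not part of the paper) verifies the underlying identity for n = 7, the value used in Examples 1 and 7.

    # Sketch (not from the paper): the identity behind p(x, y), for n = 7.
    from sympy import symbols, expand, simplify

    x, y = symbols('x y')
    n = 7
    p = sum(x**h * y**(n - 1 - h) for h in range(n))

    # (x - y) * p(x, y) telescopes to x^n - y^n ...
    assert expand((x - y) * p) == x**n - y**n
    # ... so two distinct nth roots of unity always annihilate p, while on the
    # diagonal p(x, x) = n x^(n-1) is nonzero for x != 0 when gcd(n, q) = 1.
    assert simplify(p.subs(y, x) - n * x**(n - 1)) == 0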

Proposition 3. In the Gröbner basis G_{C,τ,ν} there is a unique polynomial of the form

    g = z_τ^τ + a_{τ-1} z_τ^{τ-1} + ... + a_0,   with a_i ∈ F_{q^m}[X, W].

Theorem 3. If the code C is a proper maximal zerofree nth-root code, then C possesses general error locator polynomials of type ν, for any ν ≥ 0.

Example 7. Let C' be the shortened code obtained from the code C presented in Example 1. The code C' is a [7, 1, 6] linear code, so that τ (errors) and ν (erasures) must satisfy 2τ + ν < 6. If τ = 1 and ν = 2, the syndrome ideal is

    J = ⟨ g_1(z_1) + u_1 g_1(w_1) + u_2 g_1(w_2) + x_1,
          g_2(z_1) + u_1 g_2(w_1) + u_2 g_2(w_2) + x_2,
          z_1^8 + z_1,  w_1^7 + 1,  w_2^7 + 1,  x_1^8 + x_1,  x_2^8 + x_2,
          u_1^2 + u_1,  u_2^2 + u_2,  z_1 p(z_1, w_1),  z_1 p(z_1, w_2),  p(w_1, w_2) ⟩

and in the reduced Gröbner basis there is only one polynomial having z_1 as its leading term (see the Appendix of [5]).

4 Conclusions and further research

Linear codes are traditionally specified starting from a parity-check matrix H. In particular, cyclic codes are such that the entries of H consist of the evaluations of univariate monomials at all the n-th roots of unity. Our approach is to specify any linear code (with d ≥ 2) as a code such that the entries of H consist of the evaluations of generic (univariate) polynomials at all the n-th roots of unity. In this sense, we say that linear codes are a generalization of cyclic codes. This point of view allows us to extend to linear codes some computational algebra techniques and some arguments that have previously been applied to cyclic codes ([6], [7], [8]). This translates into new tools, but also into new challenges. In particular, we can identify a new decoding algorithm for a (potentially very large) sub-class, via the general error locator polynomial. The problem of decoding linear codes is NP-hard ([1], [2]), but if a linear code admits a sparse general error locator polynomial (or such a polynomial with a sparse representation), then it can be decoded very fast ([9]). We have provided an explicit example in which the locator polynomial is very small for one nth-root presentation and large for another.

Acknowledgments

The first author would like to thank the second author (her supervisor). The authors would like to thank the following people for their comments and suggestions: J. Abbot, M. Bardet, F. Caruso, F. Dalla Volta, J. C. Faugere, P. Fitzpatrick, T. Mora, E. Orsini, M. Pellegrini, L. Perret, I. Simonetti, C. Traverso. We have run our computer simulations using the software package Singular (http://www.singular.uni-kl.de) at the computational centre MEDICIS (http://medicis.polytechnique.fr). This work has been partially supported by the STMicroelectronics contract "Complexity issues in algebraic Coding Theory and Cryptography".

References

1. A. Barg, Complexity issues in coding theory, in Handbook of Coding Theory, Vol. I, II, pp. 649-754, North-Holland, Amsterdam, 1998.
2. A. Barg, E. Krouk, H. C. A. van Tilborg, On the complexity of minimum distance decoding of long linear codes, IEEE Trans. Inform. Theory, vol. 45, no. 5, pp. 1392-1405, 1999.
3. M. Caboara, T. Mora, The Chen-Reed-Helleseth-Truong decoding algorithm and the Gianni-Kalkbrenner Groebner shape theorem, Applicable Algebra in Engineering, Communication and Computing, vol. 13, no. 3, pp. 209-232, 2002.
4. M. Giorgetti, On some algebraic interpretation of classical codes, PhD Thesis, University of Milan, 2007.
5. M. Giorgetti, M. Sala, A commutative algebra approach to linear codes, UCC-BCRI preprint no. 58, www.bcri.ucc.ie, 2006.
6. M. Sala, Groebner basis techniques to compute weight distributions of shortened cyclic codes, Journal of Algebra and Its Applications, vol. 6, no. 2, 2007.
7. M. Sala, Groebner bases and distance of cyclic codes, Applicable Algebra in Engineering, Communication and Computing, vol. 13, no. 2, pp. 137-162, 2002.
8. E. Orsini, M. Sala, Correcting errors and erasures via the syndrome variety, Journal of Pure and Applied Algebra, vol. 200, no. 1-2, pp. 191-226, 2005.
9. E. Orsini, M. Sala, General error locator polynomials for binary cyclic codes with t ≤ 2 and n < 63, IEEE Trans. Inform. Theory, vol. 53, no. 3, pp. 1095-1107, 2007.