Error Correction in DFT Codes Subject to Low-Level Quantization Noise


Georgios Takos, Student Member, IEEE, and Christoforos N. Hadjicostis*, Senior Member, IEEE

Abstract

This paper analyzes the effects of quantization noise on the error correcting capability of a popular class of real-number Bose-Chaudhuri-Hocquenghem (BCH) codes known as discrete Fourier transform (DFT) codes. Among other uses, DFT codes have been proposed for joint source-channel coding because their robustness to channel noise is apparently superior to that of classical tandem source-channel coding (where the source code is designed without regard to the possibility of channel errors). In order to handle quantization noise, we develop a decoding algorithm that is based on the Peterson-Gorenstein-Zierler (PGZ) algorithm, and we explore its performance in the presence of quantization noise via analytic means and simulations. As a result of our analysis, we obtain an explicit lower bound on the precision needed to guarantee correct identification of the number of errors; our simulations suggest that this bound can be tight. Finally, we prove that the optimal bit allocation for DFT codes (in terms of minimizing an upper bound used as a threshold in the part of our algorithm that determines the number of errors) is the uniform one.

Index Terms: Real-number codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, discrete Fourier transform (DFT) codes, quantization noise, Peterson-Gorenstein-Zierler (PGZ) algorithm.

EDICS: DSP-QUAN (Quantization effects and roundoff analysis)

This material is based upon work supported in part by the National Science Foundation under an NSF Career Award and an NSF ITR Award, and in part by the Air Force Office of Scientific Research under a URI Award. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of NSF or AFOSR. G. Takos and C. N. Hadjicostis are with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (chadjic@uiuc.edu, takos@uiuc.edu).

I. INTRODUCTION

Error-correcting codes in a real-number setting were first introduced in [1], which described block codes based on discrete transforms, including the discrete Fourier transform (DFT) code. The author of [1] showed that DFT codes are cyclic and belong to the class of Bose-Chaudhuri-Hocquenghem (BCH) codes. Moreover, DFT codes have a minimum Hamming distance that depends on the construction of the code; in particular, for an (N,K) DFT code with u = N − K, the minimum Hamming distance was proven to be u + 1, so that one can correct up to d = ⌊u/2⌋ errors. During the last few years, discrete transform codes have also been used for joint source-channel coding (see, for example, [2] and [3]). In these settings, one normally considers two types of errors: channel errors, which have large amplitude, and quantization errors of smaller amplitude. The authors of [2] proposed a joint source-channel encoding algorithm and compared it to standard tandem source-channel coding (which first designs the source code without any consideration of channel errors and then obtains the channel code without regard to the nature of the source). They showed through simulations that joint source-channel coding results in about a 3 dB improvement in signal-to-noise ratio (SNR) when compared to tandem source-channel coding. The performance of different decoding algorithms for DFT codes, in terms of their ability to accurately localize errors as a function of the SNR, was studied in [3].

In this paper we consider the problem of how quantization noise affects our ability to perform error detection, identification and correction for a DFT code. Essentially, in order to deal with quantization noise we need to develop a decoding algorithm for real BCH codes. We consider a decoding scheme that is a translation of the well-known Peterson-Gorenstein-Zierler algorithm to a real-number setting and focus on making the first part of the decoding algorithm, which is responsible for determining the number of large errors, robust to quantization noise. As will become evident from our development, in order to achieve this robustness we first need to understand how finite precision affects the eigenvalues of a matrix whose rank (when no quantization noise is present) is exactly equal to the number of large errors. Assuming no particular distribution on the quantization noise but simply a bound on its maximum magnitude, we are able to obtain upper and lower bounds on each eigenvalue of this associated matrix. Our analysis can be used to handle not only quantization noise (which is bounded in most cases of

interest), but also any type of noise that takes relatively small values and affects the real-number DFT code-words. Our proposed techniques are reminiscent of the techniques used in [2] and [3]; unlike [3], however, we also consider the problem of determining the number of channel errors and, unlike [2], we make no assumptions on the type of channel errors. In particular, [3] used the Akaike criterion [4] to determine the number of large errors, whereas [2] relied on an empirically computed threshold to estimate the number of large errors. In contrast, our analysis provides mathematical insight into how finite precision affects our ability to determine the number of large errors. Among other things, we use this insight to obtain a lower bound on the required precision (i.e., the required number of fractional bits in the representation of real numbers in fixed-point arithmetic) in order to guarantee correct identification of the number of errors. More generally, our approach can be used to obtain an upper bound on the maximum magnitude of the quantization noise that guarantees correct identification of the number of errors. Moreover, we prove that if we are allowed to assign a different number of bits to the representation of each entry in the code-words, then the optimal bit allocation for DFT codes is the uniform one. This uniform bit allocation is optimal in terms of minimizing the threshold used for determining the number of large errors in our algorithm.

This paper is organized as follows. Section II introduces notation and presents the necessary background on DFT codes. Section III develops an algorithm for correcting errors in the presence of quantization noise. In Section IV we obtain the lower bound on the required precision for guaranteed correct identification of the number of errors. Section V proves that the optimal bit allocation for the representation of each codeword is the uniform one. Section VI contains experimental results for this algorithm, an analysis of its performance, and comparisons with previous approaches. Finally, in Section VII we conclude our work and give future research directions.

II. REVIEW OF DFT CODES

In [1] an (N,K) DFT code is produced from the N × N DFT matrix W_N defined as

    W_N(m, n) = (1/√N) exp(−i (2π/N) (m−1)(n−1)),

Fig. 1. DFT encoding scheme: x → W_K (DFT) → X → zero padding Σ → Y → W_N^* (IDFT) → y.

where m and n range from 1 to N. The generator matrix G of a DFT code has as columns any K of the rows of W_N, while the parity check matrix H consists of the remaining rows. A codeword y is generated by the data vector x as y = Gx. Clearly, since every row of W_N is orthogonal to all other rows, the product Hy (referred to as the syndrome) will be zero. If, however, some entries of the code-vector are corrupted by errors, the syndrome will no longer be zero. In fact, the syndrome allows us not only to detect errors but also to correct up to a certain number of them.

In this paper we consider a slightly different DFT code defined by the generator matrix [2], [3]

    G = W_N^* Σ W_K,

with K odd, N even, and Σ an N × K zero-padding matrix (see Fig. 1). In particular, x ∈ R^K is the source word and y ∈ R^N is the code-vector, with y = Gx (N > K). Here Σ is the N × K zero-padding matrix whose only nonzero elements are Σ(m, m) = 1 for m = 1, 2, ..., (K+1)/2 and Σ(N−m, K−m) = 1 for m = 0, 1, ..., (K−3)/2. If we let X and Y be the K-point and N-point DFTs of x and y respectively, Σ then adds N − K consecutive zeros to X so that Hermitian symmetry is preserved (see Chapter 4.5 in [5]). Decoding algorithms when the DFT of the codeword has N − K consecutive zeros were studied in [6].

As an example of a DFT code, let N = 6 and K = 3, so that

    W_3 = (1/√3) [ 1  1   1
                   1  ω   ω²
                   1  ω²  ω ],   where ω = exp(−i2π/3),

    Σ = [ 1 0 0
          0 1 0
          0 0 0
          0 0 0
          0 0 0
          0 0 1 ]   and   G = W_6^* Σ W_3.

In order to obtain the (N−K) × N parity check matrix H for this code, we take the N − K columns of W_N^* that correspond to the zeros of Σ, i.e., the columns with indexes (K+1)/2 + 1, ..., (K+1)/2 + (N−K), to form the conjugate transpose of H. In particular, for the (6,3) DFT code above, H consists of rows 3, 4 and 5 of W_6:

    H = (1/√6) [ 1   ω   ω²  1   ω   ω²
                 1  −1   1  −1   1  −1
                 1   ω²  ω   1   ω²  ω ],   with ω = exp(−i2π/3).

With this choice of parity check matrix, the syndrome, defined as s ≜ Hy, is equal to the zero vector, provided that no errors corrupt the code-vector y (since HG = 0).
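To make the construction concrete, the following minimal Python/NumPy sketch (our own illustration, not code from the paper; the unitary DFT scaling and the helper name dft_matrix are assumptions) builds G and H for the (6,3) example and checks that codewords are real, that the syndrome of an uncorrupted codeword is zero, and that HG = 0:

    import numpy as np

    def dft_matrix(n):
        # Unitary n-point DFT: W(m, k) = exp(-i*2*pi*m*k/n) / sqrt(n)
        idx = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)

    N, K = 6, 3
    W_K, W_N = dft_matrix(K), dft_matrix(N)

    # Zero-padding matrix: ones at (m, m) for m = 1, ..., (K+1)/2 and at
    # (N-m, K-m) for m = 0, ..., (K-3)/2 (1-based indices, as in the text).
    Sigma = np.zeros((N, K))
    for m in range(1, (K + 1) // 2 + 1):
        Sigma[m - 1, m - 1] = 1.0
    for m in range((K - 3) // 2 + 1):
        Sigma[N - m - 1, K - m - 1] = 1.0

    G = W_N.conj().T @ Sigma @ W_K                 # generator: y = G x
    H = W_N[(K + 1) // 2:(K + 1) // 2 + N - K, :]  # rows of W_N over the zero band

    x = np.random.uniform(-1, 1, K)
    y = G @ x
    print(np.max(np.abs(y.imag)))   # ~1e-16: codewords are real
    print(np.max(np.abs(H @ y)))    # ~1e-16: zero syndrome without errors
    print(np.max(np.abs(H @ G)))    # ~1e-16: HG = 0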

If an error vector e corrupts the code-vector, so that we receive y + e, the syndrome becomes s = H(y + e) = He.

III. DECODING ALGORITHM

The original Peterson-Gorenstein-Zierler (PGZ) decoder was developed for BCH codes over a finite field [5]. In that setting the decoding is a three-step algorithm. First, it determines the number of errors by constructing a matrix whose rank is equal to the number of errors. Then, it determines the locations of the errors through an error locator polynomial whose roots are the inverses of the error locations (the roots are determined by substituting all possible values into the polynomial and choosing those that give zero, thereby obtaining the error locations). Finally, with the error locations known, the magnitudes of the errors are determined by solving a set of linear equations. Our algorithm follows the same structure, but is modified to function in a real-number setting and in the presence of quantization (finite-precision) noise. In [7] a similar algorithm was presented for real-number DFT codes, but no quantization noise was considered.

When finite precision is not an issue, the real-number PGZ decoder can find the coefficients of the error locator polynomial Λ(x) = x^d + Λ_1 x^{d−1} + ... + Λ_d, where d = (N − K − 1)/2, by solving the following set of equations

    s_m Λ_d + s_{m+1} Λ_{d−1} + ... + s_{m+d−1} Λ_1 = −s_{m+d},   (1)

for m = 1, 2, ..., d + 1. This is equivalent to solving the matrix equation [5]

    [ s_{d+1}  s_{d+2}  ...  s_{2d}
      s_d      s_{d+1}  ...  s_{2d−1}
      ...
      s_2      s_3      ...  s_{d+1} ]  [ Λ_d ; Λ_{d−1} ; ... ; Λ_1 ]  =  − [ s_{2d+1} ; s_{2d} ; ... ; s_{d+2} ],

where the matrix on the left is denoted by Q and s ≜ H(y + e) = [s_1 s_2 ... s_{2d+1}]^T. If fewer than d errors occurred (i.e., if the error vector e has fewer than d nonzero entries), matrix Q is singular (Q has rank equal to the number of errors), but the decoder can solve for a degree-(d−1) polynomial by eliminating the last row and column of Q; this process can continue until we have a nonsingular Q.
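As an illustration of this noise-free step, here is a minimal Python sketch; the helper names error_count and pgz_locator are ours, and the indexing follows the matrix equation just given:

    import numpy as np

    def error_count(s, d, tol=1e-9):
        # Rank of the full d x d matrix Q equals the number of errors
        # when no quantization noise is present.
        Q = np.array([[s[d - i + j] for j in range(d)] for i in range(d)])
        return np.linalg.matrix_rank(Q, tol)

    def pgz_locator(s, dt):
        # Solve Q [Lambda_dt, ..., Lambda_1]^T = -[s_{2dt+1}, ..., s_{dt+2}]^T
        # with Q(i, j) = s_{dt+1-i+j} (1-based); s holds [s_1, ..., s_{2d+1}]
        # and dt <= d is the number of errors (Q shrunk until nonsingular).
        Q = np.array([[s[dt - i + j] for j in range(dt)] for i in range(dt)])
        rhs = -np.array([s[2 * dt - i] for i in range(dt)])
        lam = np.linalg.solve(Q, rhs)   # [Lambda_dt, ..., Lambda_1]
        return lam[::-1]                # [Lambda_1, ..., Lambda_dt]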

With the (low-level) quantization noise that we consider in this paper, the received vector is y + e + q, where q is the quantization noise vector that corrupts the code-vector. The syndrome is affected by both large errors and quantization noise, so it is given by H(e + q). Our analysis does not assume a distribution on the quantization noise because we are primarily interested in bounding the performance of our decoding algorithm. Thus, our analysis takes as given the maximum possible absolute value of the quantization noise, namely q_max = ‖q‖_∞. In the presence of quantization noise, matrix Q needs to be replaced by

    Q_r = [ s_{r(d+1)}  s_{r(d+2)}  ...  s_{r(2d)}
            s_{rd}      s_{r(d+1)}  ...  s_{r(2d−1)}
            ...
            s_{r2}      s_{r3}      ...  s_{r(d+1)} ],

where s_{rl}, l = 1, 2, ..., 2d + 1, are the entries of a new syndrome s_r ≜ [s_{r1} s_{r2} ... s_{r(2d+1)}]^T = H(e + q). The main difference is that the above syndrome comes from two error vectors: the channel error vector e, which contains only the d or fewer (presumably large) errors, and the vector q, which consists only of the (presumably smaller) quantization noise components. Note that one can also write Q_r as the sum of two components, i.e., Q_r = Q + Q_q, where Q_q is the d × d matrix produced in the same fashion as Q but using the components of the vector s_q ≜ [s_{q1} s_{q2} ... s_{q(2d+1)}]^T = Hq. Note that s_q is not available to the decoder (only s_r is).

Step 1: Determining the Number of Large Errors

With quantization noise, matrix Q_r is unlikely to be singular, so its rank is not necessarily equal to the number of errors (note that when quantization noise is absent the rank of Q is equal to the number of errors). In the sequel, we study the nature of the eigenvalues of Q_r in an attempt to determine the number of errors. Note that we can write the entries of Q_r as

    s_{rl} = (1/√N) Σ_{m=1}^{N} exp(−i2π ((K−1)/2 + l) (m−1)/N) (e_m + q_m),

for l = 1, 2, ..., 2d + 1, where e_m and q_m are the m-th entries of e and q respectively. We see that s_{r(d+1)} ∈ R (recall that d = (N − K − 1)/2) and that s_{r,(d+1)−n} = s*_{r,(d+1)+n} for n = 1, 2, ..., d (recall that N is even). This means that Q_r is a Hermitian matrix.

If we let V be the N × d matrix with elements V(m, n) = exp(i2π(m−1)(n−1)/N), and W_r be the N × N diagonal matrix with elements W_r(m, m) = (1/√N)(e_m + q_m)(−1)^{m−1}, we can write [7]

    Q_r = V^* W_r V.

Clearly, the decomposition of the error vector into its two components e and q results in a similar decomposition for the W_r matrix. In particular, we can decompose Q_r as a sum of Q and Q_q: if we write W_r = W + W_q (where W depends only on elements of e, while W_q depends only on elements of q), we see that Q = V^* W V and Q_q = V^* W_q V. Note that V^* is a d × N matrix which can be written in block form as

    V^* = [ V_1^*  V_2^*  ...  V_m^* ],   (2)

where m = ⌈N/d⌉, V_j^*, j = 1, 2, ..., m−1, are d × d matrices and V_m^* is a d × (N − d(m−1)) matrix. Note that, as long as we rearrange the entries on the diagonals of W and W_q appropriately, V_1^* does not necessarily have to be formed from the first d columns of V^*; it can be formed from any d columns of V^*. All the columns of V_j^*, j = 1, 2, ..., m, however, have to be distinct. For simplicity, in the sequel we assume N to be a multiple of d. If N is not a multiple of d, in particular if N = md − n for some nonnegative integer n (strictly less than d), then we can add n zero columns to V^* and get similar results with slightly more complicated analysis. If we apply a matching block decomposition to matrix W_q, we can express it in terms of d × d matrices as

    W_q = diag( W_{q1}, W_{q2}, ..., W_{qm} ).

Thus, we can write Q_q as

    Q_q = Σ_{j=1}^{m} V_j^* W_{qj} V_j.

A similar decomposition can be obtained for Q but, since W has rank at most d (there can be at most d large errors), we can erase N − d zero columns and rows (whose existence, though not their exact location, is known) to get a reduced d × d matrix W_e. By erasing the corresponding columns and rows of matrices V^* and V, respectively, we can write

    Q = V_e^* W_e V_e,

where V_e^* and V_e are the reduced V^* and V matrices, respectively, of size d × d. If the actual number of large errors is smaller than d, we can pick any columns/rows for V_e^* and V_e and set the corresponding values on the diagonal of W_e to zero. Therefore, we can write Q_r as a sum of d × d matrices

    Q_r = V_e^* W_e V_e + Σ_{j=1}^{m} V_j^* W_{qj} V_j.   (3)

We denote the (real) eigenvalues of a d × d Hermitian matrix Q by λ_1(Q) ≤ λ_2(Q) ≤ ... ≤ λ_d(Q). We are now ready to prove the following theorem for the eigenvalues of Q_r.

Theorem 1: Let the Hermitian matrix Q_r be written as in (3). Then, the kth smallest eigenvalue of Q_r, namely λ_k(Q_r), satisfies the upper and lower bounds l_k ≤ λ_k(Q_r) ≤ u_k, where

    l_k = λ_k(W_e) λ_1(V_e^* V_e) + Σ_{j=1}^{m} λ_1(W_{qj}) λ_1(V_j^* V_j),
    u_k = λ_k(W_e) λ_d(V_e^* V_e) + Σ_{j=1}^{m} λ_d(W_{qj}) λ_d(V_j^* V_j).

Proof: For the proof we make use of the following theorems from [10].

Theorem 2: Let Q and Q_q be n × n Hermitian matrices, and let λ_i(A) denote the ith smallest eigenvalue of an n × n Hermitian matrix A (i.e., λ_1(A) ≤ λ_2(A) ≤ ... ≤ λ_n(A)). Then, for k ∈ {1, 2, ..., n},

    λ_k(Q) + λ_1(Q_q) ≤ λ_k(Q + Q_q) ≤ λ_k(Q) + λ_n(Q_q).

Theorem 3: Let W_0 and V_0 be n × n matrices with W_0 Hermitian and V_0 nonsingular, and let the eigenvalues of W_0 and V_0^* V_0 be arranged in increasing order as in the previous theorem.

Then, for k ∈ {1, 2, ..., n},

    λ_k(W_0) λ_1(V_0^* V_0) ≤ λ_k(V_0^* W_0 V_0) ≤ λ_k(W_0) λ_n(V_0^* V_0).

By applying Theorem 2 above to (3), we get

    λ_k(V_e^* W_e V_e) + λ_1( Σ_{j=1}^{m} V_j^* W_{qj} V_j ) ≤ λ_k(Q_r) ≤ λ_k(V_e^* W_e V_e) + λ_d( Σ_{j=1}^{m} V_j^* W_{qj} V_j ).

Moreover, by repeated application of Theorem 2, we get

    λ_k(V_e^* W_e V_e) + Σ_{j=1}^{m} λ_1(V_j^* W_{qj} V_j) ≤ λ_k(Q_r) ≤ λ_k(V_e^* W_e V_e) + Σ_{j=1}^{m} λ_d(V_j^* W_{qj} V_j).

Finally, by applying Theorem 3, we get

    λ_k(W_e) λ_1(V_e^* V_e) + Σ_{j=1}^{m} λ_1(W_{qj}) λ_1(V_j^* V_j) ≤ λ_k(Q_r) ≤ λ_k(W_e) λ_d(V_e^* V_e) + Σ_{j=1}^{m} λ_d(W_{qj}) λ_d(V_j^* V_j),   (4)

which completes our proof.

Since [5] has shown that Q has rank equal to the number of errors, say d̃, the d̃ nonzero eigenvalues of Q correspond to the d̃ nonzero values of W_e. If the quantization noise is small enough, then we expect the eigenvalues of the full-rank matrix Q_r to be close to the eigenvalues of Q, and we hope that d̃ of the eigenvalues of Q_r will be significantly larger in magnitude than the remaining d − d̃ eigenvalues. The latter would correspond to the zero eigenvalues of Q, while the former would correspond to the d̃ nonzero eigenvalues of Q. Therefore, we can associate the eigenvalues of Q_r that have large magnitude with the nonzero entries in W_e (i.e., the errors). We formalize this intuition in the approach that we describe next for determining the number of large errors.

From (4) we can see that the eigenvalues of Q_r that do not correspond to errors, i.e., the ones for which the eigenvalue λ_k(W_e) in (4) is zero, can be bounded by

    Σ_{j=1}^{m} λ_1(W_{qj}) λ_1(V_j^* V_j) ≤ λ(Q_r) ≤ Σ_{j=1}^{m} λ_d(W_{qj}) λ_d(V_j^* V_j).   (5)

Equation (5) leads to the following theorem, based on which we can determine the number of errors in the system.

Theorem 4: The magnitude of all eigenvalues of Q_r that do not correspond to errors (as explained earlier) satisfies

    |λ(Q_r)| ≤ (q_max/√N) Σ_{j=1}^{m} λ_d(V_j^* V_j),   (6)

where q_max is the maximum absolute value the quantization noise can take and N is the size of the codeword.

Proof: We know that if l ≤ λ ≤ u, then |λ| ≤ max(|l|, |u|). Therefore, from equation (5), since the λ_d(V_j^* V_j) are positive, an upper bound on the magnitude of the eigenvalues of Q_r that do not correspond to large errors is

    |λ(Q_r)| ≤ Σ_{j=1}^{m} max_{1≤l≤d} |λ_l(W_{qj})| λ_d(V_j^* V_j).

Since W_{qj} is a diagonal matrix with elements of the form (−1)^{k−1} q_k/√N for appropriate k ∈ {1, ..., N}, and q_max is the maximum absolute value of the quantization noise, this results in the overall upper bound on the particular eigenvalues given in (6).

The eigenvalues λ_d(V_j^* V_j) were the subject of study in [11], from where we know that if V_j has size d × d (and is of the form considered in this paper) then d ≤ λ_d(V_j^* V_j) ≤ N. This inequality suggests that the best possible bound for |λ(Q_r)| in (6) is (q_max/√N) · md = q_max √N. We now show that this minimal upper bound is indeed attainable.

As we mentioned earlier, when we decompose V^* into square submatrices V_j^*, as shown in (2), any reordering of the columns of V^* before the formation of the submatrices is allowed (as long as we appropriately reorder the elements on the diagonal of W_r). We now prove that there is a grouping of columns such that λ_d(V_j^* V_j) = d for all j = 1, ..., m. Combining the latter with the fact that d ≤ λ_d(V_j^* V_j), we see that there is a grouping of columns that makes Σ_{j=1}^{m} λ_d(V_j^* V_j) =

md (= N), which by [11] is the minimum value this sum can take.

The (x, y) element of V^* is V^*(x, y) = exp(−i2π(x−1)(y−1)/N). If we choose the l-th column of V_j^* to be the ((l−1)m + j)-th column of V^*, then

    V_j^*(x, y) = exp(−i2π(x−1){(y−1)m + j}/N),
    V_j(y, z)   = exp(i2π(z−1){(y−1)m + j}/N).

Therefore, the (x, z) element of V_j^* V_j is

    V_j^* V_j(x, z) = Σ_{y=1}^{d} V_j^*(x, y) V_j(y, z)
                    = Σ_{y=1}^{d} exp(−i2π(x−z){(y−1)m + j}/N)
                    = exp(−i2π(x−z)j/N) Σ_{y=1}^{d} exp(−i2π(x−z)(y−1)m/N).

But N = md, and if we denote z_d = exp(−i2π/d) (a d-th root of unity), we have

    V_j^* V_j(x, z) = exp(−i2π(x−z)j/N) Σ_{y=1}^{d} z_d^{(y−1)(x−z)}.

We know that the sum of all powers of the roots of unity z_d^i, i = 1, 2, ..., d, is zero, and that z_d^{(x−z)i}, i = 1, 2, ..., d, is just a permutation of the roots of unity when x ≠ z, as long as (x − z) and d are relatively prime. Therefore,

    V_j^* V_j(x, z) = { 0 if x ≠ z,  d if x = z }.   (7)

If (x − z) and d have a common divisor, namely f, then z_d^{(x−z)i}, i = 1, 2, ..., d, are the (d/f)-th roots of unity, with each one appearing f times in the sum. Thus, (7) still holds for any choice

of (x − z) and d. This means that by choosing the l-th column of V_j^* to be the ((l−1)m + j)-th column of V^*, we get V_j^* V_j = d I_d, where I_d is the identity matrix of size d, and λ_d(V_j^* V_j) = d for all j ∈ {1, 2, ..., m}. We can now prove the following theorem.

Theorem 5: The magnitude of all eigenvalues of Q_r that do not correspond to errors (as explained earlier) satisfies

    |λ(Q_r)| ≤ q_max √N,   (8)

where q_max is the maximum absolute value the quantization noise can take and N is the size of the code-vector.

Proof: From (6) we know that |λ(Q_r)| ≤ (q_max/√N) Σ_{j=1}^{m} λ_d(V_j^* V_j). But we have also seen that the λ_d(V_j^* V_j) are lower bounded by d and that there exists a choice of V_j so that all these eigenvalues are equal to d. Therefore, since N = md, an improved upper bound for the magnitude of the eigenvalues of Q_r that do not correspond to errors is given by (8).

A popular technique for determining the number of large errors in a DFT code is to find the eigenvalues of Q_r and use an (empirically or otherwise obtained) threshold to determine how many of them correspond to errors. For example, in [2] an empirical threshold was used, while the authors of [3] employed the well-known Akaike Information Criterion (AIC) [4]. In particular, the AIC looks at the ordered eigenvalues of S_r^* S_r, where

    S_r = [ s_{r1}      s_{r2}      ...  s_{r(d+1)}
            s_{r2}      s_{r3}      ...  s_{r(d+2)}
            ...
            s_{r(d+1)}  s_{r(d+2)}  ...  s_{r(2d+1)} ];

if these ordered eigenvalues are denoted (in decreasing order) by λ_1, λ_2, ..., λ_{d+1}, the AIC decides that the number of large errors is

    arg min_{1≤k≤d} [ (d + 1 − k) ln( AM_k / GM_k ) + k (2(d + 1) − k) ],

where AM_k and GM_k are the arithmetic and geometric means of λ_{k+1}, ..., λ_{d+1}. Note that the matrix S_r is similar to the matrix Q_r we have considered in our analysis. In Section VI, we compare the performance of our scheme and the AIC through simulations.
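For comparison, the two detection rules can be sketched in Python as follows (a hypothetical rendering under our reading of the AIC expression above, with eigenvalues sorted in decreasing order; the function names are ours):

    import numpy as np

    def count_errors_threshold(Qr, q_max, N):
        # Count eigenvalues of the Hermitian matrix Q_r whose magnitude
        # exceeds the Theorem 5 threshold q_max * sqrt(N).
        lam = np.linalg.eigvalsh(Qr)
        return int(np.sum(np.abs(lam) > q_max * np.sqrt(N)))

    def count_errors_aic(s_r, d):
        # AIC order estimate from the ordered eigenvalues of S_r^* S_r,
        # where S_r(i, j) = s_r{i+j-1} is the (d+1) x (d+1) matrix above.
        Sr = np.array([[s_r[i + j] for j in range(d + 1)] for i in range(d + 1)])
        lam = np.sort(np.linalg.eigvalsh(Sr.conj().T @ Sr))[::-1]
        lam = np.maximum(lam, 1e-300)          # guard the geometric mean
        crit = []
        for k in range(1, d + 1):
            tail = lam[k:]                     # lambda_{k+1}, ..., lambda_{d+1}
            am = tail.mean()
            gm = np.exp(np.log(tail).mean())
            crit.append((d + 1 - k) * np.log(am / gm) + k * (2 * (d + 1) - k))
        return int(np.argmin(crit)) + 1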

As a result of our analysis above, one way to compute a threshold for the eigenvalues of Q_r is the following: compute the eigenvalues of Q_r, take their absolute values and sort them. Compare these values with the threshold given in (8) and choose the number of large errors to be the number of eigenvalues of Q_r with magnitude greater than the threshold. This is a nonconservative threshold, since we know that all eigenvalues that do not correspond to large errors will be below it (therefore, we can never detect more than d̃ errors, where d̃ is the actual number of errors). In Section IV, we discuss conditions under which we can choose the threshold to guarantee that we correctly determine the number of large errors.

Step 2: Determining the Error Locations

Once the number of errors is known, the second part of the PGZ algorithm focuses on determining the error locations. In the finite field case, this is accomplished by substituting all possible values into the error locator polynomial and choosing those that give a result of zero. In the real-number setup that we consider here, if there is no quantization noise, then Λ(x) = x^d + Λ_1 x^{d−1} + ... + Λ_d will be zero for exactly d̃ of the values x_k = exp(i2πk/N), k = 1, 2, ..., N (for the k's that correspond to the error locations). The presence of quantization noise will prevent the error locator polynomial from taking on a value that is exactly zero. Instead, we choose the d̃ values x_k that result in the d̃ smallest magnitudes when substituted into the error locator polynomial (d̃ is the number of errors that were determined to have occurred in the first step of the algorithm). These d̃ values x_k determine, of course, the error locations.

Step 3: Determining the Error Values

Once the error locations are known, we proceed to the final step and calculate the error values. This can be formulated as an estimation problem, because we can write the syndrome vector as s_r = T e_0 + Hq, where T is a (2d+1) × d̃ Vandermonde matrix (a reduced version of the parity check matrix H whose columns correspond to the error locations) and e_0 is a reduced version of e from which all zeros have been eliminated.

If we have knowledge of the distribution of the quantization noise (but no prior knowledge of the statistics of the large errors), then we can use least squares estimation [12]. For instance,

a very common model for the quantization noise has each component drawn from a uniform distribution over (−2^{−B}, +2^{−B}], where B is the number of fractional bits in the representation of real numbers in our scenario [9]. Under this model, the noise added to the codeword y is an N-dimensional random vector q with zero mean and covariance matrix K_q = (2^{−2B}/3) I_N ≜ σ² I_N. Taking all of the above into account, the least squares estimate for the vector e_0 of the error magnitudes is

    ê_{0,LS}(s_r) = (T^* Q^{−1} T)^{−1} T^* Q^{−1} s_r,

with Q = H K_r H^* and K_r = σ² I_N. If we have more information on the statistics of the large errors, such as σ_e², the a priori variance of the errors, we can use linear minimum mean square error (LMMSE) estimation [12]. In this case, the corresponding estimator is

    ê_{0,LMMSE}(s_r) = σ_e² T^* (σ_e² T T^* + K_r)^{−1} s_r.

IV. CONDITIONS FOR CORRECTLY DETERMINING THE NUMBER OF ERRORS

In the absence of quantization noise, the original PGZ algorithm presented in [8] correctly identifies the number of channel errors that occurred in the system as the rank of matrix Q. When quantization noise is considered, Q becomes Q_r, which is generally a full-rank matrix. For small quantization noise, however, we expect that the eigenvalues of Q_r which correspond to quantization noise (and that were the zero eigenvalues of Q) will still be small, so that their magnitude will be smaller than the magnitude of the other eigenvalues (those corresponding to large errors). In this section we find a lower bound on the number of fractional bits (in the representation of real numbers) needed to ensure that the magnitude of the eigenvalues corresponding to quantization noise will always be smaller than that of the eigenvalues that correspond to large errors. Under these conditions, we can find a threshold that separates the two groups of eigenvalues, so that the modified PGZ algorithm is guaranteed to determine the correct number of errors.

We have seen in the previous section that all the eigenvalues that correspond to quantization

noise will be smaller than the threshold in (8). From equation (4) and the fact that if l ≤ λ ≤ u then |λ| ≥ min(|l|, |u|), we see that, in our scenario,

    u = λ_k(W_e) λ_d(V_e^* V_e) + Σ_{j=1}^{m} λ_d(W_{qj}) λ_d(V_j^* V_j)

and

    l = λ_k(W_e) λ_1(V_e^* V_e) + Σ_{j=1}^{m} λ_1(W_{qj}) λ_1(V_j^* V_j).

Clearly, we can lower bound |u| and |l| as

    |u| ≥ |λ_k(W_e)| λ_d(V_e^* V_e) − Σ_{j=1}^{m} |λ_d(W_{qj})| λ_d(V_j^* V_j)

and

    |l| ≥ |λ_k(W_e)| λ_1(V_e^* V_e) − Σ_{j=1}^{m} |λ_1(W_{qj})| λ_1(V_j^* V_j).

Note that in this case λ_k(W_e) will be nonzero, since these are the eigenvalues that correspond to large errors. Using the fact that V_e^* V_e and V_j^* V_j, j ∈ {1, 2, ..., m}, are positive definite matrices, so that λ_d(V_e^* V_e) ≥ λ_1(V_e^* V_e) ≥ 0, we obtain

    min(|l|, |u|) ≥ |λ_k(W_e)| λ_1(V_e^* V_e) − Σ_{j=1}^{m} |λ_d(W_{qj})| λ_d(V_j^* V_j).

We have already seen (in the proof of Theorem 5) that |λ_d(W_{qj})| ≤ q_max/√N and that λ_d(V_j^* V_j) = d for a certain choice of the V_j, so that Σ_{j=1}^{m} |λ_d(W_{qj})| λ_d(V_j^* V_j) ≤ q_max √N. Moreover, since ε_min is the minimum magnitude a large error can take, we see that

    |λ_k(W_e)| ≥ ε_min/√N,

for all 1 ≤ k ≤ d. Combining this with the above, we conclude that the eigenvalues that correspond to large errors have magnitude at least as large as

    (ε_min/√N) λ_1(V_e^* V_e) − q_max √N.

Clearly, if

    (ε_min/√N) λ_1(V_e^* V_e) − q_max √N ≥ q_max √N,

where q_max √N is the bound in (8) for all eigenvalues that correspond to quantization noise, then we can always separate the eigenvalues that correspond to large errors from the ones that correspond to quantization noise. We can then solve for q_max to get

    q_max ≤ λ_1(V_e^* V_e) ε_min / (2N).

Note that all the parameters involved in the last inequality are known, except for the matrix V_e, which can take different values depending on the number and the locations of the large errors. Therefore, we can compute offline the smallest λ_1(V_e^* V_e) over all possible V_e, namely λ_min, so that

    q_max ≤ λ_min ε_min / (2N).

In the computation of λ_min, we need to consider all different V_e with 1, 2, ..., d nonzero columns, i.e., there are (N choose 1) + (N choose 2) + ... + (N choose d) possibilities for V_e^* V_e. In reality, one only has to check the (N choose d) possible matrices V_e with d columns, because it can be proved that the minimum eigenvalue of V_e′^* V_e′ is at least as large as the minimum eigenvalue of V_e^* V_e whenever the columns of V_e′ form a subset of the columns of V_e [10]. If we have B fractional bits available for the representation of real numbers then, as we have seen in Section III, q_max = 2^{−B}. We now only need to find a sufficiently large B so that 2^{−B} ≤ λ_min ε_min / (2N), or, equivalently,

    B ≥ log₂( 2N / (λ_min ε_min) ).   (9)
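The offline computation of λ_min and the resulting precision bound (9) can be sketched in Python as follows (our illustration; precision_bound is a hypothetical helper name, and the search is exhaustive over the (N choose d) row subsets, so it is practical only for small N):

    import numpy as np
    from itertools import combinations

    def precision_bound(N, d, eps_min):
        # lambda_min = min over all d-row subsets V_e of V of lambda_1(V_e^* V_e);
        # return the smallest integer B with 2^{-B} <= lambda_min * eps_min / (2N).
        V = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(d)) / N)  # N x d
        lam_min = np.inf
        for rows in combinations(range(N), d):
            Ve = V[list(rows), :]                       # d x d reduced matrix
            lam_min = min(lam_min, np.linalg.eigvalsh(Ve.conj().T @ Ve)[0])
        return int(np.ceil(np.log2(2 * N / (lam_min * eps_min))))

For the (20,9) code used in Section VI (d = 5), the loop visits (20 choose 5) = 15504 subsets.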

18 18 ( N B log. (9 λ min ε min V. OPTIMAL BIT ALLOCATION FOR DFT CODES In obtaining an upper bound on the eigenvalues corresponding to quantization noise we used equation (5. If we assume that our quantization noise vector has entries uniform in ( B, + B ], where B is the number of fractional bits in the representation of real numbers in our scenario, and that we assign a different number of bits in each entry of the codevector y, then the magnitude of the nonzero entries of W qj (for all j will be upper bounded by W qj (k, k 1 Bjk, (10 N where B jk is the number of fractional bits available for the representation of the entry of y that corresponds to the k-th entry of W qj. Using equation (10 and the fact that λ d (W qj is bounded by ( 1 Bjk max 1 k d N we can rewrite the second part of (5 as λ(q r = 1 (mink Bjk N (mink 1 Bjk λ d (V N jv j. (11 Our goal in this section is to determine which bit allocation will minimize the upper bound in (11. In other words, we need to find B jk, j = 1,,..., m and k = 1,,..., d such that the right part of (11 is minimized 1 given that we have B bits available in total for all N entries of y. The latter is equivalent to d B jk = B. k=1 We now prove that for DFT codes the optimal bit allocation is the uniform one, i.e., B jk = B N, for all j and k. 1 As we will see, the bound in (11 is fairly tight, so minimizing the bound is also expected to minimize the actual value.

Our minimization problem is formulated as

    min Σ_{j=1}^{m} (1/√N) 2^{−min_k B_{jk}} λ_d(V_j^* V_j)   s.t.   Σ_{j=1}^{m} Σ_{k=1}^{d} B_{jk} = B.

Note that for each W_{qj} we are only interested in its largest (in magnitude) eigenvalue or, equivalently, in its entry with the fewest available fractional bits. Therefore, it would be a waste of bits to assign more bits to the remaining entries of W_{qj}. Hence, we assume that all the entries in each W_{qj} are assigned the same number of bits, namely B_j, j ∈ {1, 2, ..., m}. Our minimization problem becomes

    min I = Σ_{j=1}^{m} α_j 2^{−B_j}   s.t.   Σ_{j=1}^{m} B_j = B/d,

where α_j ≜ (1/√N) λ_d(V_j^* V_j). Note that there are two parameters that affect the value of the quantity in our minimization problem: (a) the bits B_j assigned to each location, and (b) the decomposition of V^* into the V_j^* (different decompositions give different α_j's and different overall upper bounds on the small eigenvalues). First, we deal with the optimal bit allocation problem (assuming a specific but not necessarily known decomposition into α_j's) and, then, we incorporate some of our earlier results to solve the optimal grouping problem.

We introduce the Lagrangian

    L = Σ_{j=1}^{m} α_j 2^{−B_j} − λ ( Σ_{j=1}^{m} B_j − B/d )

and calculate

    ∂L/∂B_j = −α_j ln(2) 2^{−B_j} − λ.

Setting ∂L/∂B_j = 0 gives

    B_j = log₂( −α_j ln 2 / λ ).   (12)

Since Σ_{j=1}^{m} B_j = B/d, we have

    Σ_{j=1}^{m} log₂( −α_j ln 2 / λ ) = B/d.

If we denote by GM the geometric mean of the α_j's, then

    m log₂( −GM ln 2 / λ ) = B/d,

or (since N = md)

    log₂( −GM ln 2 / λ ) = B/N.

Hence,

    −λ / ln 2 = 2^{−B/N} GM,

and (12) becomes

    B_j = log₂( α_j / (2^{−B/N} GM) ) = B/N − log₂( GM / α_j ).   (13)

Notice that the optimal bit allocation for a specific decomposition into V_j^* is the uniform one plus a correction term that depends on the decomposition. We now try to solve the second part of the problem, which is to find the optimal decomposition into V_j^* that minimizes I = Σ_{j=1}^{m} α_j 2^{−B_j}. We substitute the values B_j found in (13) and see that 2^{−B_j} = 2^{−B/N} GM / α_j, or

    I = Σ_{j=1}^{m} 2^{−B/N} GM = 2^{−B/N} m GM.

Therefore, the optimal grouping is the one that minimizes the geometric mean GM of the α_j's. We have already seen in Section III that α_j ≥ d/√N; hence, GM ≥ d/√N. However, we have proved that there exists a grouping for which all α_j's are equal to d/√N, so that GM = d/√N. Therefore, the minimum value for I is obtained when GM = d/√N, which means

    I = m (d/√N) 2^{−B/N} = √N 2^{−B/N}   (recall N = md).

Note that 2^{−B/N} = q_max under this allocation and that, eventually, I is the threshold we used in our analysis earlier. Finally, since GM = α_j for all j, (13) gives B_j = B/N for all j. We have, therefore, proved that the optimal bit allocation for minimizing the upper bound used in the first part of our decoding algorithm is the uniform one.
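A quick numerical sanity check of this result (our own sketch, using the optimal grouping for which all α_j = d/√N): any reallocation of the same total bit budget away from the uniform one increases the objective I.

    import numpy as np

    def threshold_I(B_j, alpha):
        # I = sum_j alpha_j * 2^{-B_j}, the bound minimized in this section
        return float(np.sum(np.asarray(alpha) * 2.0 ** (-np.asarray(B_j))))

    N, d = 20, 5
    m = N // d
    alpha = np.full(m, d / np.sqrt(N))      # optimal grouping: alpha_j = d/sqrt(N)
    B = 8 * N                               # total budget: 8 fractional bits/entry
    uniform = np.full(m, B / (d * m))       # B_j = B/N = 8 for every block
    skewed = uniform + np.array([2.0, -1.0, -1.0, 0.0])   # same total budget
    print(threshold_I(uniform, alpha))      # sqrt(N) * 2^{-B/N} ~ 0.0175
    print(threshold_I(skewed, alpha))       # strictly larger, ~ 0.0229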

VI. EXPERIMENTAL RESULTS AND ANALYSIS

We now evaluate the performance of our decoding algorithm in the presence of quantization noise. Note that our problem consists of two subproblems: (i) determining the number of channel errors and (ii) determining their effects (so as to correct them). For this purpose, we run simulations in MATLAB for a (20,9) DFT code. The entries of the input vector x are selected independently from a uniform probability distribution over the range [−1, 1]. Locations and values of errors are determined randomly as well, with error values uniformly distributed over [−1, −q_max] ∪ [q_max, 1], where q_max is taken to be 2^{−B} (if the upper bound on the magnitude of the large noise is larger than 1, things are actually easier). Since our encoding allows us to correct up to 5 errors, we add one, two, three, four or five random errors to the encoded vector y. With the above choices, we simulate the encoding and decoding procedure with quantization noise generated by different finite precisions, ranging from 1 to 16 fractional bits.

The first part of our algorithm identifies the number of large errors. In Figures 2 and 3 we plot the probability of correctly identifying the number of large errors as a function of precision (measured in the number of bits available). Figure 2 provides plots for one, two and three errors, and Figure 3 provides plots for four and five errors. For comparison purposes we also plot the probability of correctly determining the number of errors using the AIC (see our discussion in Section III). We run 10,000 simulations for each precision level. We see that our proposed scheme works better than the AIC in most cases, except for one large error, where the AIC seems to work perfectly well. In Figure 4 we plot the probability of correctly identifying both the number of errors and the error locations for one, two, four and five errors. We run 10,000 simulations for each point.

For the same example, we also compute the lower bound on the precision needed to achieve guaranteed correct identification of the number of errors, as given in (9), for different error levels ε_min (ranging upward from 0.1). We then run 10,000 simulations (for one, two, three, four and five errors and for each large-error level) and find the actual precision at which 100% correct identification of the number of errors is achieved. In Figure 5 we compare these plots for one, two and three large errors, and in Figure 6 for four and five large errors. Note that in Fig. 5 negative fractional bits appear; these correspond to the last bits of the integer part of the real-number representation. Also note that if we know the exact number of large errors that have

corrupted our code-vector, then we can compute more accurate values for λ_min. For example, if we know that 3 large errors have occurred, then we calculate all the (N choose 3) possible V_e matrices and the corresponding minimum eigenvalues of V_e^* V_e, and choose as λ_min the minimum of them all. Notice that our bound is relatively tight (since it is a worst-case scenario, the comparison should really be made against the case when there are exactly 5 errors present). Also notice that the curves shown in the figures are actually conservative estimates (they would only shift upwards if more than 10,000 simulations were performed).

Fig. 2. Effect of finite precision on the ability to determine the number of errors for one, two and three large errors (probability of correct identification versus fractional bits available, for the proposed scheme and the AIC).

Fig. 3. Effect of finite precision on the ability to determine the number of errors for four and five large errors (probability of correct identification versus fractional bits available, for the proposed scheme and the AIC).
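A single trial of the setup just described (encode, inject large errors and quantization noise, then threshold the eigenvalues of Q_r) can be sketched end-to-end in Python as follows; this reuses the hypothetical dft_matrix helper from the Section II sketch, and every parameter choice beyond those stated in the text is our assumption:

    import numpy as np

    N, K = 20, 9
    d = (N - K - 1) // 2                 # up to 5 correctable errors
    B = 12                               # fractional bits of precision
    q_max = 2.0 ** (-B)

    W_K, W_N = dft_matrix(K), dft_matrix(N)
    Sigma = np.zeros((N, K))
    for m in range(1, (K + 1) // 2 + 1):
        Sigma[m - 1, m - 1] = 1.0
    for m in range((K - 3) // 2 + 1):
        Sigma[N - m - 1, K - m - 1] = 1.0
    G = W_N.conj().T @ Sigma @ W_K
    H = W_N[(K + 1) // 2:(K + 1) // 2 + N - K, :]

    x = np.random.uniform(-1, 1, K)
    y = (G @ x).real
    e = np.zeros(N)
    locs = np.random.choice(N, 3, replace=False)           # three large errors
    e[locs] = np.random.uniform(q_max, 1, 3) * np.random.choice([-1, 1], 3)
    q = np.random.uniform(-q_max, q_max, N)                # quantization noise
    s_r = H @ (y + e + q)                                  # noisy syndrome

    # Step 1: threshold the eigenvalue magnitudes of the Hermitian matrix Q_r
    Qr = np.array([[s_r[d - i + j] for j in range(d)] for i in range(d)])
    lam = np.linalg.eigvalsh(Qr)
    n_det = int(np.sum(np.abs(lam) > q_max * np.sqrt(N)))
    print(n_det)    # 3 whenever the two eigenvalue groups separate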

Fig. 4. Effect of finite precision on the ability to determine the number and location of errors (probability of correct localization versus fractional bits available, for one to five large errors).

Fig. 5. Precision needed for guaranteed correct identification of the number of errors for one, two and three large errors (theoretical versus actual bound; fractional bits versus log₁₀(ε_min)).

VII. CONCLUSIONS AND FUTURE WORK

In this paper we have developed a decoding algorithm that allows us to detect, localize and correct channel errors in the presence of quantization noise for a class of real-number BCH codes. In addition, we have obtained a lower bound on the precision needed to guarantee correct determination of the number of channel errors, and we have proved that the optimal bit allocation for DFT codes is the uniform one. Note that our analysis can also be applied to any real-number Bose-Chaudhuri-Hocquenghem (BCH) code because of the Vandermonde structure of our parity-check matrix.

The well-known Akaike Information Criterion (AIC) has been used in the signal processing community for identifying the number of signals in array processing, where multiple observations (equivalent to the syndrome in our setup) are available. It would be interesting to extend the first part of our decoding algorithm to the case when more than one syndrome is available at the decoder. This setup, of course, is not applicable in coding theory, but any results can be directly applied to the array processing problem.

Moreover, in our analysis we assumed an upper bound on the quantization noise. One possible

Fig. 6. Precision needed for guaranteed correct identification of the number of errors for four and five large errors (theoretical versus actual bound; fractional bits versus log₁₀(ε_min)).

future research direction is to investigate how our decoding algorithm would be affected by assuming a distribution on the quantization noise. Finally, we are looking into developing real BCH encoding algorithms that take advantage of the analysis in this paper and have applications to image processing.

REFERENCES

[1] T. G. Marshall Jr., "Coding of real-number sequences for error correction: a digital signal processing problem," IEEE Journal on Selected Areas in Communications, vol. SAC-2, no. 2, March 1984.
[2] A. Gabay, P. Duhamel and O. Rioul, "Spectral interpolation coder for impulse noise cancellation over a binary symmetric channel," European Signal Processing Conference (EUSIPCO 2000), Tampere, Finland, Sept. 2000.
[3] G. Rath and C. Guillemot, "Subspace algorithms for error localization with quantized DFT codes," IEEE Transactions on Communications, vol. 52, no. 12, December 2004.
[4] S. M. Kay, Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ: Prentice Hall, 1988.
[5] R. E. Blahut, Algebraic Methods for Signal Processing and Communications Coding. New York, USA: Springer-Verlag, 1992.
[6] J. K. Wolf, "Redundancy, the discrete Fourier transform, and impulse noise cancellation," IEEE Transactions on Communications, vol. COM-31, no. 3, March 1983.
[7] P. J. S. G. Ferreira and J. M. N. Vieira, "Locating and correcting errors in images," Proceedings of the 4th IEEE International Conference on Image Processing (ICIP), Santa Barbara, CA, USA, October 1997.
[8] R. E. Blahut, Algebraic Codes for Data Transmission. Cambridge, UK: Cambridge University Press, 2002.
[9] A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1999.
[10] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press, 1985.
[11] P. J. S. G. Ferreira, "The eigenvalues of matrices that occur in certain interpolation problems," IEEE Transactions on Signal Processing, vol. 45, no. 8, August 1997.

[12] H. Stark and J. W. Woods, Probability and Random Processes with Applications to Signal Processing. Upper Saddle River, NJ: Prentice Hall, 2002.


More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

New Algebraic Decoding of (17,9,5) Quadratic Residue Code by using Inverse Free Berlekamp-Massey Algorithm (IFBM)

New Algebraic Decoding of (17,9,5) Quadratic Residue Code by using Inverse Free Berlekamp-Massey Algorithm (IFBM) International Journal of Computational Intelligence Research (IJCIR). ISSN: 097-87 Volume, Number 8 (207), pp. 205 2027 Research India Publications http://www.ripublication.com/ijcir.htm New Algebraic

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x].

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x]. The second exam will be on Friday, October 28, 2. It will cover Sections.7,.8, 3., 3.2, 3.4 (except 3.4.), 4. and 4.2 plus the handout on calculation of high powers of an integer modulo n via successive

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

The Fast Fourier Transform

The Fast Fourier Transform The Fast Fourier Transform 1 Motivation: digital signal processing The fast Fourier transform (FFT) is the workhorse of digital signal processing To understand how it is used, consider any signal: any

More information

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Cung Nguyen and Robert G. Redinbo Department of Electrical and Computer Engineering University of California, Davis, CA email: cunguyen,

More information

Proof: Let the check matrix be

Proof: Let the check matrix be Review/Outline Recall: Looking for good codes High info rate vs. high min distance Want simple description, too Linear, even cyclic, plausible Gilbert-Varshamov bound for linear codes Check matrix criterion

More information

MATH 433 Applied Algebra Lecture 22: Review for Exam 2.

MATH 433 Applied Algebra Lecture 22: Review for Exam 2. MATH 433 Applied Algebra Lecture 22: Review for Exam 2. Topics for Exam 2 Permutations Cycles, transpositions Cycle decomposition of a permutation Order of a permutation Sign of a permutation Symmetric

More information

Error Correction Methods

Error Correction Methods Technologies and Services on igital Broadcasting (7) Error Correction Methods "Technologies and Services of igital Broadcasting" (in Japanese, ISBN4-339-06-) is published by CORONA publishing co., Ltd.

More information

1.6: Solutions 17. Solution to exercise 1.6 (p.13).

1.6: Solutions 17. Solution to exercise 1.6 (p.13). 1.6: Solutions 17 A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for ( N K) to the next order, we find: ( N N/2 ) 2 N 1 2πN/4. (1.40) This approximation

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER /$ IEEE

4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER /$ IEEE 4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER 2008 List Decoding of Biorthogonal Codes the Hadamard Transform With Linear Complexity Ilya Dumer, Fellow, IEEE, Grigory Kabatiansky,

More information

EE512: Error Control Coding

EE512: Error Control Coding EE51: Error Control Coding Solution for Assignment on BCH and RS Codes March, 007 1. To determine the dimension and generator polynomial of all narrow sense binary BCH codes of length n = 31, we have to

More information

GENERALIZED FINITE ALGORITHMS FOR CONSTRUCTING HERMITIAN MATRICES WITH PRESCRIBED DIAGONAL AND SPECTRUM

GENERALIZED FINITE ALGORITHMS FOR CONSTRUCTING HERMITIAN MATRICES WITH PRESCRIBED DIAGONAL AND SPECTRUM SIAM J. MATRIX ANAL. APPL. Vol. 27, No. 1, pp. 61 71 c 2005 Society for Industrial and Applied Mathematics GENERALIZED FINITE ALGORITHMS FOR CONSTRUCTING HERMITIAN MATRICES WITH PRESCRIBED DIAGONAL AND

More information

Quantum Error Correcting Codes and Quantum Cryptography. Peter Shor M.I.T. Cambridge, MA 02139

Quantum Error Correcting Codes and Quantum Cryptography. Peter Shor M.I.T. Cambridge, MA 02139 Quantum Error Correcting Codes and Quantum Cryptography Peter Shor M.I.T. Cambridge, MA 02139 1 We start out with two processes which are fundamentally quantum: superdense coding and teleportation. Superdense

More information

arxiv: v1 [cs.sc] 17 Apr 2013

arxiv: v1 [cs.sc] 17 Apr 2013 EFFICIENT CALCULATION OF DETERMINANTS OF SYMBOLIC MATRICES WITH MANY VARIABLES TANYA KHOVANOVA 1 AND ZIV SCULLY 2 arxiv:1304.4691v1 [cs.sc] 17 Apr 2013 Abstract. Efficient matrix determinant calculations

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

MATH32031: Coding Theory Part 15: Summary

MATH32031: Coding Theory Part 15: Summary MATH32031: Coding Theory Part 15: Summary 1 The initial problem The main goal of coding theory is to develop techniques which permit the detection of errors in the transmission of information and, if necessary,

More information

On the Frequency-Domain Properties of Savitzky-Golay Filters

On the Frequency-Domain Properties of Savitzky-Golay Filters On the Frequency-Domain Properties of Savitzky-Golay Filters Ronald W Schafer HP Laboratories HPL-2-9 Keyword(s): Savitzky-Golay filter, least-squares polynomial approximation, smoothing Abstract: This

More information

U Logo Use Guidelines

U Logo Use Guidelines COMP2610/6261 - Information Theory Lecture 22: Hamming Codes U Logo Use Guidelines Mark Reid and Aditya Menon logo is a contemporary n of our heritage. presents our name, d and our motto: arn the nature

More information

QM and Angular Momentum

QM and Angular Momentum Chapter 5 QM and Angular Momentum 5. Angular Momentum Operators In your Introductory Quantum Mechanics (QM) course you learned about the basic properties of low spin systems. Here we want to review that

More information

An Analytic Approach to the Problem of Matroid Representibility: Summer REU 2015

An Analytic Approach to the Problem of Matroid Representibility: Summer REU 2015 An Analytic Approach to the Problem of Matroid Representibility: Summer REU 2015 D. Capodilupo 1, S. Freedman 1, M. Hua 1, and J. Sun 1 1 Department of Mathematics, University of Michigan Abstract A central

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. Broadly, these techniques can be used in data analysis and visualization

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x)

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x) Outline Recall: For integers Euclidean algorithm for finding gcd s Extended Euclid for finding multiplicative inverses Extended Euclid for computing Sun-Ze Test for primitive roots And for polynomials

More information

On the Behavior of Information Theoretic Criteria for Model Order Selection

On the Behavior of Information Theoretic Criteria for Model Order Selection IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 8, AUGUST 2001 1689 On the Behavior of Information Theoretic Criteria for Model Order Selection Athanasios P. Liavas, Member, IEEE, and Phillip A. Regalia,

More information

Soft-Decision Decoding Using Punctured Codes

Soft-Decision Decoding Using Punctured Codes IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 47, NO 1, JANUARY 2001 59 Soft-Decision Decoding Using Punctured Codes Ilya Dumer, Member, IEEE Abstract Let a -ary linear ( )-code be used over a memoryless

More information

Image and Multidimensional Signal Processing

Image and Multidimensional Signal Processing Image and Multidimensional Signal Processing Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ Image Compression 2 Image Compression Goal: Reduce amount

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Theorems. Least squares regression

Theorems. Least squares regression Theorems In this assignment we are trying to classify AML and ALL samples by use of penalized logistic regression. Before we indulge on the adventure of classification we should first explain the most

More information

Error-Correcting Codes

Error-Correcting Codes Error-Correcting Codes HMC Algebraic Geometry Final Project Dmitri Skjorshammer December 14, 2010 1 Introduction Transmission of information takes place over noisy signals. This is the case in satellite

More information

Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach

Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach Shu Lin, Shumei Song, Lan Lan, Lingqi Zeng and Ying Y Tai Department of Electrical & Computer Engineering University of California,

More information

Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels

Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels Tight Lower Bounds on the Ergodic Capacity of Rayleigh Fading MIMO Channels Özgür Oyman ), Rohit U. Nabar ), Helmut Bölcskei 2), and Arogyaswami J. Paulraj ) ) Information Systems Laboratory, Stanford

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination

More information

16.36 Communication Systems Engineering

16.36 Communication Systems Engineering MIT OpenCourseWare http://ocw.mit.edu 16.36 Communication Systems Engineering Spring 2009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 16.36: Communication

More information

Implementation of Galois Field Arithmetic. Nonbinary BCH Codes and Reed-Solomon Codes

Implementation of Galois Field Arithmetic. Nonbinary BCH Codes and Reed-Solomon Codes BCH Codes Wireless Information Transmission System Lab Institute of Communications Engineering g National Sun Yat-sen University Outline Binary Primitive BCH Codes Decoding of the BCH Codes Implementation

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Binary Convolutional Codes of High Rate Øyvind Ytrehus

Binary Convolutional Codes of High Rate Øyvind Ytrehus Binary Convolutional Codes of High Rate Øyvind Ytrehus Abstract The function N(r; ; d free ), defined as the maximum n such that there exists a binary convolutional code of block length n, dimension n

More information

Robust Turbo Analog Error Correcting Codes Based on Analog CRC Verification

Robust Turbo Analog Error Correcting Codes Based on Analog CRC Verification 1 Robust Turbo Analog Error Correcting Codes Based on Analog CRC Verification Avi Zanko,, Amir Leshem (Senior Member, IEEE), Ephraim Zehavi (Fellow, IEEE) Abstract In this paper, a new analog error correcting

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information