Lecture 28: Generalized Minimum Distance Decoding

Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007)

Lecture 28: Generalized Minimum Distance Decoding
November 5, 2007
Lecturer: Atri Rudra    Scribe: Sandipan Kundu & Atri Rudra

1 Decoding From Errors and Erasures

So far, we have seen the following result concerning decoding of RS codes:

Theorem 1.1. An $[n, k]_q$ RS code can be corrected from $e$ errors (or $s$ erasures) as long as $e < \frac{n-k+1}{2}$ (or $s < n - k + 1$) in $O(n^3)$ time.

Next, we show that we can get the best of the errors and erasures worlds simultaneously:

Theorem 1.2. An $[n, k]_q$ RS code can be corrected from $e$ errors and $s$ erasures in $O(n^3)$ time as long as
$$2e + s < n - k + 1. \qquad (1)$$

Proof. Given a received word $\mathbf{y} \in (\mathbb{F}_q \cup \{?\})^n$ with $s$ erasures and $e$ errors, let $\mathbf{y}'$ be the sub-vector of $\mathbf{y}$ with no erasures. Then $\mathbf{y}' \in \mathbb{F}_q^{n-s}$, which is a valid received word for an $[n-s, k]_q$ RS code. Now run the Berlekamp-Welch algorithm on $\mathbf{y}'$. It can correct $\mathbf{y}'$ as long as $2e < (n-s) - k + 1$, and this condition is implied by (1). Thus, we have shown that one can correct the $e$ errors under (1).

Now we have to show that one can also correct the $s$ erasures under (1). Let $\mathbf{z}'$ be the output after correcting the $e$ errors, and extend $\mathbf{z}'$ to $\mathbf{z} \in (\mathbb{F}_q \cup \{?\})^n$ in the natural way (re-inserting a ? at every erased position). Finally, run the erasure decoding algorithm on $\mathbf{z}$. This works as long as $s < n - k + 1$, which in turn is true by (1).

The time complexity of the algorithm above is $O(n^3)$, as both the Berlekamp-Welch algorithm and the erasure decoding algorithm can be implemented in cubic time.

Next, we will use the errors-and-erasures decoding algorithm above to design decoding algorithms for certain concatenated codes that can be decoded up to half their design distance.
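To make the two-phase strategy in the proof of Theorem 1.2 concrete, here is a small self-contained sanity check in Python. It is only an illustrative sketch: the error-correction phase uses brute-force minimum-distance decoding of the punctured $[n-s, k]$ code in place of the Berlekamp-Welch algorithm (so it runs in time exponential in $k$, unlike the $O(n^3)$ algorithm of the theorem), and the erasure phase is just re-encoding the recovered message, which for RS codes is exactly what erasure decoding does. The parameters (GF(13), $n=8$, $k=2$, $e=2$ errors, $s=2$ erasures, so $2e+s = 6 < n-k+1 = 7$) are an arbitrary example.

```python
# Sanity check of the errors-and-erasures strategy of Theorem 1.2 over GF(13).
from itertools import product

P = 13                     # prime field size
n, k = 8, 2                # [n, k] RS code with distance n - k + 1 = 7
points = list(range(n))    # evaluation points 0, 1, ..., n-1

def encode(msg):
    # Evaluate the message polynomial (coefficients msg, degree < k) at every point.
    return [sum(coeff * pow(x, j, P) for j, coeff in enumerate(msg)) % P
            for x in points]

def decode(y):
    # y[i] is None at an erased position.
    kept = [i for i in range(n) if y[i] is not None]
    # Phase 1: nearest codeword of the punctured [n - s, k] code, by brute force
    # (stand-in for Berlekamp-Welch; corrects e errors whenever 2e < (n - s) - k + 1).
    best = min(product(range(P), repeat=k),
               key=lambda m: sum(encode(m)[i] != y[i] for i in kept))
    # Phase 2: re-encoding the recovered message fills in the s erased symbols
    # (erasure decoding for RS codes is polynomial interpolation/evaluation).
    return encode(best)

msg = [5, 7]                   # message polynomial 5 + 7x
codeword = encode(msg)
y = list(codeword)
y[0] = (y[0] + 1) % P          # e = 2 errors ...
y[3] = (y[3] + 4) % P
y[5] = None                    # ... and s = 2 erasures: 2e + s = 6 < 7
y[6] = None
assert decode(y) == codeword
```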

2 Generalized Minimum Distance Decoding

In the last lecture, we studied the natural decoding algorithm for concatenated codes: we performed MLD (maximum likelihood decoding) on the inner code and then fed the resulting vector to a unique decoding algorithm for the outer code. As was mentioned last time, a drawback of this algorithm is that it does not take into account the information that MLD can offer. E.g., the situations where a given inner received word is at Hamming distance one from an inner codeword vs. at (almost) half the inner code distance are treated the same by the algorithm. It seems natural to try and make use of this information. Forney in 1966 devised such an algorithm, which is called Generalized Minimum Distance (or GMD) decoding [1]. We study this algorithm next; our treatment follows that of [2].

In the sequel, let $C_{out}$ be an $[N, K, D]_{q^k}$ code that can be decoded from $e$ errors and $s$ erasures in polynomial time as long as $2e + s < D$. Let $C_{in}$ be an $[n, k, d]_q$ code with $k = O(\log N)$. We will in fact look at three versions of the GMD decoding algorithm. The first two will be randomized algorithms while the last will be a deterministic algorithm. We begin with the randomized version as it presents most of the ideas in the final algorithm.

2.1 GMD algorithm (Version 1)

Before we state the algorithm, we look at a special case of the problem to build up some intuition. Consider a received word $\mathbf{y} = (y_1, \dots, y_N) \in [q^n]^N$ with the following special property: for every $1 \le i \le N$, either $y_i = C_{in}(y_i')$ or $\Delta(y_i, C_{in}(y_i')) \ge d/2$, where for every $1 \le i \le N$ we define $y_i' = MLD_{C_{in}}(y_i)$. Now we claim that if $\Delta(\mathbf{y}, C_{out} \circ C_{in}) < \frac{dD}{2}$, then there are $< D$ positions $i$ such that $\Delta(y_i, C_{in}(y_i')) \ge d/2$ (call such a position bad). This is because for every bad position $i$, by the definition of $y_i'$ (i.e., $C_{in}(y_i')$ is a closest $C_{in}$ codeword to $y_i$), $y_i$ is at distance at least $d/2$ from every codeword of $C_{in}$. So if there were at least $D$ bad positions, this would imply $\Delta(\mathbf{y}, C_{out} \circ C_{in}) \ge \frac{dD}{2}$, which is a contradiction. Now note that we can decode $\mathbf{y}$ by just declaring an erasure at every bad position and running the erasure decoding algorithm for $C_{out}$ on the resulting vector.

The GMD algorithm below generalizes these observations to the general case:

Input: $\mathbf{y} = (y_1, \dots, y_N) \in [q^n]^N$

Step 1: For every $1 \le i \le N$:
  (a) Compute $y_i' = MLD_{C_{in}}(y_i)$.
  (b) Compute $w_i = \min\left(\Delta(C_{in}(y_i'), y_i), \frac{d}{2}\right)$.
  (c) With probability $\frac{2w_i}{d}$, set $y_i'' \leftarrow\ ?$; otherwise set $y_i'' = y_i'$.

Step 2: Run the errors-and-erasures algorithm for $C_{out}$ on $\mathbf{y}'' = (y_1'', \dots, y_N'')$.

Note that if for every $1 \le i \le N$, either $C_{in}(y_i') = y_i$ or $\Delta(C_{in}(y_i'), y_i) \ge d/2$, then the GMD algorithm above does exactly the same as in the discussion above. By our choice of $C_{out}$ and $C_{in}$, it is easy to see that the algorithm above runs in polynomial time. More importantly, we will show that the final (deterministic) version of the algorithm above can do unique decoding of $C_{out} \circ C_{in}$ up to half its design distance.
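The following Python fragment sketches one round of the Version 1 algorithm above. The Hamming-distance helper and the brute-force inner MLD are spelled out; `inner_codewords` (a map from inner messages to $C_{in}$ codewords) and `outer_decode` (an errors-and-erasures decoder for $C_{out}$, assumed to succeed whenever $2e' + s' < D$) are placeholders that a real instantiation would have to supply.

```python
import random

def hamming(u, v):
    # Hamming distance between two equal-length sequences.
    return sum(a != b for a, b in zip(u, v))

def gmd_decode_v1(y_blocks, inner_codewords, d, outer_decode):
    """One run of GMD Version 1.
    y_blocks: the N inner received words y_1, ..., y_N.
    inner_codewords: dict mapping each inner message to its C_in codeword.
    d: minimum distance of C_in.
    outer_decode: errors-and-erasures decoder for C_out (None marks an erasure)."""
    y_pp = []                                    # y'' = (y''_1, ..., y''_N)
    for y_i in y_blocks:
        # Step 1(a): brute-force maximum likelihood decoding of the inner code.
        m_i, cw_i = min(inner_codewords.items(),
                        key=lambda item: hamming(item[1], y_i))
        # Step 1(b): w_i = min(Delta(C_in(y'_i), y_i), d/2).
        w_i = min(hamming(cw_i, y_i), d / 2)
        # Step 1(c): erase position i with probability 2*w_i/d, else keep y'_i.
        y_pp.append(None if random.random() < 2 * w_i / d else m_i)
    # Step 2: errors-and-erasures decoding of the outer code on y''.
    return outer_decode(y_pp)
```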

Theorem 2.1. Let $\mathbf{y}$ be a received word such that there exists a codeword $\mathbf{c} = (c_1, \dots, c_N) \in C_{out} \circ C_{in} \subseteq [q^n]^N$ with $\Delta(\mathbf{c}, \mathbf{y}) < \frac{Dd}{2}$. Then the deterministic GMD algorithm outputs $\mathbf{c}$.

As a first step, we will show that in expectation the randomized GMD algorithm above works.

Lemma 2.2. Let the assumption in Theorem 2.1 hold. Further, if $\mathbf{y}''$ has $e'$ errors and $s'$ erasures (when compared with $\mathbf{c}$) after Step 1, then $\mathbb{E}[2e' + s'] < D$.

Note that if $2e' + s' < D$, then the algorithm in Step 2 will output $\mathbf{c}$. The lemma above says that in expectation, this is indeed the case.

Proof of Lemma 2.2. For every $1 \le i \le N$, define $e_i = \Delta(y_i, c_i)$. Note that this implies
$$\sum_{i=1}^{N} e_i < \frac{Dd}{2}. \qquad (2)$$

Next, for every $1 \le i \le N$, we define two indicator variables:
$$X_i^? = 1 \text{ iff } y_i'' =\ ?, \qquad \text{and} \qquad X_i^e = 1 \text{ iff } C_{in}(y_i'') \ne c_i \text{ and } y_i'' \ne\ ?.$$

We claim that we are done if we can show that for every $1 \le i \le N$:
$$\mathbb{E}[2X_i^e + X_i^?] \le \frac{2e_i}{d}. \qquad (3)$$

Indeed, by definition $e' = \sum_i X_i^e$ and $s' = \sum_i X_i^?$. Further, by the linearity of expectation, (3) gives
$$\mathbb{E}[2e' + s'] \le \frac{2}{d}\sum_i e_i < D,$$
where the second inequality follows from (2).

To complete the proof, we will show (3). Towards this end, fix an arbitrary $1 \le i \le N$. We prove (3) by a case analysis.

Case 1: ($c_i = C_{in}(y_i')$.) First, we note that if $y_i'' \ne\ ?$, then $y_i'' = y_i'$ and $C_{in}(y_i'') = c_i$, so $X_i^e = 0$. This, along with the fact that $\Pr[y_i'' =\ ?] = \frac{2w_i}{d}$, implies
$$\mathbb{E}[X_i^?] = \Pr[X_i^? = 1] = \frac{2w_i}{d}, \qquad \text{and} \qquad \mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 0.$$

Further, by definition we have
$$w_i = \min\left(\Delta(C_{in}(y_i'), y_i), \frac{d}{2}\right) \le \Delta(C_{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i.$$
The three relations above prove (3) for this case.

Case 2: ($c_i \ne C_{in}(y_i')$.) As in the previous case, we still have
$$\mathbb{E}[X_i^?] = \frac{2w_i}{d}.$$
Now in this case, if an erasure is not declared at position $i$, then $X_i^e = 1$. Thus, we have
$$\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 1 - \frac{2w_i}{d}.$$
Next, we claim that, as $c_i \ne C_{in}(y_i')$,
$$e_i + w_i \ge d, \qquad (4)$$
which implies
$$\mathbb{E}[2X_i^e + X_i^?] = 2 - \frac{2w_i}{d} \le \frac{2e_i}{d},$$
as desired.

To complete the proof, we show (4) via yet another case analysis.

Case 2.1: ($w_i = \Delta(C_{in}(y_i'), y_i) < d/2$.) By the definition of $e_i$, we have
$$e_i + w_i = \Delta(y_i, c_i) + \Delta(C_{in}(y_i'), y_i) \ge \Delta(c_i, C_{in}(y_i')) \ge d,$$
where the first inequality follows from the triangle inequality and the second inequality follows from the fact that $C_{in}$ has distance $d$ and $c_i \ne C_{in}(y_i')$.

Case 2.2: ($w_i = \frac{d}{2} \le \Delta(C_{in}(y_i'), y_i)$.) As $y_i'$ is obtained from MLD, we have $\Delta(C_{in}(y_i'), y_i) \le \Delta(c_i, y_i)$. This, along with the assumption on $\Delta(C_{in}(y_i'), y_i)$, gives
$$e_i = \Delta(c_i, y_i) \ge \Delta(C_{in}(y_i'), y_i) \ge \frac{d}{2}.$$
This in turn implies $e_i + w_i \ge d$, as desired.
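Putting the two cases together: in either case $\mathbb{E}[2X_i^e + X_i^?] \le \frac{2e_i}{d}$, which is exactly (3). The chain of inequalities that finishes the proof of Lemma 2.2 (a consolidation of the steps above, nothing new) is
$$\mathbb{E}[2e' + s'] \;=\; \sum_{i=1}^{N}\left(2\,\mathbb{E}[X_i^e] + \mathbb{E}[X_i^?]\right) \;\le\; \sum_{i=1}^{N} \frac{2e_i}{d} \;=\; \frac{2}{d}\sum_{i=1}^{N} e_i \;<\; \frac{2}{d}\cdot\frac{Dd}{2} \;=\; D,$$
where the first step is linearity of expectation, the middle inequality is (3), and the strict inequality is (2).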

2.2 Version 2 of the GMD algorithm

In the first version of the GMD algorithm, Step 1(c) used fresh randomness for each $i$. Next we look at another randomized version of the GMD algorithm that uses the same randomness for every $i$. In particular, consider the following algorithm:

Input: $\mathbf{y} = (y_1, \dots, y_N) \in [q^n]^N$

Step 1: Pick $\theta \in [0, 1]$ uniformly at random. Then for every $1 \le i \le N$:
  (a) Compute $y_i' = MLD_{C_{in}}(y_i)$.
  (b) Compute $w_i = \min\left(\Delta(C_{in}(y_i'), y_i), \frac{d}{2}\right)$.
  (c) If $\theta < \frac{2w_i}{d}$, set $y_i'' \leftarrow\ ?$; otherwise set $y_i'' = y_i'$.

Step 2: Run the errors-and-erasures algorithm for $C_{out}$ on $\mathbf{y}'' = (y_1'', \dots, y_N'')$.

We note that in the proof of Lemma 2.2, we only used the randomness to show that $\Pr[y_i'' =\ ?] = \frac{2w_i}{d}$. In the current version of the GMD algorithm, we have
$$\Pr[y_i'' =\ ?] = \Pr\left[\theta \in \left[0, \frac{2w_i}{d}\right)\right] = \frac{2w_i}{d},$$
as before (the last equality follows from our choice of $\theta$). One can verify that the proof of Lemma 2.2 can be used to show that even for Version 2 of GMD, $\mathbb{E}[2e' + s'] < D$. (A short sketch of this shared-randomness variant appears after the references.) Next lecture, we will see how to derandomize the above version of the GMD algorithm by choosing $\theta$ from a polynomially sized set (as opposed to the current infinite set $[0, 1]$).

References

[1] G. David Forney. Generalized Minimum Distance decoding. IEEE Transactions on Information Theory, 12:125-131, 1966.

[2] Venkatesan Guruswami. Error-Correcting Codes: Constructions and Algorithms, Lecture no. 11. Available at http://www.cs.washington.edu/education/courses/533/06au/, November 2006.
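As promised above, here is the corresponding sketch for Version 2. The only change from the Version 1 snippet is that a single threshold $\theta$ is drawn once and shared by all positions; `inner_codewords` and `outer_decode` are the same placeholder interfaces as before.

```python
import random

def hamming(u, v):
    # Hamming distance between two equal-length sequences.
    return sum(a != b for a, b in zip(u, v))

def gmd_decode_v2(y_blocks, inner_codewords, d, outer_decode):
    """GMD Version 2: identical to Version 1 except for one shared threshold theta."""
    theta = random.random()                  # Step 1: pick theta in [0, 1) once
    y_pp = []
    for y_i in y_blocks:
        m_i, cw_i = min(inner_codewords.items(),
                        key=lambda item: hamming(item[1], y_i))   # Step 1(a)
        w_i = min(hamming(cw_i, y_i), d / 2)                      # Step 1(b)
        # Step 1(c): erase iff theta < 2*w_i/d; Pr[erasure] is still 2*w_i/d.
        y_pp.append(None if theta < 2 * w_i / d else m_i)
    return outer_decode(y_pp)                # Step 2
```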