Information and Coding Theory (Autumn 2013)
Lecture 12: November 6, 2013
Lecturer: Madhur Tulsiani
Scribe: David Kim

Recall: We were looking at codes of the form $C : \mathbb{F}_q^k \to \mathbb{F}_q^n$, where $q$ is prime, $k$ is the message length, and $n$ is the block length of the code. We also viewed $C$ (its range) as a set in $\mathbb{F}_q^n$ and defined the distance of the code as
$$\Delta(C) := \min_{x, y \in C,\, x \neq y} \{\Delta(x, y)\},$$
where $\Delta(x, y)$ is the Hamming distance between $x$ and $y$. We showed that a code $C$ can correct $t$ errors iff $\Delta(C) \geq 2t + 1$.

1 Hamming Code

The following is an example of the Hamming code from $\mathbb{F}_2^4$ to $\mathbb{F}_2^7$:

Example 1.1. Let $C : \mathbb{F}_2^4 \to \mathbb{F}_2^7$, where
$$C(x_1, x_2, x_3, x_4) = (x_1, x_2, x_3, x_4,\; x_2 + x_3 + x_4,\; x_1 + x_3 + x_4,\; x_1 + x_2 + x_4).$$

Note that each coordinate of the image is a linear function of the $x_i$'s, i.e., one can express $C$ as a matrix multiplication:
$$C(x_1, x_2, x_3, x_4) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$

Definition 1.2 (Linear Codes). A code $C : \mathbb{F}_q^k \to \mathbb{F}_q^n$ is a linear code if for all $u, v \in \mathbb{F}_q^k$ and $\alpha \in \mathbb{F}_q$, $C(\alpha u + v) = \alpha C(u) + C(v)$; thus the image of $C$ is a subspace of $\mathbb{F}_q^n$.

Since a linear code is a linear map from one finite-dimensional vector space to another, we can write it as a matrix of finite size. That is, there is a corresponding $G \in \mathbb{F}_q^{n \times k}$ such that $C(x) = Gx$ for all $x \in \mathbb{F}_q^k$. If the code has nonzero distance, then the rank of $G$ must be $k$ (otherwise there exist $x \neq y \in \mathbb{F}_q^k$ such that $Gx = Gy$). Hence, the null space of $G^T$ has dimension $n - k$, so let $b_1, \ldots, b_{n-k}$ be a basis of the null space of $G^T$.

Definition 1.3 (Parity Check Matrix). Let $b_1, \ldots, b_{n-k}$ be a basis for the null space of $G^T$ corresponding to a linear code $C$. Then $H \in \mathbb{F}_q^{(n-k) \times n}$, defined by
$$H^T = \begin{bmatrix} b_1 & b_2 & \cdots & b_{n-k} \end{bmatrix},$$
is called the parity check matrix of $C$.
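To make the generator-matrix view concrete, here is a minimal Python sketch (my own illustration, not part of the notes) that encodes a 4-bit message with the matrix $G$ above and checks the result against the explicit formula of Example 1.1.

```python
import numpy as np

# Generator matrix G of the [7,4] Hamming code from Example 1.1 (arithmetic over F_2).
G = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
])

def encode(x):
    """Encode a length-4 message x over F_2 as C(x) = Gx (mod 2)."""
    return (G @ np.array(x)) % 2

x1, x2, x3, x4 = 1, 0, 1, 1
codeword = encode([x1, x2, x3, x4])
# The explicit formula from Example 1.1 gives the same codeword.
explicit = np.array([x1, x2, x3, x4,
                     (x2 + x3 + x4) % 2,
                     (x1 + x3 + x4) % 2,
                     (x1 + x2 + x4) % 2])
assert (codeword == explicit).all()
print(codeword)  # [1 0 1 1 0 1 0]
```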

Since $G^T H^T = 0 \Leftrightarrow HG = 0$, we have $(HG)x = 0$ for all $x \in \mathbb{F}_q^k$, i.e., $Hz = 0$ for all $z \in C$. Moreover, since the columns of $H^T$ are a basis for the null space of $G^T$, we have that $z \in C \Leftrightarrow Hz = 0$. So the parity check matrix gives us a way to quickly check whether a word $z$ is a codeword, by checking the parities of some bits of $z$ (each row of $H$ gives a parity constraint on $z$). Also, one can equivalently define a linear code by giving either $G$ or the parity check matrix $H$.

Example 1.4. The parity check matrix of our example Hamming code is:
$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}$$
Note that the $i$-th column is the integer $i$ in binary. One can easily check that $HG = 0$.

Now suppose $z = (z_1, \ldots, z_7)^T$ is our codeword and we make a single error in the $i$-th entry. Then the received word is
$$z + e_i = (z_1, \ldots, z_{i-1}, z_i + 1, z_{i+1}, \ldots, z_7)^T,$$
and $H(z + e_i) = Hz + He_i = He_i = H_i$, the $i$-th column of $H$, which reads $i$ in binary. So this is a very efficient decoding algorithm based just on parity checking.

Since the Hamming code $C$ can correct at least $t = 1$ error, we must have that $\Delta(C) \geq 2t + 1 = 3$. Verify that the distance is exactly 3 using the following characterization of distance for linear codes.

Exercise 1.5. For $z \in \mathbb{F}_q^n$, let $\mathrm{wt}(z) = |\{i \in [n] : z_i \neq 0\}|$. Prove that for a linear code $C$,
$$\Delta(C) = \min_{z \in C,\, z \neq 0} \mathrm{wt}(z).$$

One can generalize the Hamming code to larger message and block lengths: we can create a parity check matrix $H \in \mathbb{F}_q^{(n-k) \times n}$ whose $i$-th column reads $i$ in binary.
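The single-error decoding procedure described above is easy to run in code. The following Python sketch (again my own illustration, not from the notes) verifies $HG = 0$ and corrects one flipped bit by reading the syndrome as a binary number.

```python
import numpy as np

# Parity check matrix H from Example 1.4: the i-th column is i written in binary.
H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])
# Generator matrix G from Example 1.1.
G = np.array([
    [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
    [0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1],
])
assert not ((H @ G) % 2).any()  # HG = 0, so Hz = 0 for every codeword z

def correct_single_error(received):
    """Syndrome decoding: H(z + e_i) = He_i spells the error position i in binary."""
    syndrome = (H @ received) % 2
    i = int("".join(str(int(bit)) for bit in syndrome), 2)
    corrected = received.copy()
    if i != 0:
        corrected[i - 1] ^= 1  # positions are 1-indexed in the notes
    return corrected

z = (G @ np.array([1, 0, 1, 1])) % 2   # a codeword
noisy = z.copy()
noisy[4] ^= 1                          # flip the 5th entry, i.e. i = 5
assert (correct_single_error(noisy) == z).all()
```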

2 Hamming Bound

We now show an optimality bound on the size of the code, starting with the case of distance-3 codes and then generalizing to distance-$d$ codes.

Theorem 2.1. Let $C : \mathbb{F}_2^k \to \mathbb{F}_2^n$ be any distance-3 code, i.e., $\Delta(C) = 3$. Then
$$|C| = 2^k \leq \frac{2^n}{n + 1}.$$

Proof: For each $z \in C$, let $B(z)$ be the ball of size $n + 1$ consisting of $z$ and the $n$ elements of $\mathbb{F}_2^n$ (not in $C$) at distance 1 from $z$. Then the balls formed by the codewords in $C$ are disjoint: if $B(z)$ and $B(z')$ intersected, then $\Delta(z, z') \leq 2$ by the triangle inequality. For each codeword $z \in C$ we have $|B(z)| = n + 1$ words, so $|C| \cdot (n + 1) \leq 2^n$.

Note that our example Hamming code from $\mathbb{F}_2^4$ to $\mathbb{F}_2^7$ satisfied $|C| = 2^4 = \frac{2^7}{8}$, so it was an optimal distance-3 code. Generalizing to distance-$d$ codes, we have:

Theorem 2.2 (Hamming Bound). Let $C : \mathbb{F}_2^k \to \mathbb{F}_2^n$ be any distance-$d$ code, i.e., $\Delta(C) = d$. Then
$$|C| = 2^k \leq \frac{2^n}{\mathrm{vol}\left(B_{\frac{d-1}{2}}\right)},$$
where $\mathrm{vol}\left(B_{\frac{d-1}{2}}\right) = \sum_{i=0}^{\lfloor (d-1)/2 \rfloor} \binom{n}{i}$ is the number of words at distance at most $\frac{d-1}{2}$ from any fixed codeword.

Remark 2.3. The Hamming bound also gives us a bound on the rate of the code in terms of entropy (recall: the rate of the code is $\frac{k}{n}$). Let $d = \delta n$ for $\delta \leq \frac{1}{2}$. Since $\sum_{i=0}^{\ell} \binom{n}{i} \geq 2^{nH(\ell/n) - o(n)}$ for $\ell \leq \frac{n}{2}$, we have:
$$\frac{k}{n} \leq 1 - H(\delta/2) + o(1).$$

3 Reed-Solomon Codes

We now look at Reed-Solomon codes over $\mathbb{F}_q$. These are optimal codes which can achieve a very large distance. However, they have the drawback that they need $q \geq n$.

Definition 3.1 (Reed-Solomon Code). Assume $q \geq n$ and fix a set $S = \{a_1, \ldots, a_n\} \subseteq \mathbb{F}_q$ of distinct elements, so $|S| = n$. For each message $(m_0, \ldots, m_{k-1}) \in \mathbb{F}_q^k$, consider the polynomial $P(x) = m_0 + m_1 x + \cdots + m_{k-1} x^{k-1}$. Then the Reed-Solomon code is defined as:
$$C(m_0, \ldots, m_{k-1}) = (P(a_1), \ldots, P(a_n)).$$

Remark 3.2. Reed-Solomon codes can again be encoded using inner codes to create concatenated codes, which can work with smaller $q$. However, we will not discuss these.
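To see Definition 3.1 in action, here is a small Python sketch (illustrative parameters of my choosing, not from the notes) that encodes messages over the prime field $\mathbb{F}_{11}$ and checks that two distinct codewords agree in at most $k - 1$ positions, which is exactly what the next claim asserts.

```python
# Reed-Solomon encoding over a prime field F_q (Definition 3.1).
# Illustrative parameters: q = 11, n = 7, k = 3, S = {1, ..., 7}.
q, n, k = 11, 7, 3
S = list(range(1, n + 1))   # n distinct evaluation points; requires q >= n

def rs_encode(message):
    """Evaluate P(x) = m_0 + m_1 x + ... + m_{k-1} x^{k-1} at every a in S."""
    assert len(message) == k
    return [sum(m * pow(a, j, q) for j, m in enumerate(message)) % q for a in S]

c1 = rs_encode([2, 5, 1])   # P(x)  = 2 + 5x + x^2
c2 = rs_encode([2, 7, 0])   # P'(x) = 2 + 7x
agreements = sum(u == v for u, v in zip(c1, c2))
assert agreements <= k - 1  # distinct codewords agree in at most k - 1 places,
                            # i.e. they differ in at least n - k + 1 = 5 places
```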

Let us compute the distance of the Reed-Solomon code:

Claim 3.3. $\Delta(C) \geq n - k + 1$.

Proof: Consider $C(m_0, \ldots, m_{k-1})$ with $P(x) = m_0 + m_1 x + \cdots + m_{k-1} x^{k-1}$ and $C(m_0', \ldots, m_{k-1}')$ with $P'(x) = m_0' + m_1' x + \cdots + m_{k-1}' x^{k-1}$, where we assume $(m_0, \ldots, m_{k-1}) \neq (m_0', \ldots, m_{k-1}')$ and hence $P \neq P'$. Then $P - P'$ is a non-zero polynomial of degree at most $k - 1$ and has at most $k - 1$ roots, i.e., $P$ and $P'$ agree on at most $k - 1$ points. This implies that $C(m_0, \ldots, m_{k-1}) = (P(a_1), \ldots, P(a_n))$ and $C(m_0', \ldots, m_{k-1}') = (P'(a_1), \ldots, P'(a_n))$ differ in at least $n - k + 1$ places. Since our choice of the messages was arbitrary, $\Delta(C) \geq n - k + 1$.

We now show that this is optimal:

Theorem 3.4 (Singleton Bound). Let $C : \mathbb{F}_q^k \to \mathbb{F}_q^n$ be a distance-$d$ code. Then $d \leq n - k + 1$.

Proof: Consider the map $\Gamma : \mathbb{F}_q^k \to \mathbb{F}_q^{n-d+1}$, where $\Gamma(x)$ is the first $n - d + 1$ coordinates of $C(x)$. Then for all $x \neq y$, we have $\Gamma(x) \neq \Gamma(y)$, since $\Delta(C(x), C(y)) \geq d$. Therefore, $|\mathrm{img}(\Gamma)| = |\mathbb{F}_q^k| = q^k \leq q^{n-d+1}$, which gives $d \leq n - k + 1$.

Remark 3.5. If $n = 2k$, then for $d = k + 1$, we can correct $\frac{k}{2}$ errors using the Reed-Solomon code, and this is optimal.

Moreover, the Reed-Solomon code is a linear code:
$$C(m_0, \ldots, m_{k-1}) = \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{k-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^{k-1} \end{bmatrix} \begin{bmatrix} m_0 \\ m_1 \\ \vdots \\ m_{k-1} \end{bmatrix}$$

4 Berlekamp-Welch Decoding Algorithm

If the codewords output by the Reed-Solomon code did not contain any errors, we could simply use Lagrange interpolation to recover the messages. However, we must be able to handle noise, and the trick is to work with a hypothetical error-locator polynomial telling us where the errors are.

Definition 4.1 (Error-locator Polynomial). An error-locator polynomial $E(x)$ is a polynomial which satisfies $E(a_i) = 0$ whenever $y_i \neq P(a_i)$, for all $i = 1, \ldots, n$, where $y_i$ is the $i$-th element of the received word (the output Reed-Solomon codeword with noise).

Note that this directly implies that $y_i E(a_i) = P(a_i) E(a_i)$ for all $a_i \in S$. Let us denote $Q(x) := P(x) E(x)$. In the next lecture, we will discuss the following algorithm by Berlekamp and Welch [1], which uses the above intuition to decode a message with fewer than $(n - k + 1)/2$ errors.

Algorithm 4.2.
1. Find polynomials $Q, E$ with $\deg(E) \leq t$ and $\deg(Q) \leq k - 1 + t$ such that $y_i E(a_i) = Q(a_i)$ for all $i = 1, \ldots, n$.
2. Output $\frac{Q}{E}$.
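Although the analysis is deferred to the next lecture, Algorithm 4.2 is short enough to implement directly: the constraints $y_i E(a_i) = Q(a_i)$ are linear in the coefficients of $Q$ and $E$, so one linear solve plus one polynomial division suffices. Below is a self-contained Python sketch of this idea over a small prime field (my own illustration with made-up parameters, not the lecture's code); it fixes $E$ to be monic of degree $t$ so that the linear system has a usable solution.

```python
# Berlekamp-Welch decoding over a prime field F_q, following Algorithm 4.2:
# find E (monic, degree t) and Q (degree <= k - 1 + t) with y_i E(a_i) = Q(a_i)
# for all i, then output Q / E.  Parameters below are illustrative.

q = 11                                 # prime field size

def inv(a):
    """Inverse in F_q via Fermat's little theorem."""
    return pow(a, q - 2, q)

def poly_eval(p, x):
    """Evaluate polynomial p (coefficients, lowest degree first) at x in F_q."""
    return sum(c * pow(x, j, q) for j, c in enumerate(p)) % q

def solve_mod(A, b):
    """Gaussian elimination over F_q; assumes A z = b is consistent, free vars -> 0."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        s = inv(M[r][c])
        M[r] = [x * s % q for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [(x - f * y) % q for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    z = [0] * cols
    for i, c in enumerate(pivots):
        z[c] = M[i][cols]
    return z

def poly_divmod(num, den):
    """Long division of polynomials over F_q (coefficients, lowest degree first)."""
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        coef = num[i + len(den) - 1] * inv(den[-1]) % q
        quot[i] = coef
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - coef * d) % q
    return quot, num                    # num now holds the remainder

def berlekamp_welch(points, received, k, t):
    """Recover the k message coefficients assuming at most t symbol errors."""
    # Unknowns: q_0..q_{k-1+t} (coeffs of Q) and e_0..e_{t-1} (E is monic of degree t).
    A, b = [], []
    for a, y in zip(points, received):
        row = [pow(a, j, q) for j in range(k + t)]           # Q(a) terms
        row += [(-y * pow(a, j, q)) % q for j in range(t)]   # -y * E(a) terms
        A.append(row)
        b.append(y * pow(a, t, q) % q)                       # y * a^t from the monic term
    z = solve_mod(A, b)
    Q, E = z[:k + t], z[k + t:] + [1]
    P, rem = poly_divmod(Q, E)
    assert not any(rem), "division is exact when the number of errors is at most t"
    return P

# Usage: encode P(x) = 2 + 5x + x^2 at points 1..7, corrupt two symbols, decode.
n, k = 7, 3
points = list(range(1, n + 1))
message = [2, 5, 1]
received = [poly_eval(message, a) for a in points]
received[1] = (received[1] + 3) % q    # error at a = 2
received[5] = (received[5] + 7) % q    # error at a = 6
assert berlekamp_welch(points, received, k, (n - k) // 2) == message
```

The choice $t = \lfloor (n - k)/2 \rfloor$ matches Remark 3.5; why any valid pair $(Q, E)$ found this way recovers $P$ is exactly the analysis deferred to the next lecture.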

References

[1] L. R. Welch and E. R. Berlekamp. Error correction for algebraic block codes, December 30, 1986. US Patent 4,633,470.