Polynomial Codes over Certain Finite Fields

Polynomial Codes over Certain Finite Fields. A paper by Irving Reed and Gustave Solomon. Presented by Kim Hamilton, March 31, 2000.

Significance of this paper: it introduced the ideas that form the core of the error-correcting techniques in common use today. R-S codes are used for: Data storage. Ex: compact discs, where the encoding/decoding system allows scratched CDs to sound perfect. Data transmission. Ex: allowed high-quality pictures to be sent back from Voyager II, with data transmitted over hundreds of millions of miles.

Irving S. Reed. 1923: born in Seattle, WA. 1944: B.S. Mathematics, Cal Tech. 1949: Ph.D. Mathematics, Cal Tech. 1949-50: Northrop Aircraft Company. 1950-51: Computer Research Corp. 1951-1960: M.I.T. Lincoln Labs; the paper was published in 1960. 1960-63: Rand Corporation. 1963-present: Professor of E.E. and C.S. at the University of Southern California. * Helped design one of the world's first digital computers.

Gustave Solomon. 1931: born in Brooklyn, NY. B.S. Mathematics, Yeshiva University. 1956: Ph.D. Mathematics, M.I.T. Late 1950s: taught math at Boston and Johns Hopkins Universities. 1957-61: M.I.T. Lincoln Labs. After 1960: Jet Propulsion Laboratory and TRW Systems; visiting/adjunct professorships at U.C. Berkeley, UCLA, Cal Tech. Jan. 31, 1996: died at his home in Beverly Hills, CA. * Spent his free time composing popular songs and folk songs!

Coding Theory Introduction. The main problem of information and coding theory: a stream of source data, in the form of 0s and 1s, is being transmitted over a communications channel, such as a telephone line. Occasionally, disruptions occur in the channel, causing 0s to turn into 1s and vice versa. Question: how can we tell when the original data has been changed, and when it has, how can we recover the original data?

Coding Theory Introduction (cont'd). Easy things to try: Do nothing. If a channel error occurs with probability p, then the probability of making a decision error is p. Send each bit 3 times in succession; the bit that occurs the majority of the time gets picked (e.g. 010 => 0). If errors occur independently, the probability of making a decision error is 3p^2 - 2p^3, which is less than p for p < 1/2. Generalize: send each bit n times and choose the majority bit. In this way we can make the probability of a decision error arbitrarily small, but the scheme is inefficient in terms of transmission rate.
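
As a small illustration (not from the slides), here is a Python sketch that computes the decision-error probability of n-fold repetition with majority voting on a binary symmetric channel; the function name repetition_error_prob is assumed for this example.

```python
# Sketch: decision-error probability of n-fold repetition with majority vote,
# assuming independent bit flips with probability p (binary symmetric channel).
from math import comb

def repetition_error_prob(n: int, p: float) -> float:
    """Probability that a majority of the n transmitted copies are flipped."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
print(repetition_error_prob(1, p))  # 0.1       -- no protection
print(repetition_error_prob(3, p))  # 0.028     -- equals 3*p**2 - 2*p**3
print(repetition_error_prob(9, p))  # ~0.00089  -- better, but the rate is only 1/9
```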

Coding Theory Introduction (cont'd). From the previous suggestions, we see the basic elements of an encoding of data: encode the source information by adding extra information, referred to as redundancy, that can be used to detect, and perhaps correct, errors in transmission. The more redundancy we add, the more reliably we can detect and correct errors, but the less efficient we become at transmitting the source data.

Components in Communications (block diagram): the Source sends the source message (1011); the Encoder encodes the source message into a codeword (1011 -> 1011010); the Channel may introduce errors (1011010 -> 1010010); the Decoder corrects the errors and reclaims the source message; the Receiver receives the source message (1011).

Shannon's Theorem. The Noisy Coding Theorem is due to Shannon (1948). Roughly: consider a channel with capacity C. If we are willing to settle for a rate of transmission strictly below C, then there is an encoding scheme for the source data that will reduce the probability of a decision error to any desired level. Problem: the proof is not constructive! To this day, no one has found a way to construct the coding schemes promised by Shannon's theorem. Additional concerns: is the coding scheme easy to implement, both in encoding and decoding? It may require extremely long codes.

Polynomial Codes Over Certain Finite Fields. Code = a mapping from a vector space of dimension m over a finite field K (denoted V_m(K)) into a vector space of higher dimension n > m over the same field (V_n(K)). Let K = Z_2(α), where α is a root of a primitive irreducible polynomial of degree n over Z_2. The elements of K can be written as 0, β, β^2, ..., β^(2^n - 1) = 1, where β is a generator of the multiplicative cyclic group of K. If the message we want to send is (a_0, a_1, ..., a_{m-1}), with each a_i an element of K, define P(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{m-1} x^{m-1}. The code maps: (a_0, a_1, ..., a_{m-1}) -> (P(0), P(β), P(β^2), ..., P(1)).
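
To make the mapping concrete, here is a minimal Python sketch of the encoding over GF(8) = Z_2(α), taking α to be a root of the primitive polynomial x^3 + x + 1 (a choice made only for illustration; the helper names gf_mul, gf_pow, poly_eval, and encode are mine, not from the paper).

```python
# Sketch of the encoding map over K = GF(2^3), built from x^3 + x + 1.
# An element of K is stored as an int whose bit i is the coefficient of alpha^i.
N = 3                 # K has 2**N = 8 elements
PRIM = 0b1011         # x^3 + x + 1, primitive over Z_2

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of K (carry-less multiply, reduce mod PRIM)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= PRIM
    return r

def gf_pow(a: int, k: int) -> int:
    r = 1
    for _ in range(k):
        r = gf_mul(r, a)
    return r

def poly_eval(coeffs, x):
    """Horner evaluation of P(x) = a0 + a1*x + ...; addition in K is XOR."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c
    return acc

def encode(message):
    """Map (a0, ..., a_{m-1}) to (P(0), P(beta), P(beta^2), ..., P(1))."""
    beta = 0b010      # alpha generates the multiplicative group of K
    points = [0] + [gf_pow(beta, k) for k in range(1, 2**N)]
    return [poly_eval(message, x) for x in points]

print(encode([1, 3]))  # m = 2 message symbols -> a codeword of length 2**N = 8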

Receiving End. We receive the message (P(0), P(β), P(β^2), ..., P(1)). If no transmission errors occurred, we could recover the original message by simultaneously solving any m of the following 2^n equations:

P(0) = a_0
P(β) = a_0 + a_1 β + a_2 β^2 + ... + a_{m-1} β^(m-1)
P(β^2) = a_0 + a_1 β^2 + a_2 β^4 + ... + a_{m-1} β^(2m-2)
...
P(1) = a_0 + a_1 + a_2 + ... + a_{m-1}

Any m of these equations are linearly independent, so we get a unique solution.

Transmission Errors. But what if errors occur during transmission? Lemma (p. 302): if s errors occur, a single wrong m-tuple can be obtained from at most C(s+m-1, m) of the determinations. Proof: since the equations are independent, any m of them have exactly one solution vector a. To obtain more than one vote, a must be the solution of more than m equations. An incorrect a can be the solution of at most s + m - 1 equations: the s incorrect equations and at most m - 1 correct equations (if it satisfied m correct equations, it would coincide with the correct solution). Therefore, an incorrect a can be the solution of at most C(s+m-1, m) of the sets of m equations.

Transmission Errors (cont'd). Thus we have: Correct solution: a = (a_0, a_1, ..., a_{m-1}) gets at least C(2^n - s, m) votes. Incorrect solutions: b_1, b_2, ..., b_t (the other m-tuples that receive votes), where t <= C(2^n, m) - C(2^n - s, m); each gets at most C(s+m-1, m) votes.

Transmission Errors (cont'd). Thus, if C(2^n - s, m) > C(s+m-1, m), or equivalently (since C(x, m) is increasing in x) 2^n - s > s + m - 1, in other words s < (2^n - m + 1)/2, then taking the majority vote determines the correct m-tuple (the original message). The code will therefore correct any pattern of fewer than (2^n - m + 1)/2 errors. For example, with 2^n = 8 field elements and m = 3 message symbols, the code corrects up to 2 symbol errors.
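
As a quick numerical check (not in the paper), the sketch below compares the two vote bounds for every s in a toy setting with 2^n = 8 evaluation points and m = 3 message symbols; it confirms that the correct solution is guaranteed to win exactly when s < (2^n - m + 1)/2.

```python
# Sketch: verify the vote-count condition C(2^n - s, m) > C(s + m - 1, m)
# for a small example field (2^n = 8) and message length m = 3.
from math import comb

n, m = 3, 3
q = 2**n                                   # number of evaluation points
for s in range(q):
    correct_votes = comb(q - s, m)         # lower bound for the true m-tuple
    wrong_votes = comb(s + m - 1, m)       # upper bound for any wrong m-tuple
    print(f"s={s}: {correct_votes} > {wrong_votes}? {correct_votes > wrong_votes}")
# Prints True exactly for s < (q - m + 1) / 2 = 3, i.e. up to 2 errors.
```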

Features of Reed-Solomon Codes: Maximum distance separable. Burst errors: current implementations in CD technology can deal with error bursts as long as 4000 consecutive bits. Erasures. Redundancy occurs naturally.

Some Notes on Decoding. One decoding scheme is mentioned in the paper, majority vote: take all C(2^n, m) combinations of m equations, solve each for the coefficients, and the set of coefficients that gets the most votes wins. Obvious problem: this is intractable; a more efficient method is needed.
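
For a toy field the exhaustive vote is actually feasible. The sketch below is a brute-force illustration of that majority-vote scheme (the helper names gf_mul, gf_inv, solve, and decode are assumed, and the GF(8) arithmetic from the earlier sketch is repeated so the block runs on its own): it solves every C(2^n, m) subset of the received equations by Gaussian elimination over GF(8) and picks the coefficient vector with the most votes.

```python
# Sketch: brute-force majority-vote decoding for the toy GF(8) code above.
from collections import Counter
from itertools import combinations

N, PRIM = 3, 0b1011
Q = 2 ** N

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= PRIM
    return r

def gf_pow(a, k):
    r = 1
    for _ in range(k):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, Q - 2)              # a^(2^N - 2) is a^-1 for nonzero a

BETA = 0b010                             # alpha generates the multiplicative group
POINTS = [0] + [gf_pow(BETA, k) for k in range(1, Q)]   # 0, beta, ..., beta^(2^N - 1) = 1

def encode(message):
    """(a0, ..., a_{m-1}) -> (P(0), P(beta), ..., P(1)) by Horner evaluation."""
    out = []
    for x in POINTS:
        acc = 0
        for c in reversed(message):
            acc = gf_mul(acc, x) ^ c     # addition in GF(2^N) is XOR
        out.append(acc)
    return out

def solve(points, values, m):
    """Gauss-Jordan solve of the m x m system sum_i a_i x^i = value over GF(2^N)."""
    rows = [[gf_pow(x, i) for i in range(m)] + [v] for x, v in zip(points, values)]
    for col in range(m):
        piv = next(r for r in range(col, m) if rows[r][col])   # Vandermonde: pivot exists
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = gf_inv(rows[col][col])
        rows[col] = [gf_mul(inv, e) for e in rows[col]]
        for r in range(m):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [e ^ gf_mul(f, p) for e, p in zip(rows[r], rows[col])]
    return tuple(row[m] for row in rows)

def decode(received, m):
    """Majority vote over all C(2^N, m) choices of m equations."""
    votes = Counter()
    for idx in combinations(range(Q), m):
        votes[solve([POINTS[i] for i in idx], [received[i] for i in idx], m)] += 1
    return max(votes, key=votes.get)

codeword = encode([1, 3, 5])             # m = 3, so the code corrects s <= 2 errors
codeword[2] ^= 4                         # introduce two symbol errors
codeword[6] ^= 1
print(decode(codeword, 3))               # (1, 3, 5): the original message wins the vote
```

With m = 3 and 2^3 = 8 evaluation points the vote runs over only C(8, 3) = 56 subsets, but the count grows combinatorially with the field size and message length, which is exactly the intractability noted above.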

Some Notes on Decoding (cont'd). More efficient decoding attempts: 1960 - Peterson provided the first explicit description of a decoding algorithm for binary BCH codes; its complexity increases with the square of the number of errors corrected. 1967 - Berlekamp introduced the first truly efficient algorithm for both binary and nonbinary BCH codes; its complexity increases linearly with the number of errors. 1975 - Sugiyama et al. showed that Euclid's algorithm can be used to decode BCH and R-S codes.

Discussion Questions. Complexity issues in decoding. Why didn't they incorporate the notion of distance? Any other major applications that I didn't mention? Guilty pleasures: any Solomon stories? - singing opera at parties - the Feldenkrais method - didn't believe in doctors, died of a stroke