Optimum Soft Decision Decoding of Linear Block Codes
- Simon Morton
1 Optimum Soft Decision Decoding of Linear Block Codes

Block diagram: $\{m_i\}$ → channel encoder ($(n,k,d)$ linear block code) → $C=(c_{n-1},\ldots,c_0)$ → BPSK modulator → $s(t)$ → AWGN channel → optimal receiver.

Assume the $[n,k,d]$ linear block code $C$ is binary. The channel encoder maps the $k$ information bits $m_0,\ldots,m_{k-1}$ into a codeword $C=(c_{n-1},\ldots,c_0)$.

The transmitted waveform $s(t)$ is
$$s(t) = \sqrt{\frac{2\varepsilon_c}{T_b}}\,(2c_j-1)\cos 2\pi f_c t, \quad t \in [(n-1-j)T_b,\,(n-j)T_b], \quad j = 0,\ldots,n-1.$$
The vector representation of $s(t)$ is
$$\mathbf{s} = \sqrt{\varepsilon_c}\,[(2c_{n-1}-1),\ldots,(2c_0-1)].$$
The receiver is optimal with respect to the coded signals.
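In code, the modulator's codeword-to-vector mapping is just an antipodal map; a minimal sketch (the function name and the unit-energy default are ours, not from the slides):

```python
import math

def bpsk_vector(codeword, energy_c=1.0):
    """Map a binary codeword (c_{n-1}, ..., c_0) to its BPSK signal vector
    s = sqrt(eps_c) * (2*c_j - 1): one antipodal component per code bit."""
    a = math.sqrt(energy_c)
    return [a * (2 * c - 1) for c in codeword]

# A length-7 codeword mapped with unit symbol energy:
s = bpsk_vector([1, 0, 1, 1, 0, 0, 1])
```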
2 Optimum Soft Decision Decoding of Linear Block Codes

Assume that all transmitted messages are equally likely. The optimal decision rule is then the minimum (Euclidean) distance decision rule.

Let $\mathbf{s}_i$ be the vector representation of the coded signal waveform $s_i(t)$ corresponding to the codeword $C_i=(c_{i(n-1)},\ldots,c_{i0})$:
$$\mathbf{s}_i = \sqrt{\varepsilon_c}\,[(2c_{i(n-1)}-1),\ldots,(2c_{i0}-1)], \quad 0 \le i \le 2^k-1.$$
Let us compute the block error probability $P_e$:
$$P_e = 2^{-k}\sum_{i=0}^{2^k-1} P_e(\text{error}\mid C_i).$$
Since $C$ is linear, $P_e(\text{error}\mid C_i)$ is the same for all codewords $C_i$. Therefore
$$P_e = P_e(\text{error}\mid C_0).$$
3 Optimum Soft Decision Decoding of Linear Block Codes

Applying the union bound,
$$P_e \le \sum_{i=1}^{2^k-1} Q\!\left(\frac{\|\mathbf{s}_i-\mathbf{s}_0\|}{\sqrt{2N_0}}\right) = \sum_{i=1}^{2^k-1} Q\!\left(\sqrt{\frac{2\varepsilon_c\,\mathrm{wt}(C_i)}{N_0}}\right).$$
Since $E_b = n\varepsilon_c/k = \varepsilon_c/R_c$, we have
$$P_e \le \sum_{i=1}^{2^k-1} Q\!\left(\sqrt{\frac{2E_b R_c\,\mathrm{wt}(C_i)}{N_0}}\right).$$
The above upper bound depends on the weight distribution of $C$. A simpler bound is
$$P_e \le (2^k-1)\,Q\!\left(\sqrt{\frac{2E_b R_c d}{N_0}}\right), \quad d = \text{the minimum distance of } C.$$
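These bounds are easy to evaluate numerically. A sketch, assuming the standard weight distribution of the [7,4,3] Hamming code ($A_3 = A_4 = 7$, $A_7 = 1$) and a hypothetical $E_b/N_0$ of 4; the function names are ours:

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(weights, r_c, eb_n0):
    # P_e <= sum over nonzero-codeword weights w of Q(sqrt(2 * (Eb/N0) * Rc * w))
    return sum(qfunc(math.sqrt(2.0 * eb_n0 * r_c * w)) for w in weights)

def min_distance_bound(num_codewords, d, r_c, eb_n0):
    # Looser bound P_e <= (2^k - 1) * Q(sqrt(2 * (Eb/N0) * Rc * d))
    return (num_codewords - 1) * qfunc(math.sqrt(2.0 * eb_n0 * r_c * d))

# Weights of the 15 nonzero codewords of the [7,4,3] Hamming code:
weights = [3] * 7 + [4] * 7 + [7]
ub = union_bound(weights, 4 / 7, 4.0)
loose = min_distance_bound(2 ** 4, 3, 4 / 7, 4.0)
```

Since $Q(\cdot)$ is decreasing and every nonzero weight is at least $d$, the weight-distribution bound `ub` can never exceed the minimum-distance bound `loose`.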
4 Hard Decision Decoding of Linear Block Codes

In soft decision decoding we need to compute $M = 2^k$ distance metrics. When $M$ is large, the computational complexity is very high. To reduce the computational burden, we can quantize the analog signals and perform the decoding digitally.

Hard decision decoding diagram: $\{m_i\}$ → $(n,k,d)$ linear block codec → $C=(c_{n-1},\ldots,c_0)$ → BPSK modulator → $s(t)$ → AWGN channel → BPSK demodulator → channel decoder → $\hat C$ or $\hat m_0,\ldots,\hat m_{k-1}$.
5 Hard Decision Decoding of Linear Block Codes

Equivalent diagram: $\{m_i\}$ → $(n,k,d)$ linear block codec → $C=(c_{n-1},\ldots,c_0)$ → BSC → channel decoder → $\hat C$.

The error probability of the BSC is
$$p = Q\!\left(\sqrt{\frac{2\varepsilon_c}{N_0}}\right).$$
Assume that $p<1/2$ and that all transmitted messages are equally likely. We want to design an optimal decoder and analyze its performance.
6 Hard Decision Decoding of Linear Block Codes

Because of noise, the received vector $y=(y_{n-1},\ldots,y_0)$ may differ from the transmitted codeword $C$. The difference $e = y \oplus C = (e_{n-1},\ldots,e_0)$ is called the error vector; $e_j = 0$ with probability $1-p$, and $e_j = 1$ with probability $p$. Since all $2^n$ error vectors are possible, the decoder cannot be certain which codeword was actually transmitted.

Since the transmitted messages are equally likely, the MAP rule is equivalent to the ML decision rule. Thus the optimal decoder decodes $y$ as $\hat C = C_j$ iff
$$p(y\mid C_j) = \max_{0\le i\le 2^k-1} p(y\mid C_i)$$
$$\Leftrightarrow\quad p^{\mathrm{wt}(e_j)}(1-p)^{n-\mathrm{wt}(e_j)} = \max_{0\le i\le 2^k-1} p^{\mathrm{wt}(e_i)}(1-p)^{n-\mathrm{wt}(e_i)}$$
$$\Leftrightarrow\quad \mathrm{wt}(e_j) = \min_i \mathrm{wt}(e_i) \quad\Leftrightarrow\quad d(y,C_j) = \min_i d(y,C_i),$$
where the last step uses $p < 1/2$.
7 Minimum Hamming Distance Decoding Rule

The optimal decision rule thus becomes the minimum Hamming distance decision rule: the optimal decoder decodes $y$ as the nearest codeword $C_j$, i.e.
$$d(y,C_j) = \min_i d(y,C_i).$$
In other words, the decoder picks the error vector $\hat e$ of least weight and then forms the estimate $\hat C = y \oplus \hat e$.
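The minimum-distance rule can be implemented by brute force over all $2^k$ codewords. A sketch, using a systematic generator matrix of a [7,4,3] Hamming code as a running example (this particular matrix is our choice, not from the slides):

```python
from itertools import product

def hamming_distance(a, b):
    # Number of positions in which two binary vectors differ
    return sum(x != y for x, y in zip(a, b))

def generate_codewords(G):
    """All 2^k codewords of the linear code with k x n generator matrix G over GF(2)."""
    k = len(G)
    n = len(G[0])
    return [tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for m in product([0, 1], repeat=k)]

def md_decode(y, codewords):
    # Minimum Hamming distance decoding (ML for a BSC with p < 1/2)
    return min(codewords, key=lambda c: hamming_distance(y, c))

# A systematic generator matrix of a [7,4,3] Hamming code (our choice):
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
code = generate_codewords(G)
y = (1, 0, 0, 0, 1, 1, 1)   # codeword (1,0,0,0,1,1,0) with its last bit flipped
c_hat = md_decode(y, code)
```

Because $d = 3$, any single bit flip leaves the received vector strictly closer to the transmitted codeword than to any other, so the decoder recovers it.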
8 Error Detection Capability

Let $C$ be a block code with block length $n$ and minimum Hamming distance $d$. Suppose $C$ is used only for error detection: the receiver simply tests whether the received vector is a codeword. If it is not, the receiver detects that an error has occurred and asks for a retransmission of the codeword.

The code $C$ can detect up to $d-1$ errors. On the other hand, when more than $d-1$ errors occur, the receiver may be fooled, since an error vector of weight $d$ can transform one codeword into another codeword. Therefore $C$ is capable of detecting up to $d-1$ errors.
9 Error Detection and Error Correction Capability

When $C$ is used only for error correction, $C$ can correct up to $t = \lfloor (d-1)/2 \rfloor$ errors.

We associate with each codeword $C_i$ a ball of radius $t$ centered at $C_i$. All these balls are non-overlapping. If $C_i$ is transmitted and $t$ or fewer errors occur, then the received vector $y$ lies inside the ball centered at $C_i$ and is closer to $C_i$ than to any other codeword; nearest-neighbor decoding will therefore correct these errors. On the other hand, if more than $t$ errors occur, the received vector $y$ may be closer to some other codeword, and in that case the decoder will be fooled.
10 Error Detection and Error Correction

The block code can also be used for both error correction and error detection. Suppose $d=7$; then $C$ can correct $3$ errors. We may adopt a different decoding strategy that increases the error detection capability at the expense of the error correction capability. For example, $C$ can be used to correct up to $2$ errors and at the same time detect $4$ errors.

In general, a block code $C$ with minimum distance $d$ can simultaneously correct $t_c$ errors and detect $t_d \ge t_c$ errors as long as $t_c + t_d \le d-1$: for any two distinct codewords $C_i$ and $C_j$, the ball of radius $t_d$ centered at $C_i$ and the ball of radius $t_c$ centered at $C_j$ are then disjoint.
11 Syndrome Decoding

A brute-force implementation of nearest-neighbor decoding involves $2^k$ comparisons. A more efficient method is syndrome decoding: given the received vector $y$, one first computes the vector
$$s = Hy^T,$$
where $H$ is a fixed parity check matrix for $C$. The $(n-k)$-dimensional vector $s=(s_{n-k-1}, s_{n-k-2},\ldots,s_0)$ is called the syndrome of $y$. The syndrome provides information about the possible error vector $e$: there is a one-to-one correspondence between syndromes and sets of all possible error vectors.
12 Syndrome Decoding

Given the received vector $y$, the set of all possible error vectors is
$$\{\, y \oplus C_i : C_i \in C \,\} = y + C.$$
The set $y + C$ is called a coset of $C$. Note that $y \in y + C$. Thus the coset containing $y$ is exactly the set of all possible error vectors with respect to $y$.

Property 1: Two cosets are either disjoint or coincide.

Property 2: Two vectors $y_1$ and $y_2$ have the same syndrome iff they lie in the same coset:
$$Hy_1^T = Hy_2^T \;\Leftrightarrow\; H(y_1 \oplus y_2)^T = 0 \;\Leftrightarrow\; (y_1 \oplus y_2) \in C \;\Leftrightarrow\; y_1 \in y_2 + C.$$
13 Syndrome Decoding

There is a one-to-one correspondence between syndromes and cosets. Each coset contains $2^k$ vectors, which implies that there are $2^{n-k}$ cosets and hence $2^{n-k}$ syndromes.

In view of the nearest-neighbor decoding rule, we get the following decoding algorithm:

Step 1: Given a received vector $y$, compute the syndrome $s$ of $y$.
Step 2: Find the least-weight vector $\hat e$ in the coset corresponding to the syndrome $s$.
Step 3: Decode $y$ as $\hat C = y \oplus \hat e$.

The least-weight vector in a coset is called the coset leader of the coset.
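Steps 1-3 can be sketched directly: precompute a syndrome-to-coset-leader table by scanning error patterns in order of increasing weight, then look up and add. The parity-check matrix below is one choice for a [7,4,3] Hamming code, assumed here for illustration:

```python
from itertools import product

def mat_vec_mod2(H, y):
    # Syndrome s = H y^T over GF(2)
    return tuple(sum(h * v for h, v in zip(row, y)) % 2 for row in H)

def build_leader_table(H, n):
    """Map each syndrome to a coset leader (a least-weight error vector),
    by scanning all error patterns in order of increasing weight."""
    table = {}
    for e in sorted(product([0, 1], repeat=n), key=sum):
        table.setdefault(mat_vec_mod2(H, e), e)   # first hit has least weight
    return table

def syndrome_decode(y, H, table):
    # Step 1: syndrome; Step 2: coset leader; Step 3: add (XOR) it to y.
    e_hat = table[mat_vec_mod2(H, y)]
    return tuple((a + b) % 2 for a, b in zip(y, e_hat))

# A parity-check matrix of a [7,4,3] Hamming code (our choice, H = [P^T | I_3]):
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
table = build_leader_table(H, 7)
```

For this perfect code there are $2^{n-k} = 8$ cosets, and every coset leader has weight at most 1.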
14 Syndrome Decoding

Standard array for syndrome decoding (useful when $n-k$ is small):

Coset leader | Coset | Syndrome
$0$ | $C_1, \ldots, C_{2^k-1}$ (the codewords) | $s_0 = 0$
$e_1$ | $C_1 \oplus e_1, \ldots, C_{2^k-1} \oplus e_1$ | $s_1$
⋮ | ⋮ | ⋮
$e_{2^{n-k}-1}$ | $C_1 \oplus e_{2^{n-k}-1}, \ldots, C_{2^k-1} \oplus e_{2^{n-k}-1}$ | $s_{2^{n-k}-1}$

When $y$ is received, its position in the standard array is located. The decoder then decides that the error vector is the left-most vector (the coset leader) of the row containing $y$, and $y$ is decoded as the codeword at the top of the column containing $y$.
15 Example

[Slide example: a generator matrix $G$, its standard array of coset leaders, syndromes, and codewords, and $d_{\min}$.] What if the actual error vector is $(10100)$?
16 Syndrome Computing in the Case of Cyclic Codes

Let $g(x) = x^{n-k}+g_{n-k-1}x^{n-k-1}+\cdots+g_1x+g_0$ be the generator polynomial of $C$, and let $y=(y_{n-1},\ldots,y_0)$ be the received vector. Associate with $y$ the polynomial
$$y(x) = y_{n-1}x^{n-1}+\cdots+y_1x+y_0.$$
Assume that we compute the syndrome $s=(s_{n-k-1},\ldots,s_0)$ of $y$ using the systematic parity check matrix $H$, and associate with $s$ the polynomial
$$s(x) = s_{n-k-1}x^{n-k-1}+\cdots+s_1x+s_0.$$
One can show that $s(x)$ is the remainder obtained by dividing $y(x)$ by $g(x)$:
$$y(x) = f(x)g(x) + s(x).$$
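The remainder computation $s(x) = y(x) \bmod g(x)$ can be sketched as binary long division over coefficient lists (highest degree first); the received word `y` below is a hypothetical example of ours:

```python
def poly_mod2_divmod(y_bits, g_bits):
    """Divide y(x) by g(x) over GF(2). Polynomials are coefficient lists,
    highest degree first, e.g. x^3 + x + 1 -> [1, 0, 1, 1].
    Returns (quotient, remainder), remainder padded to deg(g) coefficients."""
    r = list(y_bits)
    dg = len(g_bits) - 1
    q = [0] * max(len(r) - dg, 0)
    for i in range(len(r) - dg):
        if r[i]:                      # leading term present: subtract x^i * g(x)
            q[i] = 1
            for j, gb in enumerate(g_bits):
                r[i + j] ^= gb
    return q, (r[-dg:] if dg else [])

# Syndrome of y(x) for the cyclic code generated by g(x) = x^3 + x + 1:
g = [1, 0, 1, 1]
y = [1, 0, 1, 1, 0, 0, 1]        # hypothetical received word, y(x) of degree <= 6
_, s = poly_mod2_divmod(y, g)    # s(x) = y(x) mod g(x), the syndrome
```

A codeword polynomial is a multiple of $g(x)$, so dividing it should leave a zero remainder.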
17 Example

The $[7,4,3]$ binary Hamming code revisited: $g(x) = x^3+x+1$, $G = \ldots$, $H = {?}$ Suppose $y = [\;\;]$. What is the syndrome $s$?
1. Use matrix multiplication.
2. Use polynomial division.
18 Shift Register Implementation

[Figure: shift-register divider circuit; the received bits $y_0, y_1, \ldots$ are shifted in, and the circuit produces the quotient and the remainder (the syndrome).]
19 Performance of Hard Decision Decoding

The error probability of the BSC is
$$p = Q\!\left(\sqrt{\frac{2\varepsilon_c}{N_0}}\right) < 1/2.$$
Decoding is correct iff the actual error vector is a coset leader, so the block error probability is
$$P_e = 2^{-k}\sum_{i=0}^{2^k-1} P(\text{error}\mid C_i) = P(\text{error}\mid C_0) = 1-\sum_{i=0}^{2^{n-k}-1} p^{\mathrm{wt}(e_i)}(1-p)^{n-\mathrm{wt}(e_i)},$$
where the $e_i$ are the coset leaders and $e_0$ is the zero vector.
20 Performance of Hard Decision Decoding

Let $t=\lfloor(d-1)/2\rfloor$. The balls of radius $t$ centered at the codewords $C_i$ are all disjoint. Thus any vector of weight $\le t$ is a coset leader. Therefore
$$\sum_{i=0}^{2^{n-k}-1} p^{\mathrm{wt}(e_i)}(1-p)^{n-\mathrm{wt}(e_i)} \ge \sum_{l=0}^{t}\binom{n}{l}p^l(1-p)^{n-l},$$
and hence
$$P_e \le 1-\sum_{l=0}^{t}\binom{n}{l}p^l(1-p)^{n-l} = \sum_{l=t+1}^{n}\binom{n}{l}p^l(1-p)^{n-l}.$$
Equality holds iff the coset leaders consist of all vectors of weight $\le t$. In this case we have
$$\sum_{l=0}^{t}\binom{n}{l} = 2^{n-k},$$
and the linear code is called a perfect code.
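The perfect-code condition is easy to check numerically; a small sketch (the function names are ours):

```python
from math import comb

def hamming_bound_holds(n, k, d):
    """Check the Hamming bound: sum_{l=0}^{t} C(n,l) <= 2^(n-k), t = (d-1)//2."""
    t = (d - 1) // 2
    return sum(comb(n, l) for l in range(t + 1)) <= 2 ** (n - k)

def is_perfect(n, k, d):
    """A binary [n,k,d] code is perfect when the bound holds with equality:
    the radius-t balls around the codewords tile the whole space of 2^n vectors."""
    t = (d - 1) // 2
    return sum(comb(n, l) for l in range(t + 1)) == 2 ** (n - k)

# The [7,4,3] Hamming code and the [23,12,7] binary Golay code are perfect:
checks = [is_perfect(7, 4, 3), is_perfect(23, 12, 7)]
```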
21 Performance of Hard Decision Decoding

In general,
$$\sum_{l=0}^{t}\binom{n}{l} \le 2^{n-k} \qquad \text{(Hamming bound)}.$$
The Hamming bound gives another relationship among $n$, $k$, $d$. An $[n,k,d]$ linear binary code is called quasi-perfect if
$$2^{n-k} \le \sum_{l=0}^{t+1}\binom{n}{l}.$$
For an $[n,k,d]$ perfect code, the balls of radius $t$ are disjoint and together contain all vectors of length $n$. For an $[n,k,d]$ quasi-perfect code, the balls of radius $t+1$ may overlap and together contain all vectors of length $n$.
22 Error Probability for Quasi-Perfect Codes

The coset leaders have weight $\le t+1$. The number of coset leaders of weight $t+1$ is
$$\beta_{t+1} = 2^{n-k} - \sum_{l=0}^{t}\binom{n}{l}.$$
Then
$$\sum_{i=0}^{2^{n-k}-1} p^{\mathrm{wt}(e_i)}(1-p)^{n-\mathrm{wt}(e_i)} = \sum_{l=0}^{t}\binom{n}{l}p^l(1-p)^{n-l} + \beta_{t+1}\,p^{t+1}(1-p)^{n-t-1},$$
so that
$$P_e = \sum_{l=t+1}^{n}\binom{n}{l}p^l(1-p)^{n-l} - \beta_{t+1}\,p^{t+1}(1-p)^{n-t-1}.$$
This formula is also a lower bound on $P_e$ for any $[n,k,d]$ linear binary block code.
23 Other Bounds

Consider the communication of two equally likely $n$-dimensional vectors $a=(a_{n-1},\ldots,a_0)$ and $b=(b_{n-1},\ldots,b_0)$ over the BSC. The minimum decoding error probability depends only on the Hamming distance $d(a,b)$ between $a$ and $b$. Denote this minimum error probability by $p_2(d(a,b))$:
$$p_2(d(a,b)) = \begin{cases} \displaystyle\sum_{l=u+1}^{2u+1}\binom{2u+1}{l}p^l(1-p)^{2u+1-l}, & \text{if } d(a,b)=2u+1,\\[2ex] \displaystyle\sum_{l=u+1}^{2u}\binom{2u}{l}p^l(1-p)^{2u-l} + \frac12\binom{2u}{u}p^u(1-p)^u, & \text{if } d(a,b)=2u.\end{cases}$$
$p_2(\cdot)$ is non-increasing on the set of possible integers: $p_2(l+1) \le p_2(l)$.
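A direct transcription of $p_2(d)$; the $\tfrac12$ term reflects the fair-coin tie-break on even distances:

```python
from math import comb

def p2(dist, p):
    """Pairwise error probability p_2(d) for two codewords at Hamming distance
    d over a BSC with crossover probability p (even-distance ties broken by a
    fair coin, hence the 0.5 factor on the middle term)."""
    u = dist // 2
    if dist % 2:                      # d = 2u + 1
        return sum(comb(dist, l) * p**l * (1 - p)**(dist - l)
                   for l in range(u + 1, dist + 1))
    # d = 2u
    return (sum(comb(dist, l) * p**l * (1 - p)**(dist - l)
                for l in range(u + 1, dist + 1))
            + 0.5 * comb(dist, u) * p**u * (1 - p)**u)
```

Numerically one can check that $p_2$ is non-increasing in the distance; note that $p_2(1) = p_2(2) = p$, which is why the monotonicity is not strict.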
24 Other Bounds (cont'd)

In terms of union bounds,
$$\max_{1\le i\le 2^k-1} p_2(d(C_0,C_i)) \;\le\; P_e(\text{error}\mid C_0) \;\le\; \sum_{i=1}^{2^k-1} p_2(d(C_0,C_i)),$$
that is,
$$p_2(d) \le P_e \le \sum_{i=1}^{2^k-1} p_2(\mathrm{wt}(C_i)), \quad d = \text{the minimum distance of } C.$$
The upper bound depends on the weight distribution of $C$. Since $p_2(l+1) \le p_2(l)$, we have
$$p_2(d) \le P_e \le (2^k-1)\,p_2(d).$$
25 Other Bounds (cont'd)

We can also show that
$$p_2(l) \le P\!\left(X_1+\cdots+X_l \ge l/2\right) \le [4p(1-p)]^{l/2}$$
for any $l \ge 1$, where $X_1,\ldots,X_l$ are i.i.d. random variables with
$$X_i = \begin{cases}1 & \text{with probability } p,\\ 0 & \text{with probability } 1-p.\end{cases}$$
Therefore
$$p_2(d) \le P_e \le \sum_{i=1}^{2^k-1} [4p(1-p)]^{\mathrm{wt}(C_i)/2}.$$
26 Comparison of Performance Between Hard-Decision and Soft-Decision Decoding

Method 1: use the bounds developed in the last two sections to evaluate the hard-decision and soft-decision performance of specific linear block codes.

Method 2: hard decision gives rise to the BSC with crossover probability
$$p = Q\!\left(\sqrt{\frac{2\varepsilon_c}{N_0}}\right),$$
while soft decision gives rise to a binary-input, continuous-output channel
$$Y = \pm\sqrt{\varepsilon_c} + n,$$
where $n$ is a Gaussian random variable. Compute the channel capacities of these two channels and compare them.
27 Comparison of Performance Between Hard-Decision and Soft-Decision Decoding

Method 3: use the random coding argument to find the hard-decision and soft-decision random coding rates, then compare these two rates.

All these methods reveal that soft-decision decoding performs roughly 2 dB better than hard-decision decoding.
28 Concatenated Block Codes

Most of the codes discussed so far are designed for correcting and detecting random errors, and are not well suited to correcting burst errors.

Definition: an error burst of length $b$ in an $n$-bit received vector is a contiguous sequence of $b$ bits in which the first and the last bits, and any number of intermediate bits, are received in error. An error vector containing a single error burst of length 7 looks like
$$0\cdots0\;1xxxxx1\;0\cdots0,$$
where each $x$ may be 0 or 1.
29 Concatenated Block Codes

Fact: binary codes obtained from nonbinary codes, particularly from Reed-Solomon (RS) codes, are particularly well suited to correcting burst errors.

Consider a $[255, 249, 7]$ RS code $C$ over $\mathrm{GF}(2^8)$. Each code symbol is an element of $\mathrm{GF}(2^8)$ and hence represents 8 bits. Replacing each code symbol in every codeword of $C$ by its binary representation, we get a binary code $C'$ with parameters
$$n = 255 \times 8 = 2040, \quad k = 249 \times 8 = 1992, \quad d \ge 7.$$
The original nonbinary RS code $C$ can correct 3 symbol errors. The binary code $C'$ can correct an error burst of length 17.
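The parameter bookkeeping for this binary expansion can be sketched as follows (the function name is ours; the guaranteed burst length uses the $m(t-1)+1$ formula stated in these slides):

```python
def binary_expansion_params(N, K, t, m):
    """Parameters of the binary code obtained by expanding each GF(2^m) symbol
    of an [N, K, D = 2t+1] Reed-Solomon codeword into m bits, together with
    the guaranteed correctable burst length m*(t-1) + 1."""
    n = N * m                 # binary block length
    k = K * m                 # binary information bits
    burst = m * (t - 1) + 1   # any burst this short hits at most t symbols
    return n, k, burst

# The [255, 249, 7] RS code over GF(2^8) corrects t = 3 symbol errors:
n, k, burst = binary_expansion_params(255, 249, 3, 8)
```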
30 Concatenated Block Codes

In general, if $C$ is an $[N,K,D]$ RS code over $\mathrm{GF}(2^m)$ with
$$N = 2^m-1, \quad K = N-2t, \quad D = 2t+1,$$
then the binary code $C'$ obtained from $C$ can correct an error burst of length $m(t-1)+1$.

The binary code $C'$ obtained from the $[N,K,D]$ RS code $C$ over $\mathrm{GF}(2^m)$ has good burst error correction capability, but it does not help correct random errors. To improve its random-error-correcting capability, we may use a linear $[n,m,d]$ block code to further encode the output sequence of the binary code $C'$. The resulting overall code is called a concatenated code.
31 Concatenated Block Codes

A concatenated code: input data → outer encoder $[N,K,D]$ → inner encoder $[n,m,d]$ → modulator → channel → demodulator → inner decoder → outer decoder → output data.
32 Concatenated Block Codes

$K \times m$ bits → (outer encoder) → $N \times m$ bits → (inner encoder) → $N \times n$ bits. Thus the concatenated code has parameters:
block length $= N \times n$,
number of information bits $= K \times m$,
minimum distance $\ge d \times D$.
33 Interleavers Another effective method for dealing with error bursts is to interleave the block coded sequence so that a whole codeword is not transmitted in consecutive time intervals. As a result, error bursts are spread out among many codewords so that errors within a codeword appear to be random.
34 Interleavers

[Figure: block interleaver of degree $m$, an $m \times n$ array; coded bits from the encoder are read in row by row ($m$ rows, each row one codeword of $k$ information bits and $n-k$ parity bits), and bits are read out column by column to the modulator.]

If the original $[n,k,d]$ block code can correct an error burst of length $b$, then the combination of the original block code and a block interleaver of degree $m$ can correct an error burst of length up to $mb$.
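A block interleaver is just a row-write, column-read array; a sketch with its inverse (the degree 4 and block length 7 below are arbitrary illustration values):

```python
def interleave(bits, rows, cols):
    """Write bits into a rows x cols array row by row (one codeword per row),
    then read them out column by column for transmission."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # Inverse operation at the receiver: read the columns back into rows.
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    for i, b in enumerate(bits):
        c, r = divmod(i, rows)
        out[r * cols + c] = b
    return out

# With degree m = 4, a channel burst of length 4 covers one transmitted column,
# i.e. it touches each of the 4 codewords (rows) in at most one position:
m, n = 4, 7
tx = interleave(list(range(m * n)), m, n)
```

The first four transmitted positions come from four different rows, which is exactly how a burst of length $m$ is spread into single errors across $m$ codewords.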
More informationList Decoding: Geometrical Aspects and Performance Bounds
List Decoding: Geometrical Aspects and Performance Bounds Maja Lončar Department of Information Technology Lund University, Sweden Summer Academy: Progress in Mathematics for Communication Systems Bremen,
More informationError Correction and Trellis Coding
Advanced Signal Processing Winter Term 2001/2002 Digital Subscriber Lines (xdsl): Broadband Communication over Twisted Wire Pairs Error Correction and Trellis Coding Thomas Brandtner brandt@sbox.tugraz.at
More informationError-Correcting Codes
Error-Correcting Codes HMC Algebraic Geometry Final Project Dmitri Skjorshammer December 14, 2010 1 Introduction Transmission of information takes place over noisy signals. This is the case in satellite
More informationLinear Block Codes. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay
1 / 26 Linear Block Codes Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay July 28, 2014 Binary Block Codes 3 / 26 Let F 2 be the set
More informationOne Lesson of Information Theory
Institut für One Lesson of Information Theory Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/
More informationMATH 291T CODING THEORY
California State University, Fresno MATH 291T CODING THEORY Fall 2011 Instructor : Stefaan Delcroix Contents 1 Introduction to Error-Correcting Codes 3 2 Basic Concepts and Properties 6 2.1 Definitions....................................
More informationLecture 4: Linear Codes. Copyright G. Caire 88
Lecture 4: Linear Codes Copyright G. Caire 88 Linear codes over F q We let X = F q for some prime power q. Most important case: q =2(binary codes). Without loss of generality, we may represent the information
More informationVHDL Implementation of Reed Solomon Improved Encoding Algorithm
VHDL Implementation of Reed Solomon Improved Encoding Algorithm P.Ravi Tej 1, Smt.K.Jhansi Rani 2 1 Project Associate, Department of ECE, UCEK, JNTUK, Kakinada A.P. 2 Assistant Professor, Department of
More informationKnow the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x].
The second exam will be on Friday, October 28, 2. It will cover Sections.7,.8, 3., 3.2, 3.4 (except 3.4.), 4. and 4.2 plus the handout on calculation of high powers of an integer modulo n via successive
More informationLinear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x
Coding Theory Massoud Malek Linear Cyclic Codes Polynomial and Words A polynomial of degree n over IK is a polynomial p(x) = a 0 + a 1 x + + a n 1 x n 1 + a n x n, where the coefficients a 0, a 1, a 2,,
More informationCombinatória e Teoria de Códigos Exercises from the notes. Chapter 1
Combinatória e Teoria de Códigos Exercises from the notes Chapter 1 1.1. The following binary word 01111000000?001110000?00110011001010111000000000?01110 encodes a date. The encoding method used consisted
More informationDefinition 2.1. Let w be a word. Then the coset C + w of w is the set {c + w : c C}.
2.4. Coset Decoding i 2.4 Coset Decoding To apply MLD decoding, what we must do, given a received word w, is search through all the codewords to find the codeword c closest to w. This can be a slow and
More informationDigital Transmission Methods S
Digital ransmission ethods S-7.5 Second Exercise Session Hypothesis esting Decision aking Gram-Schmidt method Detection.K.K. Communication Laboratory 5//6 Konstantinos.koufos@tkk.fi Exercise We assume
More informationMATH 433 Applied Algebra Lecture 22: Review for Exam 2.
MATH 433 Applied Algebra Lecture 22: Review for Exam 2. Topics for Exam 2 Permutations Cycles, transpositions Cycle decomposition of a permutation Order of a permutation Sign of a permutation Symmetric
More informationMathematics Department
Mathematics Department Matthew Pressland Room 7.355 V57 WT 27/8 Advanced Higher Mathematics for INFOTECH Exercise Sheet 2. Let C F 6 3 be the linear code defined by the generator matrix G = 2 2 (a) Find
More informationOrthogonal Arrays & Codes
Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible
More information1 Reed Solomon Decoder Final Project. Group 3 Abhinav Agarwal S Branavan Grant Elliott. 14 th May 2007
1 Reed Solomon Decoder 6.375 Final Project Group 3 Abhinav Agarwal S Branavan Grant Elliott 14 th May 2007 2 Outline Error Correcting Codes Mathematical Foundation of Reed Solomon Codes Decoder Architecture
More information1 1 0, g Exercise 1. Generator polynomials of a convolutional code, given in binary form, are g
Exercise Generator polynomials of a convolutional code, given in binary form, are g 0, g 2 0 ja g 3. a) Sketch the encoding circuit. b) Sketch the state diagram. c) Find the transfer function TD. d) What
More informationMTAT : Introduction to Coding Theory. Lecture 1
MTAT05082: Introduction to Coding Theory Instructor: Dr Vitaly Skachek Lecture 1 University of Tartu Scribe: Saad Usman Khan Introduction Information theory studies reliable information transmission over
More informationCoding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014
Anna Dovzhik 1 Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014 Sharing data across channels, such as satellite, television, or compact disc, often
More informationLecture 4: Codes based on Concatenation
Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes
More informationLecture 4: Proof of Shannon s theorem and an explicit code
CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated
More informationECE 564/645 - Digital Communications, Spring 2018 Homework #2 Due: March 19 (In Lecture)
ECE 564/645 - Digital Communications, Spring 018 Homework # Due: March 19 (In Lecture) 1. Consider a binary communication system over a 1-dimensional vector channel where message m 1 is sent by signaling
More informationReed-Muller Codes. These codes were discovered by Muller and the decoding by Reed in Code length: n = 2 m, Dimension: Minimum Distance
Reed-Muller Codes Ammar Abh-Hhdrohss Islamic University -Gaza ١ Reed-Muller Codes These codes were discovered by Muller and the decoding by Reed in 954. Code length: n = 2 m, Dimension: Minimum Distance
More informationPolar Code Construction for List Decoding
1 Polar Code Construction for List Decoding Peihong Yuan, Tobias Prinz, Georg Böcherer arxiv:1707.09753v1 [cs.it] 31 Jul 2017 Abstract A heuristic construction of polar codes for successive cancellation
More informationMaking Error Correcting Codes Work for Flash Memory
Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging
More informationTopic 3. Design of Sequences with Low Correlation
Topic 3. Design of Sequences with Low Correlation M-sequences and Quadratic Residue Sequences 2 Multiple Trace Term Sequences and WG Sequences 3 Gold-pair, Kasami Sequences, and Interleaved Sequences 4
More informationSection 3 Error Correcting Codes (ECC): Fundamentals
Section 3 Error Correcting Codes (ECC): Fundamentals Communication systems and channel models Definition and examples of ECCs Distance For the contents relevant to distance, Lin & Xing s book, Chapter
More informationCoding Theory and Applications. Solved Exercises and Problems of Cyclic Codes. Enes Pasalic University of Primorska Koper, 2013
Coding Theory and Applications Solved Exercises and Problems of Cyclic Codes Enes Pasalic University of Primorska Koper, 2013 Contents 1 Preface 3 2 Problems 4 2 1 Preface This is a collection of solved
More informationThe Viterbi Algorithm EECS 869: Error Control Coding Fall 2009
1 Bacground Material 1.1 Organization of the Trellis The Viterbi Algorithm EECS 869: Error Control Coding Fall 2009 The Viterbi algorithm (VA) processes the (noisy) output sequence from a state machine
More information