
An Introduction to Algebraic Codes

Richard A. Brualdi, W. Cary Huffman, Vera S. Pless

January 24, 1996

Contents

1 Origins of Coding Theory
2 Basic Concepts
3 Bounds
4 Finite Fields
5 Cyclic Codes
6 Minimum Distance of Cyclic Codes
7 BCH Codes
8 Reed-Solomon Codes
9 Duadic Codes
10 Weight Distributions
11 Designs
12 QR, Golay, and Hamming Codes
13 Reed-Muller Codes
14 Nonlinear Codes

1 Origins of Coding Theory

In 1948 Claude Shannon published his remarkable paper "A mathematical theory of communication" [75], which marked the beginning of coding theory. Given a communication channel which may corrupt information sent over it, Shannon identified a number called the capacity of the channel and proved, in a nonconstructive way, that arbitrarily reliable communication is possible at any rate below the channel capacity. Examples of communication channels include telephone, telegraph, and magnetic storage devices, or any kind of electronic communication device. The common feature of communication channels is that information emanates from a source and is sent over the channel to a receiver at the other end. The channel is `noisy' in the sense that what is received is not always the same as what was sent. The fundamental problem is to determine what message was sent on the basis of the approximation (because of the noise) that was received.

The basic idea is to embellish the message by adding some redundancy to it so that `usually' the received message is a `good approximation' to the message that was sent. One obvious way to embellish a message is to repeat each symbol in it a large number of times. Then, provided the channel is not `too noisy', we can decide on the basis of majority vote (assuming that only two symbols are used in our messages; otherwise we need to use plurality vote) what the symbol at the source was. Of course, sometimes we will be wrong, but our confidence (and rightly so) in our conclusions increases with increasing repetition of the source symbols. However, we pay a price for this increase in confidence, namely the length of time it takes to transmit our message. If every time we want to send a certain symbol over the channel we send it many times, say 101 times, then we are using the channel inefficiently: every 101 symbols received contain only one bit of information, giving a rate of 1/101. In fact, every 101 symbols received contain less than one bit of information, since even with 101 repetitions of a symbol we may still conclude the wrong symbol was sent. The more repetitions, the higher our confidence level and the lower the rate. Thus there is an inverse relationship between our confidence level and the rate of transmission: an increase in confidence is at the expense of a decrease in rate of transmission.

Must an increase in our confidence level come at the expense of a decrease in rate? The answer is `yes' if we demand a rate that is too high, but `no', by Shannon's theorem, provided the rate is below channel capacity. We can be as sure as we want (but not absolutely sure!) of the information in a received message while transmitting at any rate below channel capacity. The conclusions of Shannon's theorem are not without some tradeoff. In order to communicate reliably at a rate close to channel capacity, we must increase the complexity of our communication scheme.

We now give the necessary background to state and understand Shannon's theorem. While we shall not give a proof of the theorem, we shall try to give an intuitive understanding of its validity. The mathematical model for a communication system that we use (the discussion above is to be interpreted with respect to this model) is the following.

We are given a random variable $X$ on the finite set $\{1, 2, \ldots, m\}$ with probability distribution function $p$. The elements $X(1) = x_1, X(2) = x_2, \ldots, X(m) = x_m$ are distinct and $p(x_1), p(x_2), \ldots, p(x_m)$ are nonnegative real numbers with
\[ p(x_1) + p(x_2) + \cdots + p(x_m) = 1. \]
Here we are using $p(x_i)$ as an abbreviation for $\mathrm{prob}(X = x_i)$. The random variable $X$ defines an $m$-ary source consisting of independent observations of $X$:
\[ X_1, X_2, X_3, \ldots, \]
where $X_1, X_2, X_3, \ldots$ are independent random variables with the same distribution as $X$. Thus
\[ \mathrm{prob}(X_j = x_i) = p(x_i) \quad (j = 1, 2, 3, \ldots) \]
and
\[ \mathrm{prob}(X_1 = x_{i_1}, X_2 = x_{i_2}, \ldots, X_k = x_{i_k}) = p(x_{i_1}) p(x_{i_2}) \cdots p(x_{i_k}) \quad (k = 1, 2, 3, \ldots). \]

The smaller the probability of $x_i$, the more uncertain we should be that an observation of the random variable $X$ will result in $x_i$. Thus we can regard $\frac{1}{p(x_i)}$ as a measure of the uncertainty of $x_i$. Note that if $p(x_i) = 1$, then $\frac{1}{p(x_i)} = 1$, that is, we are sure that an observation of $X$ will result in $x_i$. If $p(x_i) = 0$, then $\frac{1}{p(x_i)} = \infty$ and we are as uncertain as we can be that an observation of $X$ will result in $x_i$ (equivalently, certain that the observation will not be $x_i$). We use a logarithmic scale and define the uncertainty of $x_i$ to be
\[ \log_2 \frac{1}{p(x_i)} = -\log_2 p(x_i). \]
Thus uncertainty is measured in bits. The entropy of the random variable $X$ is defined to be the expected value
\[ H(X) = \sum_{i=1}^{m} p(x_i) \log_2 \frac{1}{p(x_i)} = -\sum_{i=1}^{m} p(x_i) \log_2 p(x_i) \]
of the uncertainty in an observation of $X$. (Here we are using the convention that $0 \log_2 0 = 0$, which is motivated by the fact that $\lim_{p \to 0} p \log_2 p = 0$.)

Let $X_b$ be the binary random variable which takes on the value 0 with probability $p$ and 1 with probability $1 - p$, where $p$ is a real number with $0 \le p \le 1$. Then the entropy of $X_b$ is
\[ H(p) := -p \log_2 p - (1 - p) \log_2 (1 - p). \]

The function $H(p)$, $0 \le p \le 1$, is called the binary entropy function. It is straightforward to verify that
\[ H(0) = H(1) = 0, \quad 0 \le H(p) \le 1, \quad \text{and} \quad H(p) = 1 \text{ if and only if } p = \tfrac{1}{2}. \]
The case of $p = 1/2$ is clearly the most uncertain case: since the two outcomes are equally likely, there is no way to choose one over the other (as there is when the two probabilities differ).

Example 1.1 Let $X$ be a random variable on a set of 16 elements with a uniform probability distribution. Then
\[ H(X) = \sum_{i=1}^{16} \frac{1}{16} \log_2 16 = 16 \cdot \frac{1}{16} \cdot 4 = 4. \]
That the entropy of $X$ equals 4 in this case is quite natural in the following sense. Sixteen elements can be described by means of four bits (our chosen unit of measurement). In an observation of $X$ the bits are independent random variables with probability distribution $p(0) = 1/2$ and $p(1) = 1/2$. The entropy of this bit random variable is
\[ H(\tfrac{1}{2}) = \tfrac{1}{2} \log_2 2 + \tfrac{1}{2} \log_2 2 = 1, \]
which is the maximum uncertainty for a binary random variable. Since $X$ is described by four independent bit random variables, each with uncertainty equal to 1, the uncertainty of $X$ is 4.
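The entropy computations above are easy to check numerically. The following Python sketch (the function names are ours, not the text's) computes $H(X)$ for an arbitrary distribution and the binary entropy function $H(p)$; it reproduces $H(X) = 4$ for the uniform distribution on 16 elements of Example 1.1 and $H(1/2) = 1$.

    from math import log2

    def entropy(probs):
        """Entropy in bits, using the convention 0*log2(0) = 0."""
        return -sum(p * log2(p) for p in probs if p > 0)

    def binary_entropy(p):
        """The binary entropy function H(p) = -p log2 p - (1-p) log2 (1-p)."""
        return entropy([p, 1 - p])

    # Example 1.1: the uniform distribution on 16 elements has entropy 4 bits.
    print(entropy([1 / 16] * 16))   # 4.0
    # The binary entropy function attains its maximum 1 at p = 1/2.
    print(binary_entropy(0.5))      # 1.0
    print(binary_entropy(0.11))     # about 0.5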

In sending information over a communication channel, we have not one but two random variables. The first is the random variable $X$ whose observations are giving the information to be sent over the channel. The second is the random variable $Y$ which is the corruption of $X$ that results from the noise in the channel. In order to deal with this situation we introduce the concepts of conditional entropy and mutual information. Let $Y$ be a second random variable defined on the set $\{1, 2, \ldots, m\}$. (We could take a set different from the set on which the random variable $X$ is defined, even one of different cardinality, but this generality is not needed for Shannon's theorem.) We denote the values of $Y$ by $Y(1) = y_1, Y(2) = y_2, \ldots, Y(m) = y_m$ with probabilities $p(y_1), p(y_2), \ldots, p(y_m)$. Assume that there is a joint probability distribution function whose values are given by $p(x_i, y_j)$ $(1 \le i, j \le m)$. The probability distribution of the conditional random variable $Y \mid X = x_k$ is given by
\[ p(y_j \mid x_k) = \frac{p(x_k, y_j)}{p(x_k)} \quad (1 \le j \le m). \]
Here $p(y_j \mid x_k)$ is the probability that $Y = y_j$ given that $X = x_k$. The entropy of $Y \mid X = x_k$ is
\[ H(Y \mid X = x_k) = -\sum_{j=1}^{m} p(y_j \mid x_k) \log_2 p(y_j \mid x_k). \]
The conditional entropy $H(Y \mid X)$ is defined by
\[ H(Y \mid X) = \sum_{k=1}^{m} p(x_k) H(Y \mid X = x_k) = -\sum_{k=1}^{m} \sum_{j=1}^{m} p(x_k) p(y_j \mid x_k) \log_2 p(y_j \mid x_k) = -\sum_{k=1}^{m} \sum_{j=1}^{m} p(x_k, y_j) \log_2 p(y_j \mid x_k). \]
It is a straightforward exercise to prove that
\[ H(X, Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y). \]
The mutual information $I(X, Y)$ of the random variables $X$ and $Y$ is defined by
\[ I(X, Y) = \sum_{i=1}^{m} \sum_{j=1}^{m} p(x_i, y_j) \log_2 \frac{p(x_i, y_j)}{p(x_i) p(y_j)}. \]
The mutual information is nonnegative, and a simple calculation shows that
\[ I(X, Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X). \]
Thus
\[ H(X \mid Y) = H(X) - I(X, Y) \quad \text{and} \quad H(Y \mid X) = H(Y) - I(X, Y). \]
In words, the mutual information $I(X, Y)$ equals the reduction in the uncertainty of $X$ due to the knowledge of $Y$. Similarly, $I(X, Y)$ equals the reduction in the uncertainty of $Y$ due to the knowledge of $X$. The fact that the reduction is the same in both instances means that $Y$ gives as much (namely, $I(X, Y)$) information about $X$ as $X$ gives about $Y$.
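These quantities can be computed directly from a joint distribution. The sketch below (our own helper functions, assuming the joint probabilities are supplied as a matrix with rows indexed by $x_i$ and columns by $y_j$) evaluates $H(Y \mid X)$ and $I(X, Y)$ and lets one confirm the identity $I(X, Y) = H(Y) - H(Y \mid X)$ on small examples.

    from math import log2

    def entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    def mutual_information(joint):
        """I(X,Y) from a joint distribution given as a list of rows p(x_i, y_j)."""
        px = [sum(row) for row in joint]          # marginal distribution of X
        py = [sum(col) for col in zip(*joint)]    # marginal distribution of Y
        return sum(p * log2(p / (px[i] * py[j]))
                   for i, row in enumerate(joint)
                   for j, p in enumerate(row) if p > 0)

    def conditional_entropy(joint):
        """H(Y | X) = H(X,Y) - H(X)."""
        hxy = entropy([p for row in joint for p in row])
        hx = entropy([sum(row) for row in joint])
        return hxy - hx

    # A binary symmetric channel with crossover probability 0.1 and uniform input:
    joint = [[0.45, 0.05],
             [0.05, 0.45]]
    print(mutual_information(joint))                          # about 0.531
    print(entropy([0.5, 0.5]) - conditional_entropy(joint))   # the same value, H(Y) - H(Y|X)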

We now consider a communication channel with a finite input alphabet $\{x_1, x_2, \ldots, x_m\}$ and finite output alphabet $\{y_1, y_2, \ldots, y_m\}$. The channel statistics are given by an $m \times m$ probability matrix $P = [p_{ij}]$, where $p_{ij} = p(y_j \mid x_i)$ represents the probability that $y_j$ is received whenever $x_i$ is sent $(1 \le i, j \le m)$. We assume that each symbol in the input alphabet is received as one of the symbols in the output alphabet, and this implies that
\[ \sum_{j=1}^{m} p(y_j \mid x_i) = 1 \quad (1 \le i \le m). \]
Thus $P$ is a row stochastic matrix. The channel is memoryless (that is, time invariant) in the sense that this probability matrix depends only on the current input and does not depend on previous channel inputs or outputs. Thus we also call our channel a discrete memoryless channel and abbreviate this as DMC. The channel is noisy provided $p(y_j \mid x_i) \ne 0$ for at least one $i \ne j$. In a noisy channel at least one input symbol is received as a different symbol with positive probability.

Suppose $X$ is a random variable which takes on values in the input alphabet with probabilities $p(x_1), p(x_2), \ldots, p(x_m)$, respectively. Then $X$ and the DMC determine a random variable $Y$ with values in the output alphabet with probabilities
\[ p(y_j) = \sum_{i=1}^{m} p(x_i) p(y_j \mid x_i). \]
We have a probability distribution on $\{y_1, y_2, \ldots, y_m\}$ since
\[ \sum_{j=1}^{m} p(y_j) = \sum_{j=1}^{m} \sum_{i=1}^{m} p(x_i) p(y_j \mid x_i) = \sum_{i=1}^{m} \left( \sum_{j=1}^{m} p(y_j \mid x_i) \right) p(x_i) = \sum_{i=1}^{m} p(x_i) = 1. \]
The information to be sent over the DMC consists of independent observations $X_1, X_2, X_3, \ldots$ of the random variable $X$. Transmitting these observations over the DMC, we obtain a sequence of independent observations of the random variable $Y$. The capacity $C$ of a DMC is defined by
\[ C = \sup_X I(X, Y), \]
where the supremum is taken over all probability distributions on the input alphabet $\{x_1, x_2, \ldots, x_m\}$, that is, over all random variables with values in

$\{x_1, x_2, \ldots, x_m\}$. Thus the channel capacity is the maximum reduction in uncertainty of the input random variable $X$ due to the knowledge of the output random variable $Y$, taken over all possible probability distributions for the source. The channel capacity $C$ is a function only of the channel statistics, that is, of the matrix $P$ of probabilities defining the channel.

Example 1.2 The noiseless binary DMC is defined as follows. This channel has two inputs 0 and 1 and two outputs 0 and 1, and the channel statistics are given by the matrix
\[ P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
Thus $p(0 \mid 0) = p(1 \mid 1) = 1$, and hence 0 and 1 are always received correctly. Consider a probability distribution on the input alphabet given by a random variable $X$ where $p(0) = p$ and $p(1) = 1 - p$. Since the channel is noiseless, we have the same probability distribution on the output alphabet. A simple calculation shows that $H(X \mid Y) = 0$, and thus, as is to be expected, there is no uncertainty about the value of $X$ when a value of $Y$ is received. Thus
\[ I(X, Y) = H(X) - H(X \mid Y) = H(X) = H(p) \]
and
\[ C = \sup_{0 \le p \le 1} H(p) = H(\tfrac{1}{2}) = 1. \]
Therefore the channel capacity equals 1 bit. This is no surprise because the channel is noiseless, and hence one error-free bit can be transmitted per use of the channel.

Example 1.3 The $m$-ary symmetric DMC is defined as follows. There are $m$ input symbols $x_1, x_2, \ldots, x_m$ and $m$ output symbols $y_1, y_2, \ldots, y_m$. (Here we really want to think of $y_i$ as being the same as $x_i$, but the notation is less ambiguous if we use different symbols to denote the input and output alphabets.) The channel statistics are given by the matrix
\[ P = \begin{pmatrix} p & \frac{1-p}{m-1} & \cdots & \frac{1-p}{m-1} \\ \frac{1-p}{m-1} & p & \cdots & \frac{1-p}{m-1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1-p}{m-1} & \frac{1-p}{m-1} & \cdots & p \end{pmatrix}. \]
Thus the probability of correct transmission of each symbol is $p(y_i \mid x_i) = p$ for $1 \le i \le m$, while the probability of incorrect transmission of each symbol is $1 - p$, with each of the $m - 1$ incorrect symbols having the same probability, that is, $p(y_j \mid x_i) = (1-p)/(m-1)$ for $1 \le i, j \le m$, $i \ne j$. Consider a probability distribution on the input alphabet given by a random variable $X$

where $p(x_i) = p_i$ for $1 \le i \le m$ and $p_1 + p_2 + \cdots + p_m = 1$. An elementary calculation reveals that
\[ H(Y \mid X) = -\left( p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1} \right) \]
and hence
\[ I(X, Y) = H(Y) - H(Y \mid X) = H(Y) + p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1}. \]
Therefore
\[ \sup_X I(X, Y) \le \sup_Y H(Y) + p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1}. \]
It can be shown that $\sup_Y H(Y)$ occurs when $Y$ has the uniform distribution where $p(y_i) = 1/m$ for each $i = 1, 2, \ldots, m$. Hence
\[ \sup_Y H(Y) = \log_2 m, \]
implying that
\[ C = \sup_X I(X, Y) \le \log_2 m + p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1}. \]
But if $X$ has the uniform distribution where $p(x_i) = 1/m$ for all $i = 1, 2, \ldots, m$, then
\[ p(y_j) = \sum_i p(x_i) p(y_j \mid x_i) = \frac{1}{m} \sum_i p(y_j \mid x_i) = \frac{1}{m}, \]
and hence $Y$ has the uniform distribution. We conclude that
\[ C = \log_2 m + p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1}. \]
The 2-ary symmetric DMC is also called the binary symmetric channel, abbreviated as BSC. The capacity of the BSC is
\[ C = 1 + p \log_2 p + (1-p) \log_2 (1-p) = 1 - H(p). \]
Note that, as is to be expected, the capacity of the BSC is zero when $p = 1/2$.
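The closed-form capacity just derived is easy to evaluate. The following Python sketch (a small helper of ours, not from the text) computes $C = \log_2 m + p \log_2 p + (1-p) \log_2 \frac{1-p}{m-1}$ for the $m$-ary symmetric channel; with $m = 2$ it specializes to $C = 1 - H(p)$ for the BSC.

    from math import log2

    def symmetric_channel_capacity(m, p):
        """Capacity of the m-ary symmetric DMC with per-symbol success probability p."""
        c = log2(m)
        if p > 0:
            c += p * log2(p)
        if p < 1:
            c += (1 - p) * log2((1 - p) / (m - 1))
        return c

    print(symmetric_channel_capacity(2, 0.9))   # BSC: 1 - H(0.1), about 0.531
    print(symmetric_channel_capacity(2, 0.5))   # 0.0: the channel carries no information
    print(symmetric_channel_capacity(4, 0.9))   # about 1.37 bits per channel use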

The proof of Shannon's channel coding theorem rests on the weak law of large numbers of probability theory. This law asserts that if $Z_1, Z_2, Z_3, \ldots$ are independent, identically distributed real random variables with the distribution of a real random variable $Z$, then
\[ \frac{1}{n} \sum_{i=1}^{n} Z_i \]
is close to the expected value of $Z$ with high probability. More precisely,
\[ \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} Z_i = \mathrm{Exp}(Z) \quad \text{(in probability)}. \]
Let $X_1, X_2, X_3, \ldots$ be independent, identically distributed random variables with the distribution of a random variable $X$. These random variables are not necessarily real random variables, but their probability distributions, and hence the logarithms of their probability distributions, are independent, identically distributed real random variables with the same distribution as $\log_2 \mathrm{prob}(X)$. Hence, by the weak law of large numbers applied to
\[ \log_2 \mathrm{prob}(X_1), \log_2 \mathrm{prob}(X_2), \log_2 \mathrm{prob}(X_3), \ldots, \]
with high probability
\[ \frac{1}{n} \sum_{i=1}^{n} \log_2 \mathrm{prob}(X_i) = \frac{1}{n} \log_2 \mathrm{prob}(X_1, X_2, \ldots, X_n) \]
is close to the expected value of $\log_2 \mathrm{prob}(X)$, that is, to $-H(X)$, for large $n$. Hence with high probability,
\[ -\frac{1}{n} \log_2 \mathrm{prob}(X_1, X_2, \ldots, X_n) \]
is close to the entropy $H(X)$ for large $n$. This implies that for any $\epsilon > 0$, we can choose $n$ large enough so that for almost all outcomes $u_1, u_2, \ldots, u_n$ of the random variables $X_1, X_2, \ldots, X_n$, we have
\[ 2^{-n(H(X)+\epsilon)} \le p(u_1, u_2, \ldots, u_n) \le 2^{-n(H(X)-\epsilon)}, \tag{1} \]
where $p(u_1, u_2, \ldots, u_n)$ is an abbreviation for $p(X_1 = u_1, X_2 = u_2, \ldots, X_n = u_n)$. The set of all outcomes $u_1, u_2, \ldots, u_n$ satisfying (1) is the $\epsilon$-typical set of length $n$, less precisely the typical set. As a consequence of the weak law of large numbers, we have that the probability of the typical set is close to 1 for large $n$, and hence is greater than $1 - \epsilon$ for large $n$. The size of the typical set is between $2^{n(H(X)-\epsilon)}$ and $2^{n(H(X)+\epsilon)}$. In summary, the typical set has the following three remarkable properties: (i) the typical set has probability nearly equal to 1, (ii) all elements of the typical set are nearly equally probable, and (iii) the number of elements in the typical set is nearly equal to $2^{nH(X)}$.
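The concentration that defines the typical set is easy to observe empirically. The sketch below (our own experiment, not part of the text) draws long sequences from a biased binary source and checks that $-\frac{1}{n}\log_2 p(u_1, \ldots, u_n)$ clusters around $H(X)$, exactly as (1) predicts.

    import random
    from math import log2

    def empirical_aep(p_one=0.2, n=10_000, trials=5, seed=1):
        """Sample length-n sequences from a Bernoulli(p_one) source and report
        -(1/n) log2 of each sequence's probability, which should be near H(X)."""
        random.seed(seed)
        h = -p_one * log2(p_one) - (1 - p_one) * log2(1 - p_one)
        print("entropy H(X) =", round(h, 4))
        for _ in range(trials):
            seq = [1 if random.random() < p_one else 0 for _ in range(n)]
            logp = sum(log2(p_one) if u == 1 else log2(1 - p_one) for u in seq)
            print("-(1/n) log2 p(sequence) =", round(-logp / n, 4))

    empirical_aep()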

Consider again a DMC with input symbols $\{x_1, x_2, \ldots, x_m\}$ and output symbols $\{y_1, y_2, \ldots, y_m\}$, and with channel statistics given by the $m \times m$ matrix $P = [p_{ij}]$. Using this channel we can define a new channel, called the $n$th extension of the DMC and abbreviated by DMC$^{(n)}$. The set of input symbols is the set
\[ \{(u_1, u_2, \ldots, u_n)\}, \tag{2} \]
where each $u_i$ is one of $x_1, x_2, \ldots, x_m$. The number of such symbols is $m^n$. The set of output symbols is the set
\[ \{(v_1, v_2, \ldots, v_n)\}, \tag{3} \]
where each $v_i$ is one of $y_1, y_2, \ldots, y_m$. The channel statistics are obtained by defining
\[ p((v_1, v_2, \ldots, v_n) \mid (u_1, u_2, \ldots, u_n)) = \prod_{i=1}^{n} p(v_i \mid u_i). \]
An $(n, M)$-code for a DMC is defined to be a pair $(\mathcal{C}, g)$ consisting of a subset $\mathcal{C}$, called the codebook, of the input symbols for DMC$^{(n)}$ and a decoding function $g$ with domain space equal to the set (3) of output symbols for DMC$^{(n)}$ and target space equal to the set (2) of input symbols. The integer $n$ is the (block) length of the code. The function $g$ is a deterministic rule which assigns to each output $(v_1, v_2, \ldots, v_n)$ a `guess' $g(v_1, v_2, \ldots, v_n)$ in the codebook. The rate of an $(n, M)$-code is defined to be
\[ \frac{\log_2 M}{n}, \]
the number of codewords measured in bits divided by the length of the code. The effectiveness of the $(n, M)$-code $(\mathcal{C}, g)$ is measured by the maximum probability of a decoding error, that is, by
\[ \max_{(u_1, u_2, \ldots, u_n) \in \mathcal{C}} \mathrm{prob}(g(v_1, v_2, \ldots, v_n) \ne (u_1, u_2, \ldots, u_n) \mid (u_1, u_2, \ldots, u_n)). \]
A rate $R$ is achievable by the DMC provided there exists an infinite sequence $(\mathcal{C}_n, g_n)$ of $(n, 2^{nR})$-codes such that the maximum probability of error tends to 0 as $n \to \infty$. We can now give a formal statement of Shannon's channel coding theorem.

Theorem 1.4 The supremum of achievable rates for a discrete memoryless channel equals the channel capacity $C$. Thus arbitrarily reliable communication is possible at any rate below channel capacity but at no rate above channel capacity.

A proof of Theorem 1.4 can be found in [15]. The basic steps in the proof are the following:

(i) For large $n$ the number of input sequences $(u_1, u_2, \ldots, u_n)$ to the $n$th extension DMC$^{(n)}$ of the DMC is about $2^{nH(X)}$.

(ii) For each such input sequence, the number of possible output sequences $(v_1, v_2, \ldots, v_n)$ (the typical sequences) is about $2^{nH(Y \mid X)}$, and each of them occurs with nearly the same probability.

(iii) We wish to select as codewords for an $(n, M)$-code $(\mathcal{C}, g)$ a set of input sequences with the property that, with high probability, no two codewords are received as the same output sequence, thereby determining a decoding function $g$.

(iv) The total number of output sequences is about $2^{nH(Y)}$. We partition them into sets of size $2^{nH(Y \mid X)}$, one for each of the input sequences. The number of possible codewords is then about
\[ \frac{2^{nH(Y)}}{2^{nH(Y \mid X)}} = 2^{n(H(Y) - H(Y \mid X))} = 2^{nI(X, Y)}. \]

(v) The rate of this code is about
\[ \frac{\log_2 2^{nI(X, Y)}}{n} = I(X, Y), \]
of which the supremum is the channel capacity $C$ of the DMC.

Practically speaking, the codes which Theorem 1.4 asserts exist are not very useful. In order to achieve a rate $R$ which is nearly equal to channel capacity, the length $n$ of the code may be so long that the amount of information contained in a codeword, namely $Rn$, exceeds the amount of information that one wishes to send over the channel. As should be clear from the steps above, the proof of Theorem 1.4 is non-constructive. The theorem merely guarantees the existence of good codes. To a large extent, the pages that follow describe the results of the search since 1948 to construct these codes.

By choosing as the set of symbols the $q$ elements of a finite field $F_q$, the symbol set acquires an algebraic structure. The codewords can then be regarded as vectors in the $n$-dimensional vector space $F_q^n$ over $F_q$, and it is now possible to impose an algebraic structure on the codebook by taking $\mathcal{C}$ to be a linear subspace of $F_q^n$. Such linear codes were first introduced in 1950 by Hamming, who is generally recognized as the first coding theorist. (Shannon includes the famous binary Hamming code of length 7 and dimension 4 (see the next section) in his 1948 paper with attribution to Hamming, who was his colleague at Bell Telephone Laboratories. Apparently because of patent considerations, Hamming's paper was not published until 1950.) After reading Shannon's paper, which included the reference to the length 7 code of Hamming, Golay published his paper in 1949 in which he elaborated on Hamming's construction and also constructed the famous binary Golay code of length 23 and dimension 12. In 1954 Elias showed that the conclusions of Shannon's theorem continue to hold for the binary symmetric channel even if the codebook is required to be linear. Thus even with linear codebooks, arbitrarily reliable communication is possible at any rate below the channel capacity
\[ C = 1 + p \log_2 p + (1-p) \log_2 (1-p) = 1 - H(p) \]

of a binary symmetric channel. For further information about the origins of coding theory, we refer the reader to [6] and [7]. In the next section we begin to explore linear codes over arbitrary finite fields. The mathematical foundations of linear codes were first set forth by Slepian in 1956. Since it is difficult in general to compute the probability of decoding error, a combinatorial measure for the goodness of a code is usually used. This measure, the distance function on $F_q^n$ in which the distance between two vectors is defined to be the number of positions in which they disagree, was introduced by Hamming in 1950 and now bears his name.

[ADD Material on Erasure Codes somewhere in this section.]

2 Basic Concepts

Among all types of block codes, linear codes are the most studied. Because of their algebraic structure, they are easier to describe, encode, and decode than nonlinear codes. Let $V$ denote the linear space $F_q^n$ of all $n$-tuples over the finite field $F_q = GF(q)$. An $[n, k]$ linear code $\mathcal{C}$ over $F_q$ is a $k$-dimensional subspace of $V$. We usually write the vectors in $V$ as words $a_1 a_2 \cdots a_n$ over the alphabet $F_q$ and call the vectors in $\mathcal{C}$ codewords. The field $F_2$ is very special in coding theory, and codes over $F_2$ are called binary codes. Codes over $F_3$ are called ternary codes, and codes over $F_4$ are called quaternary codes.

The two most common ways to present a linear code are by a generator matrix and by a parity check matrix. A generator matrix for a code $\mathcal{C}$ is any $k \times n$ matrix $G$ whose rows form a basis for $\mathcal{C}$. For any set of $k$ independent columns of a generator matrix $G$, the corresponding set of coordinates forms an information set for $\mathcal{C}$. If the first $k$ coordinates form an information set, the code has a unique generator matrix of the form $[I_k \mid A]$ where $I_k$ is the $k \times k$ identity matrix. Such a generator matrix is in standard form. Recall that the ordinary inner product of vectors $u = u_1 \cdots u_n$, $v = v_1 \cdots v_n \in V$ is
\[ u \cdot v = \sum_{i=1}^{n} u_i v_i. \]
The dual of $\mathcal{C}$ is the $[n, n-k]$ linear code $\mathcal{C}^\perp$ defined by
\[ \mathcal{C}^\perp = \{ v \in V \mid u \cdot v = 0 \text{ for all } u \in \mathcal{C} \}. \]
An $(n-k) \times n$ generator matrix $H$ of $\mathcal{C}^\perp$ is called a parity check matrix for $\mathcal{C}$. So
\[ \mathcal{C} = \{ x \in F_q^n \mid H x^T = 0 \}. \]

Theorem 2.1 If $G = [I_k \mid A]$ is a generator matrix for $\mathcal{C}$ in standard form, then $H = [-A^T \mid I_{n-k}]$ is a parity check matrix for $\mathcal{C}$.

Proof: We have $H G^T = -A^T + A^T = O$ and the result follows. $\Box$
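Theorem 2.1 is easy to check by machine for binary codes. The following Python sketch (helper functions of ours; arithmetic is over $F_2$, where $-A^T = A^T$) builds $H = [A^T \mid I_{n-k}]$ from $G = [I_k \mid A]$ and verifies that $G H^T = O$.

    def generator_standard_form(A):
        """Return G = [I_k | A] for a k x (n-k) binary matrix A (lists of 0/1)."""
        k = len(A)
        return [[1 if c == i else 0 for c in range(k)] + A[i] for i in range(k)]

    def parity_check_from_standard_form(A):
        """Return H = [A^T | I_{n-k}], a parity check matrix over F_2."""
        k, r = len(A), len(A[0])
        return [[A[i][j] for i in range(k)] + [1 if c == j else 0 for c in range(r)]
                for j in range(r)]

    def is_zero_product(G, H):
        """Check that every row of G is orthogonal (mod 2) to every row of H."""
        return all(sum(g[c] * h[c] for c in range(len(g))) % 2 == 0
                   for g in G for h in H)

    A = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # a sample 4 x 3 binary matrix A
    G = generator_standard_form(A)
    H = parity_check_from_standard_form(A)
    print(is_zero_product(G, H))    # True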

Example 2.2 The simplest way to encode information so as to be able to recover it in the presence of noise is to repeat each message symbol a fixed number of times. Suppose that our information is given in terms of 0's and 1's, the elements of $F_2$, and we repeat each symbol $n$ times. If for instance $n = 7$, then whenever we want to send a 0 we send 0000000, and whenever we want to send a 1 we send 1111111. If at most three errors are made in transmission and if we decode by `majority vote', then we can correctly determine the information symbol, 0 or 1. In general, our code $\mathcal{C}$ is the $[n, 1]$ binary linear code consisting of the two codewords $\mathbf{0} = 00\cdots0$ and $\mathbf{1} = 11\cdots1$, and is called the binary repetition code of length $n$. A generator matrix for this code is
\[ G = [1\ 1\ \cdots\ 1], \]
and a parity check matrix is the $(n-1) \times n$ matrix
\[ H = [\,I_{n-1} \mid \mathbf{1}\,] \]
obtained by appending a column of 1's to the identity matrix $I_{n-1}$. The dual code $\mathcal{C}^\perp$ is the $[n, n-1]$ code with generator matrix $H$ and thus consists of all binary $n$-tuples $a_1 a_2 \cdots a_{n-1} b$ where $b = a_1 + a_2 + \cdots + a_{n-1}$ (addition in $F_2$). The $n$th coordinate $b$ is an overall parity check for the first $n-1$ coordinates, chosen therefore so that the sum of all the coordinates equals 0. $G$ is a parity check matrix for $\mathcal{C}^\perp$. The code $\mathcal{C}^\perp$ has the property that a single transmission error can be detected (since the sum of the coordinates will not be 0) but not corrected.

A code $\mathcal{C}$ is self-orthogonal provided $\mathcal{C} \subseteq \mathcal{C}^\perp$ and self-dual provided $\mathcal{C} = \mathcal{C}^\perp$. The length $n$ of a self-dual code is even and the dimension is $n/2$. If $\mathcal{C}$ is a binary self-orthogonal code, each codeword has even weight; if $\mathcal{C}$ is a ternary self-orthogonal code, the weight of each codeword is divisible by 3.

Example 2.3 The matrix $G = [I_4 \mid A]$ where
\[ A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix} \]
is a generator matrix for a binary code that we denote by $\mathcal{H}_7$. A parity check matrix for $\mathcal{H}_7$ is $H = [A^T \mid I_3]$. Let $\mathcal{H}_8$ be the code of length 8 and dimension 4 obtained from $\mathcal{H}_7$ by adding an overall parity check coordinate to each row of $G$ and thus to each codeword of $\mathcal{H}_7$. Then $\hat{G} = [I_4 \mid B]$ is a generator matrix for $\mathcal{H}_8$ where
\[ B = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}. \]
It is easy to verify that $\mathcal{H}_8$ is a self-dual code.
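A quick brute-force check of Example 2.3 is possible because $\mathcal{H}_8$ has only $2^4 = 16$ codewords. The sketch below (our own code, using the generator $\hat{G} = [I_4 \mid B]$ given above) enumerates the codewords and confirms that the rows of $\hat{G}$ are pairwise orthogonal over $F_2$; together with the dimension count $4 = 8/2$, this shows that $\mathcal{H}_8$ is self-dual.

    from itertools import product

    G_hat = [[1, 0, 0, 0, 0, 1, 1, 1],
             [0, 1, 0, 0, 1, 0, 1, 1],
             [0, 0, 1, 0, 1, 1, 0, 1],
             [0, 0, 0, 1, 1, 1, 1, 0]]

    def codewords(G):
        """All codewords of the binary code generated by the rows of G."""
        n = len(G[0])
        for coeffs in product([0, 1], repeat=len(G)):
            yield tuple(sum(a * row[i] for a, row in zip(coeffs, G)) % 2 for i in range(n))

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % 2

    words = list(codewords(G_hat))
    print(len(words))                                          # 16 codewords
    print(all(dot(u, v) == 0 for u in G_hat for v in G_hat))   # True: rows pairwise orthogonal
    print(sorted(set(sum(w) for w in words)))                  # [0, 4, 8]: the weights that occur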

An important invariant of a code is the minimum distance between codewords. The (Hamming) distance $d(x, y)$ between two vectors $x, y \in V$ is defined to be the number of coordinates in which $x$ and $y$ differ. Distance is a metric on the linear space $V$. The (minimum) distance of a code $\mathcal{C}$ is the smallest distance between distinct codewords and is important in determining its error correcting capabilities. The (Hamming) weight of a vector $x \in V$ is the number $\mathrm{wt}(x)$ of its nonzero coordinates. Clearly, $d(x, y) = \mathrm{wt}(x - y)$. Thus if $\mathcal{C}$ is a linear code, the minimum distance $d$ is the same as the minimum weight of a nonzero codeword. If the minimum weight $d$ of an $[n, k]$ code is known, then we refer to the code as an $[n, k, d]$ code. Let $A_i$, also denoted $A_i(\mathcal{C})$, be the number of codewords of weight $i$ in $\mathcal{C}$. The list $A_i$ for $0 \le i \le n$ is called the weight distribution of $\mathcal{C}$. The weight distribution of a code is also an important invariant, and the weight distributions of many codes have been computed.

Example 2.4 $\mathcal{H}_7$ has weight distribution $A_0 = A_7 = 1$ and $A_3 = A_4 = 7$. (In general, we omit those $A_i$'s which are 0.) The minimum weight is $d = 3$ and thus we have a $[7, 4, 3]$ code.
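The weight distribution in Example 2.4 can be verified by enumerating the $2^4$ codewords of $\mathcal{H}_7$. The following sketch (our own helper, reusing the generator matrix $G = [I_4 \mid A]$ of Example 2.3) counts the codewords of each weight.

    from itertools import product
    from collections import Counter

    G = [[1, 0, 0, 0, 0, 1, 1],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1, 0],
         [0, 0, 0, 1, 1, 1, 1]]

    def weight_distribution(G):
        """Return {weight: number of codewords} for the binary code generated by G."""
        n, dist = len(G[0]), Counter()
        for coeffs in product([0, 1], repeat=len(G)):
            word = [sum(a * row[i] for a, row in zip(coeffs, G)) % 2 for i in range(n)]
            dist[sum(word)] += 1
        return dict(sorted(dist.items()))

    print(weight_distribution(G))   # {0: 1, 3: 7, 4: 7, 7: 1}, as in Example 2.4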

The following theorem gives an elementary relationship between the weight of a codeword and a parity check matrix for a linear code.

Theorem 2.5 Let $\mathcal{C}$ be a linear code with parity check matrix $H$. If $c \in \mathcal{C}$, the columns of $H$ corresponding to the nonzero coordinates of $c$ are linearly dependent. Conversely, if a linear dependence relation with nonzero coefficients exists among $w$ columns of $H$, then there is a codeword in $\mathcal{C}$ of weight $w$ whose nonzero coordinates correspond to these columns.

One way to find the minimum weight $d$ of a linear code is to examine all the nonzero codewords. The following corollary shows how to use the parity check matrix to find $d$.

Corollary 2.6 A linear code has minimum weight $d$ if and only if its parity check matrix has a set of $d$ linearly dependent columns but no set of $d - 1$ linearly dependent columns.

The minimum weight is also characterized in the following theorem.

Theorem 2.7 If $\mathcal{C}$ is an $[n, k, d]$ code, then every $n - d + 1$ coordinate positions contain an information set. Furthermore, $d$ is the largest number with this property.

Proof: Let $G$ be a generator matrix for $\mathcal{C}$, and consider a set $X$ of $s$ coordinate positions. Without loss of generality we assume $X$ is the set of the last $s$ positions. Suppose $X$ does not contain an information set. Let $G = [A \mid B]$ where $A$ is $k \times (n-s)$ and $B$ is $k \times s$. Then the column rank of $B$, and hence the row rank of $B$, is less than $k$. Hence there exists a nontrivial linear combination of the rows of $B$ which equals 0, and hence a codeword $c$ which is 0 in the last $s$ positions. Since the rank of $G$ is $k$, $c \ne 0$ and hence $d \le n - s$, equivalently, $s \le n - d$. The theorem now follows. $\Box$

Two linear codes $\mathcal{C}_1$ and $\mathcal{C}_2$ are permutation equivalent provided there is a permutation of coordinates which sends $\mathcal{C}_1$ to $\mathcal{C}_2$. Thus $\mathcal{C}_1$ and $\mathcal{C}_2$ are permutation equivalent if and only if there is a permutation matrix $P$ such that $G_1$ is a generator matrix of $\mathcal{C}_1$ if and only if $G_1 P$ is a generator matrix of $\mathcal{C}_2$. The first assertion in the next theorem follows by applying elementary row operations to any generator matrix of a linear code; the second assertion is immediate from Theorem 2.1.

Theorem 2.8 Any linear code $\mathcal{C}$ is permutation equivalent to a code which has a generator matrix in standard form. Moreover, if $k$ coordinates are an information set for $\mathcal{C}$, the complementary coordinates are an information set for $\mathcal{C}^\perp$.

Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be codes of the same length, and let $G_1$ be a generator matrix for $\mathcal{C}_1$. Then $\mathcal{C}_1$ and $\mathcal{C}_2$ are monomially equivalent provided there is a monomial matrix $M$ so that $G_1 M$ is a generator matrix of $\mathcal{C}_2$. Here a monomial matrix is a matrix of the form $M = DP$ where $P$ is a permutation matrix and $D$ is a diagonal matrix each of whose main diagonal entries is nonzero. Monomial equivalence and permutation equivalence are the same for binary codes. Permutation equivalence preserves self-orthogonality. So does monomial equivalence if the diagonal elements of $M$ are restricted to be $\pm 1$. Two monomially equivalent codes have the same weight distribution. However, two codes with the same weight distribution need not be equivalent, as the following example shows.

Example 2.9 Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be binary codes with generator matrices
\[ G_1 = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix} \quad \text{and} \quad G_2 = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \]
respectively. Both $\mathcal{C}_1$ and $\mathcal{C}_2$ have weight distribution $A_0 = A_6 = 1$ and $A_2 = A_4 = 3$. Since only $\mathcal{C}_1$ is self-dual, $\mathcal{C}_1$ and $\mathcal{C}_2$ are not equivalent.

The set of coordinate permutations that map a code $\mathcal{C}$ to itself forms a group called the permutation automorphism group of $\mathcal{C}$. The set of monomial maps that send $\mathcal{C}$ to itself also forms a group called the monomial automorphism group and is denoted by $\mathrm{Aut}(\mathcal{C})$. Clearly $\mathrm{Aut}(\mathcal{C}^\perp) = \{ D^{-1} P \mid DP \in \mathrm{Aut}(\mathcal{C}) \}$. Knowledge of these groups can give important theoretical and practical information about a code. While the automorphism groups of some codes have been determined, they are in general difficult to find. For example, $\mathrm{Aut}(\mathcal{H}_7)$ has order 168 and is isomorphic to the projective special linear group $PSL_2(7)$. For monomially equivalent codes we have the following relation between their automorphism groups.

Theorem 2.10 Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be monomially equivalent codes. Then $\mathrm{Aut}(\mathcal{C}_1)$ and $\mathrm{Aut}(\mathcal{C}_2)$ are conjugate by a monomial map.

Let $\mathcal{C}$ be a linear $[n, k]$ code with generator matrix $G$. The simplest way to use $\mathcal{C}$ is to partition a message into $k$-tuples $x$ in $F_q^k$ and to encode $x$ by the codeword $c = xG$. If $G$ is in standard form, the first $k$ coordinates of the codeword $c$ are the information symbols $x$; the remaining $n - k$ symbols are the check symbols, that is, the redundancy added to $x$ in order to help recover $x$ if errors occur. $G$ may not be in standard form. If however there exist column indices $i_1, i_2, \ldots, i_k$ such that the $k \times k$ matrix consisting of these $k$ columns of $G$ is the $k \times k$ identity matrix, then the message is found in $k$ coordinates of the codeword, scrambled but otherwise unchanged; that is, message symbol $x_j$ is in component $i_j$ of the codeword. If this occurs, we say that the encoder is systematic.

The process of decoding, that is, determining which codeword (and thus $x$) was sent when a vector $y$ is received, is more complex. Maximum likelihood decoding refers to any method for decoding in which a received vector is decoded as one of the codewords which is most likely to have been sent. Suppose the communication channel is the $q$-ary symmetric channel in which each symbol is more likely to be received correctly rather than incorrectly. Then any codeword whose distance to $y$ is minimum, that is, is a nearest neighbor of $y$ in the Hamming metric, is a codeword which is most likely to have been sent. Thus in this case, maximum likelihood decoding is the same as nearest neighbor decoding. Let $e = y - c$ so that $y = c + e$. Then we can think of the communication channel as adding an error vector $e$ to the codeword $c$, and the goal of decoding is to determine $e$. Nearest neighbor decoding is equivalent to finding a vector $e$ of smallest weight such that $y - e$ is in the code. This error vector need not be unique, since there may be more than one codeword nearest to $y$.

This discussion motivates consideration of spheres about codewords. The sphere of radius $r$ about a vector $u$ is defined to be the set
\[ S_r(u) = \{ v \in V \mid d(u, v) \le r \} \]
of all vectors whose distance from $u$ is less than or equal to $r$. The number of vectors in $S_r(u)$ equals
\[ \sum_{i=0}^{r} \binom{n}{i} (q-1)^i. \]

Theorem 2.11 If $d$ is the minimum distance of a code $\mathcal{C}$ (linear or nonlinear) and $t = \lfloor \frac{d-1}{2} \rfloor$, then spheres of radius $t$ about codewords are disjoint.

Proof: If $z \in S_t(c_1) \cap S_t(c_2)$ where $c_1$ and $c_2$ are codewords, then by the triangle inequality,
\[ d(c_1, c_2) \le d(c_1, z) + d(z, c_2) \le 2t < d, \]
implying that $c_1 = c_2$. $\Box$

Corollary 2.12 With the notation of the previous theorem, if a codeword $c$ is sent and $y$ is received where $t$ or fewer errors have occurred, then $c$ is the unique codeword closest to $y$.

A consequence of the corollary is that nearest neighbor decoding decodes correctly any received vector in which at most $t$ errors have occurred in transmission. Since the minimum distance of $\mathcal{C}$ is $d$, there exist two distinct codewords such that the spheres of radius $t + 1$ about them are not disjoint. Thus the packing radius of $\mathcal{C}$, the maximum radius so that the spheres about the codewords are pairwise disjoint, equals $t = \lfloor \frac{d-1}{2} \rfloor$. The packing radius $t$ of a code is characterized by the property that nearest neighbor decoding always decodes correctly a received vector in which $t$ or fewer errors have occurred but will not always decode correctly a received vector in which $t + 1$ errors have occurred. Thus $\mathcal{C}$ is a $t$-error-correcting code but not a $(t+1)$-error-correcting code.

One way to find a closest codeword to a received vector $y$ is to examine all codewords until one is found with distance $t$ or less from $y$. But obviously this is a realistic decoding algorithm only for codes with a small number of vectors.

The minimum distance $d$ is a simple measure of the goodness of a code. For a given length and number of codewords (equivalently, dimension in the case of linear codes), a fundamental problem in coding theory is to determine a code with the largest $d$. Alternatively, given $n$ and $d$, determine the maximum number $A_q(n, d)$ of codewords in a code (equivalently, dimension in the case of linear codes) over $F_q$ of length $n$ and minimum distance at least $d$. The number $A_2(n, d)$ is also denoted by $A(n, d)$. The maximum when restricted to linear codes is denoted by $B_q(n, d)$ ($B(n, d)$ in the binary case). Clearly $B_q(n, d) \le A_q(n, d)$. For modest values of $n$ and $d$, $A(n, d)$ and $B(n, d)$ have been determined and tabulated (see Chapter xx). The fact that the spheres of radius $t$ about codewords are pairwise disjoint immediately implies the following elementary inequality.

Theorem 2.13 (Sphere Packing Bound)
\[ B_q(n, d) \le A_q(n, d) \le \frac{q^n}{\sum_{i=0}^{t} \binom{n}{i} (q-1)^i}, \]
where $t = \lfloor (d-1)/2 \rfloor$.
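The sphere packing bound is a one-line computation. The sketch below (a helper of ours) evaluates the right-hand side of Theorem 2.13; for instance, a binary code of length 7 and minimum distance 3 has at most 16 codewords, a value met by $\mathcal{H}_7$, and the Golay parameters appearing later in this section also meet the bound.

    from math import comb, floor

    def sphere_packing_bound(q, n, d):
        """Upper bound on A_q(n, d) from Theorem 2.13 (not necessarily an integer)."""
        t = (d - 1) // 2
        sphere_size = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
        return q ** n / sphere_size

    print(sphere_packing_bound(2, 7, 3))          # 16.0: attained by the Hamming code H_7
    print(sphere_packing_bound(2, 23, 7))         # 4096.0 = 2^12: attained by the binary Golay code
    print(sphere_packing_bound(3, 11, 5))         # 729.0 = 3^6: attained by the ternary Golay code
    print(floor(sphere_packing_bound(2, 10, 3)))  # 93: an upper bound only; the exact A(10,3) is smaller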

The covering radius $\rho = \rho(\mathcal{C})$ is defined to be the smallest integer $s$ such that $V$ is contained in the union of the spheres of radius $s$ centered at codewords. Equivalently,
\[ \rho(\mathcal{C}) = \max_{x \in V} \min_{c \in \mathcal{C}} d(x, c). \]
Obviously $t \le \rho(\mathcal{C})$. If $t = \rho(\mathcal{C})$, the code is called perfect. A code with minimum distance $d$ can be perfect only if $d$ is odd. A code is perfect if and only if the spheres of radius $t$ about codewords partition $V$, equivalently, if and only if equality holds in the sphere packing bound. The codes $\mathcal{C} = V = F_q^n$, the codes consisting of exactly one codeword (the zero vector in the case of linear codes), and the binary codes of odd length consisting of a vector $c$ and the complementary vector $\bar{c}$ with 0's and 1's interchanged are all perfect codes, and are called trivial perfect codes.

It is easy to see that $\mathcal{H}_7$ is a perfect single error correcting code. $\mathcal{H}_7$ is a member of an infinite family of perfect linear single error correcting codes which we now define. $\mathcal{H}_7$ is equivalent to the code with parity check matrix
\[ H' = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix} \]
whose columns are the numbers 1 through 7 written as binary numerals (with leading 0's as necessary to have a 3-tuple). More generally, let $n = 2^m - 1$ $(m \ge 2)$. Then the $m \times (2^m - 1)$ matrix $H_m$ whose columns are the numbers $1, 2, \ldots, 2^m - 1$ written as binary numerals is the parity check matrix of an $[n = 2^m - 1, k = n - m]$ binary code, called the binary Hamming code of length $n = 2^m - 1$ and denoted by either $\mathcal{H}_n$ or $\mathcal{H}_{n,2}$. (We shall follow the custom of not distinguishing between equivalent Hamming codes.) Since the columns of $H_m$ are distinct and nonzero, the minimum distance is at least 3. Since the columns corresponding to the numbers 1, 2 and 3 are linearly dependent, the minimum distance equals 3. A simple calculation shows that the Hamming codes attain the sphere packing bound and hence are perfect single error correcting binary codes. Similarly, Hamming codes over an arbitrary finite field $F_q$ can be defined by choosing as columns for a parity check matrix a nonzero vector from each 1-dimensional subspace of $F_q^m$, that is, the points of the projective geometry $PG(m-1, q)$. These codes, which have length $n = (q^m - 1)/(q - 1)$, dimension $n - m$, and minimum distance 3, will be denoted $\mathcal{H}_{n,q}$. It is straightforward to check that every perfect single error correcting linear code is equivalent to a Hamming code.
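The parity check matrix $H_m$ is easy to generate for any $m$: its columns are just the binary numerals $1, 2, \ldots, 2^m - 1$. The sketch below (our own construction) builds $H_m$ and reports the parameters $[2^m - 1,\, 2^m - 1 - m]$ of the resulting Hamming code; for $m = 3$ it reproduces the matrix $H'$ displayed above.

    def hamming_parity_check(m):
        """Return H_m as a list of m rows; column i (1-based) is the binary
        numeral for i, most significant bit in the top row."""
        n = 2 ** m - 1
        return [[(col >> (m - 1 - row)) & 1 for col in range(1, n + 1)]
                for row in range(m)]

    for m in range(2, 6):
        H = hamming_parity_check(m)
        n = len(H[0])
        print(f"m={m}: Hamming code of length {n}, dimension {n - len(H)}")

    for row in hamming_parity_check(3):   # the matrix H' above
        print(row)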

The following is a major accomplishment of coding theory and will be discussed further in Chapter xx.

Theorem 2.14 (i) There exist perfect single error correcting codes over $F_q$ which are not linear, and all such codes have parameters corresponding to the Hamming codes, namely, length $n = (q^m - 1)/(q - 1)$ with $q^{n-m}$ codewords and minimum distance 3. (ii) The only nontrivial perfect multiple error correcting codes are equivalent to either the binary Golay $[23, 12, 7]$ code or the ternary Golay $[11, 6, 5]$ code. (iii) Any binary (respectively, ternary) possibly nonlinear code with $2^{12}$ (respectively, $3^6$) vectors containing the 0 vector with minimum distance 7 (respectively, 5) is equivalent to the Golay $[23, 12, 7]$ binary code (respectively, $[11, 6, 5]$ ternary code).

The Golay codes referred to in (ii) of Theorem 2.14 will be first encountered in Examples 5.14 and 6.9. Since all binary $[23, 12, 7]$ codes are equivalent, we refer to every binary $[23, 12, 7]$ code as the binary Golay code. Similarly, we refer to every ternary $[11, 6, 5]$ code as the ternary Golay code.

If $\mathcal{C}$ is a code with packing radius $t$ and covering radius $t + 1$, $\mathcal{C}$ is called quasi-perfect. There are many known linear and nonlinear quasi-perfect codes (e.g. certain double error correcting BCH codes and some punctured Preparata codes). However, unlike perfect codes, there is no general classification.

We now discuss a general decoding algorithm for linear codes called syndrome decoding. Let $\mathcal{C}$ be an $[n, k, d]$ linear code over $F_q$. Then, in particular, $\mathcal{C}$ is an elementary abelian subgroup of the additive group of $V$, and hence its distinct cosets $x + \mathcal{C}$ partition $V$ into $q^{n-k}$ sets of size $q^k$. Two vectors $x$ and $y$ belong to the same coset if and only if $y - x \in \mathcal{C}$. The weight of a coset is the smallest weight of a vector in the coset, and any vector of this smallest weight in the coset is called a coset leader. The zero vector is the unique leader of the code $\mathcal{C}$. More generally, every coset of weight at most $t = \lfloor (d-1)/2 \rfloor$ has a unique leader, and $t$ is the largest integer with this property.

Let $H$ be a parity check matrix of $\mathcal{C}$. The syndrome of a vector $x$ in $V$ (with respect to the parity check matrix $H$) is the vector in $F_q^{n-k}$ defined by
\[ \mathrm{syn}(x) = H x^T. \]
The code $\mathcal{C}$ consists of all vectors whose syndrome equals 0. More generally, two vectors belong to the same coset if and only if they have the same syndrome, and hence there exists a one to one correspondence between cosets of $\mathcal{C}$ and syndromes (i.e. vectors in $F_q^{n-k}$). We denote by $\mathcal{C}_s$ the coset of $\mathcal{C}$ consisting of all vectors in $V$ with syndrome $s$.

Let a codeword which is sent over a communication channel be received as a vector $y$. Since in nearest neighbor decoding we seek a vector $e$ of smallest weight such that $y - e \in \mathcal{C}$, nearest neighbor decoding is equivalent to finding a vector $e$ of smallest weight in the coset containing $y$, that is, a coset leader of the coset containing $y$. Syndrome decoding is the following implementation of nearest neighbor decoding: (i) For each syndrome $s \in F_q^{n-k}$, choose a coset leader $e_s$ of the coset $\mathcal{C}_s$; (ii) After receiving a vector $y$, compute its syndrome $s$; (iii) $y$ is then decoded as the codeword $y - e_s$.

The naive way to carry out nearest neighbor decoding is to make a table consisting of a nearest codeword for each of the $q^n$ vectors in $V$ and then look

up each received vector in the table in order to decode it. Syndrome decoding requires a table with only $q^{n-k}$ entries. However, before looking in the table, one must perform a matrix-vector multiplication in order to determine the syndrome of the received vector.

Syndrome decoding is particularly simple for the binary Hamming codes $\mathcal{H}_n$ with parameters $[n = 2^m - 1,\ 2^m - 1 - m,\ 3]$. This is because the coset leaders are unique and are the $2^m$ vectors of weight at most 1. Let $H_m$ be the parity check matrix whose columns are the binary numerals for the numbers $1, 2, \ldots, 2^m - 1$. Since the syndrome of the binary $n$-tuple of weight 1 whose unique 1 is in position $i$ is the $m$-tuple representing the binary numeral for $i$, the syndrome immediately gives the coset leader and no table is required for syndrome decoding. Thus syndrome decoding for binary Hamming codes takes the form: (i) After receiving a vector $y$, compute its syndrome $s$; (ii) If $s = 0$, then $y$ is in the code and $y$ is decoded as $y$; otherwise, $s$ is the binary numeral for some positive integer $i$ and $y$ is decoded as the codeword obtained from $y$ by adding 1 to its $i$th bit. The above procedure is easily modified for Hamming codes over other fields.
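Because the syndrome is itself the error position, this decoder needs no table at all. The following sketch (our own implementation, using the same column convention as $H_m$ above) decodes a received binary word of length $2^m - 1$ under the assumption that at most one error occurred.

    def hamming_decode(y):
        """Decode a received binary word y of length n = 2^m - 1 for the binary Hamming
        code whose parity check matrix has column i equal to the binary numeral for i.
        Corrects at most one error."""
        n = len(y)
        m = n.bit_length()                # n = 2^m - 1
        assert n == 2 ** m - 1
        # The syndrome H y^T is the mod-2 sum of the binary numerals of the positions
        # where y has a 1, which we accumulate as an integer with bitwise XOR.
        syndrome = 0
        for i, bit in enumerate(y, start=1):
            if bit:
                syndrome ^= i
        if syndrome != 0:                 # flip the bit in position 'syndrome'
            y = list(y)
            y[syndrome - 1] ^= 1
        return y

    codeword = [1, 1, 1, 0, 0, 0, 0]      # positions {1,2,3}: 1 XOR 2 XOR 3 = 0, so a codeword
    received = [1, 1, 1, 0, 1, 0, 0]      # a single error introduced in position 5
    print(hamming_decode(received))       # [1, 1, 1, 0, 0, 0, 0]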

The definition of the covering radius implies the following characterization of the covering radius of a linear code in terms of coset weights. Assertions (i) and (ii) are easily seen to be equivalent.

Theorem 2.15 Let $\mathcal{C}$ be a linear code with parity check matrix $H$. Then (i) $\rho(\mathcal{C})$ is the weight of the coset of largest weight; (ii) $\rho(\mathcal{C})$ is the smallest number $s$ such that every syndrome is a combination of $s$ or fewer columns of $H$.

We now discuss some simple ways to modify and combine codes. Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be an $[n_1, k_1, d_1]$ code and an $[n_2, k_2, d_2]$ code, respectively, over the same finite field $F_q$. Then their direct sum is the $[n_1 + n_2,\ k_1 + k_2,\ \min\{d_1, d_2\}]$ code
\[ \mathcal{C}_1 \oplus \mathcal{C}_2 = \{ (c_1, c_2) \mid c_1 \in \mathcal{C}_1,\ c_2 \in \mathcal{C}_2 \}. \]
If $\mathcal{C}_1$ and $\mathcal{C}_2$ have generator matrices $G_1$ and $G_2$, respectively, then
\[ G_1 \oplus G_2 = \begin{pmatrix} G_1 & O \\ O & G_2 \end{pmatrix} \]
is a generator matrix for $\mathcal{C}_1 \oplus \mathcal{C}_2$. If $\mathcal{C}$ is the $[2, 1, 2]$ code $\{00, 11\}$, then the code $\mathcal{C}_1$ of Example 2.9 is $\mathcal{C} \oplus \mathcal{C} \oplus \mathcal{C}$. Since the minimum distance of the direct sum of two codes does not exceed the minimum distance of either of the codes, the direct sum of two codes is primarily of theoretical interest.

Given a code $\mathcal{C}$ we can puncture it by deleting the same coordinate, say the last coordinate, in each codeword. Let $\mathcal{C}$ be an $[n, k, d]$ code with $d > 1$. Then the resulting punctured code $\mathcal{C}^*$ is an $[n-1, k, d']$ code where $d'$ is either $d - 1$ or $d$, depending on whether or not there exists a minimum weight vector in $\mathcal{C}$ with a nonzero entry in its last coordinate. In the opposite direction, we can extend a code by including a new coordinate so that the sum of all coordinates is now 0. If $\mathcal{C}$ is a binary code, then the resulting extended code $\hat{\mathcal{C}}$ contains only even weight vectors and is an $[n+1, k, d'']$ code where $d''$ equals $d$ if $d$ is even and equals $d + 1$ if $d$ is odd. This construction is also referred to as adding an overall parity check (a special case of this was discussed earlier in this section). If we extend a code and then puncture the new coordinate, we obtain the original code. However, performing the operations in the other order will in general result in a different code. Extending $\mathcal{H}_7$ gives a self-dual $[8, 4, 4]$ code with covering radius 2.

Let $G$ and $H$ be generator and parity check matrices, respectively, for $\mathcal{C}$. Then a generator matrix $\hat{G}$ for $\hat{\mathcal{C}}$ can be obtained from $G$ by adding an extra column to $G$ so that the sum of the coordinates of each row of $\hat{G}$ is 0. A parity check matrix for $\hat{\mathcal{C}}$ is the matrix
\[ \hat{H} = \begin{pmatrix} 1 & \cdots & 1 & 1 \\ & & & 0 \\ & H & & \vdots \\ & & & 0 \end{pmatrix}. \]
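Puncturing and extending are simple coordinate operations on the codewords themselves. The sketch below (helper functions of ours, written for binary codes represented as sets of tuples) implements both and illustrates on the repetition code of length 3 that extending and then puncturing returns the original code, while puncturing and then extending in general does not.

    def puncture(code, pos=-1):
        """Delete coordinate 'pos' from every codeword."""
        return {tuple(w[:pos] + (w[pos + 1:] if pos != -1 else ())) for w in code}

    def extend(code):
        """Append an overall parity check so every extended codeword sums to 0 mod 2."""
        return {w + (sum(w) % 2,) for w in code}

    rep3 = {(0, 0, 0), (1, 1, 1)}                  # binary repetition code of length 3
    print(sorted(puncture(extend(rep3))))          # [(0, 0, 0), (1, 1, 1)]: the original code
    print(sorted(extend(puncture(rep3))))          # [(0, 0, 0), (1, 1, 0)]: a different code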

The next lemma for binary vectors follows by a simple calculation.

Lemma 2.16 If $x, y \in F_2^n$, then
\[ \mathrm{wt}(x + y) = \mathrm{wt}(x) + \mathrm{wt}(y) - 2\,\mathrm{wt}(x \cap y), \]
where $x \cap y$ is the vector in $F_2^n$ which has 1's precisely in those positions where both $x$ and $y$ have 1's.

In the next theorem we collect some facts about covering radius and coset leaders for the above constructions.

Theorem 2.17 Let $\mathcal{C}$ be an $[n, k]$ code over $F_q$. Let $\hat{\mathcal{C}}$ be the extension of $\mathcal{C}$, and let $\mathcal{C}^*$ be a code obtained from $\mathcal{C}$ by puncturing on some coordinate. The following hold: (i) If $\mathcal{C} = \mathcal{C}_1 \oplus \mathcal{C}_2$, then $\rho(\mathcal{C}) = \rho(\mathcal{C}_1) + \rho(\mathcal{C}_2)$; (ii) $\rho(\mathcal{C}^*) = \rho(\mathcal{C})$ or $\rho(\mathcal{C}^*) = \rho(\mathcal{C}) - 1$; (iii) $\rho(\hat{\mathcal{C}}) = \rho(\mathcal{C})$ or $\rho(\hat{\mathcal{C}}) = \rho(\mathcal{C}) + 1$; (iv) If $q = 2$, then $\rho(\hat{\mathcal{C}}) = \rho(\mathcal{C}) + 1$; (v) Assume that $c$ is a coset leader of $\mathcal{C}$. If $c' \in F_q^n$ is a vector all of whose nonzero components agree with the corresponding components of $c$, then $c'$ is also a coset leader of $\mathcal{C}$. In particular, if there is a coset of weight $s$, there is also a coset of any weight less than $s$.

Proof: Each assertion is easily proved. We present only the proof of (iv). Let $x = x_1 \cdots x_n$ be a coset leader of $\mathcal{C}$. Let $x^* = x_1 \cdots x_n 1$. By part (iii), it suffices to show that $x^*$ is a coset leader of $\hat{\mathcal{C}}$. Let $c = c_1 \cdots c_n \in \mathcal{C}$, and let $\hat{c} = c_1 \cdots c_n c_{n+1}$ be its extension. If $c$ has even weight, then $\mathrm{wt}(\hat{c} + x^*) = \mathrm{wt}(c + x) + 1 \ge \mathrm{wt}(x) + 1$. Assume $c$ has odd weight. Then $\mathrm{wt}(\hat{c} + x^*) = \mathrm{wt}(c + x)$. If $x$ has even [odd] weight, then $c + x$ has odd [even] weight by Lemma 2.16, and so $\mathrm{wt}(c + x) > \mathrm{wt}(x)$ as $x$ is a coset leader. Thus in all cases, $\mathrm{wt}(\hat{c} + x^*) \ge \mathrm{wt}(x) + 1 = \mathrm{wt}(x^*)$ and $x^*$ is a coset leader of $\hat{\mathcal{C}}$. $\Box$

The next example illustrates that it is possible to extend or puncture a code and leave the covering radius unchanged.

Example 2.18 Let $\mathcal{C}$ be the ternary code with generator matrix $[1\ 1\ 2]$. Computing the covering radius, we see that $\rho(\mathcal{C}) = \rho(\hat{\mathcal{C}}) = 2$. If $\mathcal{D} = \hat{\mathcal{C}}$ and we puncture $\mathcal{D}$ on the last coordinate to obtain $\mathcal{D}^* = \mathcal{C}$, we have $\rho(\mathcal{D}) = \rho(\mathcal{D}^*)$.

In the binary case, whenever we extend a code, we increase the covering radius by 1. But when we puncture a binary code we may not reduce the covering radius.

Example 2.19 Let $\mathcal{C}$ be the binary code with generator matrix $[1\ 1\ 1]$, and let $\mathcal{C}^*$ be obtained from $\mathcal{C}$ by puncturing on the last coordinate. It is easy to see that $\rho(\mathcal{C}) = \rho(\mathcal{C}^*) = 1$. Also, if $\mathcal{D}$ is the extension of $\mathcal{C}^*$, then $\rho(\mathcal{D}) = 2$, consistent with Theorem 2.17.

There is a generalization of the concepts of even and odd weight binary vectors to vectors over arbitrary fields which is useful in the study of cyclic codes. A vector $a = a_0 a_1 \cdots a_{n-1}$ in $F_q^n$ is even-like provided that
\[ \sum_{i=0}^{n-1} a_i = 0 \]
and is odd-like otherwise. The even-like codewords of a code $\mathcal{C}$ of dimension $k$ form a subcode of dimension $k$ or $k - 1$. The minimum weight of the even-like codewords, respectively the odd-like codewords, of a code is the minimum even-like weight, respectively minimum odd-like weight, of the code. A binary vector is even-like if and only if it has even weight.

To every group $G$ of monomial matrices $DP$, the set of permutation matrices $P$ forms a group $\mathcal{P}$ under multiplication; if $\mathcal{P}$ is transitive, we say that $G$ is transitive.

Theorem 2.20 Let $\mathcal{C}$ be an $[n, k, d]$ code. (i) If $\mathcal{C}$ has a transitive automorphism group, then the $n$ codes obtained from $\mathcal{C}$ by puncturing $\mathcal{C}$ on a coordinate are monomially equivalent. (ii) If $\hat{\mathcal{C}}$ has a transitive automorphism group, then the minimum weight $d$ of $\mathcal{C}$ is its minimum odd-like weight $d_0$.

Proof: Assertion (i) is clear. Now assume that the automorphism group of $\hat{\mathcal{C}}$ is transitive. Applying (i) to $\hat{\mathcal{C}}$, we conclude that puncturing $\hat{\mathcal{C}}$ on any coordinate gives a code monomially equivalent to $\mathcal{C}$. Let $c$ be a minimum weight vector of $\mathcal{C}$ and assume that $c$ is even-like. Then $\mathrm{wt}(\hat{c}) = d$ where $\hat{c} \in \hat{\mathcal{C}}$ is the extended vector. Puncturing $\hat{\mathcal{C}}$ on a coordinate where $c$ is nonzero gives a vector of weight $d - 1$ in a code monomially equivalent to $\mathcal{C}$, a contradiction. $\Box$

All codewords in a binary self-orthogonal code have even weight. Binary self-orthogonal codes in which all codewords have weight divisible by 4 play a special role.

Theorem 2.21 Let $\mathcal{C}$ be a binary linear code. (i) If $\mathcal{C}$ is self-orthogonal and has a generator matrix each of whose rows has weight divisible by 4, then every codeword of $\mathcal{C}$ has weight divisible by 4. (ii) If every codeword of $\mathcal{C}$ has weight divisible by 4, then $\mathcal{C}$ is self-orthogonal.

Proof: Every codeword of $\mathcal{C}$ is a sum of a certain number of rows of a generator matrix, and assertion (i) follows from Lemma 2.16 by a simple induction. Assertion (ii) is an immediate consequence of Lemma 2.16. $\Box$

Binary codes for which all codewords have weight divisible by 4 are called doubly-even. By Theorem 2.21, doubly-even codes are self-orthogonal. (Some authors reserve the term `doubly-even' for self-dual codes for which all codewords have weight divisible by 4. The term `even' is sometimes also used in the literature to mean what we have defined to be `doubly-even'.) Binary codes for which all codewords have even weight are called even. Even codes are not necessarily self-orthogonal. A self-orthogonal even code which is not doubly-even is called singly-even.

Another invariant of a code, which includes its weight distribution, is its complete coset weight distribution, consisting of the weight distribution of each coset.

Example 2.22 The complete coset weight distribution of the binary extended Hamming $[8, 4, 4]$ code $\mathcal{H}_8$ is given in the following table.

    coset    number of   number of vectors of each weight
    weight   cosets        0   1   2   3   4   5   6   7   8
    ------   ---------   -----------------------------------
      0          1         1   0   0   0  14   0   0   0   1
      1          8         0   1   0   7   0   7   0   1   0
      2          7         0   0   4   0   8   0   4   0   0

Note that the first line is the weight distribution of $\mathcal{H}_8$. The second line is the weight distribution of each coset of weight one. This code has the special property that all cosets of a given weight have the same weight distribution. Since there are no cosets of weight greater than 2, the covering radius is 2, consistent with Theorem 2.14 and (iv) of Theorem 2.17. In the preceding terminology, $\mathcal{H}_8$ is a binary, doubly-even self-dual code.


MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9 Problem Set 1 These questions are based on the material in Section 1: Introduction to coding theory. You do not need to submit your answers to any of these questions. 1. The following ISBN was received

More information

Introduction to binary block codes

Introduction to binary block codes 58 Chapter 6 Introduction to binary block codes In this chapter we begin to study binary signal constellations, which are the Euclidean-space images of binary block codes. Such constellations have nominal

More information

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions

EE/Stat 376B Handout #5 Network Information Theory October, 14, Homework Set #2 Solutions EE/Stat 376B Handout #5 Network Information Theory October, 14, 014 1. Problem.4 parts (b) and (c). Homework Set # Solutions (b) Consider h(x + Y ) h(x + Y Y ) = h(x Y ) = h(x). (c) Let ay = Y 1 + Y, where

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Sphere Packing and Shannon s Theorem

Sphere Packing and Shannon s Theorem Chapter 2 Sphere Packing and Shannon s Theorem In the first section we discuss the basics of block coding on the m-ary symmetric channel. In the second section we see how the geometry of the codespace

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

Shannon s noisy-channel theorem

Shannon s noisy-channel theorem Shannon s noisy-channel theorem Information theory Amon Elders Korteweg de Vries Institute for Mathematics University of Amsterdam. Tuesday, 26th of Januari Amon Elders (Korteweg de Vries Institute for

More information

Mathematics Department

Mathematics Department Mathematics Department Matthew Pressland Room 7.355 V57 WT 27/8 Advanced Higher Mathematics for INFOTECH Exercise Sheet 2. Let C F 6 3 be the linear code defined by the generator matrix G = 2 2 (a) Find

More information

Investigation of the Elias Product Code Construction for the Binary Erasure Channel

Investigation of the Elias Product Code Construction for the Binary Erasure Channel Investigation of the Elias Product Code Construction for the Binary Erasure Channel by D. P. Varodayan A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED

More information

Coding Theory and Applications. Solved Exercises and Problems of Cyclic Codes. Enes Pasalic University of Primorska Koper, 2013

Coding Theory and Applications. Solved Exercises and Problems of Cyclic Codes. Enes Pasalic University of Primorska Koper, 2013 Coding Theory and Applications Solved Exercises and Problems of Cyclic Codes Enes Pasalic University of Primorska Koper, 2013 Contents 1 Preface 3 2 Problems 4 2 1 Preface This is a collection of solved

More information

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x Coding Theory Massoud Malek Linear Cyclic Codes Polynomial and Words A polynomial of degree n over IK is a polynomial p(x) = a 0 + a 1 x + + a n 1 x n 1 + a n x n, where the coefficients a 0, a 1, a 2,,

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

EE 229B ERROR CONTROL CODING Spring 2005

EE 229B ERROR CONTROL CODING Spring 2005 EE 229B ERROR CONTROL CODING Spring 2005 Solutions for Homework 1 1. Is there room? Prove or disprove : There is a (12,7) binary linear code with d min = 5. If there were a (12,7) binary linear code with

More information

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute

(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html

More information

Error-Correcting Codes

Error-Correcting Codes Error-Correcting Codes HMC Algebraic Geometry Final Project Dmitri Skjorshammer December 14, 2010 1 Introduction Transmission of information takes place over noisy signals. This is the case in satellite

More information

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination

More information

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x)

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x) Outline Recall: For integers Euclidean algorithm for finding gcd s Extended Euclid for finding multiplicative inverses Extended Euclid for computing Sun-Ze Test for primitive roots And for polynomials

More information

Support weight enumerators and coset weight distributions of isodual codes

Support weight enumerators and coset weight distributions of isodual codes Support weight enumerators and coset weight distributions of isodual codes Olgica Milenkovic Department of Electrical and Computer Engineering University of Colorado, Boulder March 31, 2003 Abstract In

More information

MAS309 Coding theory

MAS309 Coding theory MAS309 Coding theory Matthew Fayers January March 2008 This is a set of notes which is supposed to augment your own notes for the Coding Theory course They were written by Matthew Fayers, and very lightly

More information

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122

Lecture 5: Channel Capacity. Copyright G. Caire (Sample Lectures) 122 Lecture 5: Channel Capacity Copyright G. Caire (Sample Lectures) 122 M Definitions and Problem Setup 2 X n Y n Encoder p(y x) Decoder ˆM Message Channel Estimate Definition 11. Discrete Memoryless Channel

More information

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class

Error Correcting Codes: Combinatorics, Algorithms and Applications Spring Homework Due Monday March 23, 2009 in class Error Correcting Codes: Combinatorics, Algorithms and Applications Spring 2009 Homework Due Monday March 23, 2009 in class You can collaborate in groups of up to 3. However, the write-ups must be done

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Orthogonal Arrays & Codes

Orthogonal Arrays & Codes Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible

More information

THIS paper is aimed at designing efficient decoding algorithms

THIS paper is aimed at designing efficient decoding algorithms IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used

More information

Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014

Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014 Anna Dovzhik 1 Coding Theory: Linear-Error Correcting Codes Anna Dovzhik Math 420: Advanced Linear Algebra Spring 2014 Sharing data across channels, such as satellite, television, or compact disc, often

More information

Section 3 Error Correcting Codes (ECC): Fundamentals

Section 3 Error Correcting Codes (ECC): Fundamentals Section 3 Error Correcting Codes (ECC): Fundamentals Communication systems and channel models Definition and examples of ECCs Distance For the contents relevant to distance, Lin & Xing s book, Chapter

More information

Entropies & Information Theory

Entropies & Information Theory Entropies & Information Theory LECTURE I Nilanjana Datta University of Cambridge,U.K. See lecture notes on: http://www.qi.damtp.cam.ac.uk/node/223 Quantum Information Theory Born out of Classical Information

More information

The extended Golay code

The extended Golay code The extended Golay code N. E. Straathof July 6, 2014 Master thesis Mathematics Supervisor: Dr R. R. J. Bocklandt Korteweg-de Vries Instituut voor Wiskunde Faculteit der Natuurwetenschappen, Wiskunde en

More information

Some error-correcting codes and their applications

Some error-correcting codes and their applications Chapter 14 Some error-correcting codes and their applications J. D. Key 1 14.1 Introduction In this chapter we describe three types of error-correcting linear codes that have been used in major applications,

More information

Linear Algebra, 4th day, Thursday 7/1/04 REU Info:

Linear Algebra, 4th day, Thursday 7/1/04 REU Info: Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Channel Coding for Secure Transmissions

Channel Coding for Secure Transmissions Channel Coding for Secure Transmissions March 27, 2017 1 / 51 McEliece Cryptosystem Coding Approach: Noiseless Main Channel Coding Approach: Noisy Main Channel 2 / 51 Outline We present an overiew of linear

More information

1 Introduction to information theory

1 Introduction to information theory 1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through

More information

Ma/CS 6b Class 24: Error Correcting Codes

Ma/CS 6b Class 24: Error Correcting Codes Ma/CS 6b Class 24: Error Correcting Codes By Adam Sheffer Communicating Over a Noisy Channel Problem. We wish to transmit a message which is composed of 0 s and 1 s, but noise might accidentally flip some

More information

x n k m(x) ) Codewords can be characterized by (and errors detected by): c(x) mod g(x) = 0 c(x)h(x) = 0 mod (x n 1)

x n k m(x) ) Codewords can be characterized by (and errors detected by): c(x) mod g(x) = 0 c(x)h(x) = 0 mod (x n 1) Cyclic codes: review EE 387, Notes 15, Handout #26 A cyclic code is a LBC such that every cyclic shift of a codeword is a codeword. A cyclic code has generator polynomial g(x) that is a divisor of every

More information

Lecture 8: Shannon s Noise Models

Lecture 8: Shannon s Noise Models Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 8: Shannon s Noise Models September 14, 2007 Lecturer: Atri Rudra Scribe: Sandipan Kundu& Atri Rudra Till now we have

More information

Exercise 1. = P(y a 1)P(a 1 )

Exercise 1. = P(y a 1)P(a 1 ) Chapter 7 Channel Capacity Exercise 1 A source produces independent, equally probable symbols from an alphabet {a 1, a 2 } at a rate of one symbol every 3 seconds. These symbols are transmitted over a

More information

The Witt designs, Golay codes and Mathieu groups

The Witt designs, Golay codes and Mathieu groups The Witt designs, Golay codes and Mathieu groups 1 The Golay codes Let V be a vector space over F q with fixed basis e 1,..., e n. A code C is a subset of V. A linear code is a subspace of V. The vector

More information

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x f(x) = q(x)h(x) + r(x),

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x f(x) = q(x)h(x) + r(x), Coding Theory Massoud Malek Linear Cyclic Codes Polynomial and Words A polynomial of degree n over IK is a polynomial p(x) = a 0 + a 1 + + a n 1 x n 1 + a n x n, where the coefficients a 1, a 2,, a n are

More information

Open Questions in Coding Theory

Open Questions in Coding Theory Open Questions in Coding Theory Steven T. Dougherty July 4, 2013 Open Questions The following questions were posed by: S.T. Dougherty J.L. Kim P. Solé J. Wood Hilbert Style Problems Hilbert Style Problems

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0 Coding Theory Massoud Malek Binary Linear Codes Generator and Parity-Check Matrices. A subset C of IK n is called a linear code, if C is a subspace of IK n (i.e., C is closed under addition). A linear

More information

ELEMENT OF INFORMATION THEORY

ELEMENT OF INFORMATION THEORY History Table of Content ELEMENT OF INFORMATION THEORY O. Le Meur olemeur@irisa.fr Univ. of Rennes 1 http://www.irisa.fr/temics/staff/lemeur/ October 2010 1 History Table of Content VERSION: 2009-2010:

More information

7.1 Definitions and Generator Polynomials

7.1 Definitions and Generator Polynomials Chapter 7 Cyclic Codes Lecture 21, March 29, 2011 7.1 Definitions and Generator Polynomials Cyclic codes are an important class of linear codes for which the encoding and decoding can be efficiently implemented

More information

MATH 433 Applied Algebra Lecture 22: Review for Exam 2.

MATH 433 Applied Algebra Lecture 22: Review for Exam 2. MATH 433 Applied Algebra Lecture 22: Review for Exam 2. Topics for Exam 2 Permutations Cycles, transpositions Cycle decomposition of a permutation Order of a permutation Sign of a permutation Symmetric

More information

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy

Information Theory CHAPTER. 5.1 Introduction. 5.2 Entropy Haykin_ch05_pp3.fm Page 207 Monday, November 26, 202 2:44 PM CHAPTER 5 Information Theory 5. Introduction As mentioned in Chapter and reiterated along the way, the purpose of a communication system is

More information

Math 512 Syllabus Spring 2017, LIU Post

Math 512 Syllabus Spring 2017, LIU Post Week Class Date Material Math 512 Syllabus Spring 2017, LIU Post 1 1/23 ISBN, error-detecting codes HW: Exercises 1.1, 1.3, 1.5, 1.8, 1.14, 1.15 If x, y satisfy ISBN-10 check, then so does x + y. 2 1/30

More information

Noisy channel communication

Noisy channel communication Information Theory http://www.inf.ed.ac.uk/teaching/courses/it/ Week 6 Communication channels and Information Some notes on the noisy channel setup: Iain Murray, 2012 School of Informatics, University

More information

Linear Codes and Syndrome Decoding

Linear Codes and Syndrome Decoding Linear Codes and Syndrome Decoding These notes are intended to be used as supplementary reading to Sections 6.7 9 of Grimaldi s Discrete and Combinatorial Mathematics. The proofs of the theorems are left

More information

Duadic Codes over Finite Commutative Rings

Duadic Codes over Finite Commutative Rings The Islamic University of Gaza Faculty of Science Department of Mathematics Duadic Codes over Finite Commutative Rings PRESENTED BY Ikhlas Ibraheem Diab Al-Awar SUPERVISED BY Prof. Mohammed Mahmoud AL-Ashker

More information

ERROR CORRECTING CODES

ERROR CORRECTING CODES ERROR CORRECTING CODES To send a message of 0 s and 1 s from my computer on Earth to Mr. Spock s computer on the planet Vulcan we use codes which include redundancy to correct errors. n q Definition. A

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Coset Decomposition Method for Decoding Linear Codes

Coset Decomposition Method for Decoding Linear Codes International Journal of Algebra, Vol. 5, 2011, no. 28, 1395-1404 Coset Decomposition Method for Decoding Linear Codes Mohamed Sayed Faculty of Computer Studies Arab Open University P.O. Box: 830 Ardeya

More information

exercise in the previous class (1)

exercise in the previous class (1) exercise in the previous class () Consider an odd parity check code C whose codewords are (x,, x k, p) with p = x + +x k +. Is C a linear code? No. x =, x 2 =x =...=x k = p =, and... is a codeword x 2

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

Combinatória e Teoria de Códigos Exercises from the notes. Chapter 1

Combinatória e Teoria de Códigos Exercises from the notes. Chapter 1 Combinatória e Teoria de Códigos Exercises from the notes Chapter 1 1.1. The following binary word 01111000000?001110000?00110011001010111000000000?01110 encodes a date. The encoding method used consisted

More information

EE512: Error Control Coding

EE512: Error Control Coding EE51: Error Control Coding Solution for Assignment on BCH and RS Codes March, 007 1. To determine the dimension and generator polynomial of all narrow sense binary BCH codes of length n = 31, we have to

More information

Lecture 14: Hamming and Hadamard Codes

Lecture 14: Hamming and Hadamard Codes CSCI-B69: A Theorist s Toolkit, Fall 6 Oct 6 Lecture 4: Hamming and Hadamard Codes Lecturer: Yuan Zhou Scribe: Kaiyuan Zhu Recap Recall from the last lecture that error-correcting codes are in fact injective

More information

MTH6108 Coding theory

MTH6108 Coding theory MTH6108 Coding theory Contents 1 Introduction and definitions 2 2 Good codes 6 2.1 The main coding theory problem............................ 6 2.2 The Singleton bound...................................

More information

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved

Communications II Lecture 9: Error Correction Coding. Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Communications II Lecture 9: Error Correction Coding Professor Kin K. Leung EEE and Computing Departments Imperial College London Copyright reserved Outline Introduction Linear block codes Decoding Hamming

More information

ELEC 405/ELEC 511 Error Control Coding. Hamming Codes and Bounds on Codes

ELEC 405/ELEC 511 Error Control Coding. Hamming Codes and Bounds on Codes ELEC 405/ELEC 511 Error Control Coding Hamming Codes and Bounds on Codes Single Error Correcting Codes (3,1,3) code (5,2,3) code (6,3,3) code G = rate R=1/3 n-k=2 [ 1 1 1] rate R=2/5 n-k=3 1 0 1 1 0 G

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

ELEC 405/ELEC 511 Error Control Coding and Sequences. Hamming Codes and the Hamming Bound

ELEC 405/ELEC 511 Error Control Coding and Sequences. Hamming Codes and the Hamming Bound ELEC 45/ELEC 5 Error Control Coding and Sequences Hamming Codes and the Hamming Bound Single Error Correcting Codes ELEC 45 2 Hamming Codes One form of the (7,4,3) Hamming code is generated by This is

More information

CSCI 2570 Introduction to Nanocomputing

CSCI 2570 Introduction to Nanocomputing CSCI 2570 Introduction to Nanocomputing Information Theory John E Savage What is Information Theory Introduced by Claude Shannon. See Wikipedia Two foci: a) data compression and b) reliable communication

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

MT361/461/5461 Error Correcting Codes: Preliminary Sheet

MT361/461/5461 Error Correcting Codes: Preliminary Sheet MT361/461/5461 Error Correcting Codes: Preliminary Sheet Solutions to this sheet will be posted on Moodle on 14th January so you can check your answers. Please do Question 2 by 14th January. You are welcome

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

EE512: Error Control Coding

EE512: Error Control Coding EE512: Error Control Coding Solution for Assignment on Linear Block Codes February 14, 2007 1. Code 1: n = 4, n k = 2 Parity Check Equations: x 1 + x 3 = 0, x 1 + x 2 + x 4 = 0 Parity Bits: x 3 = x 1,

More information

Lecture 14 February 28

Lecture 14 February 28 EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables

More information

Lecture 12: November 6, 2017

Lecture 12: November 6, 2017 Information and Coding Theory Autumn 017 Lecturer: Madhur Tulsiani Lecture 1: November 6, 017 Recall: We were looking at codes of the form C : F k p F n p, where p is prime, k is the message length, and

More information