Finite Mathematics. Nik Ruškuc and Colva M. Roney-Dougal


Finite Mathematics

Nik Ruškuc and Colva M. Roney-Dougal

September 19, 2011

Contents

1 Introduction
  1 About the course
  2 A review of some algebraic structures
2 Coding theory
  1 Motivation: transmission of messages
  2 Hamming distance
  3 Linear codes
  4 Some more group theory: subgroups and cosets
  5 Decoding with coset leaders and syndromes
  6 Perfect codes
3 Latin squares
  1 Definition and existence
  2 Counting Latin squares
  3 Orthogonality
  4 Latin squares from finite fields
  5 Direct products
4 Finite geometries
  1 Finite affine planes
  2 Finite fields and affine planes
  3 Affine planes and Latin squares
  4 Projective planes
5 Designs and Steiner triple systems
  1 Designs
  2 Steiner triple systems
  3 Subsystems and a recursive construction
  4 Existence
  5 Packings and coverings
  6 Designs from perfect codes

Chapter 1 Introduction

1. About the course

Finite mathematics is a very broad and heterogeneous area of mathematics, studying finite sets and configurations. The typical general problems it considers are the existence of such configurations with certain properties, their number, and their characterisation. Finite mathematics is related to almost all other areas of mathematics, and it also has a wide range of applications. These connections will be illustrated in the course. One underlying theme throughout will be applications of abstract algebra. The definitions and some examples of basic algebraic structures are given in Section 2.

The course will not be based solely on a single book. Therefore, the best study source will be the lecture notes. Some useful texts are:

1. P.J. Cameron, Combinatorics: Topics, Techniques, Algorithms, Cambridge University Press, Cambridge.
2. A.P. Street and W.D. Wallis, Combinatorial Theory: an Introduction, CBRC, Manitoba.
3. J.H. van Lint and R.M. Wilson, A Course in Combinatorics, Cambridge University Press, Cambridge.

All these books, as well as all tutorial sheets and solutions, will be available in the Mathematics/Physics library on short loan. Also, any other book whose title contains words such as "finite mathematics", "discrete mathematics" or "combinatorics" is likely to contain material relevant to the course.

2. A review of some algebraic structures

In this section we recall definitions and some important examples of groups, fields and vector spaces.

Definition 2.1. A group is a non-empty set G with a binary operation, satisfying the following axioms.

(G1) xy ∈ G for all x, y ∈ G (closure).
(G2) x(yz) = (xy)z for all x, y, z ∈ G (associativity).
(G3) There exists an element e ∈ G (called the identity element) such that xe = ex = x for all x ∈ G.

(G4) For each x ∈ G there exists an element x^(-1) ∈ G (called the inverse of x) such that xx^(-1) = x^(-1)x = e.

If, in addition, G satisfies xy = yx for all x, y ∈ G, then G is said to be an abelian group, and the operation is said to be commutative.

Example 2.2. For every n ≥ 1, the set Z_n = {0, 1, ..., n-1} with addition modulo n is an abelian group of order n. The set S_n of all permutations of the set {1, 2, ..., n} with the composition of mappings is a non-abelian group of order n! (for n ≥ 3).

For abelian groups it is customary to use additive notation, with + denoting the operation, 0 denoting the identity element, and -x denoting the (additive) inverse of x.

One of the main tasks of group theory is to describe all finite groups, but this does not seem to be attainable.

Definition 2.3. A field is a set F with two binary operations + and ·, and two distinguished elements 0 and 1, such that the following axioms are satisfied.

(F1) F with the operation + is an abelian group, with identity element 0.
(F2) F\{0} with the operation · is an abelian group, with identity element 1.
(F3) x(y + z) = xy + xz for all x, y, z ∈ F (distributivity).

Example 2.4. The number fields Q, R and C are the main examples of fields. Also, if p is a prime then Z_p, with addition and multiplication modulo p, is a field.

Unlike groups, one can describe all finite fields relatively easily.

Theorem 2.5 (The Fundamental Theorem for Finite Fields). If F is a finite field then its order is a power of a prime. Conversely, if n is a power of a prime, then there exists a unique (up to isomorphism) field of order n.

For a prime power n, we denote the unique finite field of order n by GF(n). For prime n we often write Z_n instead of GF(n). In the following example we show how to construct GF(4) = GF(2^2).

Example 2.6. Consider the set F = {0, 1, x, x+1} of all constant and linear polynomials over the field Z_2.
Let the addition in F be the ordinary addition of polynomials, and let the multiplication be the ordinary multiplication of polynomials, with the additional condition that x^2 = x + 1. We can construct tables for these two operations:

  +    | 0     1     x     x+1        ·    | 0   1     x     x+1
  0    | 0     1     x     x+1        0    | 0   0     0     0
  1    | 1     0     x+1   x          1    | 0   1     x     x+1
  x    | x     x+1   0     1          x    | 0   x     x+1   1
  x+1  | x+1   x     1     0          x+1  | 0   x+1   1     x

Clearly, F with + and F\{0} with · are abelian groups. The multiplication of polynomials is distributive over addition, so F is a field.

In fact all finite fields can be constructed in a similar way. To construct a field with p^n elements (p prime) one considers all polynomials of degree less than n over the field Z_p, and uses a rule of the form f(x) = 0, where f is an irreducible polynomial of degree n, to simplify polynomials of higher degrees.
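The arithmetic of Example 2.6 can be checked mechanically. The following sketch (not part of the notes) represents an element a + bx of GF(4) as a pair (a, b) of bits and applies the reduction rule x^2 = x + 1:

```python
# Sketch: GF(4) arithmetic, with an element a + b*x over Z_2 stored as (a, b),
# and x^2 replaced by x + 1 as in Example 2.6.

def gf4_add(p, q):
    """Add two GF(4) elements coefficient-wise modulo 2."""
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

def gf4_mul(p, q):
    """Multiply (a0 + a1*x)(b0 + b1*x), then replace x^2 by x + 1."""
    a0, a1 = p
    b0, b1 = q
    c0 = a0 * b0            # constant term
    c1 = a0 * b1 + a1 * b0  # coefficient of x
    c2 = a1 * b1            # coefficient of x^2, which contributes x + 1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

zero, one, x, x1 = (0, 0), (1, 0), (0, 1), (1, 1)

assert gf4_mul(x, x) == x1       # x * x = x + 1
assert gf4_mul(x, x1) == one     # x * (x + 1) = x^2 + x = 1
assert gf4_add(x1, x1) == zero   # every element is its own additive inverse
```

The assertions reproduce three entries of the tables above; looping `gf4_mul` over all pairs would reproduce the whole multiplication table.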

Definition 2.7. Let F be a field, let V be an abelian group, and let there be an external multiplication of elements from V by elements from F. Then V is said to be a vector space over F if the following axioms are satisfied:

(V1) (α + β)x = αx + βx;
(V2) α(x + y) = αx + αy;
(V3) (αβ)x = α(βx);
(V4) 1x = x;

for all α, β ∈ F and all x, y ∈ V.

We shall assume familiarity with the elementary theory of vector spaces. In particular we shall consider as known the following concepts: subspaces, linear independence, basis, dimension, isomorphism.

Example 2.8. Let F be a field. Then the set V = F^d = {(x_1, ..., x_d) : x_i ∈ F} is a vector space over F with respect to component-wise addition and scalar multiplication. The dimension of this space is d.

Actually, the above example is generic, as the following theorem shows.

Theorem 2.9 (The Fundamental Theorem for Finite-dimensional Vector Spaces). If V is a d-dimensional vector space over a field F, then V is isomorphic to F^d.

Example 2.10. Let V be the (unique) 3-dimensional vector space over GF(2) = Z_2. By the fundamental theorem for finite-dimensional vector spaces, V is isomorphic to Z_2^3. The elements of V are (0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1), and there are 2^3 = 8 of them. To see that Z_2^3 is a vector space, you should note that Z_2 is a field, that (V, +) forms an abelian group (with identity (0,0,0), and such that the inverse of (a,b,c) is (a,b,c) itself), and that axioms V1, V2, V3 and V4 all hold.

Chapter 2 Coding theory

1. Motivation: transmission of messages

Let us consider the following situation. Person A is in a space-craft somewhere in space. They navigate the space-craft according to the instructions that are received from person B, who is on Earth. For simplicity let us assume that there are four possible instructions: go left, go right, go up and go down. These instructions are transmitted as a binary radio-signal; in other words B can transmit either of two types of signals, which we denote by 0 and 1. So we have to encode four messages into words of 0s and 1s. The first, most obvious way is to do something like this: left=00, right=01, up=10, down=11.

Once transmitted, the signal is affected by various factors, such as other radio-signals, cosmic rays etc. Thus there is a (small) chance that an emitted 0 will be received as 1, or that an emitted 1 will be received as 0. We assume three things about the possibility that an error occurs:

(BSC1) The probability that a 0 is turned into a 1 is the same as the probability that a 1 is turned into a 0.
(BSC2) This probability p is the same for each digit, and is less than 0.5.
(BSC3) An error occurring in one digit does not affect the probability that an error occurs in another digit.

Any communication channel satisfying BSC1–BSC3 is called a Binary Symmetric Channel (BSC). In this course we will always assume that we have a BSC.

So assume that B sends the signal 00 (left), but the first 0 is changed into 1, so that A receives 10. Clearly A has no indication that an error has occurred, as 10 is also a valid instruction, and so A will go up, rather than left. Consider another way of encoding our instructions: left=000, right=011, up=101, down=110. Suppose that 000 (left) is sent, and a single error occurs, say changing the first 0 into 1. This time A will be able to detect it, as 100 is not a valid message.
However, A will not know what the original message was, even if A knows that only one error has occurred. For 100 might have been obtained from 110, as well as from 000, by changing one symbol. Consider a further example: left=00000, right=01101, up=10110, down=11011.

Figure 2.1: Transmission of messages. A message w ∈ Z_2^m is encoded by an encoding function E : Z_2^m → Z_2^n; the encoded message u ∈ Z_2^n passes through the BSC and is received as v ∈ Z_2^n; a decoding function D : Z_2^n → Z_2^m then produces the decoded message z ∈ Z_2^m.

This time the messages are sufficiently different so as to allow detection and correction of a single error. Thus, if A receives the message 10000, and if it is assumed that only one error has occurred, then A will know that the original message was 00000. So, in essence, we are considering the scheme shown in Figure 2.1.

Definition 1.1. A code of length n over Z_2 is a subset C ⊆ Z_2^n. The elements of C are called code-words. Given a code C, an encoding function is any bijection E : Z_2^m → C. A decoding function is any function D : Z_2^n → Z_2^m such that uED = u for all u ∈ Z_2^m.

In the ideal situation we would like the following to happen: we take an arbitrary word w ∈ Z_2^m, encode it, transmit it, then decode it, and we obtain the same word w. This is clearly impossible to guarantee, since we have no control over what will happen to w in the channel. So we want to ensure that we have a high chance of decoding the message correctly. In other words, we want to be sure that we will decode the received word correctly, under the assumption that only a few errors have occurred.

2. Hamming distance

Since we will be dealing with elements of Z_2^n throughout this chapter, let us recall that these elements are n-tuples of 0s and 1s. Usually, instead of (x_1, x_2, ..., x_n) we shall simply write x_1 x_2 ... x_n. We shall also frequently refer to these elements as words, rather than vectors. We also recall the addition and multiplication in Z_2:

0 + 0 = 1 + 1 = 0,   0 + 1 = 1 + 0 = 1,   0 · 0 = 0 · 1 = 1 · 0 = 0,   1 · 1 = 1.

Note that for every x we have -x = x, as x + x = 0 over Z_2. The Hamming distance provides us with a means of measuring the difference between any two words from Z_2^n.

Definition 2.1. Let x = x_1 x_2 ... x_n and y = y_1 y_2 ... y_n be two words from Z_2^n.
The Hamming distance d(x, y) between x and y is the number of places in which they differ:

d(x, y) = |{i : 1 ≤ i ≤ n, x_i ≠ y_i}|.

Closely related to this is the notion of weight.
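The distance and weight functions, and the identity d(x, y) = wt(x - y) of the next theorem, are easy to state in code. A minimal sketch (words stored as tuples of bits):

```python
# Sketch: Hamming distance and weight for words in Z_2^n,
# illustrating that d(x, y) = wt(x + y) over Z_2 (where -y = y).

def weight(x):
    """Number of 1s in the word x."""
    return sum(x)

def distance(x, y):
    """Number of positions in which x and y differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def add(x, y):
    """Component-wise addition in Z_2^n."""
    return tuple((xi + yi) % 2 for xi, yi in zip(x, y))

x = (1, 0, 1, 1, 0)
y = (0, 0, 1, 0, 1)
assert distance(x, y) == 3
assert distance(x, y) == weight(add(x, y))
```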

Definition 2.2. The weight of a word x = x_1 x_2 ... x_n ∈ Z_2^n, denoted by wt(x), is the number of 1s in x:

wt(x) = |{i : 1 ≤ i ≤ n, x_i = 1}|.

The connection between distance and weight is as follows.

Theorem 2.3. For any two x, y ∈ Z_2^n we have (i) d(x, y) = wt(x - y); (ii) wt(x) = d(x, 0), where 0 denotes the zero-vector in Z_2^n.

Proof. We have

d(x, y) = |{i : x_i ≠ y_i}| = |{i : x_i - y_i ≠ 0}| = |{i : x_i - y_i = 1}| = wt(x - y).

The proof of (ii) is similar.

Next we prove that the Hamming distance has the usual properties of a distance function:

Theorem 2.4. The set Z_2^n with the Hamming distance d is a metric space. In other words, d has the following properties:

(i) d(x, y) ≥ 0;
(ii) d(x, y) = 0 if and only if x = y;
(iii) d(x, y) = d(y, x);
(iv) d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality);

for all x, y, z ∈ Z_2^n.

Proof. Properties (i), (ii) and (iii) are obvious, and we leave the proofs as an exercise. For (iv) note that

{i : x_i ≠ z_i} ⊆ {i : x_i ≠ y_i or y_i ≠ z_i} = {i : x_i ≠ y_i} ∪ {i : y_i ≠ z_i},

so that

d(x, z) = |{i : x_i ≠ z_i}| ≤ |{i : x_i ≠ y_i}| + |{i : y_i ≠ z_i}| = d(x, y) + d(y, z),

as required.

Definition 2.5. The minimum distance of a code C is the minimum distance between any two distinct code-words of C.

Now let us again consider a typical transmission process, where we have a code C ⊆ Z_2^n, and where a word u ∈ C has been transmitted through the channel. We know that errors may occur, and so, in general, the received word v will be distinct from u. If we let x = v - u (= v + u over Z_2) we say that x is the error of transmission. From x = v - u we clearly have v = u + x, and so we say that the channel has added the error x to the transmitted word u.

Let X ⊆ Z_2^n be arbitrary. We think of X as the collection of errors which are more likely to occur than the others. We say that we can detect errors from X if for every code-word u ∈ C and every non-zero x ∈ X we have u + x ∉ C.
Similarly, we say that we can correct errors from X if for all u, u_1 ∈ C and all x, x_1 ∈ X the equality u + x = u_1 + x_1 implies u = u_1 (and hence x = x_1). This means that no received word could have been produced by adding errors from X to two different code-words. There is a strong connection between the Hamming metric and the error-detecting and error-correcting capabilities of a code.

Theorem 2.6. Let C ⊆ Z_2^n be a code, and let k ≥ 1.

(i) We can detect every error of weight at most k if and only if C has minimum distance at least k + 1.
(ii) We can correct every error of weight at most k if and only if C has minimum distance at least 2k + 1.

Proof. (i) (⇒) Suppose that we can detect every error of weight at most k. Let u, v ∈ C, u ≠ v. Note that v = u + (v - u) ∈ C, so that we cannot detect the error v - u. Hence k < wt(v - u) = d(u, v), and the minimum distance is at least k + 1.

(⇐) Suppose that C has minimum distance at least k + 1. Let u ∈ C and let x be a non-zero word with wt(x) ≤ k. Consider the word u + x. We have d(u, u + x) = wt(u + x - u) = wt(x) ≤ k, so that u + x ∉ C, and we can detect x. Thus we can detect every error of weight at most k.

(ii) (⇒) Let us assume that we can correct every error of weight at most k, but that there are two distinct code-words u, v ∈ C such that d(u, v) ≤ 2k. If we let I = {i : u_i ≠ v_i}, then |I| ≤ 2k, and hence we can write I as the union of two disjoint subsets I_1 and I_2 of size at most k:

I = I_1 ∪ I_2,   |I_1| ≤ k,   |I_2| ≤ k,   I_1 ∩ I_2 = ∅.

Define two words x = x_1 x_2 ... x_n and y = y_1 y_2 ... y_n in Z_2^n as follows:

x_i = 1 if and only if i ∈ I_1,   y_i = 1 if and only if i ∈ I_2.

Clearly we have wt(x) ≤ k, wt(y) ≤ k and u + x + y = v. The last equality can be written as u + x = v + y, and, because of the error-correcting capability of C, we conclude that u = v, a contradiction. So the minimum distance of the code is at least 2k + 1.

(⇐) Suppose that C has minimum distance at least 2k + 1. Let u, v ∈ C and x, y ∈ Z_2^n be such that wt(x) ≤ k, wt(y) ≤ k and u + x = v + y. Then we have

d(u, v) = wt(u - v) = wt(y - x) = wt(x + y) ≤ wt(x) + wt(y) ≤ 2k.

Since the minimum distance is at least 2k + 1, we conclude that u = v, meaning that we can correct all errors of weight at most k.

Example 2.7. Consider the code C = {00001, 01010, 10100, 11111} ⊆ Z_2^5.
The distances between pairs of distinct elements of C are respectively 3, 3, 4, 4, 3, 3, so the minimum distance is 3. Hence we can detect errors of weight at most 2, and correct errors of weight at most 1.

3. Linear codes

We have seen that a code is simply a subset C ⊆ Z_2^n, that an encoding function is a bijection E : Z_2^m → C, and that a decoding function is a mapping D : Z_2^n → Z_2^m such that uED = u for all u ∈ Z_2^m. The problem with these general codes is that the encoding and decoding functions are not convenient for computing. For instance, for the encoding function one has to store a table containing all the elements of Z_2^m and the corresponding elements of C, and to look in this table whenever sending a message. This problem can be overcome by giving codes an algebraic structure, most often that of a vector space.

Definition 3.1. A linear code is any subspace of the vector space Z_2^n.
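The minimum-distance computation of Example 2.7 can be checked by brute force over all pairs of code-words:

```python
from itertools import combinations

# Checking Example 2.7: the code C has minimum distance 3, so by
# Theorem 2.6 it detects all errors of weight <= 2 and corrects all
# errors of weight <= 1.

def distance(x, y):
    return sum(a != b for a, b in zip(x, y))

C = ["00001", "01010", "10100", "11111"]
d_min = min(distance(u, v) for u, v in combinations(C, 2))

assert d_min == 3
assert d_min - 1 == 2        # detectable error weight
assert (d_min - 1) // 2 == 1 # correctable error weight
```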

The first advantage of having a linear code, as opposed to an arbitrary code, is that it is easier to analyse its error-detecting and error-correcting capabilities.

Theorem 3.2. Let C ⊆ Z_2^n be a linear code. Then the minimum distance of C is equal to the minimal weight of a non-zero vector in C. In particular, we can detect (respectively, correct) every error of weight at most k if and only if this minimal weight is at least k + 1 (respectively, 2k + 1).

Proof. Let M be the minimum distance, with d(x, y) = M, and let N be the minimal weight of a non-zero vector, with wt(z) = N. Since C is a subspace of Z_2^n we must have x - y ∈ C and also 0 ∈ C. But then we have

M = d(x, y) = wt(x - y) ≥ N = wt(z) = d(z, 0) ≥ M,

and so M = N as required.

Assume that we want to encode elements of Z_2^m by means of a linear code C ⊆ Z_2^n. Then C is a subspace of Z_2^n with 2^m elements, and so dim(C) = m. Hence we can define C by listing a basis for C, which is a set of m linearly independent vectors from C:

a_i = a_i1 a_i2 ... a_in,   1 ≤ i ≤ m.

If we take these vectors for the rows of a matrix G, we obtain what is called a generator matrix for C:

G = [ a_11 a_12 ... a_1n ]
    [ a_21 a_22 ... a_2n ]
    [ ...                ]
    [ a_m1 a_m2 ... a_mn ],

whose rows are a_1, a_2, ..., a_m. It is worth remarking that G is not unique, as C has several bases.

Example 3.3. Let C := {000, 110, 101, 011}. Then

G_1 = [ 1 1 0 ]
      [ 1 0 1 ]

is a generator matrix for C, but so is

G_2 = [ 1 1 0 ]
      [ 0 1 1 ].

The generator matrix of a code can be used to define an easy encoding function.

Theorem 3.4. Let C ⊆ Z_2^n be a linear code of dimension m, and let G be a generator matrix for C. Then the function E : Z_2^m → Z_2^n defined by E(x) = xG is an encoding function.

Proof. We have to prove that E is a bijection. First note that for x = x_1 x_2 ... x_m ∈ Z_2^m we have

E(x) = xG = x_1 a_1 + x_2 a_2 + ... + x_m a_m ∈ C,

where a_1, ..., a_m are the rows of G, and so E maps Z_2^m onto C, since {a_1, ..., a_m} is a basis for C.
Also, E(x) = E(y) implies x_1 a_1 + ... + x_m a_m = y_1 a_1 + ... + y_m a_m, which implies x_i = y_i (1 ≤ i ≤ m), since the vectors a_1, ..., a_m are linearly independent. Therefore E is indeed a bijection.
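The encoding function E(x) = xG of Theorem 3.4 is a matrix product over Z_2. A sketch, using the generator matrix G_1 for the code of Example 3.3:

```python
# Sketch of Theorem 3.4: E(x) = xG over Z_2, applied to the code
# {000, 110, 101, 011} of Example 3.3 with generator matrix G_1.

def encode(x, G):
    """Row vector x times matrix G, with arithmetic modulo 2."""
    n = len(G[0])
    return tuple(sum(x[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(n))

G1 = [(1, 1, 0),
      (1, 0, 1)]

codewords = {encode((a, b), G1) for a in (0, 1) for b in (0, 1)}
assert codewords == {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}
```

Encoding all of Z_2^2 recovers exactly the four code-words, confirming that E is a bijection onto C.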

Example 3.5. Let us consider the encoding function E : Z_2^3 → Z_2^6 given by the generator matrix

G = [ 1 0 1 1 0 0 ]
    [ 0 1 1 0 1 1 ]
    [ 1 0 1 0 0 1 ].

Then we have [1 0 1]G = [0 0 0 1 0 1], and so the word 101 is encoded as 000101. In this way one can calculate all the code-words, and obtain

C = {000000, 101100, 011011, 101001, 110111, 000101, 110010, 011110}.

The weights of the code-words are respectively 0, 3, 4, 3, 5, 2, 3, 4. So we can detect single errors, but cannot correct them.

Another way to define an m-dimensional linear code in Z_2^n is to give it as the null-space of an (n - m) × n matrix with linearly independent rows. This matrix is called the parity check matrix.

Example 3.6. Consider the matrix H = [1 1 1]. A vector v = v_1 v_2 v_3 ∈ Z_2^3 is in the null-space of H if and only if v_1 + v_2 + v_3 = 0. Thus the null-space of H is {000, 110, 101, 011}, which is exactly the code of Example 3.3.

The question then arises about the connection between the generator matrix and the parity check matrix for the same code. Let the code C be given by the generator matrix G = [a_ij]_{m×n}, and let w = w_1 w_2 ... w_n ∈ Z_2^n. Then w ∈ C if and only if there exists x = x_1 x_2 ... x_m ∈ Z_2^m such that xG = w. This is a system of n equations in the variables x_1, ..., x_m and w_1, ..., w_n. If we eliminate x_1, ..., x_m from this system, we obtain a homogeneous system of n - m equations in the variables w_1, ..., w_n. This system can be written as Hw^T = 0, where H is an (n - m) × n matrix whose (i, j)-entry is the coefficient of w_j in the i-th equation. So we have w ∈ C if and only if Hw^T = 0, and hence H is a parity check matrix for C.

Conversely, if we are given a parity check matrix H = [b_ij]_{(n-m)×n}, then for every w ∈ C we have Hw^T = 0. If w = w_1 w_2 ... w_n, then the above equality can be written as a system of n - m equations in n variables w_1, ..., w_n. We can solve this system for w_1, ..., w_n.
Since the number of variables is greater than the number of equations, n - (n - m) = m parameters x_1, ..., x_m will appear; in other words we obtain the solution in the form

w_j = a_1j x_1 + ... + a_mj x_m   (1 ≤ j ≤ n).

Thus, if we define G = [a_ij]_{m×n}, we have xG = w, and G is a generator matrix for C.

Example 3.7. Let us find a parity check matrix for the code given in Example 3.5. So we consider the system xG = w, where x = x_1 x_2 x_3 and w = w_1 w_2 w_3 w_4 w_5 w_6. In expanded form this system is:

x_1 + x_3 = w_1
x_2 = w_2
x_1 + x_2 + x_3 = w_3
x_1 = w_4
x_2 = w_5
x_2 + x_3 = w_6.

Substituting any values in for x_1, x_2 and x_3 would yield a code-word w. Instead we eliminate x_1, x_2, x_3 by using x_1 = w_4, x_2 = w_2, x_3 = w_1 + w_4 (remember, over Z_2

we have x_1 + x_1 = 0). This gives us a set of 3 equations that do not involve x_1, x_2 or x_3:

w_1 + w_2 + w_3 = 0
w_2 + w_5 = 0
w_1 + w_2 + w_4 + w_6 = 0.

Any choice of 6 values w_1 w_2 w_3 w_4 w_5 w_6 that satisfies these three equations is a code-word. We can write these three equations as Hw^T = 0, where

H = [ 1 1 1 0 0 0 ]
    [ 0 1 0 0 1 0 ]
    [ 1 1 0 1 0 1 ],

and H is a parity check matrix for C.

Example 3.8. Let a code C ⊆ Z_2^6 be given by the parity check matrix

H = [ 1 1 1 0 0 0 ]
    [ 1 0 1 1 1 1 ].

We find the corresponding generator matrix as follows. We consider the system Hw^T = 0, which is equivalent to

w_1 + w_2 + w_3 = 0
w_1 + w_3 + w_4 + w_5 + w_6 = 0.

Any choice of w_1 w_2 w_3 w_4 w_5 w_6 which satisfies these two equations is a code-word. Let us therefore solve the system for w_1, w_2, w_3, w_4, w_5, w_6, writing the parameters as x_1, x_2, x_3, x_4:

w_1 = x_1
w_2 = x_2
w_3 = x_1 + x_2
w_4 = x_3
w_5 = x_4
w_6 = x_2 + x_3 + x_4.

This solution can be written as w = xG, where

G = [ 1 0 1 0 0 0 ]
    [ 0 1 1 0 0 1 ]
    [ 0 0 0 1 0 1 ]
    [ 0 0 0 0 1 1 ],

and G is a generator matrix for C.

Remark 3.9. If we start with a generator matrix G, find the corresponding parity check matrix H and then the corresponding generator matrix G_1, we need not have G = G_1. This is because a code may have several different generator matrices.

As one might expect, the parity check matrix also contains information about the error-detecting and error-correcting capabilities of the code.

Theorem 3.10. Let C ⊆ Z_2^n be a linear code with parity check matrix H. Then the minimum distance of C is equal to the size of the smallest set of linearly dependent columns of H. In particular, we can detect (respectively, correct) all errors of weight up to k if and only if the size of the smallest set of linearly dependent columns of H is at least k + 1 (respectively, 2k + 1).
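Theorem 3.10 can be illustrated numerically on the code of Example 3.8. The matrices below are the ones determined by the equations of that example (a sketch, not part of the notes): every word xG lies in the null-space of H, and since the first and third columns of H are equal, two columns are already linearly dependent, so the minimum distance is 2.

```python
from itertools import product

# Illustrating Theorem 3.10 on the code of Example 3.8: every code-word
# xG satisfies H w^T = 0, and the minimum non-zero weight in the code is 2,
# matching the smallest set of linearly dependent columns of H.

def syndrome(H, w):
    return tuple(sum(h * x for h, x in zip(row, w)) % 2 for row in H)

G = [(1, 0, 1, 0, 0, 0),
     (0, 1, 1, 0, 0, 1),
     (0, 0, 0, 1, 0, 1),
     (0, 0, 0, 0, 1, 1)]
H = [(1, 1, 1, 0, 0, 0),
     (1, 0, 1, 1, 1, 1)]

code = []
for x in product((0, 1), repeat=4):
    w = tuple(sum(x[i] * G[i][j] for i in range(4)) % 2 for j in range(6))
    assert syndrome(H, w) == (0, 0)   # w is in the null-space of H
    code.append(w)

# Minimum distance = minimum non-zero weight (Theorem 3.2) = 2.
assert min(sum(w) for w in code if any(w)) == 2
```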

Proof. By Theorem 3.2, since C is linear, the minimum distance is equal to the minimum weight of a non-zero code-word. Write H = [c_1 ... c_n], where the c_i are the columns of H. For a word w = a_1 ... a_n ∈ Z_2^n we have w ∈ C if and only if Hw^T = 0, i.e. if and only if a_1 c_1 + ... + a_n c_n = 0. The word w has weight k if and only if exactly k of a_1, ..., a_n are equal to 1. Therefore C contains a non-zero word of weight k if and only if H has a set of k linearly dependent columns.

4. Some more group theory: subgroups and cosets

In this section we recall some more elementary group theory that we will need in Section 5. In order to make the notation compatible with what follows, we shall use additive notation for groups; in other words we shall denote the group operation by +, the identity element by 0, and the inverse of x by -x.

A non-empty subset H of a group G is a subgroup if it is a group itself under the same operation. For example, if G = Z_6 = {0, 1, 2, 3, 4, 5}, then H = {0, 2, 4} is a subgroup, while K = {0, 1, 2, 3} is not. Actually, it can be proved that H is a subgroup of G if and only if H is closed under + and under taking inverses.

Let G be a group, let H be a subgroup of G, and let a ∈ G. The coset of H determined by a is the set

a + H = {a + h : h ∈ H}.

The main properties of cosets are given in the following theorem.

Theorem 4.1. Let G be a group, and let H be a subgroup of G. The cosets of H satisfy the following properties.

(i) H = 0 + H is a coset of itself.
(ii) a ∈ a + H for every element a ∈ G.
(iii) |a + H| = |b + H| for all a, b ∈ G; in other words, any two cosets of H have the same number of elements.
(iv) Any two distinct cosets of H are disjoint (i.e. their intersection is the empty set).
(v) G = ∪_{a ∈ G} (a + H); in other words, G is the union of all cosets of H.

Proof. Exercise.

The above theorem can be summed up as follows: the cosets of a subgroup partition the group into blocks of equal size.

5.
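Theorem 4.1 can be checked directly on the example G = Z_6, H = {0, 2, 4} given above:

```python
# Illustrating Theorem 4.1 with G = Z_6 and the subgroup H = {0, 2, 4}:
# the cosets a + H partition G into blocks of equal size.

G = set(range(6))
H = {0, 2, 4}

cosets = {frozenset((a + h) % 6 for h in H) for a in G}
assert cosets == {frozenset({0, 2, 4}), frozenset({1, 3, 5})}

# The cosets are pairwise disjoint, cover G, and all have size |H|.
assert set().union(*cosets) == G
assert sum(len(c) for c in cosets) == len(G)
assert all(len(c) == len(H) for c in cosets)
```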
Decoding with coset leaders and syndromes

Let C ⊆ Z_2^n be an m-dimensional linear code, let G be a generator matrix for C, and let H be a parity check matrix for C. We have seen that G yields an easy-to-compute encoding function E : Z_2^m → Z_2^n given by E(x) = xG. In this section we discuss decoding. First of all we introduce a restriction on the generator matrix G: we require that G be in standard form, meaning that G is written as

G = [ 1 0 ... 0 b_11 b_12 ... b_1,n-m ]
    [ 0 1 ... 0 b_21 b_22 ... b_2,n-m ]
    [ ...                             ]
    [ 0 0 ... 1 b_m1 b_m2 ... b_m,n-m ].

So G consists of the identity matrix I_m, followed by an m × (n - m) matrix B; we write briefly G = [I_m B]. The reason for making this restriction is that it makes it easy to decode the code-words. Note that for every x ∈ Z_2^m we have

E(x) = xG = x[I_m B] = [xI_m xB] = [x xB].

So every word x ∈ Z_2^m is encoded as a longer word beginning with x. Conversely, if a code-word w = w_1 w_2 ... w_n ∈ C is received, we ought to decode it as w_1 w_2 ... w_m.

Another advantage of G being in standard form is that it is easy to find the corresponding parity check matrix.

Theorem 5.1. Let C ⊆ Z_2^n be an m-dimensional linear code. Then G = [I_m B] is a generator matrix for C if and only if the matrix H = [B^T I_{n-m}] is a parity check matrix for C.

Proof. We show that if G = [I_m B] is a generator matrix for C, then H = [B^T I_{n-m}] is a parity check matrix for C, and leave the converse as an exercise. We consider the system xG = w. In expanded form this system is:

x_1 = w_1
x_2 = w_2
...
x_m = w_m
b_11 x_1 + b_21 x_2 + ... + b_m1 x_m = w_{m+1}
b_12 x_1 + b_22 x_2 + ... + b_m2 x_m = w_{m+2}
...
b_1,n-m x_1 + b_2,n-m x_2 + ... + b_m,n-m x_m = w_n.

We solve this for w by substituting w_1 = x_1, ..., w_m = x_m, yielding the system of n - m equations:

b_11 w_1 + b_21 w_2 + ... + b_m1 w_m + w_{m+1} = 0
b_12 w_1 + b_22 w_2 + ... + b_m2 w_m + w_{m+2} = 0
...
b_1,n-m w_1 + b_2,n-m w_2 + ... + b_m,n-m w_m + w_n = 0.

This system of equations can be written as Hw^T = 0, where H = [B^T I_{n-m}].

The problem arises when we want to decode a word which is not a code-word. A reasonable approach to this is to find first the corresponding code-word, and then to decode this code-word, as explained above. This amounts to finding the error of transmission. However, since every word is a possible error, we cannot determine with certainty which error has occurred. Instead, we want to discover the error which is most likely to have occurred. Now remember that the probability of a single error is small, and certainly smaller than 0.5.
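The construction H = [B^T I_{n-m}] of Theorem 5.1 is purely mechanical. A sketch, applied to a hypothetical standard-form generator matrix for a 2-dimensional code in Z_2^5 (the matrix is an illustration, not one from the notes):

```python
# Sketch of Theorem 5.1: from a generator matrix in standard form
# G = [I_m | B], build the parity check matrix H = [B^T | I_{n-m}].

def parity_check_from_standard(G):
    m, n = len(G), len(G[0])
    B = [row[m:] for row in G]                 # the m x (n-m) block B
    H = []
    for j in range(n - m):
        row = [B[i][j] for i in range(m)]      # column j of B = row j of B^T
        row += [1 if k == j else 0 for k in range(n - m)]  # identity block
        H.append(tuple(row))
    return H

# Hypothetical [5,2] example.
G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]
H = parity_check_from_standard(G)
assert H == [(1, 0, 1, 0, 0), (1, 1, 0, 1, 0), (0, 1, 0, 0, 1)]

# Sanity check: both rows of G lie in the null-space of H.
for w in G:
    assert all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)
```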
This means that errors of small weight are likelier to occur than those of large weight. Consequently, for any received word we want to find the code-word closest to it (with respect to the Hamming distance).

Note that C, being a linear code, is certainly a subgroup of Z_2^n. Thus we may talk about cosets of C in Z_2^n. If C has dimension m, then |C| = 2^m, and so there are 2^n / 2^m = 2^{n-m} cosets, say C = C_1, C_2, ..., C_{2^{n-m}}.

Definition 5.2. Let C_i be a coset of C. A coset leader of C_i is any element a ∈ C_i of minimal weight; in other words, a ∈ C_i is a coset leader if for any other b ∈ C_i we have wt(a) ≤ wt(b).

The following theorem shows how coset leaders give us a method for decoding with the desired properties.

Theorem 5.3. Let C ⊆ Z_2^n be an m-dimensional linear code, let C = C_1, C_2, ..., C_{2^{n-m}} be the cosets of C, and let a_1, a_2, ..., a_{2^{n-m}} be respective coset leaders. If a word w ∈ Z_2^n belongs to the coset C_i, then w + a_i is a code-word closest to w.

Proof. From w ∈ C_i = a_i + C it follows that w = a_i + v for some v ∈ C. But then w + a_i = w - a_i = v ∈ C is a code-word. Let u ∈ C be any other code-word, and let b = w + u. We have b = w + u = a_i + v + u ∈ a_i + C = C_i. Since a_i is a coset leader for C_i, we have wt(a_i) ≤ wt(b), and so

d(w, u) = wt(w - u) = wt(b) ≥ wt(a_i) = wt(w - v) = d(w, v),

as required.

This solution to the problem of decoding is still not entirely satisfactory: we have not avoided the need to store all the elements of Z_2^n. We solve this final problem by introducing the following new concept.

Definition 5.4. Let C ⊆ Z_2^n be a linear code of dimension m, and let H be its parity check matrix. For a word w ∈ Z_2^n, its syndrome is the word Hw^T ∈ Z_2^{n-m}.

The significance of syndromes for decoding is based on the following theorem.

Theorem 5.5. Let C ⊆ Z_2^n be a linear code, let H be its parity check matrix, and let w_1, w_2 ∈ Z_2^n. Then w_1 and w_2 belong to the same coset of C if and only if their syndromes are equal.

Proof. (⇒) Assume that w_1 and w_2 belong to the same coset a + C of C. This means that w_1 = a + u, w_2 = a + v for some u, v ∈ C. Since H is a parity check matrix for C we have Hu^T = Hv^T = 0, and so

Hw_1^T = H(a^T + u^T) = Ha^T = H(a^T + v^T) = Hw_2^T.

(⇐) Now assume that Hw_1^T = Hw_2^T. This implies that H(w_1^T - w_2^T) = 0, and hence w_1 - w_2 ∈ C. If we denote w_1 - w_2 by u, then we have w_1 = w_2 + u ∈ w_2 + C, and so w_1 and w_2 belong to the same coset of C.

So assume that we know coset leaders for C (we show in Example 5.6 how to calculate them) and the corresponding syndromes.
Then we can find the coset leader for an arbitrary word just by computing its syndrome. Therefore we have no need to store all 2^n elements of Z_2^n; instead we store the 2^{n-m} coset leaders, and the same number of syndromes. If we combine all the results from this section we obtain the following method for decoding.

Decoding. Let C ⊆ Z_2^n be an m-dimensional linear code, with generator matrix G in standard form, and corresponding parity check matrix H. Prior to decoding do the following four steps:

1) calculate the code-words of C;
2) find the coset leaders for C;

3) for each coset leader calculate its syndrome;
4) store the table of coset leaders and their syndromes: all the other words may be discarded.

To decode an arbitrary word w ∈ Z_2^n do the following four steps:

1) calculate the syndrome of w;
2) find the coset leader a with the same syndrome;
3) let v = a + w;
4) decode w as the first m symbols of v.

Example 5.6. Let C be the code with a generator matrix G in standard form and corresponding parity check matrix H. The elements of C can be obtained as xG, x ∈ Z_2^3; in this way we obtain the eight code-words of C.

Now we find coset leaders. We list the elements of C in the first row of a table, and then we find an element of Z_2^n of minimal weight which is not listed. We add this element to all the elements of C, and thus obtain the second row of our table. Next, we again find an element of minimal weight not already listed, and add it to all the elements of C, obtaining the third row of the table. We keep doing this until we exhaust all the elements of Z_2^n. It is easy to see that the rows of the resulting table are the cosets of C, and that the elements in the first column are coset leaders. Next, for each leader we calculate its syndrome. For example

Ha_1^T = (0, 0, 0)^T,   Ha_2^T = (1, 1, 0)^T,
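The whole procedure, building the syndrome table once and then decoding, can be sketched end to end. The [5,2] code below is a hypothetical example chosen for brevity (the matrices of Example 5.6 itself are not reproduced here):

```python
from itertools import product

# Sketch of syndrome decoding for a hypothetical [5,2] code with
# standard-form generator matrix G and parity check matrix H = [B^T | I_3].

def mat_vec(H, w):
    return tuple(sum(h * x for h, x in zip(row, w)) % 2 for row in H)

G = [(1, 0, 1, 1, 0),
     (0, 1, 0, 1, 1)]
H = [(1, 0, 1, 0, 0), (1, 1, 0, 1, 0), (0, 1, 0, 0, 1)]
n, m = 5, 2

# Pre-computation: for each syndrome, record a word of minimal weight
# having that syndrome (a coset leader, by Theorem 5.5).
leaders = {}
for w in sorted(product((0, 1), repeat=n), key=sum):  # increasing weight
    leaders.setdefault(mat_vec(H, w), w)              # first seen = minimal

def decode(v):
    a = leaders[mat_vec(H, v)]                        # coset leader
    u = tuple((vi + ai) % 2 for vi, ai in zip(v, a))  # closest code-word
    return u[:m]                                      # first m symbols

# A single error in the code-word 10110 (the encoding of 10) is corrected.
assert decode((0, 0, 1, 1, 0)) == (1, 0)
assert decode((1, 0, 1, 1, 0)) == (1, 0)
```

This code has minimum distance 3, so by Theorem 2.6 every single error is corrected; the table `leaders` has 2^{n-m} = 8 entries, as the text predicts.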

where a_1 and a_2 are the first two coset leaders. The complete table of coset leaders and syndromes is then stored. To decode a received word w, we first compute its syndrome:

Hw^T = (0, 1, 0)^T.

We look up the coset leader with this syndrome, and the correct code-word is its sum with w. Consequently, w should be decoded as 110. We give a second example, and decode another received word w. Again, we first compute its syndrome:

Hw^T = (1, 0, 0)^T.

We look up the corresponding coset leader, add it to w to obtain the correct code-word, and decode w as the first three symbols of that code-word.

6. Perfect codes

We have seen that a code is a device for the transfer of information, potentially capable of detecting and correcting random errors in transmission. However, these two functions of a code are in conflict with one another. For example, any one-element code C = {u}, u ∈ Z_2^n, can correct all errors, but cannot carry any information. At the other extreme, the full code C = Z_2^n can carry a lot of information, but cannot detect any errors, let alone correct them.

Recall that

(n choose i) = n! / (i! (n - i)!)

is the number of ways of choosing i objects from a set of n objects. The following theorem gives an upper bound for the number of code-words in a code of specified error-correcting capabilities.

Theorem 6.1. For a code C ⊆ Z_2^n with minimum distance at least 2e + 1 we have

|C| ≤ 2^n / Σ_{i=0}^{e} (n choose i).

Proof. For each code-word w ∈ C consider the ball with centre w and radius e:

B(w, e) = {u ∈ Z_2^n : d(u, w) ≤ e}.

We claim that for distinct w_1, w_2 ∈ C we have B(w_1, e) ∩ B(w_2, e) = ∅. Indeed, u ∈ B(w_1, e) ∩ B(w_2, e) would imply

d(w_1, w_2) ≤ d(w_1, u) + d(w_2, u) ≤ e + e < 2e + 1,

a contradiction. Next note that

B(w, e) = ∪_{i=0}^{e} {u ∈ Z_2^n : d(w, u) = i}.

For each i, the set {u ∈ Z_2^n : d(w, u) = i} consists of all the words which differ from w in exactly i positions; clearly there are exactly \binom{n}{i} such words. Hence

|B(w, e)| = \sum_{i=0}^{e} \binom{n}{i}.

Since for all w ∈ C the sets B(w, e) are disjoint subsets of Z_2^n, we conclude

2^n = |Z_2^n| ≥ |∪_{w ∈ C} B(w, e)| = |C| · \sum_{i=0}^{e} \binom{n}{i},

and the desired inequality follows.

Definition 6.2. A code C ⊆ Z_2^n is said to be perfect if it attains equality in the previous theorem, i.e. if the minimum distance is 2e + 1 and |C| = 2^n / \sum_{i=0}^{e} \binom{n}{i}.

We now show that perfect codes exist.

Example 6.3. Let H be any d × (2^d − 1) matrix, the columns of which are all the non-zero vectors from Z_2^d, and let C ⊆ Z_2^{2^d − 1} be the code with parity check matrix H. It is obvious that every two columns of H are linearly independent, and that one can find three linearly dependent columns. Therefore, by Theorem 3.10, the minimum distance is 3, and we can correct single errors. In the notation of Theorem 6.1 we have n = 2^d − 1 and e = 1. The generator matrix for C has dimension (n − d) × n, and so the number of code-words is

2^{n−d} = 2^n / 2^d = 2^n / (1 + n) = 2^n / \sum_{i=0}^{e} \binom{n}{i}.

Therefore, C is a perfect code; it is called the (2^d − 1, 2^d − d − 1) Hamming code.

Let us, for example, construct the (7, 4) Hamming code, so that d = 3. We have the freedom of choice in which order to put the columns of H; with future use in mind, we opt for a particular ordering. [The 3 × 7 matrix H chosen here is not recovered in this transcription.] By Theorem 5.1 the corresponding generator matrix G is then determined [not recovered], and hence we obtain the sixteen code-words of C. [The list is not recovered.]

The minimum weight of C is 3, and hence the minimum distance is 3.

It is possible (and not too difficult) to show that the Hamming codes are the only linear perfect codes with e = 1. However, there exist various non-linear perfect codes with e = 1. For e > 1 perfect codes are few and far between. For example, for e = 2 a perfect code C ⊆ Z_2^n must satisfy

|C| = 2^n / \sum_{i=0}^{2} \binom{n}{i} = 2^{n+1} / (n^2 + n + 2).

So we must have n^2 + n + 2 = 2^a for some a. If we multiply both sides by 4 and set x = 2n + 1, y = a + 2, we obtain the equation

x^2 + 7 = 2^y.

This equation is known as Nagell's equation, and its solutions are (±1, 3), (±3, 4), (±5, 5), (±11, 7) and (±181, 15) (this is non-trivial). Since e = 2 we must have n ≥ 2e + 1 = 5, and so 2n + 1 = x ≥ 11. We see that the only possibilities are n = 5 and n = 90. For n = 5 we have the following example.

Example 6.4. Let C = {00000, 11111} ⊆ Z_2^5. Here we have |C| = 2, n = 5, e = 2 and

|C| = 2 = 32/16 = 2^5 / (\binom{5}{0} + \binom{5}{1} + \binom{5}{2}).

So C is a (not very exciting) perfect code; it is called the 5-repetition code.

On the other hand, there is no perfect code C ⊆ Z_2^{90} with e = 2; we will show this in Corollary 6.2 in Chapter 5.

Consider now the case e = 3; note that here we must have n ≥ 2e + 1 = 7. This time we have

|C| = 2^n / \sum_{i=0}^{3} \binom{n}{i} = 3 · 2^{n+1} / (6 + 6n + 3n(n−1) + n(n−1)(n−2)).

If we put m = n + 1 we see that we must have m(m^2 − 3m + 8) = 3 · 2^a for some a. We have the following two cases.

Case 1: m = 2^b and m^2 − 3m + 8 = 3 · 2^c for some b, c. We have n = 2^b − 1 ≥ 7, so we must have b ≥ 3. If b ≥ 4 we have m^2 − 3m + 8 ≡ 8 (mod 16), so that c = 3 and m^2 − 3m + 8 = 24. But this last equation has no integer solutions. For b = 3 we have m = 8, c = 4, n = 7 and |C| = 2. The 7-repetition code {0000000, 1111111} is an example of this situation.

Case 2: m = 3 · 2^b and m^2 − 3m + 8 = 2^c for some b, c. We must have b ≥ 2, as b = 1 gives n = 5 < 7. The case b ≥ 4 is eliminated as in Case 1. For b = 2 we have m = 12 and m^2 − 3m + 8 = 116, which is not a power of 2.
For b = 3 we have m = 24, n = 23. An example of such a code is given below.

Example 6.5. Consider the (7, 4) Hamming code H, as defined in Example 6.3. Extend each code-word in H by one component, equal to the sum of all the other components; thus a code-word of even weight is extended by 0, and a code-word of odd weight by 1. [The two illustrative code-words in the original are not recovered in this transcription.] The resulting code Ĥ ⊆ Z_2^8 has minimum weight of a non-zero code-word equal to 4, as the minimum weight of H is 3. Let H′ ⊆ Z_2^7 be the code obtained from H by reversing all the

code-words, and let Ĥ′ be the code obtained from H′ by adding to each code-word the extra component equal to the sum of all the other components. Finally, form a new code C ⊆ Z_2^{24} as follows:

C = {(a + x, b + x, a + b + x) : a, b ∈ Ĥ, x ∈ Ĥ′}.

If A and X are bases for Ĥ and Ĥ′ respectively, then one may prove that the set

{(a, 0, a) : a ∈ A} ∪ {(0, b, b) : b ∈ A} ∪ {(x, x, x) : x ∈ X}

is a basis for C; in particular, |C| = 2^{12}. Also, one may prove that the minimal weight of a non-zero code-word in C is equal to 8. Now delete the last component of every vector in C, to obtain a new code C′ ⊆ Z_2^{23}. It still has 2^{12} code-words, and the minimum weight of a non-zero code-word is 7, so that C′ corrects up to three errors (e = 3). Now we have

2^n / \sum_{i=0}^{e} \binom{n}{i} = 2^{23} / (\binom{23}{0} + \binom{23}{1} + \binom{23}{2} + \binom{23}{3}) = 2^{23} / (1 + 23 + 253 + 1771) = 2^{23} / 2^{11} = 2^{12} = |C′|,

and C′ is perfect! The code C′ is called the (binary) Golay code. It actually turns out that for e > 1 the Golay code and the repetition codes are the only perfect codes that exist.
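The sphere-packing count in this chapter can be checked by machine for the smallest interesting case. The sketch below is our own illustration, not the example from the notes: since the column ordering of H chosen above did not survive the transcription, we fix one ordering of the seven non-zero columns ourselves, build the (7, 4) Hamming code as the kernel of H, verify that the bound of Theorem 6.1 is attained with equality, and correct a single error by syndrome decoding.

```python
from itertools import product

# Example 6.3 in miniature: a parity check matrix H whose columns are the
# seven non-zero vectors of Z_2^3.  The column order below is our own choice.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

n = 7

def syndrome(w):
    """Compute Hw^T over Z_2, returned as a tuple of length 3."""
    return tuple(sum(h[i] * w[i] for i in range(n)) % 2 for h in H)

# The code is the kernel of H: all words of Z_2^7 with zero syndrome.
code = [w for w in product((0, 1), repeat=n) if syndrome(w) == (0,) * 3]
assert len(code) == 2 ** 4                 # 2^(n-d) = 16 code-words

# Minimum distance = minimum weight of a non-zero code-word = 3.
min_wt = min(sum(w) for w in code if any(w))
assert min_wt == 3

# Perfectness: |C| * (C(7,0) + C(7,1)) = 2^7, so the radius-1 balls
# around the code-words tile Z_2^7 exactly.
assert len(code) * (1 + n) == 2 ** n

# Syndrome decoding of one error: the syndrome of v + e_i is the i-th
# column of H, which reveals the error position.
v = code[5]                                # any code-word
w = list(v); w[2] ^= 1                     # flip one bit
s = syndrome(w)
pos = next(i for i in range(n) if tuple(h[i] for h in H) == s)
w[pos] ^= 1                                # correct the error
assert tuple(w) == v
print("decoded correctly")
```

Any other ordering of the seven non-zero columns gives an equivalent code; only the labelling of the positions changes.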

Chapter 3

Latin squares

1. Definition and existence

Let us consider the following problem posed by Euler.

The thirty-six officers problem. There are six regiments, each having six officers, one of each of six possible ranks. Is it possible to parade these thirty-six officers in a six-by-six pattern, so that every row and every column contains exactly one officer of each rank and exactly one member of each regiment?

Euler conjectured that the answer was negative. This was finally proved by Tarry in 1900 by a systematic examination of all possibilities. Today this can be done relatively easily using computers.

Let us consider the first condition in the problem: every row and every column should contain exactly one officer of each rank. Denote the ranks by 1, 2, 3, 4, 5, 6, and replace the officers by their ranks. We obtain a 6 × 6 array of numbers from {1, 2, 3, 4, 5, 6}, such that every row and every column of the array contains each number exactly once.

Definition 1.1. A Latin square of order n is an n × n array of the numbers {1, 2, ..., n} (or some other n symbols) in which every row and every column contains each number exactly once.

Do Latin squares exist? What are the possible orders of Latin squares? The following theorem answers these questions.

Theorem 1.2. Let G = {g_1, g_2, ..., g_n} be a finite group of order n. The multiplication table for G is a Latin square. In particular, for each n there exists a Latin square of order n.

Proof. We prove that an arbitrary row, corresponding to the element g_i, contains an arbitrary element g_k; the proof for columns is similar. Let g_j = g_i^{−1} g_k. Then the (g_i, g_j) entry in the table is g_i g_j = g_i g_i^{−1} g_k = g_k. The second statement follows from the first and the fact that for every n there exists a group of order n (e.g. Z_n).

2. Counting Latin squares

In this section we will prove that there are many Latin squares of order n. To do this, we need to make a significant detour. Let A_1, ..., A_n be sets.
A system of distinct representatives (SDR for short) for these sets is an n-tuple (x_1, ..., x_n) of elements with the properties:

x_i ∈ A_i for i = 1, ..., n (so that x_i is a representative of A_i);

x_i ≠ x_j for i ≠ j (so that all representatives are distinct).

A system of distinct representatives therefore contains one element from each set A_i with 1 ≤ i ≤ n, and these elements are all different.

Example 2.1. Let A_1 := {1, 2, 3, 4}, A_2 := {2, 4, 7}, and A_3 := {3, 4, 7}. There are many different SDRs for these three sets. Some are: (1, 2, 3), (1, 2, 4), (1, 2, 7), (1, 4, 3), (1, 4, 7), (1, 7, 3), (1, 7, 4).

Theorem 2.2. Let (A_1, ..., A_n) be finite sets of size at least r satisfying

|∪_{j ∈ J} A_j| ≥ |J| for all J ⊆ {1, ..., n}.  (∗)

The number of SDRs for this family is at least

r!  if r ≤ n,
r(r − 1) ··· (r − n + 1)  if r > n.

Proof. Omitted. This is a version of Hall's Marriage Theorem, which is proved in MT4514 Graph Theory.

Theorem 2.3. Let (A_1, ..., A_n) be a family of subsets of {1, ..., n}, and let r be a positive integer. If each of the sets A_i has size r and if each element of {1, ..., n} is contained in exactly r sets, then the family (A_1, ..., A_n) has at least r! SDRs.

Proof. We prove that (A_1, ..., A_n) satisfies (∗); the result then follows from Theorem 2.2. For an arbitrary J ⊆ {1, ..., n} we count in two different ways the number of pairs (j, x) where j ∈ J and x ∈ A_j. There are |J| choices for j, and, having chosen j, there are |A_j| = r choices for x. So there are precisely r|J| such pairs. On the other hand, there are |∪_{j ∈ J} A_j| choices for x, and, having chosen x, there are at most r possible choices for j, since x lies in precisely r sets. We conclude that

r|J| ≤ r · |∪_{j ∈ J} A_j|,

implying (∗), as required.

Let us now return to our problem of counting Latin squares. The idea is to build a Latin square row by row, and to count how many choices we have for adding each new row. To this end we introduce the notion of a Latin rectangle: it is a k × n array (with k ≤ n) with entries from {1, ..., n} such that each entry occurs precisely once in each row and at most once in each column.
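The definition of an SDR translates directly into a backtracking search. The helper below is our own illustration, not part of the notes; applied to the sets of Example 2.1 it enumerates every SDR, including the seven listed above.

```python
def sdrs(sets):
    """Enumerate all systems of distinct representatives for a list of sets."""
    if not sets:
        yield ()
        return
    first, rest = sets[0], sets[1:]
    for x in first:
        # x represents the first set; remove it from the later sets so
        # that all representatives stay distinct.
        for tail in sdrs([s - {x} for s in rest]):
            yield (x,) + tail

# The sets of Example 2.1.
A = [{1, 2, 3, 4}, {2, 4, 7}, {3, 4, 7}]
all_sdrs = sorted(sdrs(A))
assert (1, 2, 3) in all_sdrs and (1, 7, 4) in all_sdrs
print(len(all_sdrs))     # total number of SDRs for these three sets
```

The same search also verifies Hall's condition implicitly: whenever some union of sets is too small, the recursion simply finds no representatives.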
Lemma 2.4. Given a k × n Latin rectangle with k < n, there are at least (n − k)! ways to add a row to form a (k + 1) × n Latin rectangle.

Proof. Let A_i be the set of all entries not appearing in the ith column. We see that (x_1, ..., x_n) is a possible (k + 1)st row if and only if x_i ∈ A_i and x_i ≠ x_j for i ≠ j, i.e. if and only if (x_1, ..., x_n) is an SDR for (A_1, ..., A_n). Now, clearly each set A_i has size n − k. Also, a fixed entry x appears precisely k times in the rectangle (once in each row), and so it belongs to precisely n − k of the sets A_i. The conditions of Theorem 2.3 are fulfilled for r = n − k, and the lemma follows.

Theorem 2.5. The number of Latin squares of order n is at least n!(n − 1)! ··· 2!1!.

Proof. There are n! choices for the first row; having chosen it, there are at least (n − 1)! choices for the second row, and so on.
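For small n the count in Theorem 2.5 can be checked exhaustively. The sketch below (our own illustration) builds all Latin squares of order 4 by extending Latin rectangles one row at a time, exactly as in Lemma 2.4, and compares the total with the bound n!(n − 1)! ··· 2!1!.

```python
from itertools import permutations
from math import factorial

def extensions(rect, n):
    """All rows that extend a Latin rectangle by one row (Lemma 2.4).

    A candidate row is a permutation of {1, ..., n} that clashes with no
    earlier row in any column, i.e. an SDR for the sets A_i of the lemma.
    """
    return [row for row in permutations(range(1, n + 1))
            if all(row[i] != r[i] for r in rect for i in range(n))]

def count_latin_squares(n):
    rects = [()]                      # start from the empty 0 x n rectangle
    for _ in range(n):
        rects = [rect + (row,)
                 for rect in rects for row in extensions(rect, n)]
    return len(rects)

n = 4
total = count_latin_squares(n)
bound = 1
for k in range(1, n + 1):
    bound *= factorial(k)             # n!(n-1)!...2!1!
print(total, bound)
assert total >= bound                 # Theorem 2.5
```

For n = 4 the brute-force count is 576 against the bound 4!·3!·2!·1! = 288, so the theorem is far from sharp even for small orders.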

3. Orthogonality

Let us analyse the thirty-six officers problem in more detail. We have already considered the ranks of the officers. The second requirement is that every row and every column contain one officer from each regiment. So if we denote each regiment by 1, 2, 3, 4, 5, 6, and replace each officer by the number of its regiment, we obtain another Latin square. Thus we have two Latin squares: L_1 representing the ranks and L_2 representing the regiments. These two Latin squares are related by the condition that every regiment has one officer of each rank. Let us put the square L_2 over L_1, so that in each cell we can see a pair of numbers. Now, it cannot happen that a pair (i, j) occurs twice, as it would mean that the regiment j has two officers of rank i. Since there are 36 cells and 36 pairs of numbers from {1, 2, 3, 4, 5, 6}, we conclude that each pair must occur exactly once.

Definition 3.1. Two Latin squares A = (a_ij)_{n×n} and B = (b_ij)_{n×n} are orthogonal if the set {(a_ij, b_ij) : 1 ≤ i, j ≤ n} contains all possible pairs.

Example 3.2. [The two orthogonal Latin squares displayed here are not recovered in this transcription.]

One may ask for which values of n there exist orthogonal squares of order n. It is clear that they do not exist for n = 2. Also, Tarry's solution to the thirty-six officers problem means that there are no orthogonal Latin squares of order 6. On the other hand, we shall prove in Section 5 that if n ≢ 2 (mod 4) then there exist orthogonal Latin squares of order n. Euler conjectured that the converse was also true: if n ≡ 2 (mod 4) then orthogonal squares of order n do not exist. However, he was wrong: Bose, Shrikhande and Parker proved in 1960 that for every n, except for n = 2 and n = 6, orthogonal squares exist.

Another interesting question that one may ask is the following.

Question 3.3. What is the maximal number of mutually orthogonal Latin squares of order n? (Latin squares A_1, A_2, ..., A_k are mutually orthogonal if each pair A_i and A_j, i ≠ j, are orthogonal.)
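Orthogonality is straightforward to test by machine. Since the squares of Example 3.2 did not survive the transcription, the sketch below supplies its own pair of order-3 squares; the superimposition check implements Definition 3.1 directly.

```python
def is_latin(sq):
    """Check that every row and every column is a permutation of 1..n."""
    n = len(sq)
    syms = set(range(1, n + 1))
    return (all(set(row) == syms for row in sq) and
            all({sq[i][j] for i in range(n)} == syms for j in range(n)))

def are_orthogonal(a, b):
    """True if superimposing a and b yields every ordered pair exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# A pair of orthogonal squares of order 3 (our own example).
A = [[1, 2, 3],
     [2, 3, 1],
     [3, 1, 2]]
B = [[1, 2, 3],
     [3, 1, 2],
     [2, 3, 1]]
assert is_latin(A) and is_latin(B)
print(are_orthogonal(A, B))   # → True
```

Note that a square is never orthogonal to itself for n > 1: superimposing A with A produces only the n pairs (i, i).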
The following notion will be useful in considering the above question.

Definition 3.4. A Latin square A = (a_ij)_{n×n} is in standard form if its first row is 1 2 ··· n.

It is clear that every Latin square A can be standardised by renaming the symbols in it; we denote the resulting square by A∗.

Example 3.5. The first square A in Example 3.2 is in standard form, so A∗ = A. The second square B in the same example has a standardisation B∗. [The entries of B∗ are not recovered in this transcription.] Note that A and B∗ are orthogonal.

Lemma 3.6. If A and B are orthogonal Latin squares, then so are A∗ and B∗.

Proof. Let A = (a_ij)_{n×n} and B = (b_ij)_{n×n}. Standardisation of A is achieved by means of a permutation σ of the set {1, 2, ..., n}, so that A∗ = (σ(a_ij))_{n×n}. Similarly, we have B∗ = (τ(b_ij))_{n×n} for some other permutation τ. Assume that A∗ and B∗ are not orthogonal. This means that among the pairs (σ(a_ij), τ(b_ij)) (1 ≤ i, j ≤ n) at least one pair occurs twice. Thus we have

(σ(a_ij), τ(b_ij)) = (σ(a_kl), τ(b_kl))

for some i, j, k, l with (i, j) ≠ (k, l). Since σ and τ are permutations, this implies a_ij = a_kl and b_ij = b_kl, which contradicts the fact that A and B are orthogonal.

The following theorem gives an upper bound for the maximal number of mutually orthogonal Latin squares of order n.

Theorem 3.7. If A_1, A_2, ..., A_m are mutually orthogonal Latin squares of order n then m ≤ n − 1.

Proof. Let A_k = (a^(k)_ij)_{n×n}. By Lemma 3.6 we may assume that all of A_1, ..., A_m are in standard form (otherwise we standardise them, without affecting orthogonality), i.e. a^(k)_{1j} = j. Consider the set

S = {(i, j, k) : a^(k)_ij = 1}.

Clearly, the number of elements of S is equal to the total number of 1's in A_1, ..., A_m, so that

|S| = nm.  (3.1)

Consider a triple (i, j, k) ∈ S. Each of the squares has 1 in the position (1, 1). Hence, if i = j = 1 then k can be arbitrary. Also, no other entry in a position (1, j) or (i, 1) can be 1, so that we cannot have one of i and j being equal to 1 and the other one not. Finally, if i ≠ 1 and j ≠ 1 then, because of orthogonality, there may exist at most one k such that (i, j, k) ∈ S. We conclude that

|S| ≤ m + (n − 1)^2.  (3.2)

Combining (3.1) and (3.2) we obtain m ≤ n − 1, as required.

4. Latin squares from finite fields

Theorem 3.7 gives no indication about the sharpness of the given bound. Here we show that for infinitely many n there are sets of n − 1 mutually orthogonal Latin squares, namely whenever n is a prime power. To do so we introduce a method of constructing orthogonal Latin squares from finite fields.

Theorem 4.1.
If n = p^t, where p is a prime and t ≥ 1, then there exist n − 1 mutually orthogonal Latin squares of order n.

Proof. By the Fundamental Theorem for Finite Fields (Theorem 2.5 in Chapter 1) there exists a finite field F = {f_1, f_2, ..., f_n = 0} of order n. Define n − 1 arrays A_k = (a^(k)_ij)_{n×n}, 1 ≤ k ≤ n − 1, with elements from F by setting

a^(k)_ij = f_i f_k + f_j.
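For n = p prime the field in Theorem 4.1 is just Z_p, so we may take f_i = i and carry out the construction directly. The sketch below covers only this prime case (prime powers would need arithmetic in an extension field); it produces the p − 1 squares and checks that they are mutually orthogonal.

```python
def mols_prime(p):
    """The squares A_k of Theorem 4.1 over the field Z_p, with f_i = i:
    the (i, j) entry of A_k is i*k + j (mod p), for k = 1, ..., p-1."""
    return [[[(i * k + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def are_orthogonal(a, b):
    """True if superimposing a and b yields every ordered pair exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

p = 5
squares = mols_prime(p)
assert len(squares) == p - 1
# Every pair of distinct squares is orthogonal: given target values u, v,
# the equations i*k + j = u and i*l + j = v have a unique solution (i, j)
# because k - l is invertible mod p.
assert all(are_orthogonal(squares[s], squares[t])
           for s in range(len(squares)) for t in range(s + 1, len(squares)))
print(f"{p - 1} mutually orthogonal Latin squares of order {p}")
```

Here the symbols are the residues 0, ..., p − 1 rather than 1, ..., n, which Definition 1.1 explicitly permits. Together with Theorem 3.7 this shows the bound m ≤ n − 1 is attained for every prime order.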


Homework #5 Solutions Homework #5 Solutions p 83, #16. In order to find a chain a 1 a 2 a n of subgroups of Z 240 with n as large as possible, we start at the top with a n = 1 so that a n = Z 240. In general, given a i we will

More information

Lecture 6: Finite Fields

Lecture 6: Finite Fields CCS Discrete Math I Professor: Padraic Bartlett Lecture 6: Finite Fields Week 6 UCSB 2014 It ain t what they call you, it s what you answer to. W. C. Fields 1 Fields In the next two weeks, we re going

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

On non-antipodal binary completely regular codes

On non-antipodal binary completely regular codes On non-antipodal binary completely regular codes J. Borges, J. Rifà Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, 08193-Bellaterra, Spain. V.A. Zinoviev Institute

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Notes 10: Public-key cryptography

Notes 10: Public-key cryptography MTH6115 Cryptography Notes 10: Public-key cryptography In this section we look at two other schemes that have been proposed for publickey ciphers. The first is interesting because it was the earliest such

More information

Chapter 1. Latin Squares. 1.1 Latin Squares

Chapter 1. Latin Squares. 1.1 Latin Squares Chapter Latin Squares. Latin Squares Definition... A latin square of order n is an n n array in which n distinct symbols are arranged so that each symbol occurs exactly once in each row and column. If

More information

Groups and Symmetries

Groups and Symmetries Groups and Symmetries Definition: Symmetry A symmetry of a shape is a rigid motion that takes vertices to vertices, edges to edges. Note: A rigid motion preserves angles and distances. Definition: Group

More information

Introduction to finite fields

Introduction to finite fields Chapter 7 Introduction to finite fields This chapter provides an introduction to several kinds of abstract algebraic structures, particularly groups, fields, and polynomials. Our primary interest is in

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

FUNDAMENTALS OF ERROR-CORRECTING CODES - NOTES. Presenting: Wednesday, June 8. Section 1.6 Problem Set: 35, 40, 41, 43

FUNDAMENTALS OF ERROR-CORRECTING CODES - NOTES. Presenting: Wednesday, June 8. Section 1.6 Problem Set: 35, 40, 41, 43 FUNDAMENTALS OF ERROR-CORRECTING CODES - NOTES BRIAN BOCKELMAN Monday, June 6 2005 Presenter: J. Walker. Material: 1.1-1.4 For Lecture: 1,4,5,8 Problem Set: 6, 10, 14, 17, 19. 1. Basic concepts of linear

More information

Lecture 12: November 6, 2017

Lecture 12: November 6, 2017 Information and Coding Theory Autumn 017 Lecturer: Madhur Tulsiani Lecture 1: November 6, 017 Recall: We were looking at codes of the form C : F k p F n p, where p is prime, k is the message length, and

More information

1 I A Q E B A I E Q 1 A ; E Q A I A (2) A : (3) A : (4)

1 I A Q E B A I E Q 1 A ; E Q A I A (2) A : (3) A : (4) Latin Squares Denition and examples Denition. (Latin Square) An n n Latin square, or a latin square of order n, is a square array with n symbols arranged so that each symbol appears just once in each row

More information

A Little Beyond: Linear Algebra

A Little Beyond: Linear Algebra A Little Beyond: Linear Algebra Akshay Tiwary March 6, 2016 Any suggestions, questions and remarks are welcome! 1 A little extra Linear Algebra 1. Show that any set of non-zero polynomials in [x], no two

More information

Reed-Muller Codes. Sebastian Raaphorst Carleton University

Reed-Muller Codes. Sebastian Raaphorst Carleton University Reed-Muller Codes Sebastian Raaphorst Carleton University May 9, 2003 Abstract This paper examines the family of codes known as Reed-Muller codes. We begin by briefly introducing the codes and their history

More information

Proof: Let the check matrix be

Proof: Let the check matrix be Review/Outline Recall: Looking for good codes High info rate vs. high min distance Want simple description, too Linear, even cyclic, plausible Gilbert-Varshamov bound for linear codes Check matrix criterion

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Codes over Subfields. Chapter Basics

Codes over Subfields. Chapter Basics Chapter 7 Codes over Subfields In Chapter 6 we looked at various general methods for constructing new codes from old codes. Here we concentrate on two more specialized techniques that result from writing

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh June 2009 1 Linear independence These problems both appeared in a course of Benny Sudakov at Princeton, but the links to Olympiad problems are due to Yufei

More information

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x f(x) = q(x)h(x) + r(x),

Linear Cyclic Codes. Polynomial Word 1 + x + x x 4 + x 5 + x x + x f(x) = q(x)h(x) + r(x), Coding Theory Massoud Malek Linear Cyclic Codes Polynomial and Words A polynomial of degree n over IK is a polynomial p(x) = a 0 + a 1 + + a n 1 x n 1 + a n x n, where the coefficients a 1, a 2,, a n are

More information

2 Metric Spaces Definitions Exotic Examples... 3

2 Metric Spaces Definitions Exotic Examples... 3 Contents 1 Vector Spaces and Norms 1 2 Metric Spaces 2 2.1 Definitions.......................................... 2 2.2 Exotic Examples...................................... 3 3 Topologies 4 3.1 Open Sets..........................................

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

11 Minimal Distance and the Parity Check Matrix

11 Minimal Distance and the Parity Check Matrix MATH32031: Coding Theory Part 12: Hamming Codes 11 Minimal Distance and the Parity Check Matrix Theorem 23 (Distance Theorem for Linear Codes) Let C be an [n, k] F q -code with parity check matrix H. Then

More information

G Solution (10 points) Using elementary row operations, we transform the original generator matrix as follows.

G Solution (10 points) Using elementary row operations, we transform the original generator matrix as follows. EE 387 October 28, 2015 Algebraic Error-Control Codes Homework #4 Solutions Handout #24 1. LBC over GF(5). Let G be a nonsystematic generator matrix for a linear block code over GF(5). 2 4 2 2 4 4 G =

More information

3360 LECTURES. R. Craigen. October 15, 2016

3360 LECTURES. R. Craigen. October 15, 2016 3360 LECTURES R. Craigen October 15, 2016 Introduction to designs Chapter 9 In combinatorics, a design consists of: 1. A set V elements called points (or: varieties, treatments) 2. A collection B of subsets

More information

The extended Golay code

The extended Golay code The extended Golay code N. E. Straathof July 6, 2014 Master thesis Mathematics Supervisor: Dr R. R. J. Bocklandt Korteweg-de Vries Instituut voor Wiskunde Faculteit der Natuurwetenschappen, Wiskunde en

More information

ELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes

ELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes ELEC 519A Selected Topics in Digital Communications: Information Theory Hamming Codes and Bounds on Codes Single Error Correcting Codes 2 Hamming Codes (7,4,3) Hamming code 1 0 0 0 0 1 1 0 1 0 0 1 0 1

More information

ON LINEAR CODES WHOSE WEIGHTS AND LENGTH HAVE A COMMON DIVISOR. 1. Introduction

ON LINEAR CODES WHOSE WEIGHTS AND LENGTH HAVE A COMMON DIVISOR. 1. Introduction ON LINEAR CODES WHOSE WEIGHTS AND LENGTH HAVE A COMMON DIVISOR SIMEON BALL, AART BLOKHUIS, ANDRÁS GÁCS, PETER SZIKLAI, AND ZSUZSA WEINER Abstract. In this paper we prove that a set of points (in a projective

More information

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x].

Know the meaning of the basic concepts: ring, field, characteristic of a ring, the ring of polynomials R[x]. The second exam will be on Friday, October 28, 2. It will cover Sections.7,.8, 3., 3.2, 3.4 (except 3.4.), 4. and 4.2 plus the handout on calculation of high powers of an integer modulo n via successive

More information

Linear Algebra Lecture Notes-I

Linear Algebra Lecture Notes-I Linear Algebra Lecture Notes-I Vikas Bist Department of Mathematics Panjab University, Chandigarh-6004 email: bistvikas@gmail.com Last revised on February 9, 208 This text is based on the lectures delivered

More information

Abstract Algebra I. Randall R. Holmes Auburn University. Copyright c 2012 by Randall R. Holmes Last revision: November 11, 2016

Abstract Algebra I. Randall R. Holmes Auburn University. Copyright c 2012 by Randall R. Holmes Last revision: November 11, 2016 Abstract Algebra I Randall R. Holmes Auburn University Copyright c 2012 by Randall R. Holmes Last revision: November 11, 2016 This work is licensed under the Creative Commons Attribution- NonCommercial-NoDerivatives

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

De Bruijn Cycles for Covering Codes

De Bruijn Cycles for Covering Codes De Bruijn Cycles for Covering Codes Fan Chung, 1 Joshua N. Cooper 1,2 1 Department of Mathematics, University of California, San Diego, La Jolla, California 92093 2 Courant Institute of Mathematical Sciences,

More information

Classification of root systems

Classification of root systems Classification of root systems September 8, 2017 1 Introduction These notes are an approximate outline of some of the material to be covered on Thursday, April 9; Tuesday, April 14; and Thursday, April

More information

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu

More information

Euler s, Fermat s and Wilson s Theorems

Euler s, Fermat s and Wilson s Theorems Euler s, Fermat s and Wilson s Theorems R. C. Daileda February 17, 2018 1 Euler s Theorem Consider the following example. Example 1. Find the remainder when 3 103 is divided by 14. We begin by computing

More information

ECEN 5682 Theory and Practice of Error Control Codes

ECEN 5682 Theory and Practice of Error Control Codes ECEN 5682 Theory and Practice of Error Control Codes Introduction to Algebra University of Colorado Spring 2007 Motivation and For convolutional codes it was convenient to express the datawords and the

More information

HW2 Solutions Problem 1: 2.22 Find the sign and inverse of the permutation shown in the book (and below).

HW2 Solutions Problem 1: 2.22 Find the sign and inverse of the permutation shown in the book (and below). Teddy Einstein Math 430 HW Solutions Problem 1:. Find the sign and inverse of the permutation shown in the book (and below). Proof. Its disjoint cycle decomposition is: (19)(8)(37)(46) which immediately

More information

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b . FINITE-DIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,

More information

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture - 05 Groups: Structure Theorem So, today we continue our discussion forward.

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. # 02 Vector Spaces, Subspaces, linearly Dependent/Independent of

More information

Vector Spaces. Chapter 1

Vector Spaces. Chapter 1 Chapter 1 Vector Spaces Linear algebra is the study of linear maps on finite-dimensional vector spaces. Eventually we will learn what all these terms mean. In this chapter we will define vector spaces

More information