
Notes on matroids and codes

Peter J. Cameron

Abstract

The following expository article is intended to describe a correspondence between matroids and codes. The key results are that the weight enumerator of a code is a specialisation of the Tutte polynomial of the corresponding matroid, and that the MacWilliams relation between the weight enumerators of a code and its dual can be obtained from matroid duality. It also provides a general introduction to matroids, an introduction to trellis decoding, and an algebraic construction of the minimal trellis of a code. Some of this material was presented in the QMW study group, although this version is my own re-working of it. I am grateful to Carrie Rutherford and Costas Papadopoulos for their contributions. Some of Carrie's contributions are acknowledged in the text, while Costas taught me about trellis decoding.

1 Introduction

Matroids were invented by Whitney to generalise the notion of linear independence in vector spaces. They also describe in a natural way many other situations, such as acyclic sets of edges of graphs, partial transversals of families of sets, and others. The purpose of these notes is to explain a connection between matroids and linear codes. In particular, the weight enumerator of a code is a specialisation of a two-variable polynomial called the Tutte polynomial of the corresponding matroid. Duality of codes corresponds to duality of matroids, and we easily obtain the MacWilliams relation between the weight enumerators of a code and its dual. We then consider trellis decoding, and examine how the size of the minimal trellis for a code is determined by the matroid. A general reference for matroid theory is Welsh [6].

A matroid $M$ is a pair $(E,\mathcal{I})$, where $E$ is the finite set of elements of the matroid, and $\mathcal{I}$ a family of subsets of $E$ called the independent sets of the matroid, satisfying the following axioms:

(M1) any subset of an independent set is independent;

(M2) if $I_1$ and $I_2$ are independent and $|I_1| < |I_2|$, then there exists $e \in I_2 \setminus I_1$ such that $I_1 \cup \{e\}$ is independent.

There are two standard examples of matroids to which we often refer. These examples give rise to much of the terminology of the subject.

Graphs. Let $E$ be the edge set of a graph (which is permitted to have loops and multiple edges). A set of edges is independent if it contains no circuit. Axiom (M1) is clear; we prove (M2). For this, we use the fact that if a graph with $n$ vertices, $m$ edges and $c$ connected components contains no circuits, then $n = m + c$. Now if $|I_1| < |I_2|$ and both $I_1$ and $I_2$ are independent, then the graph with edge set $I_2$ has fewer components than the graph with edge set $I_1$, so some edge $e$ of $I_2$ must join vertices in different components of the latter graph; then adding $e$ to $I_1$ creates no circuit. This matroid is called the cycle matroid of the graph, for reasons that will appear shortly.

Sets of vectors. Let $v_1,v_2,\dots,v_n$ be vectors in a vector space $V$. We take $E = \{1,\dots,n\}$, and let the subset $I$ be independent if and only if $\{v_i : i \in I\}$ is linearly independent in $V$. (This is slightly more clumsy than taking $E$ to be the set of vectors and independence to mean linear independence; but it allows us to have repeated vectors.) The proofs of (M1) and (M2) are given in any linear algebra text.

If the vector space $V$ is the set $F^m$ of all $m$-tuples of elements of $F$ (written as column vectors), then we can conveniently represent the elements of the matroid as the columns of an $m \times n$ matrix over $F$. A matroid of this form is a vector matroid.

We check the matroid axioms. Condition (M1) is clear. To prove (M2), suppose that $(v_1,\dots,v_n)$ are linearly independent, as also are $(w_1,\dots,w_{n+1})$. Assume that the conclusion is false, that is, that $(v_1,\dots,v_n,w_i)$ is linearly dependent for all $i$. Then we have
\[ w_i = a_{i1}v_1 + \cdots + a_{in}v_n \]
for $i = 1,\dots,n+1$. Consider the system of equations
\[ x_1 a_{1j} + \cdots + x_{n+1} a_{n+1,j} = 0 \quad\text{for } j = 1,\dots,n. \]
These comprise $n$ homogeneous equations in $n+1$ unknowns, so they have a non-zero solution $(x_1,\dots,x_{n+1})$. (This fact is the only algebra required in the proof.) But then we have
\[ x_1 w_1 + \cdots + x_{n+1} w_{n+1} = 0, \]
contrary to the assumption that $(w_1,\dots,w_{n+1})$ is a linearly independent family.

Now the basic properties of linear independence, as developed in elementary linear algebra texts, can be proved using (M1) and (M2).

2 Bases and cycles

A basis is an independent set which is maximal with respect to inclusion. It follows from (M2) that all bases have the same size. In a connected graph $G$, a basis of the cycle matroid is the set of edges of a spanning tree of $G$.

The rank $\rho(A)$ of a set $A$ of elements of a matroid is the cardinality of the largest independent set contained in $A$. Again, by (M2), all maximal independent subsets of $A$ have the same size. In the cycle matroid of a graph, the rank of a set $A$ of edges is $n - c$, where $n$ is the number of vertices of the graph, and $c$ the number of connected components of the subgraph with edge set $A$. The rank of a subset of a vector matroid is the dimension of the subspace spanned by the corresponding vectors.

A cycle in a matroid is a set which is not independent but has the property that every proper subset is independent. A cycle in the cycle matroid of a graph is the edge set of a circuit (closed path) in the graph (possibly a loop at a vertex, or two parallel edges between the same two vertices); hence the name.

A matroid is determined by its bases, its rank function, or its cycles. For a set $I$ is independent if and only if it is contained in some basis, or if and only if it satisfies $\rho(I) = |I|$, or if and only if it contains no cycle. It is possible to axiomatise matroids in terms of their sets of bases, their rank functions, or their sets of cycles. As examples, we treat the axiomatisation of matroids via bases and via cycles.

Theorem 2.1 Let $\mathcal{B}$ be a non-empty family of subsets of $E$. Then $\mathcal{B}$ is the family of bases of a matroid on $E$ if and only if the following condition holds:

(MB) if $B_1,B_2 \in \mathcal{B}$ and $x \in B_1 \setminus B_2$, then there exists $y \in B_2 \setminus B_1$ such that $B_1 \setminus \{x\} \cup \{y\} \in \mathcal{B}$.

Proof All bases of a matroid have the same cardinality (since if $B_1,B_2 \in \mathcal{I}$ and $|B_1| < |B_2|$ then $B_1$ cannot be maximal independent). Then (MB) follows from (M2), since $|B_1 \setminus \{x\}| < |B_2|$.

Conversely, suppose that (MB) holds, and let $\mathcal{I}$ be the set of all subsets of $E$ which are contained in some member of $\mathcal{B}$. Clearly (M1) holds. To prove (M2), take $I_1,I_2 \in \mathcal{I}$ with $|I_1| < |I_2|$. Let $B_1,B_2$ be members of $\mathcal{B}$ containing $I_1$ and $I_2$ respectively. If $B_1$ contains an element $x$ of $I_2 \setminus I_1$, then $I_1 \cup \{x\}$ is the set required by (M2). So suppose that no such point $x$ occurs. Now, using (MB) repeatedly, we can replace points of $B_1 \setminus I_1$ by points of $B_2$. Since $|I_1| < |I_2|$, we are forced to use a point $x$ of $I_2$ as the replacement point before we run out of points of $B_1 \setminus I_1$. Then $I_1 \cup \{x\}$ is contained in the basis produced at this step.

Finally, we observe that $\mathcal{B}$ is precisely the set of bases of the matroid $M = (E,\mathcal{I})$: for every member of $\mathcal{B}$ is a maximal independent set, and conversely a maximal independent set is contained in a member $B$ of $\mathcal{B}$ but has the same cardinality as $B$, so is equal to it.

For later use, here is a further property of bases. You may have to read this carefully to see how it differs from (MB).

Lemma 2.2 Let $B_1,B_2$ be bases of a matroid, and let $y \in B_2 \setminus B_1$. Then there exists $x \in B_1 \setminus B_2$ such that $B_1 \setminus \{x\} \cup \{y\}$ is a basis.

Proof If the conclusion were that $B_2 \setminus \{y\} \cup \{x\}$ is a basis, this would just be (MB), with $B_1$ and $B_2$ interchanged. However, as it is, we have some work to do!

We use induction on $k = |B_1 \setminus B_2| = |B_2 \setminus B_1|$. If $k = 0$, the result is vacuous, while if $k = 1$ it is trivial. So suppose that $k \ge 2$, and suppose that the result holds in all situations where $B_1$ and $B_2$ are bases with $|B_1 \setminus B_2| < k$. Since $k \ge 2$, we can choose $y' \in B_2 \setminus B_1$ with $y' \ne y$. Now by (MB), there exists $x' \in B_1 \setminus B_2$ such that $B_2' = B_2 \setminus \{y'\} \cup \{x'\}$ is a basis. Now $|B_1 \setminus B_2'| = k - 1$. By the induction hypothesis, there is a point $x \in B_1 \setminus B_2'$ such that $B_1 \setminus \{x\} \cup \{y\}$ is a basis, as required.

We now turn to cycles.

Theorem 2.3 Let $\mathcal{C}$ be a family of subsets of a set $E$. Then $\mathcal{C}$ is the set of cycles of a matroid on $E$ if and only if the following conditions hold:

(MC1) no member of $\mathcal{C}$ contains another;

(MC2) if $C_1,C_2$ are distinct elements of $\mathcal{C}$ and $e \in C_1 \cap C_2$, then there exists $C_3 \in \mathcal{C}$ such that $C_3 \subseteq C_1 \cup C_2$ and $e \notin C_3$.

Proof Suppose first that $\mathcal{C}$ is the set of cycles (minimal dependent sets) of a matroid. Then a set is dependent if and only if it contains a member of $\mathcal{C}$. Condition (MC1) is obvious. So suppose that $C_1,C_2$ are cycles and $e \in C_1 \cap C_2$. If $(C_1 \cup C_2) \setminus \{e\}$ does not contain a cycle, then $C_1 \cup C_2$ is a minimal dependent set, that is, a cycle, contradicting minimality.

Conversely, suppose that $\mathcal{C}$ is a family of sets satisfying (MC1) and (MC2), and let $\mathcal{I}$ be the family of sets containing no member of $\mathcal{C}$. We must show that $(E,\mathcal{I})$ is a matroid. Condition (M1) is clear. To check (M2), let $I_1,I_2 \in \mathcal{I}$ with $|I_2| = |I_1| + 1$, and suppose, for a contradiction, that there is no point $y \in I_2 \setminus I_1$ such that $I_1 \cup \{y\}$ is independent. Let $I_1 \setminus I_2 = \{x_1,\dots,x_k\}$ and $I_2 \setminus I_1 = \{y_1,\dots,y_{k+1}\}$.

We prove by induction on $i$ that there exist at least $k - i + 1$ cycles contained in $I_1 \cup I_2$ but containing none of $x_1,\dots,x_i$. To start the induction for $i = 0$, observe that, because $I_1 \cup \{y_j\}$ is dependent, it contains a cycle $C(y_j)$; and $y_j \in C(y_j)$, since otherwise $C(y_j) \subseteq I_1$, contradicting the independence of $I_1$. So these cycles are distinct for $j = 1,\dots,k+1$.

So suppose that the assertion holds for $i$. If none of the given $k - i + 1$ cycles contains $x_{i+1}$, then we are done. So suppose that $x_{i+1}$ lies in $m$ of these cycles, say $C_1,\dots,C_m$. By (MC2), for $j = 1,\dots,m-1$, we can find a cycle $C_j'$ contained in $C_j \cup C_m$ but not containing $x_{i+1}$. Replacing $C_j$ by $C_j'$ and deleting $C_m$ completes the induction step.

Now for $i = k$, the assertion we have proved says that there is at least one cycle contained in $I_1 \cup I_2$ but containing no point of $I_1 \setminus I_2$; that is, contained in $I_2$. But this contradicts the independence of $I_2$. So $(E,\mathcal{I})$ is a matroid.

Moreover, $\mathcal{C}$ is the family of all cycles of this matroid. For a set in $\mathcal{C}$ is not in $\mathcal{I}$, but all its proper subsets are (else it would properly contain another member of $\mathcal{C}$). Conversely, if $C$ is minimal with respect to not lying in $\mathcal{I}$, then $C$ contains a member of $\mathcal{C}$ but no proper subset of it does; that is, $C \in \mathcal{C}$.

An interesting interpretation of the matroid cycle axioms was pointed out to me by Dima Fon-Der-Flaass. Consider the game of Bingo. Each player has a card with some numbers written on it. The caller announces the numbers one at a time, in some order. The first player all of whose numbers have been called is the winner. What conditions should the cards satisfy?

Let $C_i$ be the set of numbers on the $i$th card. If $C_i \subseteq C_j$, then the player holding the $j$th card can never win, which is unsatisfactory.

We want to avoid the situation in which two players complete their cards at the same time and the prize is disputed. Suppose that $C_1$ and $C_2$ are the sets of numbers on any two cards, and $e \in C_1 \cap C_2$. If the numbers in $C_1 \cup C_2$ are called with $e$ last, then both players 1 and 2 would claim the prize (contrary to what we want), unless the prize has already been claimed by, say, player 3, where $C_3 \subseteq C_1 \cup C_2$ and $e \notin C_3$. In other words, the sets $C_i$ should be the cycles of a matroid!

More formally, this result can be stated as follows. We define a clutter (also known as an antichain or Sperner family) to be a family of sets, none of which contains another.

Theorem 2.4 Let $\mathcal{C}$ be a family of subsets of $E$. Then $\mathcal{C}$ is the family of cycles of a matroid if and only if it is a clutter with the following property: for any total ordering of $E$, there is a set $C \in \mathcal{C}$ whose greatest element is smaller than the greatest element of any other set in $\mathcal{C}$.

3 The greedy algorithm

One important property of matroids is that they are precisely the structures in which the greedy algorithm works successfully. The greedy algorithm is defined whenever we are trying to find the maximum value of a function. It operates in the most short-sighted way possible: it proceeds by taking steps, each of which increases the function by as much as possible. So it is prone to get trapped at a local maximum which is smaller than the absolute maximum.

More formally, suppose that we are given a set $X$ of points with a weight function $w$ from $X$ to the set of non-negative real numbers. We are also given a non-empty family $\mathcal{B}$ of $k$-element subsets of $X$. The weight of a subset $B$ is defined to be $\sum_{x \in B} w(x)$. The problem is to choose a member of $\mathcal{B}$ of maximum weight.

The greedy algorithm works as follows. Suppose that $e_1,\dots,e_{i-1}$ have already been chosen. The next point $e_i$ is chosen to maximise $w(e_i)$ subject to the condition that $\{e_1,\dots,e_i\}$ is contained in some member of $\mathcal{B}$. Clearly the algorithm succeeds in choosing $k$ points, which form a set in $\mathcal{B}$.

Theorem 3.1 The non-empty family $\mathcal{B}$ is the family of bases of a matroid if and only if, for any weight function $w$, the greedy algorithm chooses a member of $\mathcal{B}$ of maximum weight.

Proof Suppose that $\mathcal{B}$ is the set of bases of a matroid, and let $w$ be any weight function. Suppose that the greedy algorithm chooses successively $e_1,e_2,\dots,e_k$. Let us assume that there is a basis of greater weight than $\{e_1,\dots,e_k\}$, say $\{f_1,\dots,f_k\}$, where $w(f_1) \ge w(f_2) \ge \cdots \ge w(f_k)$. Thus, we have
\[ w(e_1) + \cdots + w(e_k) < w(f_1) + \cdots + w(f_k). \]
Choose the index $i$ as small as possible subject to the condition
\[ w(e_1) + \cdots + w(e_i) < w(f_1) + \cdots + w(f_i). \]
Then
\[ w(e_1) + \cdots + w(e_{i-1}) \ge w(f_1) + \cdots + w(f_{i-1}), \]
and so
\[ w(f_1) \ge \cdots \ge w(f_i) > w(e_i). \]
(Note that $i > 1$, since $e_1$ is the element of largest weight which can occur in a basis.) Now $\{f_1,\dots,f_i\}$ is an independent set with cardinality larger than that of $\{e_1,\dots,e_{i-1}\}$; so, for some $j$ with $1 \le j \le i$, the set $\{e_1,\dots,e_{i-1},f_j\}$ must be independent. But then the greedy algorithm should have chosen $f_j$ rather than $e_i$ at the $i$th stage, since $w(f_j) > w(e_i)$.

To show the reverse implication, we are given a family $\mathcal{B}$ which is not the set of bases of a matroid, and we must show how to choose a weight function which defeats the greedy algorithm. The statement that $\mathcal{B}$ is not a matroid means that there exist $B_1,B_2 \in \mathcal{B}$ and $x \in B_1 \setminus B_2$ such that, for no $y \in B_2 \setminus B_1$ is it true that $B_1 \setminus \{x\} \cup \{y\} \in \mathcal{B}$. Let $l = |B_1 \setminus B_2| = |B_2 \setminus B_1|$, and choose a number $a$ satisfying $1 - (1/l) < a < 1$. Now define the weight function $w$ as follows:
\[ w(e) = \begin{cases} 1 & \text{if } e \in B_1 \setminus \{x\}; \\ a & \text{if } e \in B_2 \setminus B_1; \\ 0 & \text{otherwise.} \end{cases} \]
Now the greedy algorithm first chooses all the points of $B_1 \setminus \{x\}$. Then by assumption it cannot choose any point of $B_2 \setminus B_1$; so the last point chosen (which may be $x$) has weight zero, and the weight of the chosen set is $k - 1$. On the other hand, the set $B_2$ has weight
\[ (k-l) + la > (k-l) + l(1 - (1/l)) = k - 1. \]
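To make the procedure concrete, here is a small illustrative sketch (in Python, not part of the original notes) of the greedy algorithm of Theorem 3.1, for a matroid presented by an independence oracle; the function names and the example matroid are assumptions chosen for illustration.

```python
def greedy_max_weight_basis(elements, weight, is_independent):
    """Greedy algorithm of Theorem 3.1: scan the elements in non-increasing
    order of weight, adding each one whenever the chosen set stays independent
    (equivalently, is still contained in some basis)."""
    chosen = set()
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen | {e}):
            chosen.add(e)
    return chosen

# Example: the uniform matroid U_{2,4}, whose independent sets are the sets
# of size at most 2; the greedy algorithm picks the two heaviest elements.
weights = {1: 5.0, 2: 3.0, 3: 2.0, 4: 1.0}
print(greedy_max_weight_basis(weights, weights.get, lambda s: len(s) <= 2))
# -> {1, 2}
```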

In the case of a graphic matroid, this result says that, if weights are assigned to the edges of the complete graph, then the greedy algorithm (in the form: at each stage, add the edge of largest weight subject to creating no cycle) is guaranteed to find a spanning tree of maximum weight. This result is usually stated in the form obtained by reversing the inequalities (replacing the weight $w(e)$ by $W - w(e)$ for all edges $e$, where $W$ is greater than the greatest weight). In this form the problem is known as the minimal spanning tree or minimal connector problem.

Here is another characterisation of matroids, whose statement and proof look somewhat like those of the above result about the greedy algorithm. Suppose that the set $X$ is totally ordered. Now any $k$-subset of $X$ can be written in non-increasing order: we write such a set as $\{e_1,\dots,e_k\}$ to indicate that $e_1 \ge \cdots \ge e_k$. We say that the $k$-set $\{e_1,\dots,e_k\}$ dominates $\{f_1,\dots,f_k\}$ if $e_i \ge f_i$ for $i = 1,\dots,k$.

Theorem 3.2 The non-empty family $\mathcal{B}$ of $k$-subsets of $X$ is the family of bases of a matroid if and only if, for any ordering of $X$, there is a member of $\mathcal{B}$ which dominates all others.

Proof Suppose first that $\mathcal{B}$ is a matroid. Let $X$ be ordered in any manner, say $X = \{x_1,\dots,x_n\}$. Choose a weight function $w$ which is an order-preserving map from $X$ into the non-negative real numbers. Let $B$ be the basis of greatest weight. We claim that $B$ dominates all bases. Let $B = \{e_1,\dots,e_k\}$, and suppose for a contradiction that there is a base $B' = \{f_1,\dots,f_k\}$ which is not dominated by $B$. Then $f_i > e_i$ for some $i$. Choose $i$ as small as possible subject to this. Then $f_1 \ge \cdots \ge f_i > e_i$. We know that $B$ is chosen by the greedy algorithm, which does not choose any of $f_1,\dots,f_i$ at stage $i$, even though they all have greater weight than $e_i$. So $\{e_1,\dots,e_{i-1},f_j\}$ is dependent for all $j \le i$. Now we have the same contradiction as in the earlier argument: since $\{f_1,\dots,f_i\}$ is an independent set with larger cardinality than $\{e_1,\dots,e_{i-1}\}$, it must contain an element $f_j$ such that $\{e_1,\dots,e_{i-1},f_j\}$ is independent.

Conversely, suppose that $\mathcal{B}$ has the ordering property. We must prove the exchange axiom. Let $B_1,B_2 \in \mathcal{B}$ and $x \in B_1 \setminus B_2$. If $|B_1 \setminus B_2| = 1$, say $B_2 \setminus B_1 = \{y\}$, then $B_1 \setminus \{x\} \cup \{y\} = B_2$, and there is nothing to prove. So suppose that $|B_1 \setminus B_2| > 1$. Now order the points of $X$ as follows: the greatest elements are those of $B_1 \cap B_2$ (in any order); then the points of $B_1 \setminus B_2$ other than $x$; then the points of $B_2 \setminus B_1$; next comes $x$; then the remaining points of $X$.

Now neither of $B_1$ and $B_2$ dominates the other, so there must be a set in $\mathcal{B}$ which dominates both. But the only sets dominating $B_1$ are those in which $x$ is replaced by an element of $B_2 \setminus B_1$. So the exchange axiom holds.

Remark This theorem gives us more insight into what a matroid looks like. For example, the dominance order on 2-subsets of a 4-set is shown in Figure 1.

[Figure 1: Dominance order. The Hasse diagram of the dominance order on 2-subsets of $\{1,2,3,4\}$: a chain $\{1,2\} < \{1,3\} < \{2,4\} < \{3,4\}$, with the incomparable pair $\{1,4\}$, $\{2,3\}$ lying between $\{1,3\}$ and $\{2,4\}$.]

We see that the only families of 2-sets of $\{1,2,3,4\}$ which are not matroids consist of $\{1,4\}$ and $\{2,3\}$ together with any subset of $\{\{1,2\},\{1,3\}\}$, or any permutation of one of these.

Remark The dominance condition looks very similar to our characterisation of matroid cycles in Theorem 2.4. Indeed, as Carrie Rutherford pointed out to me, the cycle property is a simple consequence. For take any ordering of the elements of the matroid, and let $B$ be the unique base which dominates all others. Let $e$ be the largest element which is not in $B$. (If no such $e$ exists, there are no cycles.) Let $A$ be the set of elements greater than $e$ (all of which lie in $B$). Then $A \cup \{e\}$ is dependent (else it is contained in a base not dominated by $B$), and so it contains a cycle including $e$. This is the unique cycle whose minimal element is greatest; this is just the dual of the property in Theorem 2.4.

Remark The second characterisation has led to a subtle but far-reaching generalisation of matroids by Borovik, Gel'fand, White and others, to the so-called Coxeter matroids. We do not discuss this here. See [1], for example.

4 The dual matroid

Let $M = (E,\mathcal{I})$ be a matroid, and let $\mathcal{B}$ be its set of bases. The dual matroid $M^*$ is the matroid on the set $E$ whose bases are precisely the complements of the bases of $M$. (Thus a set is independent in $M^*$ if and only if it is disjoint from some basis of $M$.) Checking that it is a matroid requires a little care. The set of all complements of bases of a matroid $M$ satisfies (MB): Lemma 2.2 is exactly what is needed to show this. For let $B_1^* = E \setminus B_1$ and $B_2^* = E \setminus B_2$, where $B_1$ and $B_2$ are bases of $M$. If $x \in B_1^* \setminus B_2^*$, then $x \in B_2 \setminus B_1$; so there exists $y \in B_1 \setminus B_2$ such that $B_1 \setminus \{y\} \cup \{x\}$ is a basis; its complement is $B_1^* \setminus \{x\} \cup \{y\}$.

For what comes later, we need to prove a technical result about the rank function of the dual of a matroid.

Lemma 4.1 Let $M^*$ be the dual of the matroid $M$ on $E$, and let $A$ be a subset of $E$ and $A^* = E \setminus A$. If $\rho$ and $\rho^*$ are the rank functions of $M$ and $M^*$ respectively, we have
\[ |A^*| - \rho^*(A^*) = \rho(E) - \rho(A) \quad\text{and}\quad \rho^*(E) - \rho^*(A^*) = |A| - \rho(A). \]

Proof Let $I$ be a maximal independent subset of $A$ (in $M$). Extend $I$ to a basis $I \cup J$ of $M$, so that $J \subseteq A^*$. If $K = A^* \setminus J$, then $K$ is an independent subset of $A^*$ (in $M^*$), since it is contained in the basis $E \setminus (I \cup J)$ of $M^*$. Hence
\[ \rho^*(A^*) \ge |K| = |A^*| - |J| = |A^*| - \rho(E) + \rho(A). \]
Dualising the argument gives
\[ \rho(A) \ge |A| - \rho^*(E) + \rho^*(A^*). \]
But we have
\[ |A| + |A^*| = |E| = \rho(E) + \rho^*(E), \]
so the two inequalities are equalities, and the equations of the lemma follow.

A cycle in a matroid $M$ is a set minimal with respect to being contained in no basis. Thus, a cycle in the dual $M^*$ is a set minimal with respect to being disjoint from no basis of $M$, that is, meeting every basis.

What does the dual of the cycle matroid of a graph $G$ look like? Its cycles are easily described. Assuming that $G$ is connected, a cycle of $M^*$ is a set of edges minimal with respect to meeting every spanning tree of $G$, that is, a set minimal with respect to the property that its removal disconnects $G$. Such a set is a cutset of $G$; so the dual of the cycle matroid of $G$ is called the cutset matroid of $G$.

What about the dual of a vector matroid? Suppose that the vectors representing $M$ are the columns of an $m \times n$ matrix $A$. We may suppose that these vectors span $F^m$; in other words, $A$ has rank $m$, and is the matrix of a linear transformation $T$ from $F^n$ onto $F^m$. Let $U$ be the kernel of $T$, and $B$ the matrix of a linear embedding of $U$ into $F^n$. (Thus $B$ is an $n \times (n-m)$ matrix whose columns form a basis for $U$; the matrix $B$ has rank $n-m$, and we have $AB = O$.) Consider the $(n-m) \times n$ matrix $B^\top$. The columns of $B^\top$ are indexed by the same set as the columns of $A$, and we claim that $B^\top$ represents the dual matroid $M^*$.

To see this, take a set of $m$ columns which form a basis for the space $F^m$. For convenience of notation, we suppose that they are the first $m$ columns. Then we can apply elementary row operations to $A$ (these do not affect the linear independence or dependence of sets of columns, and so do not change the matroid represented by $A$) so that the first $m$ columns form an $m \times m$ identity matrix $I$. Thus, $A$ has the form $[I \; X]$, where $X$ is $m \times (n-m)$. Consider the matrix
\[ B = \begin{bmatrix} -X \\ I \end{bmatrix} \]
of size $n \times (n-m)$. Clearly $B$ has rank $n-m$. Also, $AB = O$, so the columns of $B$ lie in the null space of $A$; considering dimensions, we see that the columns of $B$ form a basis for the null space of $A$. Also, it is clear that the last $n-m$ columns of $B^\top = [-X^\top \; I]$ form a basis for the matroid represented by $B^\top$. We have shown that the complement of any basis of $M$ is a basis of the matroid represented by $B^\top$. Reversing the argument shows the converse implication. So $B^\top$ represents the dual matroid $M^*$.
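The matrix manipulation in this argument is easy to carry out mechanically. The sketch below is an illustration only (not from the original notes), using numpy over the reals; the same recipe works over any field, with the arithmetic done in that field. It passes from a representing matrix in the standard form $[I \; X]$ to the matrix $[-X^\top \; I]$ representing the dual matroid.

```python
import numpy as np

def dual_representation(A):
    """Given an m x n matrix A in the standard form [I | X] (its columns
    represent M), return the (n-m) x n matrix [-X^T | I] whose columns
    represent the dual matroid M*."""
    m, n = A.shape
    X = A[:, m:]                              # assumes A = [I | X]
    B_t = np.hstack([-X.T, np.eye(n - m)])    # this is B^T in the text
    assert np.allclose(A @ B_t.T, 0)          # rows of B^T span the null space of A
    return B_t

# The matroid of the four columns of A below; its dual is represented by B^T.
A = np.array([[1., 0., 1., 1.],
              [0., 1., 0., 1.]])
print(dual_representation(A))
# [[-1. -0.  1.  0.]
#  [-1. -1.  0.  1.]]
```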

5 Restriction and contraction

A loop is an element $e$ of a matroid such that $\{e\}$ is not independent. Equivalently, $e$ is an element which lies in no independent set, or in no maximal independent set. The terminology arises from the cycle matroid of a graph, where loops are just loops in the graph-theoretic sense. In a vector matroid, the index $i$ is a loop if and only if the $i$th vector (or column of the matrix) is zero.

If $e$ is not a loop, we define the contraction $M/e$ as follows: the elements are those of $E \setminus \{e\}$, and a set $I$ is independent in $M/e$ if and only if $I \cup \{e\}$ is independent in $M$. The name arises from the interpretation in cycle matroids: if $M$ is the cycle matroid of $G$, then $M/e$ is the cycle matroid of the graph $G/e$ obtained by contracting the edge $e$ (that is, removing it and identifying the two distinct vertices which were its ends). If $e$ is a non-loop in a vector matroid (corresponding to a non-zero vector $v$), then $M/e$ is a vector matroid obtained by projecting the other vectors onto the factor space $V/\langle v \rangle$. In matrix terms, assuming that $e$ is the first coordinate, apply elementary row operations to convert the first column to $(1,0,\dots,0)^\top$, and then delete the first row and column to obtain a matrix representing $M/e$.

These concepts dualise. An element $e$ is a coloop if it is contained in every basis of $M$. A coloop in a connected graph is an edge whose removal disconnects the graph. (Such an edge is commonly called a bridge or isthmus.) If $e$ is not a coloop, we define the restriction $M\backslash e$ to have element set $E \setminus \{e\}$, a set being independent in $M\backslash e$ if and only if it is independent in $M$. In the cycle matroid of $G$, this operation simply corresponds to deleting the edge $e$; in a vector matroid, to deleting the corresponding vector or column.

Proposition 5.1 (a) $e$ is a loop in $M$ if and only if $e$ is a coloop in $M^*$, and vice versa.

(b) If $e$ is not a loop in $M$, then $(M/e)^* = M^*\backslash e$.

(c) If $e$ is not a coloop in $M$, then $(M\backslash e)^* = M^*/e$.

Proof We see that $e$ lies in every basis of $M$ (that is, $e$ is a coloop in $M$) if and only if it lies in no basis of $M^*$ (that is, $e$ is a loop in $M^*$), and dually.

Suppose that $e$ is a non-loop in $M$. The bases of $M/e$ are the bases of $M$ containing $e$, with $e$ removed. Their complements (in $E \setminus \{e\}$) are the bases of $M^*$ not containing $e$, that is, the bases of $M^*\backslash e$. So $(M/e)^* = M^*\backslash e$. The other statement is proved dually.
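In matrix terms both operations are equally mechanical. The sketch below (illustrative only, not from the original notes; it works over the reals with numpy, whereas over a finite field the row reduction would use field arithmetic) deletes or contracts a column of a representing matrix, following the recipe just described.

```python
import numpy as np

def delete(A, e):
    """Restriction M\\e for a vector matroid: drop column e."""
    return np.delete(A, e, axis=1)

def contract(A, e, tol=1e-9):
    """Contraction M/e for a vector matroid, e a non-loop: row-reduce so that
    column e becomes (1,0,...,0)^T, then forget the first row and column e."""
    A = A.astype(float).copy()
    pivot = int(np.argmax(np.abs(A[:, e])))
    if abs(A[pivot, e]) < tol:
        raise ValueError("column e is zero, i.e. e is a loop")
    A[[0, pivot]] = A[[pivot, 0]]        # bring a non-zero entry to row 0
    A[0] /= A[0, e]                      # scale the pivot to 1
    for i in range(1, A.shape[0]):
        A[i] -= A[i, e] * A[0]           # clear the rest of column e
    return np.delete(A[1:], e, axis=1)   # delete row 0 and column e

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
print(delete(A, 2))    # M\3: the columns (1,0) and (0,1) remain
print(contract(A, 2))  # M/3: [[-1.  1.]], so elements 1 and 2 become parallel
```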

6 The Tutte polynomial

The Tutte polynomial of a matroid can be regarded as a generalisation of the chromatic polynomial of a graph. We sketch this in order to see the issues involved in its definition.

Let $G$ be a graph. A vertex-colouring of $G$ is a map $f$ from the vertex set of $G$ to a set $C$ of colours with the property that, if $v$ and $w$ are joined, then $f(v) \ne f(w)$. Let $P(G;\lambda)$ denote the number of vertex-colourings of $G$ using a set of $\lambda$ colours. It is clear that $P(G;\lambda)$ does not depend on the set of colours. What is less clear is that it is the evaluation at $\lambda$ of a polynomial with integer coefficients. To see this, we observe:

If $G$ has $n$ vertices and no edges, then $P(G;\lambda) = \lambda^n$.

If $G$ contains a loop, then $P(G;\lambda) = 0$.

If $e$ is an edge which is not a loop, then $P(G;\lambda) = P(G\backslash e;\lambda) - P(G/e;\lambda)$, where $G\backslash e$ and $G/e$ are the graphs obtained from $G$ by deleting and contracting $e$, respectively.

The first and second assertions are clear. For the third, consider all colourings of $G\backslash e$, where $e = \{x,y\}$, and observe that they can be partitioned into two subsets: colourings $f$ with $f(x) = f(y)$ (counted by $P(G/e;\lambda)$), and those with $f(x) \ne f(y)$ (counted by $P(G;\lambda)$). Now the fact that $P(G;\lambda)$ is polynomial in $\lambda$ follows by an easy induction on the number of edges.

The point here is that the three conditions above enable us to calculate $P(G;\lambda)$ by applying a sequence of edge deletions and contractions; but they give no guarantee that a different sequence of deletions and contractions will lead to the same value. The guarantee comes from the fact that $P(G;\lambda)$ counts something, independently of the recurrence.

Similar considerations apply to the Tutte polynomial. Its most important properties are those which allow us to calculate it by means of a recurrence similar to that for $P(G;\lambda)$. But we must adopt a different definition in order to show that it is well-defined.

In what follows, as in most enumeration theory, we adopt the conventions that $u^0 = 1$ for any $u$ (including $u = 0$), and $0^n = 0$ for any positive integer $n$.
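The three observations above translate directly into a (highly inefficient, but faithful) recursive computation. The sketch below is illustrative only and not part of the original notes; it assumes sympy is available for the symbolic variable.

```python
from sympy import Symbol, expand

lam = Symbol('lambda')

def chromatic_polynomial(vertices, edges):
    """P(G;lambda) by the deletion-contraction recurrence P(G) = P(G\\e) - P(G/e).
    `vertices` is a set; `edges` is a list of (u, v) pairs, with loops (u == v)
    and repeated edges allowed, as in the cycle matroid."""
    if any(u == v for u, v in edges):
        return 0                                   # a loop forbids every colouring
    if not edges:
        return lam ** len(vertices)                # isolated vertices
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic_polynomial(vertices, rest)
    # contract e = {u, v}: identify v with u in the remaining edges
    contracted = chromatic_polynomial(vertices - {v},
                                      [(u if a == v else a, u if b == v else b)
                                       for a, b in rest])
    return expand(deleted - contracted)

# A triangle: lambda*(lambda-1)*(lambda-2).
print(chromatic_polynomial({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))
# -> lambda**3 - 3*lambda**2 + 2*lambda
```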

Let $M = (E,\mathcal{I})$ be a matroid, with rank function $\rho$. The Tutte polynomial $T(M;x,y)$ is the polynomial in $x$ and $y$ (with integer coefficients) given by
\[ T(M;x,y) = \sum_{A \subseteq E} (x-1)^{\rho(E)-\rho(A)} (y-1)^{|A|-\rho(A)}. \]

Proposition 6.1 (a) $T(\emptyset;x,y) = 1$, where $\emptyset$ is the empty matroid.

(b) If $e$ is a loop, then $T(M;x,y) = y\,T(M\backslash e;x,y)$.

(c) If $e$ is a coloop, then $T(M;x,y) = x\,T(M/e;x,y)$.

(d) If $e$ is neither a loop nor a coloop, then $T(M;x,y) = T(M\backslash e;x,y) + T(M/e;x,y)$.

Proof (a) is trivial; the other parts are proved by careful analysis of the definition. All the arguments are similar and similarly tedious. We prove (b) here and leave the rest as an exercise.

Suppose that $e$ is a loop. Every subset $A$ of $E \setminus \{e\}$ corresponds to a pair of subsets $A$, $A \cup \{e\}$ of $E$. So each term in the sum for $T(M\backslash e)$ gives rise to two terms in the sum for $T(M)$. Let $\rho'$ denote the rank function of $M\backslash e$. Then the following hold:
\[ |E \setminus \{e\}| = |E| - 1; \quad \rho'(E \setminus \{e\}) = \rho(E); \quad |A \cup \{e\}| = |A| + 1; \quad \rho(A) = \rho(A \cup \{e\}) = \rho'(A). \]
Let $t$ be the term in $T(M\backslash e)$ corresponding to the set $A$. Then the term in $T(M)$ corresponding to $A$ is $t$, while the term corresponding to $A \cup \{e\}$ is $(y-1)t$, giving a contribution of $yt$ to $T(M)$. So $T(M) = y\,T(M\backslash e)$.

As an application, we show that the chromatic polynomial of a graph is, up to normalisation, a specialisation of the Tutte polynomial.

Proposition 6.2 For any graph $G$,
\[ P(G;\lambda) = (-1)^{\rho(G)} \lambda^{\kappa(G)} \, T(G;1-\lambda,0), \]
where $\kappa(G)$ is the number of connected components of $G$ and $\rho(G) + \kappa(G)$ is the number of vertices.

Proof Let $f(G;\lambda)$ denote the expression on the right-hand side in the proposition. We verify that it satisfies the same recurrence relation and initial conditions as the chromatic polynomial.

If $G$ has $n$ vertices and no edges, then $\kappa(G) = n$, $\rho(G) = 0$, and $T(G;x,y) = 1$, so the initialisation is correct. If $G$ has a loop, then $f(G;\lambda) = 0$ by (b), which is also correct. If $e$ is neither a loop nor a coloop, then contracting $e$ reduces $\rho$ by one without changing $\kappa$, while deleting $e$ changes neither $\rho$ nor $\kappa$, so the inductive condition holds.

The most interesting case is that where $e$ is a coloop or bridge, so that $G\backslash e$ has one more component than $G$. Let $e = \{x,y\}$. Then a fraction $1/\lambda$ of the colourings of $G\backslash e$ put any given colour on $x$, and the same proportion on $y$; these events are independent, since $x$ and $y$ lie in different components of this graph. Thus, a proportion $1/\lambda$ of the colourings give $x$ and $y$ the same colour (and induce colourings of $G/e$), while the remaining proportion $(\lambda-1)/\lambda$ give $x$ and $y$ different colours (and so give colourings of $G$). Thus $P(G;\lambda) = (\lambda-1)P(G/e;\lambda)$. This agrees with the recurrence for $f$ in this case, since contracting $e$ reduces $\rho$ by one without changing $\kappa$.

Many other graph invariants related to trees and forests, flows, percolation, reliability, and knot polynomials are evaluations or specialisations of the Tutte polynomial. Two of these are obvious from the definition:

Proposition 6.3 (a) $T(M;1,1)$ is the number of bases of $M$;

(b) $T(M;2,1)$ is the number of independent sets in $M$.

Proof The only terms contributing to $T(M;1,1)$ are those with $|A| = \rho(A) = \rho(E)$, that is, bases of $M$. Similarly, the only terms contributing to $T(M;2,1)$ are those with $|A| = \rho(A)$, that is, independent sets.

A useful tool for identifying specialisations of the Tutte polynomial is obtained by considering a more general form of the Tutte polynomial. We could ask whether there exists a polynomial $T(M;x,y,u,v)$ satisfying the recurrence relation

(a) $T(\emptyset;x,y,u,v) = 1$.

(b) If $e$ is a loop, then $T(M;x,y,u,v) = y\,T(M\backslash e;x,y,u,v)$.

(c) If $e$ is a coloop, then $T(M;x,y,u,v) = x\,T(M/e;x,y,u,v)$.

(d) If $e$ is neither a loop nor a coloop, then $T(M;x,y,u,v) = u\,T(M\backslash e;x,y,u,v) + v\,T(M/e;x,y,u,v)$.

(A direct evaluation of the subset expansion is sketched below, before we return to this recurrence.)
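The sketch below (illustrative only, not part of the original notes) evaluates the subset expansion of $T(M;x,y)$ directly, with the rank function supplied as an oracle, and checks Proposition 6.3 on the uniform matroid $U_{2,4}$; sympy is assumed to be available.

```python
from itertools import combinations
from sympy import symbols, expand

x, y = symbols('x y')

def tutte_polynomial(E, rank):
    """T(M;x,y) computed straight from the definition: a sum over all subsets
    A of E of (x-1)^(rho(E)-rho(A)) * (y-1)^(|A|-rho(A))."""
    rho_E = rank(E)
    T = 0
    for r in range(len(E) + 1):
        for A in combinations(E, r):
            rho_A = rank(A)
            T += (x - 1) ** (rho_E - rho_A) * (y - 1) ** (len(A) - rho_A)
    return expand(T)

# Uniform matroid U_{2,4}: the rank of a subset A is min(|A|, 2).
T = tutte_polynomial((1, 2, 3, 4), lambda A: min(len(A), 2))
print(T)                          # x**2 + 2*x + y**2 + 2*y
print(T.subs({x: 1, y: 1}))       # 6  = number of bases (2-subsets of a 4-set)
print(T.subs({x: 2, y: 1}))       # 11 = number of independent sets (size <= 2)
```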

As before, it is clear that there is at most one solution to this recurrence. In fact there is one, which can be obtained from the usual Tutte polynomial by a simple renormalisation:

Proposition 6.4 The unique solution to the above recurrence is given by
\[ T(M;x,y,u,v) = u^{|E|-\rho(E)} v^{\rho(E)} \, T\!\left(M;\frac{x}{v},\frac{y}{u}\right). \]

Proof This is easily checked using the facts that if $e$ is a non-loop, then the passage from $M$ to $M/e$ reduces both the number of elements and the rank by 1; if $e$ is a non-coloop, then the passage from $M$ to $M\backslash e$ reduces the number of elements by 1 but does not change the rank.

The Tutte polynomial of the dual matroid is obtained very simply:

Proposition 6.5 $T(M^*;x,y) = T(M;y,x)$.

Proof Lemma 4.1 shows that the contribution of a set $A$ to $T(M;y,x)$ is identical to the contribution of the complementary set $A^* = E \setminus A$ to $T(M^*;x,y)$.

7 Codes

We have seen that an $m \times n$ matrix $A$ over a field $F$ gives rise to a vector matroid on the set $\{1,\dots,n\}$. We may assume that the matrix has rank $m$ (so that the vectors span the space $V$ in which they live). We examine the effect of various operations on $A$.

Elementary row operations have the effect of changing the basis in the space $V$ without altering the vectors used for the representation. So the matroid is unchanged.

Column permutations simply re-name the vectors, and so replace the matroid by an isomorphic one.

Multiplying columns by non-zero scalars changes the vectors by scalar multiples; this does not affect whether a collection of vectors is linearly independent, so again the matroid is unchanged.

From the matrix $A$, we can also obtain an $m$-dimensional subspace of $F^n$, namely the row space of $A$. Such a subspace is called a linear code, or code for short. We do not here go into the applications of codes to information transmission (in the case where $F$ is a finite field), though some of the most important concepts will be defined shortly (and there is further discussion in Section 9).

A code, then, is not a subspace of an arbitrary vector space, but a subspace of $F^n$ (that is, a subspace of a vector space with a prescribed basis). How do the operations on $A$ affect the code? Elementary row operations just change the basis for the code. Column permutations, and multiplication of columns by scalars, have the effect of changing the prescribed basis for $F^n$ in a limited way: we can permute basis vectors or multiply them by non-zero scalars. As we will see, the important properties of the code are not changed. So we call two codes equivalent if they are related in this way.

These operations generate a certain equivalence relation on the set of all $m \times n$ matrices of rank $m$ over $F$. We see that each equivalence class of matrices corresponds to a unique vector matroid (up to re-labelling and change of basis in the ambient space), and also to a unique code (up to a natural notion of equivalence). So vector matroids and linear codes (up to the appropriate equivalence) correspond bijectively. Thus, it comes as no surprise that properties of the codes and matroids are very closely related.

We now define a few important concepts from coding theory. The motivation is that the field $F$ is regarded as an alphabet, and elements (or words) in the code are used for sending messages over a noisy communication channel. We require that any two codewords differ in sufficiently many positions that, even if a few errors occur, the correct codeword can be recovered (as the codeword which resembles the received word most closely). Since the number of positions in which $v$ and $w$ differ is equal to the number of positions where $v - w$ has a non-zero entry, we are led to the following definitions.

A word of length $n$ over $F$ is an element of $F^n$. The weight $\mathrm{wt}(c)$ of a word $c$ is the number of coordinates in which $c$ has non-zero entries. An important parameter of a code $C$ is its minimum weight, the smallest weight of a non-zero word in $C$.

The weight enumerator of a code $C$ of length $n$ is the polynomial
\[ W_C(x,y) = \sum_{c \in C} x^{n-\mathrm{wt}(c)} y^{\mathrm{wt}(c)} = \sum_{i=0}^{n} A_i x^{n-i} y^i, \]
where $A_i$ is the number of words of weight $i$ in $C$. It is really a polynomial in one variable, since we lose no information by putting $x = 1$; the form given is homogeneous (every term has degree $n$).

We now come to the theorem of Curtis Greene [3] asserting that the weight enumerator of a code is a specialisation of the Tutte polynomial of the corresponding matroid.

Theorem 7.1 Let $C$ be a code over a field with $q$ elements, and $M$ the corresponding vector matroid. Then
\[ W_C(x,y) = y^{n-\dim(C)} (x-y)^{\dim(C)} \, T\!\left(M;\frac{x+(q-1)y}{x-y},\frac{x}{y}\right). \]

Proof This result is an application of Proposition 6.4. We have to describe the codes $C'$ and $C''$ corresponding to deletion and contraction of the $i$th coordinate in the matroid, and verify the following:

(a) $W_C(x,y) = 1$, if $C$ is the empty code.

(b) If the $i$th coordinate is a loop, then $W_C(x,y) = x\,W_{C'}(x,y)$.

(c) If the $i$th coordinate is a coloop, then $W_C(x,y) = (x+(q-1)y)\,W_{C''}(x,y)$.

(d) If the $i$th coordinate is neither a loop nor a coloop, then $W_C(x,y) = y\,W_{C'}(x,y) + (x-y)\,W_{C''}(x,y)$.

Then Proposition 6.4 shows that
\[ W_C(x,y) = T(M;\,x+(q-1)y,\,x,\,y,\,x-y) = y^{n-\dim(C)} (x-y)^{\dim(C)} \, T\!\left(M;\frac{x+(q-1)y}{x-y},\frac{x}{y}\right), \]
as required.

Clearly, the $i$th coordinate of $C$ is a loop if and only if the $i$th column of the matrix $A$ is zero (that is, every codeword has 0 in the $i$th position). Dually, the $i$th coordinate is a coloop if and only if, after applying elementary row operations so that the $i$th column of $A$ is $(1,0,\dots,0)^\top$, all other entries in the first row of $A$ are zero. Put another way, this means that $C$ is the direct sum of the code $F^1$ and a code of length $n-1$.

If the $i$th coordinate is not a coloop, we define the punctured code $C'$ to be the code obtained from $C$ by deleting the $i$th coordinate from every codeword in $C$. If the $i$th coordinate is not a loop, we define the shortened code $C''$ to be obtained as follows: take all codewords which have entry 0 in the $i$th coordinate (these will form a subcode of codimension 1 in $C$) and then delete the $i$th coordinate. It is easily checked that the punctured and shortened codes correspond to the restriction $M\backslash i$ and the contraction $M/i$ respectively.

Now we check the recurrence relations. Part (a) is trivial.

Consider (b). If the $i$th coordinate is a loop, then every codeword $c$ has zero there, and the contribution of $c$ to $W_C(x,y)$ is just $x$ times its contribution to $W_{C'}(x,y)$. So (b) holds.

Suppose that the $i$th coordinate is a coloop. Then each codeword $c$ of $C''$ corresponds to $q$ codewords of $C$, obtained by putting each possible symbol of the alphabet in the $i$th coordinate. Of these, one (obtained by writing in 0) has the same weight as $c$, and the remaining $q-1$ have weight one greater. So (c) holds.

Finally, suppose that the $i$th coordinate is neither a loop nor a coloop. Write $W_C = W_C^{(1)} + W_C^{(2)}$, where $W_C^{(1)}$ is the sum of the terms $x^{n-\mathrm{wt}(c)} y^{\mathrm{wt}(c)}$ corresponding to words $c$ having zero in the $i$th coordinate, and $W_C^{(2)}$ the sum of those corresponding to words having non-zero entry in the $i$th coordinate. Then by definition
\[ W_{C''} = W_C^{(1)}/x \quad\text{and}\quad W_{C'} = W_C^{(1)}/x + W_C^{(2)}/y, \]
from which we deduce that
\[ W_C = W_C^{(1)} + W_C^{(2)} = y\,W_{C'} + (x-y)\,W_{C''}, \]
as required for (d).

Note that, if $X = (x+(q-1)y)/(x-y)$ and $Y = x/y$, then $(X-1)(Y-1) = q$. So the weight enumerator is an evaluation of the Tutte polynomial along a particular hyperbola in the Tutte plane.
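As a sanity check on Theorem 7.1, the sketch below (illustrative only, not part of the original notes; it assumes sympy and uses the binary even-weight code of length 3 as its example) computes the weight enumerator of a small binary code both directly and via the Tutte polynomial of its column matroid, and confirms that the two agree.

```python
from itertools import combinations, product
from sympy import symbols, expand, simplify

x, y, X, Y = symbols('x y X Y')

def gf2_rank(vectors):
    """Rank over GF(2) of a list of 0/1 tuples (bit-mask Gaussian elimination)."""
    basis = {}                                    # leading bit -> reduced vector
    for v in vectors:
        r = int(''.join(str(b) for b in v), 2)
        while r:
            lead = r.bit_length() - 1
            if lead not in basis:
                basis[lead] = r
                break
            r ^= basis[lead]
    return len(basis)

def tutte(cols):
    """Tutte polynomial (in X, Y) of the vector matroid on the given columns."""
    E, rho = range(len(cols)), lambda A: gf2_rank([cols[i] for i in A])
    rE = rho(list(E))
    return expand(sum((X - 1) ** (rE - rho(A)) * (Y - 1) ** (len(A) - rho(A))
                      for r in range(len(cols) + 1) for A in combinations(E, r)))

def weight_enumerator(rows):
    """W_C(x,y) of the binary code generated by `rows`, summed over codewords."""
    k, n = len(rows), len(rows[0])
    W = 0
    for coeffs in product([0, 1], repeat=k):
        word = [sum(c * row[j] for c, row in zip(coeffs, rows)) % 2 for j in range(n)]
        W += x ** (n - sum(word)) * y ** sum(word)
    return expand(W)

# The even-weight code of length 3 (the dual of the repetition code), q = 2.
G = [(1, 0, 1), (0, 1, 1)]
cols = [tuple(row[j] for row in G) for j in range(3)]
n, k = 3, gf2_rank(cols)
greene = y ** (n - k) * (x - y) ** k * tutte(cols).subs({X: (x + y) / (x - y), Y: x / y})
print(simplify(greene - weight_enumerator(G)))    # 0: Theorem 7.1 checks out
```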

From this result, we can deduce the MacWilliams relation, which shows that the weight enumerator of the dual code $C^\perp$ can be calculated from that of $C$.

Theorem 7.2
\[ W_{C^\perp}(x,y) = \frac{1}{|C|}\, W_C(x+(q-1)y,\;x-y). \]

Proof Since $C^\perp$ has dimension $n - \dim(C)$ and corresponds to the dual matroid $M^*$, we have
\[ W_{C^\perp}(x,y) = y^{\dim(C)} (x-y)^{n-\dim(C)} \, T\!\left(M;\frac{x}{y},\frac{x+(q-1)y}{x-y}\right). \]
On the other hand, we have
\[ \frac{1}{|C|}\, W_C(x+(q-1)y,\;x-y) = q^{-\dim(C)} (x-y)^{n-\dim(C)} (qy)^{\dim(C)} \, T\!\left(M;\frac{qx}{qy},\frac{x+(q-1)y}{x-y}\right). \]
The two expressions are equal.

8 Tutte polynomial and bases

The Tutte polynomial $T(M;x,y)$ of a matroid $M$ has the property that $T(M;1,1)$ is the number of bases of the matroid. (This is clear from the formula for $T$ as a sum over subsets. When we substitute $x = y = 1$, all those terms which have a factor $(x-1)$ or $(y-1)$ vanish, and we are left only with subsets $A$ such that $|A| = \rho(A) = \rho(E)$, that is, bases.) This means that the sum of all the coefficients in the Tutte polynomial is equal to the number of bases.

Crapo [2], who first extended Tutte's definition from graphs to matroids, observed that the Tutte polynomial has an alternative definition as a sum over bases. The definition is somewhat complicated, however, depending on a total ordering of $E$.

Let $B$ be any base of $M$. For each $y \notin B$, there is a unique cycle containing $y$ and contained in $\{y\} \cup B$, called the fundamental cycle of $y$ (with respect to the base $B$). To show this, note that $B \cup \{y\}$ contains a cycle (since it is not independent), whereas $B$ contains no cycle; so there is at least one cycle in $B \cup \{y\}$ containing $y$. If there were more than one, say $C_1$ and $C_2$, then (MC2) would give the existence of a cycle contained in $C_1 \cup C_2$ not containing $y$, hence contained in $B$, a contradiction.

Dually, for all $x \in B$, there is a unique cocycle (that is, cycle of the dual matroid) containing $x$ and contained in $\{x\} \cup (E \setminus B)$, called the fundamental cocycle of $x$ (with respect to $B$); it is just the fundamental cycle of $x$ with respect to the base $E \setminus B$ of the dual matroid.

For example, if the matroid $M$ is graphic and comes from a connected graph $G$, then a base $B$ is the edge set of a spanning tree of $G$. The fundamental cycle containing an edge $y \notin B$ is the unique cycle in the graph with edge set $B \cup \{y\}$. Dually, if $x \in B$, then removal of $x$ disconnects the spanning tree $B$ into two components; the fundamental cocycle consists of all edges of $G$ which have one end in each component.

Now suppose that the ground set $E$ is totally ordered, and let $B$ be any base. We say that $x \in B$ is internally active if it is the greatest element in its fundamental cocycle; and $y \notin B$ is externally active if it is the greatest element in its fundamental cycle. The internal activity of $B$ is the number of internally active elements, and the external activity is the number of externally active elements. Now Crapo showed:

Theorem 8.1 The coefficient of $x^i y^j$ in $T(M;x,y)$ is equal to the number of bases of $M$ with internal activity $i$ and external activity $j$.

A remarkable feature of this theorem is that the number of bases with given internal and external activity is independent of the ordering of the elements, although of course the internal and external activity of any given base will change.

We saw in Theorem 3.2 that, for any ordering of $E$, there is a unique base $B$ which dominates all others, in the sense that for all $i \le \rho(E)$, the $i$th greatest element of $B$ is at least as large as the $i$th greatest element of any other base. We call it the last base. Dually, there is a first base, whose $i$th smallest element is at least as small as the $i$th smallest element of any other base. Now the internal and external activity of these bases can be calculated:

Proposition 8.2 (a) The internal activity of the first base is the number of coloops of $M$, while its external activity is equal to $|E| - \rho(E)$.

(b) The internal activity of the last base is $\rho(E)$, while its external activity is equal to the number of loops of $M$.

Proof (a) Let $B$ be the first base and $y \notin B$. We show that $y$ is the greatest element in its fundamental cycle $C$. Suppose not: that is, there exists $x \in B \cap C$ with $x > y$. We can assume that $x$ is the greatest element in $C$. Let $X$ consist of all elements of $B$ smaller than $x$, together with $y$. Then $X$ is independent, since it contains no cycle; so $X$ is contained in a base $B'$. Now we have a contradiction, since the elements of $B$ and $B'$ less than $y$ agree, but $y$ is smaller than the corresponding element of $B$.

Dually, if $x \in B$, then $x$ is the smallest element in its fundamental cocycle. This follows on dualising and reversing the order, when $E \setminus B$ is the new first base, since then $x$ is the greatest element in its fundamental cycle with respect to $E \setminus B$ in the reversed order.

Now it is clear that, with respect to the first base $B$, every element outside $B$ is externally active, while an element of $B$ is internally active if and only if it is both smallest and largest in its fundamental cocycle, that is, its fundamental cocycle consists of a single element, that is, it is a coloop.

(b) Dual.

For a trivial example, the matroid on $\{1,2,3\}$ with two bases of size 2 has no loops and one coloop, so its Tutte polynomial is $x^2 + xy$. This matroid is representable over $\mathrm{GF}(2)$, by the matrix
\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}; \]
Theorem 7.1 shows that the weight enumerator of the corresponding code is
\[ y(x-y)^2 \left[ \left(\frac{x+y}{x-y}\right)^2 + \frac{x+y}{x-y}\cdot\frac{x}{y} \right] = x^3 + x^2y + xy^2 + y^3. \]
In fact, we see directly that the code has one word of each possible weight $0,1,2,3$.

9 Trellis decoding

We discuss trellis decoding here for two reasons. First, it provides a general method of decoding, which has some advantages over conventional methods such as syndrome decoding. Second, it poses some questions which have a tantalising similarity to aspects of the Tutte polynomial, which we do not yet understand.

Conventional coding works as follows. Let $C$ be a linear code of length $n$ and dimension $k$ over a field $F$. In order to transmit information (which we suppose is presented in blocks, or $k$-tuples of elements of $F$), we first encode it by a one-to-one linear transformation from $F^k$ to the subspace $C$ of $F^n$. The information will be transmitted more slowly, since it will take $n$ units of time to send a block, rather than $k$ if we sent the information unencoded. (We say that the code has rate $k/n$.) We employ this redundancy for error correction.

Classically, we assume that the received information is an $n$-tuple of elements of $F$ (but not necessarily a codeword, since errors may have occurred). Assuming that only a small number of errors are likely, the received word will hopefully be nearer to the transmitted codeword than to any other codeword. (Precisely, this is the case if at most $(d-1)/2$ symbols are received incorrectly, where $d$ is the minimum distance of the code.) So we search through the codewords and select the one nearest to the received word (in terms of Hamming distance). This strategy is nearest-neighbour decoding.

For example, suppose that $F = \mathrm{GF}(2)$, and let $C$ be the repetition code of length 3, consisting of the two codewords 000 and 111. Assuming that at most one bit is altered during transmission, we may conclude that if a word with more zeros than ones is received, then 000 was sent, while if the received word has more ones than zeros, then 111 was transmitted.

In practice, however, what is received is an electrical voltage which varies continuously, and is sampled at appropriate time intervals to determine the symbols of the received word. The simplest strategy is to round each voltage level to the nearest value which corresponds to a symbol in $F$. For example, suppose that the symbols 0 and 1 are represented by voltages 0 and 1 respectively. Using the above repetition code, suppose that we receive the voltages 0.4, 0.4, 1.4. If we round and then decode, we obtain 000. But it appears that 111 might be a better choice. Indeed, the Euclidean distance from the received vector to $(0,0,0)$ is $\sqrt{0.4^2+0.4^2+1.4^2} = \sqrt{2.28} \approx 1.51$, whereas the Euclidean distance to $(1,1,1)$ is $\sqrt{0.6^2+0.6^2+0.4^2} = \sqrt{0.88} \approx 0.94$. (The choice of Euclidean distance is not arbitrary. Under certain technical assumptions on the errors, namely that they are independent Gaussian random variables with mean zero and constant variance, minimum Euclidean distance corresponds to maximum likelihood.)

Trellis decoding is a method of decoding which finds the codeword at minimum Euclidean distance from the received word directly, avoiding the errors caused by rounding as above.

A trellis for a code $C$ of length $n$ is a graph with the following properties:

The vertices lie in $n+1$ disjoint layers $L_0,\dots,L_n$, where $L_0$ and $L_n$ each contain just one vertex (the source $s$ and target $t$ respectively).

The edges are directed, and each edge goes from a vertex in layer $L_i$ to one in layer $L_{i+1}$, for some $i$.

Each edge has a label, which is an element of $F$.

There is a bijection between the codewords of $C$ and the paths from $s$ to $t$, so that the $n$-tuple of edge labels on any path is equal to the corresponding codeword.

For example, Figure 2 is a trellis for the repetition code of length 3. We assume that edges are directed from left to right.

[Figure 2: A trellis for the repetition code of length 3: two disjoint paths from $s$ to $t$, the upper with all edges labelled 0 and the lower with all edges labelled 1.]

Now suppose that $\alpha(c)$ is the voltage level corresponding to the symbol $c \in F$. Assume that $(x_1,\dots,x_n)$ is the received $n$-tuple of voltage levels. An edge between levels $L_{i-1}$ and $L_i$ which carries the label $c$ is assigned a length $(x_i - \alpha(c))^2$. Now the total length of a path from $s$ to $t$ with edge labels $(c_1,\dots,c_n)$ is
\[ (x_1 - \alpha(c_1))^2 + \cdots + (x_n - \alpha(c_n))^2, \]
which is just the square of the Euclidean distance from $(x_1,\dots,x_n)$ to the point $(\alpha(c_1),\dots,\alpha(c_n))$ representing the codeword $(c_1,\dots,c_n)$. So nearest-neighbour decoding is achieved by finding the shortest path from $s$ to $t$ in the trellis. This can be done by standard algorithms such as Dijkstra's algorithm.

There is an added benefit. Dijkstra's algorithm works in two passes. First, the shortest distances from $s$ to the vertices in $L_i$ are computed by induction on $i$: if $v_i$ is such a vertex, then
\[ d(s,v_i) = \min\{ d(s,v_{i-1}) + l(v_{i-1},v_i) \}, \]
where the minimum is over all vertices $v_{i-1} \in L_{i-1}$ for which $(v_{i-1},v_i)$ is an edge. This calculation can be done layer by layer as the components of $(x_1,\dots,x_n)$ are received, since $l(v_{i-1},v_i)$ depends only on $x_i$. When the entire word has been received, $d(s,t)$ is known, and the path realising this distance is found by a simple backtracking.

For example, consider the trellis for the repetition code shown above, with $\alpha(c) = c$ for $c \in \{0,1\}$. When we receive the value 0.4, we assign $d(s,v) = 0.16, 0.36$ to the two nodes in $L_1$. When 0.4 is received again, we assign 0.32 and 0.72 to the two nodes in $L_2$. Finally, when 1.4 is received, we assign $d(s,t) = \min\{2.28, 0.88\} = 0.88$. The backtracking in this case is trivial.

10 Minimal trellises

Any code $C$, not necessarily linear, can be represented by a trellis. We can simply take $|C|$ disjoint paths from $s$ to $t$, one for each codeword, and label the edges on the path corresponding to the codeword $c$ with the symbols of $c$ in order. However, it is clear from the description of trellis decoding in the last section that, the smaller the trellis, the more efficient the decoding algorithm will be. The size of the trellis can be measured in various ways (for example, number of vertices, number of edges, cycle rank); for simplicity we will use the number of vertices in this section.

For example, let $C$ be the dual of the binary repetition code of length 3. This is a code with four codewords, so the simple construction above gives a trellis with 10 vertices and 12 edges. But there is a trellis with only 6 vertices and 8 edges for this code, as shown in Figure 3.

[Figure 3: A minimal trellis for this code, with layers of sizes 1, 2, 2, 1 (6 vertices) and 8 edges.]

It is not hard to see that this is the smallest trellis which represents the code.
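The layer-by-layer computation of the previous section is only a few lines of code. The sketch below (illustrative only, not part of the original notes) runs it on the repetition-code trellis of Figure 2 with the received voltages 0.4, 0.4, 1.4, and reproduces the value $d(s,t) = 0.88$; the vertex names are arbitrary.

```python
def trellis_decode(layers, received, alpha=float):
    """Shortest s-t path in a trellis, computed layer by layer as in Section 9.
    `layers[i]` lists the edges (u, v, label) from layer i to layer i+1; each
    edge gets length (x_i - alpha(label))**2.  Returns (d(s,t), codeword)."""
    best = {'s': (0.0, [])}                       # vertex -> (distance, labels so far)
    for x_i, edges in zip(received, layers):
        nxt = {}
        for u, v, c in edges:
            if u in best:
                d = best[u][0] + (x_i - alpha(c)) ** 2
                if v not in nxt or d < nxt[v][0]:
                    nxt[v] = (d, best[u][1] + [c])
        best = nxt
    return best['t']

# The trellis of Figure 2 (repetition code of length 3), with alpha(c) = c.
layers = [[('s', 'a0', 0), ('s', 'a1', 1)],
          [('a0', 'b0', 0), ('a1', 'b1', 1)],
          [('b0', 't', 0), ('b1', 't', 1)]]
print(trellis_decode(layers, [0.4, 0.4, 1.4]))
# -> (0.88..., [1, 1, 1]): decode to 111, as in the worked example above
```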

The following result of Muder [5] settles the question of the existence of a best trellis for a linear code in a strong way.

Theorem 10.1 Let $C$ be a linear code of length $n$. Then there is a trellis $T$ for $C$, with layers $L_0,\dots,L_n$, having the property that, if $T'$ is any other trellis for $C$, having layers $L_0',\dots,L_n'$, we have $|L_i| \le |L_i'|$ for $i = 1,\dots,n$.

We call $T$ a minimal trellis for $C$. For the proof, see the next result, which also shows how the sizes of the layers in the minimal trellis are determined by the first and last bases of the corresponding matroid.

Theorem 10.2 Let $C$ be a linear code over a field of order $q$. Let $A$ and $B$ be the first and last bases of the corresponding matroid on the set $\{1,2,\dots,n\}$. Let $a_i = |A \cap \{1,\dots,i\}|$ and $b_i = |B \cap \{1,\dots,i\}|$ for $i = 0,\dots,n$. Then the cardinality of the $i$th layer of the minimal trellis for $C$ is $q^{a_i - b_i}$, and the number of edges between the $i$th and $(i+1)$st layers is $q^{a_{i+1} - b_i}$.

Proof We construct a trellis whose layers have the sizes claimed in the theorem. Then we show that it represents the code and is minimal.

Let $P_i$ denote the subcode of $C$ consisting of words which have entries 0 in positions $i+1,\dots,n$, and let $F_i$ denote the subcode consisting of words which have entries 0 in positions $1,\dots,i$. (These are called the $i$th past and future subcodes of $C$.) We have $\dim(F_i) = k - a_i$ and $\dim(P_i) = b_i$, where $k = \dim(C)$. (To see the first equation, take a generator matrix for $C$ in reduced echelon form. Then the first base consists of the coordinate positions where the leading ones occur. Also, a codeword $c$ is zero in the first $i$ positions if and only if the $a_i$ rows with leading 1s in the first $i$ positions do not occur in the expression for $c$ as a linear combination of rows. The second equation is proved similarly, by reversing the order.)

Now we construct the trellis as follows. By definition, $P_i \cap F_i = \{0\}$, so $\dim(P_i + F_i) = k - a_i + b_i$. We let $L_i$ be the vector space $C/(P_i + F_i)$; then $L_i$ has dimension $a_i - b_i$, and so has cardinality $q^{a_i - b_i}$ as claimed. For each word $c \in C$, we associate with $c$ the vertex of $L_i$ which is the coset $(P_i + F_i) + c$. Join these vertices by a path from $s$ to $t$, labelling the edges with the coordinates of $c$. We identify edges which have the same start and end vertices and the same label. This produces a trellis in which every codeword is represented by a path.

We must show that every path arises in this way. First, note that there is at most one edge with any given label entering or leaving any vertex of the trellis. For two edges with the same label leaving a vertex

Matroids/1. I and I 2 ,I 2 > I 1

Matroids/1. I and I 2 ,I 2 > I 1 Matroids 1 Definition A matroid is an abstraction of the notion of linear independence in a vector space. See Oxley [6], Welsh [7] for further information about matroids. A matroid is a pair (E,I ), where

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

Polynomial aspects of codes, matroids and permutation groups

Polynomial aspects of codes, matroids and permutation groups Polynomial aspects of codes, matroids and permutation groups Peter J. Cameron School of Mathematical Sciences Queen Mary, University of London Mile End Road London E1 4NS UK p.j.cameron@qmul.ac.uk Contents

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information

Support weight enumerators and coset weight distributions of isodual codes

Support weight enumerators and coset weight distributions of isodual codes Support weight enumerators and coset weight distributions of isodual codes Olgica Milenkovic Department of Electrical and Computer Engineering University of Colorado, Boulder March 31, 2003 Abstract In

More information

An Introduction of Tutte Polynomial

An Introduction of Tutte Polynomial An Introduction of Tutte Polynomial Bo Lin December 12, 2013 Abstract Tutte polynomial, defined for matroids and graphs, has the important property that any multiplicative graph invariant with a deletion

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

MT5821 Advanced Combinatorics

MT5821 Advanced Combinatorics MT5821 Advanced Combinatorics 1 Error-correcting codes In this section of the notes, we have a quick look at coding theory. After a motivating introduction, we discuss the weight enumerator of a code,

More information

Combining the cycle index and the Tutte polynomial?

Combining the cycle index and the Tutte polynomial? Combining the cycle index and the Tutte polynomial? Peter J. Cameron University of St Andrews Combinatorics Seminar University of Vienna 23 March 2017 Selections Students often meet the following table

More information

3. Coding theory 3.1. Basic concepts

3. Coding theory 3.1. Basic concepts 3. CODING THEORY 1 3. Coding theory 3.1. Basic concepts In this chapter we will discuss briefly some aspects of error correcting codes. The main problem is that if information is sent via a noisy channel,

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

Chapter 3 Linear Block Codes

Chapter 3 Linear Block Codes Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and

More information

Mathematics Department

Mathematics Department Mathematics Department Matthew Pressland Room 7.355 V57 WT 27/8 Advanced Higher Mathematics for INFOTECH Exercise Sheet 2. Let C F 6 3 be the linear code defined by the generator matrix G = 2 2 (a) Find

More information

WHAT IS A MATROID? JAMES OXLEY

WHAT IS A MATROID? JAMES OXLEY WHAT IS A MATROID? JAMES OXLEY Abstract. Matroids were introduced by Whitney in 1935 to try to capture abstractly the essence of dependence. Whitney s definition embraces a surprising diversity of combinatorial

More information

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x)

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x) Outline Recall: For integers Euclidean algorithm for finding gcd s Extended Euclid for finding multiplicative inverses Extended Euclid for computing Sun-Ze Test for primitive roots And for polynomials

More information

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS III

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS III TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS III CAROLYN CHUN, DILLON MAYHEW, AND JAMES OXLEY Abstract. This paper proves a preliminary step towards a splitter theorem for internally

More information

The cocycle lattice of binary matroids

The cocycle lattice of binary matroids Published in: Europ. J. Comb. 14 (1993), 241 250. The cocycle lattice of binary matroids László Lovász Eötvös University, Budapest, Hungary, H-1088 Princeton University, Princeton, NJ 08544 Ákos Seress*

More information

MAS309 Coding theory

MAS309 Coding theory MAS309 Coding theory Matthew Fayers January March 2008 This is a set of notes which is supposed to augment your own notes for the Coding Theory course They were written by Matthew Fayers, and very lightly

More information

k-blocks: a connectivity invariant for graphs

k-blocks: a connectivity invariant for graphs 1 k-blocks: a connectivity invariant for graphs J. Carmesin R. Diestel M. Hamann F. Hundertmark June 17, 2014 Abstract A k-block in a graph G is a maximal set of at least k vertices no two of which can

More information

Classification of root systems

Classification of root systems Classification of root systems September 8, 2017 1 Introduction These notes are an approximate outline of some of the material to be covered on Thursday, April 9; Tuesday, April 14; and Thursday, April

More information

1 Fields and vector spaces

1 Fields and vector spaces 1 Fields and vector spaces In this section we revise some algebraic preliminaries and establish notation. 1.1 Division rings and fields A division ring, or skew field, is a structure F with two binary

More information

Arrangements, matroids and codes

Arrangements, matroids and codes Arrangements, matroids and codes first lecture Ruud Pellikaan joint work with Relinde Jurrius ACAGM summer school Leuven Belgium, 18 July 2011 References 2/43 1. Codes, arrangements and matroids by Relinde

More information

Latin squares: Equivalents and equivalence

Latin squares: Equivalents and equivalence Latin squares: Equivalents and equivalence 1 Introduction This essay describes some mathematical structures equivalent to Latin squares and some notions of equivalence of such structures. According to

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

Matroid intersection, base packing and base covering for infinite matroids

Matroid intersection, base packing and base covering for infinite matroids Matroid intersection, base packing and base covering for infinite matroids Nathan Bowler Johannes Carmesin June 25, 2014 Abstract As part of the recent developments in infinite matroid theory, there have

More information

1 Some loose ends from last time

1 Some loose ends from last time Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Kruskal s and Borůvka s MST algorithms September 20, 2010 1 Some loose ends from last time 1.1 A lemma concerning greedy algorithms and

More information

Root systems and optimal block designs

Root systems and optimal block designs Root systems and optimal block designs Peter J. Cameron School of Mathematical Sciences Queen Mary, University of London Mile End Road London E1 4NS, UK p.j.cameron@qmul.ac.uk Abstract Motivated by a question

More information

a (b + c) = a b + a c

a (b + c) = a b + a c Chapter 1 Vector spaces In the Linear Algebra I module, we encountered two kinds of vector space, namely real and complex. The real numbers and the complex numbers are both examples of an algebraic structure

More information

Tutte Polynomials with Applications

Tutte Polynomials with Applications Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 12, Number 6 (26), pp. 4781 4797 Research India Publications http://www.ripublication.com/gjpam.htm Tutte Polynomials with Applications

More information

Graph Theory. Thomas Bloom. February 6, 2015

Graph Theory. Thomas Bloom. February 6, 2015 Graph Theory Thomas Bloom February 6, 2015 1 Lecture 1 Introduction A graph (for the purposes of these lectures) is a finite set of vertices, some of which are connected by a single edge. Most importantly,

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

HW Graph Theory SOLUTIONS (hbovik) - Q

HW Graph Theory SOLUTIONS (hbovik) - Q 1, Diestel 3.5: Deduce the k = 2 case of Menger s theorem (3.3.1) from Proposition 3.1.1. Let G be 2-connected, and let A and B be 2-sets. We handle some special cases (thus later in the induction if these

More information

Generalized Pigeonhole Properties of Graphs and Oriented Graphs

Generalized Pigeonhole Properties of Graphs and Oriented Graphs Europ. J. Combinatorics (2002) 23, 257 274 doi:10.1006/eujc.2002.0574 Available online at http://www.idealibrary.com on Generalized Pigeonhole Properties of Graphs and Oriented Graphs ANTHONY BONATO, PETER

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Codes on graphs. Chapter Elementary realizations of linear block codes

Codes on graphs. Chapter Elementary realizations of linear block codes Chapter 11 Codes on graphs In this chapter we will introduce the subject of codes on graphs. This subject forms an intellectual foundation for all known classes of capacity-approaching codes, including

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

MATH3302. Coding and Cryptography. Coding Theory

MATH3302. Coding and Cryptography. Coding Theory MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

RELATIVE TUTTE POLYNOMIALS FOR COLORED GRAPHS AND VIRTUAL KNOT THEORY. 1. Introduction

RELATIVE TUTTE POLYNOMIALS FOR COLORED GRAPHS AND VIRTUAL KNOT THEORY. 1. Introduction RELATIVE TUTTE POLYNOMIALS FOR COLORED GRAPHS AND VIRTUAL KNOT THEORY Y. DIAO AND G. HETYEI Abstract. We introduce the concept of a relative Tutte polynomial. We show that the relative Tutte polynomial

More information

Some aspects of codes over rings

Some aspects of codes over rings Some aspects of codes over rings Peter J. Cameron p.j.cameron@qmul.ac.uk Galway, July 2009 This is work by two of my students, Josephine Kusuma and Fatma Al-Kharoosi Summary Codes over rings and orthogonal

More information

Linear Codes, Target Function Classes, and Network Computing Capacity

Linear Codes, Target Function Classes, and Network Computing Capacity Linear Codes, Target Function Classes, and Network Computing Capacity Rathinakumar Appuswamy, Massimo Franceschetti, Nikhil Karamchandani, and Kenneth Zeger IEEE Transactions on Information Theory Submitted:

More information

Trees. A tree is a graph which is. (a) Connected and. (b) has no cycles (acyclic).

Trees. A tree is a graph which is. (a) Connected and. (b) has no cycles (acyclic). Trees A tree is a graph which is (a) Connected and (b) has no cycles (acyclic). 1 Lemma 1 Let the components of G be C 1, C 2,..., C r, Suppose e = (u, v) / E, u C i, v C j. (a) i = j ω(g + e) = ω(g).

More information

An Introduction to Transversal Matroids

An Introduction to Transversal Matroids An Introduction to Transversal Matroids Joseph E Bonin The George Washington University These slides and an accompanying expository paper (in essence, notes for this talk, and more) are available at http://homegwuedu/

More information

Index coding with side information

Index coding with side information Index coding with side information Ehsan Ebrahimi Targhi University of Tartu Abstract. The Index Coding problem has attracted a considerable amount of attention in the recent years. The problem is motivated

More information

Connectivity and tree structure in finite graphs arxiv: v5 [math.co] 1 Sep 2014

Connectivity and tree structure in finite graphs arxiv: v5 [math.co] 1 Sep 2014 Connectivity and tree structure in finite graphs arxiv:1105.1611v5 [math.co] 1 Sep 2014 J. Carmesin R. Diestel F. Hundertmark M. Stein 20 March, 2013 Abstract Considering systems of separations in a graph

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle   holds various files of this Leiden University dissertation Cover Page The handle http://hdl.handle.net/1887/57796 holds various files of this Leiden University dissertation Author: Mirandola, Diego Title: On products of linear error correcting codes Date: 2017-12-06

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

The cycle polynomial of a permutation group

The cycle polynomial of a permutation group The cycle polynomial of a permutation group Peter J. Cameron School of Mathematics and Statistics University of St Andrews North Haugh St Andrews, Fife, U.K. pjc0@st-andrews.ac.uk Jason Semeraro Department

More information

Notes 10: Public-key cryptography

Notes 10: Public-key cryptography MTH6115 Cryptography Notes 10: Public-key cryptography In this section we look at two other schemes that have been proposed for publickey ciphers. The first is interesting because it was the earliest such

More information

MATROID PACKING AND COVERING WITH CIRCUITS THROUGH AN ELEMENT

MATROID PACKING AND COVERING WITH CIRCUITS THROUGH AN ELEMENT MATROID PACKING AND COVERING WITH CIRCUITS THROUGH AN ELEMENT MANOEL LEMOS AND JAMES OXLEY Abstract. In 1981, Seymour proved a conjecture of Welsh that, in a connected matroid M, the sum of the maximum

More information

Vector spaces. EE 387, Notes 8, Handout #12

Vector spaces. EE 387, Notes 8, Handout #12 Vector spaces EE 387, Notes 8, Handout #12 A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product satisfying these axioms: 1. (V, +) is

More information

Footnotes to Linear Algebra (MA 540 fall 2013), T. Goodwillie, Bases

Footnotes to Linear Algebra (MA 540 fall 2013), T. Goodwillie, Bases Footnotes to Linear Algebra (MA 540 fall 2013), T. Goodwillie, Bases November 18, 2013 1 Spanning and linear independence I will outline a slightly different approach to the material in Chapter 2 of Axler

More information

The Reduction of Graph Families Closed under Contraction

The Reduction of Graph Families Closed under Contraction The Reduction of Graph Families Closed under Contraction Paul A. Catlin, Department of Mathematics Wayne State University, Detroit MI 48202 November 24, 2004 Abstract Let S be a family of graphs. Suppose

More information

THE MINIMALLY NON-IDEAL BINARY CLUTTERS WITH A TRIANGLE 1. INTRODUCTION

THE MINIMALLY NON-IDEAL BINARY CLUTTERS WITH A TRIANGLE 1. INTRODUCTION THE MINIMALLY NON-IDEAL BINARY CLUTTERS WITH A TRIANGLE AHMAD ABDI AND BERTRAND GUENIN ABSTRACT. It is proved that the lines of the Fano plane and the odd circuits of K 5 constitute the only minimally

More information

Notes on Graph Theory

Notes on Graph Theory Notes on Graph Theory Maris Ozols June 8, 2010 Contents 0.1 Berge s Lemma............................................ 2 0.2 König s Theorem........................................... 3 0.3 Hall s Theorem............................................

More information

Enumerative Combinatorics 7: Group actions

Enumerative Combinatorics 7: Group actions Enumerative Combinatorics 7: Group actions Peter J. Cameron Autumn 2013 How many ways can you colour the faces of a cube with three colours? Clearly the answer is 3 6 = 729. But what if we regard two colourings

More information

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q

MATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination

More information

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu

More information

Linear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra

Linear Algebra II. 2 Matrices. Notes 2 21st October Matrix algebra MTH6140 Linear Algebra II Notes 2 21st October 2010 2 Matrices You have certainly seen matrices before; indeed, we met some in the first chapter of the notes Here we revise matrix algebra, consider row

More information

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II

TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II TOWARDS A SPLITTER THEOREM FOR INTERNALLY 4-CONNECTED BINARY MATROIDS II CAROLYN CHUN, DILLON MAYHEW, AND JAMES OXLEY Abstract. Let M and N be internally 4-connected binary matroids such that M has a proper

More information

Spanning, linear dependence, dimension

Spanning, linear dependence, dimension Spanning, linear dependence, dimension In the crudest possible measure of these things, the real line R and the plane R have the same size (and so does 3-space, R 3 ) That is, there is a function between

More information

Near-domination in graphs

Near-domination in graphs Near-domination in graphs Bruce Reed Researcher, Projet COATI, INRIA and Laboratoire I3S, CNRS France, and Visiting Researcher, IMPA, Brazil Alex Scott Mathematical Institute, University of Oxford, Oxford

More information

Constructing Critical Indecomposable Codes

Constructing Critical Indecomposable Codes Constructing Critical Indecomposable Codes Judy L. Walker 1 Abstract Critical indecomposable codes were introduced by Assmus [1], who also gave a recursive construction for these objects. One of the key

More information

CHAPTER 12 Boolean Algebra

CHAPTER 12 Boolean Algebra 318 Chapter 12 Boolean Algebra CHAPTER 12 Boolean Algebra SECTION 12.1 Boolean Functions 2. a) Since x 1 = x, the only solution is x = 0. b) Since 0 + 0 = 0 and 1 + 1 = 1, the only solution is x = 0. c)

More information

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0

Binary Linear Codes G = = [ I 3 B ] , G 4 = None of these matrices are in standard form. Note that the matrix 1 0 0 Coding Theory Massoud Malek Binary Linear Codes Generator and Parity-Check Matrices. A subset C of IK n is called a linear code, if C is a subspace of IK n (i.e., C is closed under addition). A linear

More information

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) =

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) = Math 395. Quadratic spaces over R 1. Algebraic preliminaries Let V be a vector space over a field F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v) for all v V and c F, and

More information

Permutation groups/1. 1 Automorphism groups, permutation groups, abstract

Permutation groups/1. 1 Automorphism groups, permutation groups, abstract Permutation groups Whatever you have to do with a structure-endowed entity Σ try to determine its group of automorphisms... You can expect to gain a deep insight into the constitution of Σ in this way.

More information

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005

Chapter 7. Error Control Coding. 7.1 Historical background. Mikael Olofsson 2005 Chapter 7 Error Control Coding Mikael Olofsson 2005 We have seen in Chapters 4 through 6 how digital modulation can be used to control error probabilities. This gives us a digital channel that in each

More information

Combinatorics 2: Structure, symmetry and polynomials

Combinatorics 2: Structure, symmetry and polynomials Combinatorics 2: Structure, symmetry and polynomials 1 Preface This is the second of a three-part set of lecture notes on Advanced Combinatorics, for the module MT5821 of that title at the University of

More information

The Witt designs, Golay codes and Mathieu groups

The Witt designs, Golay codes and Mathieu groups The Witt designs, Golay codes and Mathieu groups 1 The Golay codes Let V be a vector space over F q with fixed basis e 1,..., e n. A code C is a subset of V. A linear code is a subspace of V. The vector

More information

Tutorial on Mathematical Induction

Tutorial on Mathematical Induction Tutorial on Mathematical Induction Roy Overbeek VU University Amsterdam Department of Computer Science r.overbeek@student.vu.nl April 22, 2014 1 Dominoes: from case-by-case to induction Suppose that you

More information

Modular numbers and Error Correcting Codes. Introduction. Modular Arithmetic.

Modular numbers and Error Correcting Codes. Introduction. Modular Arithmetic. Modular numbers and Error Correcting Codes Introduction Modular Arithmetic Finite fields n-space over a finite field Error correcting codes Exercises Introduction. Data transmission is not normally perfect;

More information

Semimatroids and their Tutte polynomials

Semimatroids and their Tutte polynomials Semimatroids and their Tutte polynomials Federico Ardila Abstract We define and study semimatroids, a class of objects which abstracts the dependence properties of an affine hyperplane arrangement. We

More information

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source Shortest

More information

Finite Mathematics. Nik Ruškuc and Colva M. Roney-Dougal

Finite Mathematics. Nik Ruškuc and Colva M. Roney-Dougal Finite Mathematics Nik Ruškuc and Colva M. Roney-Dougal September 19, 2011 Contents 1 Introduction 3 1 About the course............................. 3 2 A review of some algebraic structures.................

More information

Math 396. Quotient spaces

Math 396. Quotient spaces Math 396. Quotient spaces. Definition Let F be a field, V a vector space over F and W V a subspace of V. For v, v V, we say that v v mod W if and only if v v W. One can readily verify that with this definition

More information

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of

More information

arxiv: v1 [math.co] 28 Oct 2016

arxiv: v1 [math.co] 28 Oct 2016 More on foxes arxiv:1610.09093v1 [math.co] 8 Oct 016 Matthias Kriesell Abstract Jens M. Schmidt An edge in a k-connected graph G is called k-contractible if the graph G/e obtained from G by contracting

More information

Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space.

Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space. MAT 90 // 0 points Exam Solutions Unless otherwise specified, V denotes an arbitrary finite-dimensional vector space..(0) Prove: a central arrangement A in V is essential if and only if the dual projective

More information

A survey of Tutte-Whitney polynomials

A survey of Tutte-Whitney polynomials A survey of Tutte-Whitney polynomials Graham Farr Faculty of IT Monash University Graham.Farr@infotech.monash.edu.au July 2007 Counting colourings proper colourings Counting colourings proper colourings

More information

NOTES (1) FOR MATH 375, FALL 2012

NOTES (1) FOR MATH 375, FALL 2012 NOTES 1) FOR MATH 375, FALL 2012 1 Vector Spaces 11 Axioms Linear algebra grows out of the problem of solving simultaneous systems of linear equations such as 3x + 2y = 5, 111) x 3y = 9, or 2x + 3y z =

More information

Parity Versions of 2-Connectedness

Parity Versions of 2-Connectedness Parity Versions of 2-Connectedness C. Little Institute of Fundamental Sciences Massey University Palmerston North, New Zealand c.little@massey.ac.nz A. Vince Department of Mathematics University of Florida

More information

Minors and Tutte invariants for alternating dimaps

Minors and Tutte invariants for alternating dimaps Minors and Tutte invariants for alternating dimaps Graham Farr Clayton School of IT Monash University Graham.Farr@monash.edu Work done partly at: Isaac Newton Institute for Mathematical Sciences (Combinatorics

More information

Perfect matchings in highly cyclically connected regular graphs

Perfect matchings in highly cyclically connected regular graphs Perfect matchings in highly cyclically connected regular graphs arxiv:1709.08891v1 [math.co] 6 Sep 017 Robert Lukot ka Comenius University, Bratislava lukotka@dcs.fmph.uniba.sk Edita Rollová University

More information

Exterior powers and Clifford algebras

Exterior powers and Clifford algebras 10 Exterior powers and Clifford algebras In this chapter, various algebraic constructions (exterior products and Clifford algebras) are used to embed some geometries related to projective and polar spaces

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Proof Techniques (Review of Math 271)

Proof Techniques (Review of Math 271) Chapter 2 Proof Techniques (Review of Math 271) 2.1 Overview This chapter reviews proof techniques that were probably introduced in Math 271 and that may also have been used in a different way in Phil

More information

Citation Osaka Journal of Mathematics. 43(2)

Citation Osaka Journal of Mathematics. 43(2) TitleIrreducible representations of the Author(s) Kosuda, Masashi Citation Osaka Journal of Mathematics. 43(2) Issue 2006-06 Date Text Version publisher URL http://hdl.handle.net/094/0396 DOI Rights Osaka

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Automorphism groups of wreath product digraphs

Automorphism groups of wreath product digraphs Automorphism groups of wreath product digraphs Edward Dobson Department of Mathematics and Statistics Mississippi State University PO Drawer MA Mississippi State, MS 39762 USA dobson@math.msstate.edu Joy

More information

POLYNOMIAL CODES AND FINITE GEOMETRIES

POLYNOMIAL CODES AND FINITE GEOMETRIES POLYNOMIAL CODES AND FINITE GEOMETRIES E. F. Assmus, Jr and J. D. Key Contents 1 Introduction 2 2 Projective and affine geometries 3 2.1 Projective geometry....................... 3 2.2 Affine geometry..........................

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

The Strong Largeur d Arborescence

The Strong Largeur d Arborescence The Strong Largeur d Arborescence Rik Steenkamp (5887321) November 12, 2013 Master Thesis Supervisor: prof.dr. Monique Laurent Local Supervisor: prof.dr. Alexander Schrijver KdV Institute for Mathematics

More information

Hamming codes and simplex codes ( )

Hamming codes and simplex codes ( ) Chapter 6 Hamming codes and simplex codes (2018-03-17) Synopsis. Hamming codes are essentially the first non-trivial family of codes that we shall meet. We start by proving the Distance Theorem for linear

More information

1 Matroid intersection

1 Matroid intersection CS 369P: Polyhedral techniques in combinatorial optimization Instructor: Jan Vondrák Lecture date: October 21st, 2010 Scribe: Bernd Bandemer 1 Matroid intersection Given two matroids M 1 = (E, I 1 ) and

More information

4 CONNECTED PROJECTIVE-PLANAR GRAPHS ARE HAMILTONIAN. Robin Thomas* Xingxing Yu**

4 CONNECTED PROJECTIVE-PLANAR GRAPHS ARE HAMILTONIAN. Robin Thomas* Xingxing Yu** 4 CONNECTED PROJECTIVE-PLANAR GRAPHS ARE HAMILTONIAN Robin Thomas* Xingxing Yu** School of Mathematics Georgia Institute of Technology Atlanta, Georgia 30332, USA May 1991, revised 23 October 1993. Published

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information