1.3 Matrices

Given a field $F$, the set $F^\Omega$ of functions from $\Omega$ to $F$ forms a vector space. Addition is defined pointwise: for $f$ and $g$ in $F^\Omega$,
$$(f+g)(\omega) = f(\omega) + g(\omega) \quad\text{for } \omega \in \Omega.$$
Scalar multiplication is also defined pointwise: for $\lambda$ in $F$ and $f$ in $F^\Omega$,
$$(\lambda f)(\omega) = \lambda(f(\omega)) \quad\text{for } \omega \in \Omega.$$
For $\omega$ in $\Omega$, let $\chi_\omega$ be the characteristic function of $\omega$; that is,
$$\chi_\omega(\omega) = 1, \qquad \chi_\omega(\alpha) = 0 \text{ for } \alpha \in \Omega \text{ with } \alpha \neq \omega.$$
Then $f = \sum_{\omega \in \Omega} f(\omega)\chi_\omega$ for all $f$ in $F^\Omega$, and so $\{\chi_\omega : \omega \in \Omega\}$ forms a natural basis of $F^\Omega$: hence $\dim F^\Omega = |\Omega|$. For any subset $\Delta$ of $\Omega$ it is also convenient to define $\chi_\Delta$ in $F^\Omega$ by
$$\chi_\Delta(\omega) = \begin{cases} 1 & \text{if } \omega \in \Delta \\ 0 & \text{if } \omega \notin \Delta, \end{cases}$$
so that $\chi_\Delta = \sum_{\omega \in \Delta} \chi_\omega$.

Given two finite sets $\Gamma$ and $\Delta$, we can form their product $\Gamma \times \Delta = \{(\gamma,\delta) : \gamma \in \Gamma,\ \delta \in \Delta\}$, which may be thought of as a rectangle, as in Figure 1.7.

[Figure 1.7: The product set $\Gamma \times \Delta$, drawn as a rectangle with rows indexed by $\gamma \in \Gamma$ and columns by $\delta \in \Delta$; a typical point is $(\gamma,\delta)$.]

Applying the preceding ideas to $\Gamma \times \Delta$, we obtain the vector space $F^{\Gamma \times \Delta}$ of all functions from $\Gamma \times \Delta$ to $F$. It has dimension $|\Gamma| \times |\Delta|$. If $M$ is such a function, we usually write $M(\gamma,\delta)$

rather than $M((\gamma,\delta))$. In fact, $M$ is just a matrix, with its rows labelled by $\Gamma$ and its columns labelled by $\Delta$. So long as we retain this labelling, it does not matter in what order we write the rows and columns. In fact, what most people regard as an $m \times n$ matrix over $F$ is just a matrix in $F^{\Gamma \times \Delta}$ with $\Gamma = \{1,\dots,m\}$ and $\Delta = \{1,\dots,n\}$. The usual convention is that row 1 appears first, etc., so that order does matter but labelling is not needed. Here I use the opposite convention (order does not matter, but labelling is needed) because usually the elements of $\Gamma$ and $\Delta$ are not integers. Some examples of matrices with such labelled rows and columns were given in Example 1.5.

If $M \in F^{\Gamma \times \Delta}$ and $\Gamma = \Delta$ then we say that $M$ is square. This is stronger than merely having the same number of rows as of columns. If we need to emphasize the set, we say that $M$ is square on $\Delta$.

The transpose of $M$ is the matrix $M'$ in $F^{\Delta \times \Gamma}$ defined by
$$M'(\delta,\gamma) = M(\gamma,\delta) \quad\text{for } \delta \in \Delta \text{ and } \gamma \in \Gamma.$$
The matrix $M$ in $F^{\Delta \times \Delta}$ is symmetric if $M = M'$.

There are three special symmetric matrices in $F^{\Delta \times \Delta}$:
$$I_\Delta(\delta_1,\delta_2) = \begin{cases} 1 & \text{if } \delta_1 = \delta_2 \\ 0 & \text{otherwise;} \end{cases}$$
$$J_\Delta(\delta_1,\delta_2) = 1 \quad\text{for all } \delta_1,\ \delta_2 \text{ in } \Delta;$$
$$O_\Delta(\delta_1,\delta_2) = 0 \quad\text{for all } \delta_1,\ \delta_2 \text{ in } \Delta.$$
Moreover, if $f \in F^\Delta$ we define the symmetric matrix $\operatorname{diag}(f)$ in $F^{\Delta \times \Delta}$ by
$$\operatorname{diag}(f)(\delta_1,\delta_2) = \begin{cases} f(\delta_1) & \text{if } \delta_1 = \delta_2 \\ 0 & \text{otherwise.} \end{cases}$$

Matrix multiplication is possible when the labelling sets are compatible. If $M_1 \in F^{\Gamma \times \Delta}$ and $M_2 \in F^{\Delta \times \Phi}$ then $M_1 M_2$ is the matrix in $F^{\Gamma \times \Phi}$ defined by
$$(M_1 M_2)(\gamma,\varphi) = \sum_{\delta \in \Delta} M_1(\gamma,\delta)\, M_2(\delta,\varphi).$$
All the usual results about matrix multiplication hold. In particular, matrix multiplication is associative, and $(M_1 M_2)' = M_2' M_1'$. Similarly, if $M \in F^{\Gamma \times \Delta}$ then $M$ defines a linear transformation from $F^\Delta$ to $F^\Gamma$ by $f \mapsto Mf$, where
$$(Mf)(\gamma) = \sum_{\delta \in \Delta} M(\gamma,\delta)\, f(\delta) \quad\text{for } \gamma \in \Gamma.$$
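The labelled-matrix point of view is easy to model in code. The following sketch is my own illustration, not from the text; the names `matmul` and `transpose` are invented. It represents a matrix in $F^{\Gamma \times \Delta}$ as a Python dict keyed by (row label, column label) pairs, so labels need not be integers and their storage order is irrelevant:

```python
# A matrix in F^(Gamma x Delta) as a dict mapping (row_label, col_label)
# pairs to field elements; Gamma and Delta are arbitrary finite label sets.

def matmul(M1, Gamma, Delta, M2, Phi):
    """Product of M1 in F^(Gamma x Delta) and M2 in F^(Delta x Phi):
    (M1 M2)(g, p) = sum over d in Delta of M1(g, d) * M2(d, p)."""
    return {(g, p): sum(M1[(g, d)] * M2[(d, p)] for d in Delta)
            for g in Gamma for p in Phi}

def transpose(M, Gamma, Delta):
    """The transpose M' in F^(Delta x Gamma): M'(d, g) = M(g, d)."""
    return {(d, g): M[(g, d)] for g in Gamma for d in Delta}

# The labels need not be integers:
Gamma = ['x', 'y']            # row labels
Delta = ['a', 'b', 'c']       # column labels
M = {(g, d): 1 for g in Gamma for d in Delta}    # the all-ones matrix J
Mt = transpose(M, Gamma, Delta)

# J J' is square on Gamma, with every entry equal to |Delta| = 3:
P = matmul(M, Gamma, Delta, Mt, Gamma)
assert P[('x', 'y')] == 3
```

Because entries are looked up by label pairs, the order in which rows and columns are stored never matters, exactly as in the convention adopted above.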

If $\Phi$ is any subset of $\Gamma \times \Delta$ then its characteristic function $\chi_\Phi$ satisfies
$$\chi_\Phi(\gamma,\delta) = \begin{cases} 1 & \text{if } (\gamma,\delta) \in \Phi \\ 0 & \text{otherwise.} \end{cases}$$
In the special case that $\Gamma = \Delta = \Omega$ we call $\chi_\Phi$ the adjacency matrix of $\Phi$ and write it $A_\Phi$. In particular,
$$A_{\Omega \times \Omega} = J_\Omega, \qquad A_\emptyset = O_\Omega, \qquad A_{\operatorname{Diag}(\Omega)} = I_\Omega.$$
For an association scheme with classes $C_0$, $C_1$, ..., $C_s$, we write $A_i$ for the adjacency matrix $A_{C_i}$. Thus the $(\alpha,\beta)$-entry of $A_i$ is equal to 1 if $\alpha$ and $\beta$ are $i$-th associates; otherwise it is equal to 0.

Condition (iii) in the definition of association scheme has a particularly nice consequence for multiplication of the adjacency matrices: it says that
$$A_i A_j = \sum_{k=0}^{s} p^k_{ij} A_k. \qquad (1.1)$$
For suppose that $(\alpha,\beta) \in C_k$. Then the $(\alpha,\beta)$-entry of the right-hand side of Equation (1.1) is equal to $p^k_{ij}$, while the $(\alpha,\beta)$-entry of the left-hand side is equal to
$$(A_i A_j)(\alpha,\beta) = \sum_{\gamma \in \Omega} A_i(\alpha,\gamma)\, A_j(\gamma,\beta) = \bigl|\{\gamma : (\alpha,\gamma) \in C_i \text{ and } (\gamma,\beta) \in C_j\}\bigr| = p^k_{ij},$$
because the product $A_i(\alpha,\gamma)\, A_j(\gamma,\beta)$ is zero unless $(\alpha,\gamma) \in C_i$ and $(\gamma,\beta) \in C_j$, in which case it is 1.

This leads to a definition of association schemes in terms of the adjacency matrices.

Definition (Third definition of association scheme) An association scheme with $s$ associate classes on a finite set $\Omega$ is a set of matrices $A_0$, $A_1$, ..., $A_s$ in $\mathbb{R}^{\Omega \times \Omega}$, all of whose entries are equal to 0 or 1, such that

(i) $A_0 = I_\Omega$;

(ii) $A_i$ is symmetric for $i = 1$, ..., $s$;

(iii) for all $i$, $j$ in $\{1,\dots,s\}$, the product $A_i A_j$ is a linear combination of $A_0$, $A_1$, ..., $A_s$;

(iv) none of the $A_i$ is equal to $O_\Omega$, and $\sum_{i=0}^{s} A_i = J_\Omega$.

Notice that we do not need to specify that the coefficients in (iii) be integers: every entry in $A_i A_j$ is a non-negative integer if all the entries in $A_i$ and $A_j$ are 0 or 1. Condition (iv) is the analogue of the sets $C_0$, ..., $C_s$ forming a partition of $\Omega \times \Omega$.

Since $A_i$ is symmetric with entries in $\{0,1\}$, the diagonal entries of $A_i^2$ are the row-sums of $A_i$. Condition (iii) implies that $A_i^2$ has a constant element, say $a_i$, on its diagonal. Therefore every row and every column of $A_i$ contains $a_i$ entries equal to 1. Hence $A_i J_\Omega = J_\Omega A_i = a_i J_\Omega$. Moreover, $A_0 A_i = A_i A_0 = A_i$. Thus condition (iii) can be checked by, for each $i$ in $\{1,\dots,s\}$, verifying that $A_i$ has constant row-sums and that, for all but one value of $j$ in $\{1,\dots,s\}$, the product $A_i A_j$ is a linear combination of $A_0$, ..., $A_s$. In fact, the first check is superfluous, because it is a byproduct of checking $A_i^2$.

Example 1.6 Let $\Lambda$ be a Latin square of size $n$: an $n \times n$ array filled with $n$ letters in such a way that each letter occurs once in each row and once in each column. A Latin square of size 4 is shown in Figure 1.8(a). Let $\Omega$ be the set of $n^2$ cells in the array. For $\alpha$, $\beta$ in $\Omega$ with $\alpha \neq \beta$, let $\alpha$ and $\beta$ be first associates if $\alpha$ and $\beta$ are in the same row or are in the same column or have the same letter, and let $\alpha$ and $\beta$ be second associates otherwise. Figure 1.8(b) shows one element $\alpha$ and marks all its first associates as $\beta$.

    A B C D
    D A B C
    C D A B
    B C D A

    (a) A Latin square

[Figure 1.8(b): one cell marked $\alpha$, with every cell in its row, in its column, or bearing its letter marked $\beta$.]

Figure 1.8: An association scheme of Latin-square type

We need to check that all the nine products $A_i A_j$ are linear combinations of $A_0$, $A_1$ and $A_2$. Five products involve $A_0$, which is $I$, so there is nothing to check. (Here and elsewhere we omit the suffix from $I$, $J$ etc. when the set involved is clear from the context.) I claim that only the product $A_1^2$ needs to be checked.
$$\begin{array}{c|ccc} & A_0 & A_1 & A_2 \\ \hline A_0 & & & \\ A_1 & & ? & \\ A_2 & & & \end{array}$$

To check $A_1^2$, we need a simple expression for $A_1$. Let $R$ be the adjacency matrix of the subset $\{(\alpha,\beta) : \alpha \text{ and } \beta \text{ are in the same row}\}$, and let $C$ and $L$ be the adjacency matrices of the similarly-defined subsets for columns and letters. Then
$$A_1 = R + C + L - 3I,$$
because the elements of $\operatorname{Diag}(\Omega)$ are counted in each of $R$, $C$ and $L$ and need to be removed. Moreover, $A_2 = J - A_1 - I$. These adjacency matrices have constant row-sums: $a_1 = 3(n-1)$ and $a_2 = n^2 - 3(n-1) - 1 = (n-2)(n-1)$.

Now
$$R^2(\alpha,\beta) = \sum_{\gamma} R(\alpha,\gamma)\, R(\gamma,\beta) = \bigl|\{\gamma : \gamma \text{ is in the same row as both } \alpha \text{ and } \beta\}\bigr| = \begin{cases} n & \text{if } \alpha \text{ and } \beta \text{ are in the same row} \\ 0 & \text{otherwise,} \end{cases}$$
which is $nR(\alpha,\beta)$, so $R^2 = nR$. Similarly $C^2 = nC$ and $L^2 = nL$. Also
$$RC(\alpha,\beta) = \sum_{\gamma} R(\alpha,\gamma)\, C(\gamma,\beta) = \bigl|\{\gamma : \gamma \text{ is in the same row as } \alpha \text{ and the same column as } \beta\}\bigr| = 1,$$
so $RC = J$. Similarly $CR = RL = LR = CL = LC = J$. Hence
$$A_1^2 = n(R + C + L) + 6J - 6(R + C + L) + 9I,$$
which is a linear combination of $A_1$, $J$ and $I$, hence of $A_1$, $A_2$ and $A_0$.

Now let us verify my claim that no further products need to be evaluated. We do not need to check $A_1 A_2$, because
$$A_1 A_2 = A_1(J - A_1 - I) = a_1 J - A_1^2 - A_1.$$
Neither do we need to check $A_2 A_1$, because
$$A_2 A_1 = (J - A_1 - I)A_1 = a_1 J - A_1^2 - A_1.$$
Finally, we do not need to check $A_2^2$ either, because
$$A_2^2 = A_2(J - A_1 - I) = a_2 J - A_2 A_1 - A_2.$$
This association scheme is said to be of Latin-square type $L(3,n)$.
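The identities used in this example can also be verified numerically. The following sketch is my own (names such as `adj`, `mul` and `lin` are invented), using the cyclic Latin square of size $n = 4$ in which cell $(i,j)$ holds letter $(i+j) \bmod n$:

```python
# Verify the matrix identities of the Latin-square example for n = 4.
n = 4
cells = [(i, j) for i in range(n) for j in range(n)]   # Omega: the n^2 cells
pairs = [(a, b) for a in cells for b in cells]         # Omega x Omega
letter = {(i, j): (i + j) % n for i, j in cells}       # cyclic Latin square

def adj(rel):
    """Adjacency matrix, as a dict on Omega x Omega, of a relation on cells."""
    return {(a, b): 1 if rel(a, b) else 0 for a, b in pairs}

R = adj(lambda a, b: a[0] == b[0])               # same row
C = adj(lambda a, b: a[1] == b[1])               # same column
L = adj(lambda a, b: letter[a] == letter[b])     # same letter
I = adj(lambda a, b: a == b)
J = adj(lambda a, b: True)

def mul(M1, M2):
    """Matrix product over the labelling set Omega."""
    return {(a, b): sum(M1[(a, g)] * M2[(g, b)] for g in cells)
            for a, b in pairs}

def lin(*terms):
    """Linear combination: sum of coefficient * matrix."""
    return {p: sum(c * M[p] for c, M in terms) for p in pairs}

A1 = lin((1, R), (1, C), (1, L), (-3, I))        # A1 = R + C + L - 3I

assert mul(R, R) == lin((n, R))                  # R^2 = nR
assert mul(R, C) == J                            # RC = J
# A1^2 = n(R+C+L) + 6J - 6(R+C+L) + 9I:
assert mul(A1, A1) == lin((n - 6, R), (n - 6, C), (n - 6, L), (6, J), (9, I))
```

The same script extends in the obvious way to the other products $CR$, $RL$, $LR$, $CL$, $LC$, and to $A_1A_2 = A_2A_1$.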

It is no accident that $A_1 A_2 = A_2 A_1$ in Example 1.6.

Lemma 1.2 If $A_0$, $A_1$, ..., $A_s$ are the adjacency matrices of an association scheme then $A_i A_j = A_j A_i$ for all $i$, $j$ in $\{0,1,\dots,s\}$.

Proof
$$
\begin{aligned}
A_j A_i &= A_j' A_i' && \text{because the adjacency matrices are symmetric,} \\
        &= (A_i A_j)' \\
        &= \Bigl(\sum_k p^k_{ij} A_k\Bigr)' \\
        &= \sum_k p^k_{ij} A_k' \\
        &= \sum_k p^k_{ij} A_k && \text{because the adjacency matrices are symmetric,} \\
        &= A_i A_j.
\end{aligned}
$$

As we saw in Example 1.6, there is very little to check when there are only two associate classes.

Lemma 1.3 Let $A$ be a symmetric matrix in $\mathbb{R}^{\Omega \times \Omega}$ with zeros on the diagonal and all entries in $\{0,1\}$. Suppose that $A \neq O$ and $A \neq J - I$. Then $\{I,\ A,\ J - A - I\}$ is an association scheme on $\Omega$ if and only if $A^2$ is a linear combination of $I$, $A$ and $J$.

Consideration of the $\alpha$-th row of $A_i A_j$ sheds new light on the graph way of looking at association schemes. Stand at the vertex $\alpha$. Take a step along an edge coloured $i$ (if $i = 0$ this means stay still). Then take a step along an edge coloured $j$. Where can you get to? If $\beta$ is joined to $\alpha$ by a $k$-coloured edge, then you can get to $\beta$ in this way if $p^k_{ij} \neq 0$. In fact, there are exactly $p^k_{ij}$ such two-step ways of getting to $\beta$ from $\alpha$.
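This walk-counting description is easy to test on a small scheme. In the sketch below (my own code, not from the text), the pentagon, i.e. the 5-cycle, stands in for a generic scheme with $s = 2$: $C_1$ is adjacency, $C_2$ is non-adjacency, and the number of two-step walks from $\alpha$ to $\beta$ depends only on the class of the pair $(\alpha,\beta)$:

```python
# Count two-step walks in the pentagon's 2-class association scheme.
verts = range(5)

def cls(a, b):
    """Associate class of the pair (a, b): 0 on the diagonal,
    1 if a and b are adjacent on the 5-cycle, 2 otherwise."""
    if a == b:
        return 0
    return 1 if (a - b) % 5 in (1, 4) else 2

def walks(a, b, i, j):
    """Two-step walks from a to b: first step in class i, second in class j."""
    return sum(1 for g in verts if cls(a, g) == i and cls(g, b) == j)

# The count walks(a, b, 1, 1) is constant on each class C_k: it is p^k_11.
p = {}
for a in verts:
    for b in verts:
        k = cls(a, b)
        w = walks(a, b, 1, 1)
        assert p.setdefault(k, w) == w   # constant on each class

# For the pentagon: 2 walks back to the start, 0 between adjacent
# vertices, 1 between non-adjacent vertices.
assert (p[0], p[1], p[2]) == (2, 0, 1)
```

Since $p^1_{11} = 0$, standing at a vertex of the pentagon you cannot reach an adjacent vertex in two steps along edges of colour 1, exactly as the criterion $p^k_{ij} \neq 0$ predicts.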