ORIE 6334 Spectral Graph Theory — September 8, 2016
Lecture 6
Lecturer: David P. Williamson    Scribe: Faisal Alkaabneh

(This lecture is derived from Cvetković, Rowlinson, and Simić, An Introduction to the Theory of Graph Spectra, Sections 7.1 and 7.2, and Godsil and Royle, Algebraic Graph Theory, Section 13.2.)

1  The Matrix-Tree Theorem

In this lecture, we continue to see the usefulness of the graph Laplacian, and its connection to yet another standard concept in graph theory: that of a spanning tree. Let A[i] denote the matrix A with its ith row and column removed. We will give two different proofs of the following.

Theorem 1 (Kirchhoff's Matrix-Tree Theorem) det(L_G[i]) gives the number of spanning trees in G (for any i).

In order to give the first proof, we need the following fact.

Fact 1 Let E_ii be the matrix with 1 in the (i, i) entry and 0s elsewhere. Then det(A + E_ii) = det(A) + det(A[i]).

If you think of the determinant as a sum, over all permutations, of the products of the entries corresponding to each permutation, the fact makes sense: we have increased the (i, i) entry from a_ii to a_ii + 1, so each permutation that uses the (i, i) entry multiplies either by a_ii (in which case we just get det(A)) or by the 1 (in which case we get the sum over all permutations that avoid the ith row and column, which is det(A[i])).

Proof of Theorem 1: Our first proof is by induction on the number of vertices and edges of the graph G.

Base case: G is the empty graph on two vertices. Then

    L_G = [ 0  0 ]
          [ 0  0 ],

so that L_G[i] = [0] and det(L_G[i]) = 0, which is indeed the number of spanning trees.

Inductive step: Suppose there exists an edge e = (i, j) incident to i. If there is no such edge, so that i is an isolated vertex, then the ith row and column of L_G are all zeros, and det(L_G[i]) = det(L_{G−i}) = ∏_{k=1}^{n−1} λ_k = 0, since, as we showed previously, λ_1 = 0 for any

L G. Note also that the number of spanning trees is 0 if i is isolated, so the theorem holds in this case. Now we introduce some notations. Let τ(g) is the number of spanning trees in G, let G e be G with edge e removed, and G/e be G with edge e contracted. See below for an illustration of graph contraction. i j ij Before contraction After contraction For any spanning tree T, either e T or e T. We note that τ(g/e) gives the number of trees T with e T, while τ(g e) gives the number of trees T with e T. Thus τ(g) = τ(g\e) + τ(g e); note that the first term is G with one fewer edge, while the second has one fewer vertex, and so these will serve as the basis of our induction. First we try to relate L G to L G e, and we observe that L G [i] = L G e [i] + E jj (that is, if we remove edge e, then the only difference in the matrix L G [i] is that we have to correct for the change in degree of j). Then by the Fact 1 det(l G [i]) = det(l G e + E jj ) = det(l G e [i]) + det(l G e [i, j]) = det(l G e [i]) + det(l G [i, j]), where by L G [i, j] we mean L G with both the ith and jth rows and columns removed; the last equality follows since once we ve removed both the ith and jth rows and columns there s no difference between L G and L G e for e = (i, j). Now to relate L G to L G/e. Suppose we contract i onto j (so that L G/e has no row/column corresponding to i). Then L G/e [j] = L G [i, j]. Thus we have that det(l G [i]) = det(l G e [i]) + det(l G/e [j]) = τ(g e) + τ(g/e) = τ(g). where the second equation follows by induction; this completes the proof. For the second proof of the theorem, we need the following fact which explains how to take the determinant of the product of rectangular matrices. 6-2

Fact 2 (Cauchy-Binet Formula) Let A ∈ R^{n×m} and B ∈ R^{m×n}, for m ≥ n. Let A_S (respectively, B_S) be the submatrix formed by taking the columns (respectively, rows) of A (respectively, B) indexed by S ⊆ [m]. Let ([m] choose n) denote the set of all size-n subsets of [m]. Then

    det(AB) = Σ_{S ∈ ([m] choose n)} det(A_S) det(B_S).

Recall that L_G = Σ_{(i,j)∈E} (e_i − e_j)(e_i − e_j)^T. Thus we can write L_G = BB^T, where B ∈ R^{n×m} has one column per edge (i, j), namely the column e_i − e_j. (Since we can write L_G = BB^T, this is yet another proof that L_G is positive semidefinite.) If B[i] denotes B with its ith row omitted, then L_G[i] = B[i]B[i]^T. We let B_S[i] denote B[i] restricted to the columns of S ⊆ E. We need the following lemma, whose proof we defer for a moment.

Lemma 2 For S ⊆ E with |S| = n − 1, det(B_S[i]) = ±1 if S is a spanning tree, and 0 otherwise.

The second proof of the matrix-tree theorem now becomes very short.

Proof of Theorem 1:

    det(L_G[i]) = det(B[i]B[i]^T) = Σ_{S ∈ (E choose n−1)} det(B_S[i]) det(B_S[i]) = τ(G),

where the second equation follows by the Cauchy-Binet formula, and the third by Lemma 2: each term det(B_S[i])² is 1 when S is a spanning tree and 0 otherwise. □

We can now turn to the proof of the lemma.

Proof of Lemma 2: We may direct the edges in B_S[i] however we want; that is, we can change the column corresponding to (i, j) from e_i − e_j to e_j − e_i, since this only flips the sign of the determinant. If S ⊆ E with |S| = n − 1 is not a spanning tree, then it must contain a cycle. Direct the edges around the cycle. If we then sum the columns of B_S[i] corresponding to the cycle, we obtain the zero vector, which implies that the columns of B_S[i] are linearly dependent, and thus det(B_S[i]) = 0.

Now suppose that S is a spanning tree; we prove the lemma statement by induction on n.

Base case: n = 2. Then

    B_S = [  1 ]
          [ −1 ],

so that B_S[i] = [±1], and thus det(B_S[i]) = ±1.

Inductive case: Suppose the lemma statement is true for trees on n − 1 vertices. Let j ≠ i be a leaf of the tree (every tree has at least two leaves, so such a j exists), and let (k, j) be the edge incident to j. We exchange rows/columns so that (k, j) is the last column and j the last row; this may flip the sign of the determinant, but that does not matter. Then

                                  (k, j)
    B_S[i] = [ B_{S−{(k,j)}}[i]     *  ]
             [     0  ···  0       ±1  ],

that is, the last row has a single nonzero entry, ±1, in the column of (k, j). Thus if we expand the determinant along the last row, we get det(B_S[i]) = ± det(B_{S−{(k,j)}}[i]) = ±1. The last equality follows by induction, since S − {(k, j)} is a spanning tree on the vertex set without j (we assumed that j is a leaf). □

2  Consequences of the Matrix-Tree Theorem

Once we have the matrix-tree theorem, there are a number of interesting consequences, which we explore in this section. Given a square matrix A ∈ R^{n×n}, let A_ij be the matrix A without row i and column j (so that A[i] = A_ii). Let C_ij = (−1)^{i+j} det(A_ij) be the (i, j) cofactor of A. Then we define the adjugate adj(A) as the matrix whose (i, j) entry is C_ji. We will need the following fact.

Fact 3 A · adj(A) = det(A) · I.

By the matrix-tree theorem, the (i, i) cofactor of L_G is equal to τ(G). But we can say something even stronger.

Theorem 3 Every cofactor of L_G is τ(G), so that adj(L_G) = τ(G) J, where J is the all-ones matrix.

Proof: If G is not connected, then τ(G) = 0 and λ_2(L_G) = 0 = λ_1(L_G), so the rank of L_G is at most n − 2. Then every (n − 1) × (n − 1) submatrix of L_G is singular, so det((L_G)_ij) = 0 for all i, j, which implies that adj(L_G) = 0, as desired.

If G is connected, then since det(L_G) = 0, Fact 3 gives L_G · adj(L_G) = 0 (the zero matrix). Because G is connected, multiples of e (the all-ones vector) are the only eigenvectors of L_G with eigenvalue 0, so every column of adj(L_G) must be some multiple of e. But we know that the ith entry of the ith column of adj(L_G) is the cofactor C_ii = det(L_G[i]) = τ(G), so the column itself must be τ(G)e, and the theorem statement follows. □

We conclude with one more theorem.

Theorem 4 Let 0 = λ_1 ≤ λ_2 ≤ ··· ≤ λ_n be the eigenvalues of L_G. Then

    τ(G) = (1/n) ∏_{i=2}^{n} λ_i.

Proof: The theorem is true if G is not connected, since then λ_2 = 0 and τ(G) = 0. Otherwise, we look at the linear term of the characteristic polynomial of L_G in two different ways.

In the first way, the characteristic polynomial is

    (λ − λ_1)(λ − λ_2)···(λ − λ_n) = λ(λ − λ_2)(λ − λ_3)···(λ − λ_n),

so the coefficient of the linear term is (−1)^{n−1} ∏_{i=2}^{n} λ_i.

For the second way, we want the linear term of det(λI − L_G). The matrix λI − L_G has diagonal entries λ − d(1), ..., λ − d(n), and its off-diagonal entries are the negations of the corresponding entries of L_G. If we think of the determinant as the sum over all permutations of the products of the entries corresponding to the permutation, then we get a term linear in λ whenever an (i, i) entry is part of the permutation but no other diagonal entry is; moreover, if the (i, i) entry is part of the permutation, then no other entry from row or column i is. Finally, since all the other entries are the negations of their entries in L_G, the terms linear in λ that arise from including the (i, i) entry of the matrix contribute (−1)^{n−1} det(L_G[i]). Summing over all (i, i) entries, the coefficient of the linear term of det(λI − L_G) is

    Σ_{i=1}^{n} (−1)^{n−1} det(L_G[i]) = (−1)^{n−1} n τ(G).

Equating the two expressions for the linear coefficient gives n τ(G) = ∏_{i=2}^{n} λ_i, and thus τ(G) = (1/n) ∏_{i=2}^{n} λ_i. □
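Theorems 3 and 4 are easy to check numerically on a small example. A minimal sketch (assuming NumPy; K4 is chosen because its nonzero Laplacian eigenvalues are known to be 4, 4, 4, so the product formula should give 64/4 = 16):

```python
import numpy as np

# Laplacian of K4 is 4I - J; its eigenvalues are 0, 4, 4, 4.
n = 4
L = n * np.eye(n) - np.ones((n, n))
lam = np.linalg.eigvalsh(L)               # ascending, so lam[0] = lambda_1 = 0

# Theorem 4: tau(G) = (1/n) * lambda_2 * ... * lambda_n.
tau = round(np.prod(lam[1:]) / n)

# Theorem 3 cross-check: every cofactor of L_G equals tau(G);
# here the (0, 1) cofactor C_01 = (-1)^(0+1) * det of L without row 0, col 1.
cof = round((-1) ** (0 + 1) * np.linalg.det(np.delete(np.delete(L, 0, 0), 1, 1)))

print(tau, cof)  # 16 16
```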