The Matrix-Tree Theorem

Christopher Eur

March 22, 2015

Abstract: We give a brief introduction to graph theory in light of linear algebra. Our results culminate in the proof of the Matrix-Tree Theorem.

1 Preliminaries

We give some preliminary definitions and a brief list of facts from linear algebra without proof.

Definition 1.1. For $k \in \mathbb{Z}_{\geq 0}$, denote by $[k]$ the set $\{1, 2, \ldots, k\}$ (with $[0] := \emptyset$).

Definition 1.2. Let $X$ be a set. Denote by $\binom{X}{k}$ the set of $k$-element subsets of $X$, and by $\left(\!\binom{X}{k}\!\right)$ the set of $k$-element multisets (sets with repeated elements allowed) of $X$.

Example 1.3. Let $X = \{a, b, c, d\}$. Then $\{a,c\}, \{b,c\} \in \binom{X}{2}$, $\{a,a\} \notin \binom{X}{2}$, and $\{a,b,d\}, \{a,a,b\}, \{c,c,c\} \in \left(\!\binom{X}{3}\!\right)$.

Proposition 1.4. A real symmetric $n \times n$ matrix $A$ has an orthonormal eigenbasis $(u_1, u_2, \ldots, u_n)$, $u_i \in \mathbb{R}^n$, where $u_i^t u_j = \delta_{ij}$ (Kronecker delta). Moreover, if $U = [u_1 \cdots u_n]$ is the $n \times n$ matrix whose columns are $u_1, \ldots, u_n$, then $U^{-1} = U^t$ and $U^t A U$ is diagonal.

Definition 1.5. Let $A$ be an $m \times n$ matrix with $m \leq n$, and let $S \in \binom{[n]}{m}$. Denote by $A[S]$ the $m \times m$ submatrix of $A$ formed by taking the columns of $A$ indexed by $S$. Likewise, if $B$ is an $n \times m$ matrix with $m \leq n$, denote by $B[S]$ the $m \times m$ submatrix of $B$ formed by taking the rows of $B$ indexed by $S$.

Observation 1.6. Note that $(A[S])^t = A^t[S]$.
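The selection operation of Definition 1.5 and the identity of Observation 1.6 are easy to check mechanically. Below is a minimal sketch in Python; the helper names `cols`, `rows`, and `transpose` are ours, chosen purely for illustration (indices are 1-based, matching the text):

```python
from itertools import combinations

def cols(A, S):
    """A[S]: submatrix of A keeping only the columns indexed by S (1-based)."""
    return [[row[j - 1] for j in S] for row in A]

def rows(B, S):
    """B[S]: submatrix of B keeping only the rows indexed by S (1-based)."""
    return [B[i - 1] for i in S]

def transpose(A):
    return [list(col) for col in zip(*A)]

# A 2x3 matrix, with S ranging over all 2-element subsets of [3].
A = [[1, 2, 3],
     [4, 5, 6]]
for S in combinations([1, 2, 3], 2):
    # Observation 1.6: selecting columns and then transposing equals
    # transposing and then selecting rows.
    assert transpose(cols(A, S)) == rows(transpose(A), S)
```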
Example 1.7. If
$$A = \begin{bmatrix} 1 & 2 & 7 & 1 \\ 3 & 4 & 0 & 5 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 1 \\ 2 & 5 \\ 9 & 7 \\ 0 & 1 \end{bmatrix}, \quad S = \{2, 3\} \in \binom{[4]}{2},$$
then
$$A[S] = \begin{bmatrix} 2 & 7 \\ 4 & 0 \end{bmatrix}, \quad B[S] = \begin{bmatrix} 2 & 5 \\ 9 & 7 \end{bmatrix}.$$

Theorem 1.8. (Cauchy-Binet Formula) Let $A$, $B$ be $m \times n$ and $n \times m$ matrices, respectively, with $m \leq n$. Then
$$\det(AB) = \sum_{S \in \binom{[n]}{m}} \det(A[S]) \det(B[S]).$$

Proposition 1.9. Let $A$ be an $n \times n$ matrix such that the sum of the entries in each row and in each column is zero, and let $A_0$ be the matrix obtained by removing the last row and column of $A$. Then the coefficient of $x$ in $\det(A - xI)$ is equal to $-n \det(A_0)$.

2 Walks on Graphs

We present elementary graph theory concerning walks on graphs.

Definition 2.1. A (finite) graph $G = (V(G), E(G))$ consists of a vertex set $V(G) = \{v_1, \ldots, v_p\}$ and an edge set $E(G) = \{e_1, \ldots, e_q\}$, together with a function $\varphi : E \to \left(\!\binom{V}{2}\!\right)$. For $e \in E(G)$ and $v, v' \in V(G)$: if $\varphi(e) = \{v, v'\}$, then $e$ connects, or is incident to, $v$ and $v'$; if some $e$ connects $v$ and $v'$, then $v$ and $v'$ are adjacent; if $\varphi(e) = \{v, v\}$, then $e$ is called a loop.

Example 2.2. [Figure: a graph on vertices $v_1, v_2, v_3$, with two parallel edges $e_1, e_2$ joining $v_2$ and $v_3$, and an edge $e_3$ joining $v_1$ and $v_2$.] This is the graph $G = (V = \{v_1, v_2, v_3\}, E = \{e_1, e_2, e_3\})$ where $\varphi(e_1) = \varphi(e_2) = \{v_2, v_3\}$ and $\varphi(e_3) = \{v_1, v_2\}$.
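Before proceeding, the Cauchy-Binet formula (Theorem 1.8) can be verified numerically on the matrices of Example 1.7. A sketch in Python (the helpers `cols`, `rows`, `matmul`, and `det2` are hypothetical names introduced here for illustration):

```python
from itertools import combinations

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cols(A, S):   # A[S] as in Definition 1.5 (S is 1-based)
    return [[row[j - 1] for j in S] for row in A]

def rows(B, S):   # B[S] as in Definition 1.5
    return [B[i - 1] for i in S]

# The matrices of Example 1.7.
A = [[1, 2, 7, 1],
     [3, 4, 0, 5]]
B = [[3, 1],
     [2, 5],
     [9, 7],
     [0, 1]]

lhs = det2(matmul(A, B))                       # det(AB)
rhs = sum(det2(cols(A, S)) * det2(rows(B, S))  # sum over S in C([4], 2)
          for S in combinations([1, 2, 3, 4], 2))
assert lhs == rhs == 923
```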
NOTE: For the remainder of the article, our graph $G$ has $p$ vertices and $q$ edges unless stated otherwise.

Definition 2.3. The adjacency matrix of a graph $G$, denoted $A(G)$ (or just $A$ when $G$ is clear), is the $p \times p$ matrix with $A_{ij} = $ (number of edges incident to $v_i$ and $v_j$).

Example 2.4. For the graph of Example 2.2, we have
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 2 \\ 0 & 2 & 0 \end{bmatrix}.$$

Observation 2.5. Let $G$ be a graph. It is easy to see that $A(G)$ is (real) symmetric, and that its trace is the number of loops in $G$.

Definition 2.6. A walk of length $l$ on a graph $G$ from vertex $u$ to vertex $v$ is a sequence $v_1, e_1, v_2, e_2, \ldots, v_l, e_l, v_{l+1}$ such that the $v_i$ are in $V(G)$ with $v_1 = u$ and $v_{l+1} = v$, and the $e_i$ are in $E(G)$ with $e_i$ connecting $v_i$ and $v_{i+1}$.

Now we have our first elementary result:

Theorem 2.7. Let $G$ be a graph and $A = A(G)$. Then for all $l \in \mathbb{N}$, $(A^l)_{ij} = $ (number of walks of length $l$ from $v_i$ to $v_j$).

Proof) We induct on $l$. The case $l = 1$ is trivial. For $l > 1$, we have $(A^l)_{ij} = \sum_{k=1}^p A_{ik}(A^{l-1})_{kj}$; now $A_{ik}$ is the number of walks of length 1 from $v_i$ to $v_k$, and by induction $(A^{l-1})_{kj}$ is the number of walks of length $l-1$ from $v_k$ to $v_j$, so summing over $k$ gives the desired result.

Since $A(G)$ is real symmetric, let $\lambda_1, \ldots, \lambda_p$ be its eigenvalues, let $u_1, \ldots, u_p$ be a corresponding orthonormal eigenbasis, and let $U$ be the orthogonal matrix with the $u_i$ as columns. Then we have the following refinement of the previous theorem in more algebraic terms:

Corollary 2.8. Let $u_{ij}$ be the $(i,j)$th entry of $U$. Then
$$(A^l)_{ij} = \sum_{k=1}^p u_{ik} u_{jk} \lambda_k^l,$$
and, defining a closed walk to be a walk that starts and ends at the same vertex,
$$(\text{number of closed walks of length } l) = \operatorname{Tr}(A^l) = \lambda_1^l + \cdots + \lambda_p^l.$$
Proof) Let $D = \operatorname{Diag}(\lambda_1, \ldots, \lambda_p)$. Both statements follow immediately from $U D^l U^t = A^l$ (noting that $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$).

We give an application of this corollary to the complete graph as an example:

Definition 2.9. The complete graph $K_p$ is the graph $G$ with $p$ vertices such that there exists exactly one edge connecting $v_i$ and $v_j$ for all $1 \leq i < j \leq p$.

Example 2.10. [Figure: the complete graphs $K_3$, $K_4$, and $K_5$.]

Proposition 2.11. Let $K_p$ be the complete graph and $A = A(K_p)$. Then
$$(A^l)_{ii} = \frac{1}{p}\left((p-1)^l + (p-1)(-1)^l\right).$$

Proof) By symmetry, the $(A^l)_{ii}$ are equal for all $i = 1, \ldots, p$, so it suffices to show $\operatorname{Tr}(A^l) = (p-1)^l + (p-1)(-1)^l$. Let $J$ be the $p \times p$ matrix with all entries equal to 1. It is easy to check that $J$ has eigenvalues 0 (with multiplicity $p-1$) and $p$ (with multiplicity 1), so $J - I$ has eigenvalues $-1$ (with multiplicity $p-1$) and $p-1$ (with multiplicity 1). Since $A(K_p)$ is exactly $J - I$, applying the previous corollary gives the desired result.

3 The Matrix-Tree Theorem

For this section, we assume that $G$ has no loops, because loops are completely irrelevant to our discussion. We also assume $G$ is connected, that is, there exists a walk between any two vertices of $G$.
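The connectivity assumption can itself be tested with the machinery of Section 2: by Theorem 2.7, $G$ is connected exactly when every pair of vertices is joined by a walk of length at most $p-1$, i.e. when every entry of $(I+A)^{p-1}$ is positive. A sketch in Python (the function `is_connected` is our own illustration, not part of the source text):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_connected(A):
    """Connectivity test via Theorem 2.7: G is connected iff every pair of
    vertices is joined by a walk of length < p, i.e. iff every entry of
    (I + A)^(p-1) is positive."""
    p = len(A)
    M = [[A[i][j] + (1 if i == j else 0) for j in range(p)] for i in range(p)]
    P = [[1 if i == j else 0 for j in range(p)] for i in range(p)]  # identity
    for _ in range(p - 1):
        P = matmul(P, M)
    return all(entry > 0 for row in P for entry in row)

# The graph of Examples 2.2 and 2.4 (connected) ...
A_conn = [[0, 1, 0],
          [1, 0, 2],
          [0, 2, 0]]
# ... and the same graph with edge e3 removed, isolating v1.
A_disc = [[0, 0, 0],
          [0, 0, 2],
          [0, 2, 0]]
print(is_connected(A_conn), is_connected(A_disc))  # True False
```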
Definition 3.1. A cycle is a closed walk with no repeated vertices or edges except for the first and last vertex. A tree is a (connected) graph that has no cycles.

Observation 3.2. It is useful to observe that the following are equivalent: (i) $G$ is a tree; (ii) $G$ is connected and has $p-1$ edges; (iii) $G$ has no cycles and has $p-1$ edges.

Definition 3.3. Let $G$ be a graph. A subgraph $G'$ of $G$ is a graph with $V(G') \subseteq V(G)$ and $E(G') \subseteq E(G)$. A subgraph $G'$ is a spanning subgraph if $V(G') = V(G)$, and a spanning tree if it is a spanning subgraph that is a tree.

Definition 3.4. Let $G$ be a graph. The complexity of $G$, denoted $\kappa(G)$, is the number of spanning trees of $G$.

Example 3.5. Going back to $K_3$, $K_4$, and $K_5$: it is not hard to count combinatorially and confirm that $\kappa(K_3) = 3$, $\kappa(K_4) = 16$, and $\kappa(K_5) = 125$. For example, for $K_5$, the three types of trees with 5 vertices are: [Figure: the three shapes of trees on 5 vertices — a path, a tree with one vertex of degree 3, and a star.]
There are $5!/2$ spanning trees of $K_5$ of the first type, $\binom{5}{2}(3)(2)$ of the second type, and $5$ of the last type. Summing up, we have $60 + 60 + 5 = 125$. From this we may guess that $\kappa(K_p) = p^{p-2}$. The remainder of our discussion in this section will culminate in proving this result. Recall our notation that a generic graph $G$ has vertices $\{v_1, \ldots, v_p\}$ and edges $\{e_1, \ldots, e_q\}$.

Definition 3.6. Let $G$ be a graph. We can give $G$ an orientation: for each edge $e$ with $\varphi(e) = \{u, v\}$, choose one of the two ordered pairs $(u, v)$, $(v, u)$. Say we choose $(u, v)$; then we call $u$ the initial vertex and $v$ the final vertex of $e$, and say $e$ is directed from $u$ to $v$. (Intuitively, this puts an arrow on $e$ pointing from $u$ to $v$.)

We now define two more algebraic tools (matrices) that will aid us:

Definition 3.7. Let $G$ be a graph with an orientation. The incidence matrix $M(G) = M$ is the $p \times q$ matrix defined by
$$M_{ij} = \begin{cases} 1 & \text{if the edge } e_j \text{ has initial vertex } v_i \\ -1 & \text{if the edge } e_j \text{ has final vertex } v_i \\ 0 & \text{otherwise.} \end{cases}$$

Definition 3.8. Let $G$ be a graph with an orientation and $A = A(G)$. The Laplacian matrix $L(G) = L$ is the $p \times p$ matrix defined by
$$L_{ij} = \begin{cases} -A_{ij} & \text{if } i \neq j \\ \deg(v_i) & \text{if } i = j. \end{cases}$$

Example 3.9. [Figure: an oriented graph on vertices $v_1, v_2, v_3, v_4$, with edges $e_1, e_2$ directed from $v_2$ to $v_3$, $e_3$ from $v_1$ to $v_2$, $e_4$ from $v_1$ to $v_4$, and $e_5$ from $v_2$ to $v_4$.]
This graph has
$$M = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & -1 & 0 & 1 \\ -1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & -1 \end{bmatrix}, \quad L = \begin{bmatrix} 2 & -1 & 0 & -1 \\ -1 & 4 & -2 & -1 \\ 0 & -2 & 2 & 0 \\ -1 & -1 & 0 & 2 \end{bmatrix}.$$

Observation 3.10. In $M$, the entries of each column sum to zero, so the rows of $M$ sum to the zero vector; hence $\operatorname{rank}(M) < p$. In $L$, the entries in each row and each column sum to zero, and $L$ is symmetric. Lastly, if $G$ is regular of degree $d$, that is, $\deg(v_i) = d$ for all $v_i \in V(G)$, then $L = dI - A(G)$.

We can now state our main theorem:

Theorem 3.11. (Matrix-Tree Theorem) Let $G$ be a (finite connected) graph (without loops), and let $L = L(G)$. Denote by $L_0$ the matrix obtained by removing the last row and column of $L$. Then
$$\det(L_0) = \kappa(G).$$

Our proof requires two lemmas:

Lemma 3.12. Let $G$ be a graph, give $G$ any orientation, and let $L = L(G)$, $M = M(G)$. Then $L = MM^t$.

Proof) We have
$$(MM^t)_{ij} = \sum_{k=1}^q M_{ik} M_{jk}.$$
If $i \neq j$, then for each $k$ such that $e_k$ connects $v_i$ and $v_j$, exactly one of $M_{ik}$, $M_{jk}$ is 1 and the other is $-1$, so the sum is $-A_{ij}$. If $i = j$, then $M_{ik} M_{jk}$ is 0 if $e_k$ is not incident to $v_i$, and 1 (being either $1 \cdot 1$ or $(-1)(-1)$) if it is, so the sum is $\deg(v_i)$.

Definition 3.13. Let $G$ be a graph, $M = M(G)$, and let $M_0$ be the $(p-1) \times q$ matrix obtained by removing the last row of $M$. For $S \in \binom{[q]}{p-1}$, define $S(E)$ to be the subset of edges indexed by $S$ (i.e. $S(E) = \{e_i\}_{i \in S} \subseteq \{e_1, \ldots, e_q\}$), and define the $(p-1) \times (p-1)$ matrix $M_0[S]$ as in Definition 1.5. Moreover, by the graph formed by $S(E)$ (which we also denote $S(E)$), we mean the graph whose edges are those in $S(E)$ and whose vertices are those incident to an edge in $S(E)$.

Lemma 3.14. If $S(E)$ forms a spanning tree, then $\det M_0[S] = \pm 1$; if it does not, then $\det M_0[S] = 0$.
Proof) If $S(E)$ does not form a spanning tree, then since it has $p-1$ edges there must be a subset $R$ of $S$ such that $R(E)$ forms a cycle; let this cycle be $f_1, \ldots, f_j$ (a walk with the vertices omitted). Then, multiplying columns by $-1$ if necessary, the columns of $M_0[S]$ corresponding to the $f_i$ sum to zero, hence $\det M_0[S] = 0$.

Now suppose $S(E)$ forms a spanning tree. Let $e$ be an edge of $S(E)$ incident to $v_p$; then the column of $M_0[S]$ corresponding to $e$ contains exactly one nonzero entry, which is $\pm 1$. Let $M_0'$ be the $(p-2) \times (p-2)$ matrix obtained from $M_0[S]$ by removing the column $e$ and the row containing the nonzero entry of column $e$; by cofactor expansion, $\det M_0[S] = \pm \det M_0'$. Now consider the graph $G'$ (in fact a tree) obtained by contracting the edge $e$ to a single vertex (so $v_p$ is now identified with the vertex that $e$ connected $v_p$ to). Its incidence matrix $M(G')$ is the $(p-1) \times (p-2)$ matrix formed by removing the column $e$, and $M_0'$ is the matrix formed by removing the bottom row of $M(G')$. Thus by induction $\det M_0' = \pm 1$, and this completes our proof.

Proof of the Matrix-Tree Theorem) Since $L = MM^t$, we have $L_0 = M_0 M_0^t$. By the Cauchy-Binet formula,
$$\det L_0 = \sum_{S \in \binom{[q]}{p-1}} \det(M_0[S]) \det(M_0^t[S]) = \sum_{S \in \binom{[q]}{p-1}} \left(\det M_0[S]\right)^2,$$
using $(A[S])^t = A^t[S]$ (Observation 1.6). By Lemma 3.14, $(\det M_0[S])^2$ is 1 if $S(E)$ forms a spanning tree and 0 otherwise; since a tree with $p$ vertices has $p-1$ edges, summing over $S \in \binom{[q]}{p-1}$, the right-hand side exactly counts the number of spanning trees of $G$.

Corollary 3.15. Let $G$ be a (finite connected) graph (without loops) with $p$ vertices, and let $\lambda_1, \ldots, \lambda_p$ be the eigenvalues of $L(G)$ with $\lambda_p = 0$. Then
$$\kappa(G) = \frac{1}{p} \lambda_1 \lambda_2 \cdots \lambda_{p-1}.$$

Proof) First, note that taking $\lambda_p = 0$ is possible since each row (and column) of $L(G)$ sums to zero (Observation 3.10). Then
$$\det(L - xI) = (\lambda_1 - x)(\lambda_2 - x) \cdots (\lambda_{p-1} - x)(-x),$$
so $\lambda_1 \cdots \lambda_{p-1} = -(\text{coefficient of } x)$, which by Proposition 1.9 equals $p \det(L_0)$. The corollary thus follows from the Matrix-Tree Theorem.
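The Matrix-Tree Theorem is easy to test computationally. The sketch below (in Python; all helper names are ours, introduced for illustration) builds the Laplacian of the graph of Example 3.9, compares $\det(L_0)$ with a brute-force count of spanning trees, and checks the guess $\kappa(K_p) = p^{p-2}$ for small $p$:

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    """Determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        prod *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return int(sign * prod)

def laplacian(p, edges):
    """Laplacian of a multigraph on vertices 0..p-1 (orientation-free)."""
    L = [[0] * p for _ in range(p)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

def kappa_det(p, edges):
    """kappa(G) via the Matrix-Tree Theorem: delete last row and column."""
    L = laplacian(p, edges)
    return det([row[:-1] for row in L[:-1]])

def spans(p, sub):
    """With exactly p-1 edges, connectivity suffices (Observation 3.2)."""
    comp = list(range(p))
    def find(x):
        while comp[x] != x:
            x = comp[x]
        return x
    for u, v in sub:
        comp[find(u)] = find(v)
    return len({find(x) for x in range(p)}) == 1

def kappa_brute(p, edges):
    return sum(spans(p, sub) for sub in combinations(edges, p - 1))

# The graph of Example 3.9 (vertices 0..3 stand for v1..v4).
edges = [(1, 2), (1, 2), (0, 1), (0, 3), (1, 3)]
print(kappa_det(4, edges), kappa_brute(4, edges))  # 6 6

# kappa(K_p) = p^(p-2) for small p.
for p in range(2, 6):
    Kp = list(combinations(range(p), 2))
    assert kappa_det(p, Kp) == p ** (p - 2)
```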
Corollary 3.16. Let $G$ be a regular graph of degree $d$, and let the eigenvalues of $A(G)$ be $\lambda_1, \ldots, \lambda_p$ with $\lambda_p = d$. Then
$$\kappa(G) = \frac{1}{p}(d - \lambda_1)(d - \lambda_2) \cdots (d - \lambda_{p-1}).$$

Proof) Follows immediately from $L = dI - A(G)$ and the previous corollary.

Now, as promised, we have:

Theorem 3.17. $\kappa(K_p) = p^{p-2}$.

Proof) $K_p$ is regular of degree $p-1$, and $A(K_p)$ has eigenvalues $-1$ with multiplicity $p-1$ and $p-1$ with multiplicity 1. So from Corollary 3.16 we have
$$\kappa(K_p) = \frac{1}{p}\left((p-1) - (-1)\right)^{p-1} = \frac{1}{p}\, p^{p-1} = p^{p-2},$$
as desired.

4 References

Stanley, Richard P. Topics in Algebraic Combinatorics. Version of 1 Feb. 2013. http://www-math.mit.edu/~rstan/algcomb/algcomb.pdf