Spectral Graph Theory                                Lecture 7
Fiedler's Theorems on Nodal Domains
Daniel A. Spielman                                   September 19, 2018

7.1 Overview

In today's lecture we will justify some of the behavior we observed when using eigenvectors to draw graphs in the first lecture. First, recall some of the drawings we made of graphs:

[Drawings of graphs using eigenvectors, from the first lecture.]

We will show that the subgraphs obtained in the right and left halves of each image are connected.

Path graphs exhibited more interesting behavior: their kth eigenvector changes sign k - 1 times:

[Plots of eigenvectors v2, v3, v4 of a path graph, and of its eigenvector v10.]

Here are the analogous plots for a path graph with edge weights randomly chosen in [0, 1]:

[Plots of eigenvectors v2, v3, v4 of the randomly weighted path, and of its eigenvector v11.]

Here are the first few eigenvectors of another:

[Plot of eigenvectors v2, v3, v4 of another randomly weighted path.]

# uses the packages loaded earlier in the course (SparseArrays, Plots) and the course's lap function
Random.seed!(1)
M = spdiagm(1=>rand(10))      # super-diagonal: random weights on the edges of a path
M = M + M'                    # symmetrize to get the weighted adjacency matrix
L = lap(M)                    # its Laplacian
E = eigen(Matrix(L))
Plots.plot(E.vectors[:,2],label="v2",marker = 5)
Plots.plot!(E.vectors[:,3],label="v3",marker = 5)
Plots.plot!(E.vectors[:,4],label="v4",marker = 5)
xlabel!("")
ylabel!("")
savefig("rpath2v24.pdf")

We see that the kth eigenvector still changes sign k - 1 times. We will see that this always happens. These are some of Fiedler's theorems about nodal domains. Nodal domains are the connected parts of a graph on which an eigenvector is negative or positive.
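To make the sign-change claim concrete, here is a quick check (a sketch, not part of the original notes; it builds the weighted path Laplacian directly, and the helper signchanges is mine, so only the standard Random and LinearAlgebra libraries are needed). It should report that v_k changes sign k - 1 times.

using LinearAlgebra, Random

Random.seed!(1)
n = 11
w = rand(n - 1)                       # random positive weights on the path edges
L = zeros(n, n)
for a in 1:n-1                        # add the Laplacian of each edge (a, a+1)
    L[a, a]   += w[a];  L[a+1, a+1] += w[a]
    L[a, a+1] -= w[a];  L[a+1, a]   -= w[a]
end
E = eigen(Symmetric(L))               # eigenvalues in increasing order

signchanges(v) = count(a -> v[a] * v[a+1] < 0, 1:length(v)-1)
for k in 2:n
    println("v$k changes sign ", signchanges(E.vectors[:, k]), " times")
end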

7.2 Sylvester's Law of Inertia

Let's begin with something obvious.

Claim 7.2.1. If A is positive semidefinite, then so is B^T A B for every matrix B.

Proof. For any x,

    x^T B^T A B x = (Bx)^T A (Bx) ≥ 0,

since A is positive semidefinite.

In this lecture, we will make use of Sylvester's law of inertia, which is a powerful generalization of this fact. I will state and prove it now.

Theorem 7.2.2 (Sylvester's Law of Inertia). Let A be any symmetric matrix and let B be any non-singular matrix. Then, the matrix B A B^T has the same number of positive, negative and zero eigenvalues as A.

Note that if the matrix B were orthonormal, or if we used B^{-1} in place of B^T, then these matrices would have the same eigenvalues. What we are doing here is different, and corresponds to a change of variables.

Proof. It is clear that A and B A B^T have the same rank, and thus the same number of zero eigenvalues.

We will prove that A has at least as many positive eigenvalues as B A B^T. One can similarly prove that A has at least as many negative eigenvalues, which proves the theorem.

Let γ_1 ≥ ⋯ ≥ γ_k > 0 be the positive eigenvalues of B A B^T and let Y_k be the span of the corresponding eigenvectors. Now, let S_k be the span of the vectors B^T y, for y ∈ Y_k. As B is non-singular, S_k has dimension k. Let α_1 ≥ α_2 ≥ ⋯ ≥ α_n be the eigenvalues of A. By the Courant-Fischer Theorem, we have

    α_k =  max_{S ⊆ ℝ^n, dim(S) = k}  min_{x ∈ S}  (x^T A x)/(x^T x)
        ≥  min_{x ∈ S_k}  (x^T A x)/(x^T x)
        =  min_{y ∈ Y_k}  (y^T B A B^T y)/(y^T B B^T y)
        ≥  min_{y ∈ Y_k}  (γ_k y^T y)/(y^T B B^T y)  >  0.

So, A has at least k positive eigenvalues. (The point here is that the denominators are always positive, so we only need to think about the numerators.) To finish, either apply the symmetric argument to the negative eigenvalues, or apply the same argument with B^{-1}.
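Sylvester's law is easy to sanity-check numerically. The sketch below is not from the notes; the helper inertia and the choice of random matrices are mine. It compares the inertia of a random symmetric A with that of B A B^T for a random, generically non-singular B.

using LinearAlgebra, Random

Random.seed!(2)
n = 6
A = Symmetric(randn(n, n))            # a random symmetric matrix
B = randn(n, n)                       # generically non-singular

# (number of positive, negative, zero eigenvalues), up to a small tolerance
function inertia(M; tol = 1e-8)
    ev = eigvals(Symmetric(Matrix(M)))
    (count(>(tol), ev), count(<(-tol), ev), count(x -> abs(x) <= tol, ev))
end

@show inertia(A)
@show inertia(B * A * B')             # same inertia, by Theorem 7.2.2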

7.3 Weighted Trees

We will now examine a theorem of Fiedler [Fie75].

Theorem 7.3.1. Let T be a weighted tree graph on n vertices, let L_T have eigenvalues 0 = λ_1 < λ_2 ≤ ⋯ ≤ λ_n, and let ψ_k be an eigenvector of λ_k. If there is no vertex u for which ψ_k(u) = 0, then there are exactly k - 1 edges (u, v) for which ψ_k(u)ψ_k(v) < 0.

One can extend this theorem to accommodate zero entries and prove that the eigenvector changes sign k - 1 times. We will just prove this theorem for weighted path graphs. Our analysis will rest on an understanding of Laplacians of paths that are allowed to have negative edge weights.

Lemma 7.3.2. Let M be the Laplacian matrix of a weighted path that can have negative edge weights:

    M = ∑_{1 ≤ a < n} w_{a,a+1} L_{a,a+1},

where the weights w_{a,a+1} are non-zero and we recall that L_{a,b} is the Laplacian of the edge (a, b). The number of negative eigenvalues of M equals the number of negative edge weights.

Proof. Note that

    x^T M x = ∑_{(u,v) ∈ E} w_{u,v} (x(u) - x(v))^2.

We now perform a change of variables that will diagonalize the matrix M. Let δ(1) = x(1), and for every a > 1 let δ(a) = x(a) - x(a-1). Every variable x(1), ..., x(n) can be expressed as a linear combination of the variables δ(1), ..., δ(n). In particular, x(a) = δ(1) + δ(2) + ⋯ + δ(a). So, there is a square matrix L of full rank such that x = Lδ. By Sylvester's law of inertia, we know that L^T M L has the same number of positive, negative, and zero eigenvalues as M. On the other hand,

    δ^T L^T M L δ = ∑_{1 ≤ a < n} w_{a,a+1} (δ(a+1))^2.

So, this matrix clearly has one zero eigenvalue, and as many negative eigenvalues as there are negative w_{a,a+1}.
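Here is a small numerical illustration of Lemma 7.3.2 (a sketch with my own choice of weights, not from the notes; the helper path_laplacian builds M = ∑ w_{a,a+1} L_{a,a+1} directly).

using LinearAlgebra

# Laplacian of a weighted path; negative weights are allowed.
function path_laplacian(w)
    n = length(w) + 1
    M = zeros(n, n)
    for a in 1:n-1                    # M is the sum of w[a] * L_{a,a+1}
        M[a, a]   += w[a];  M[a+1, a+1] += w[a]
        M[a, a+1] -= w[a];  M[a+1, a]   -= w[a]
    end
    M
end

w = [1.0, -2.0, 0.5, -0.3, 2.0]       # two negative edge weights
M = path_laplacian(w)
ev = eigvals(Symmetric(M))
@show count(<(-1e-10), ev)            # 2 negative eigenvalues, matching Lemma 7.3.2
@show count(x -> abs(x) <= 1e-10, ev) # one zero eigenvalue (the all-ones vector)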

Proof of Theorem 7.3.1. We assume that λ_k has multiplicity 1. One can prove the theorem without this assumption, but we will skip it. Let Ψ_k denote the diagonal matrix with ψ_k on the diagonal, and let λ_k be the corresponding eigenvalue. Consider the matrix

    M = Ψ_k (L_P - λ_k I) Ψ_k.

The matrix L_P - λ_k I has one zero eigenvalue and k - 1 negative eigenvalues. As we have assumed that ψ_k has no zero entries, Ψ_k is non-singular, and so we may apply Sylvester's Law of Inertia to show that the same is true of M.

I claim that

    M = ∑_{(u,v) ∈ E} w_{u,v} ψ_k(u) ψ_k(v) L_{u,v}.

To see this, first check that this agrees with the previous definition on the off-diagonal entries. To verify that these expressions agree on the diagonal entries, we will show that the sums of the entries in each row of both expressions agree. As we know that all the off-diagonal entries agree, this implies that the diagonal entries agree. Writing 𝟙 for the all-ones vector, we compute

    Ψ_k (L_P - λ_k I) Ψ_k 𝟙 = Ψ_k (L_P - λ_k I) ψ_k = Ψ_k (λ_k ψ_k - λ_k ψ_k) = 0.

As L_{u,v} 𝟙 = 0, the row sums agree.

Lemma 7.3.2 now tells us that the matrix M, and thus L_P - λ_k I, has as many negative eigenvalues as there are edges (u, v) for which ψ_k(u)ψ_k(v) < 0.

7.4 More linear algebra

There are a few more facts from linear algebra that we will need for the rest of this lecture. We stop to prove them now.

7.4.1 The Perron-Frobenius Theorem for Laplacians

In Lecture 3, we proved the Perron-Frobenius Theorem for non-negative matrices. I wish to quickly observe that this theory may also be applied to Laplacian matrices, to principal sub-matrices of Laplacian matrices, and to any matrix with non-positive off-diagonal entries. The difference is that it then involves the eigenvector of the smallest eigenvalue, rather than the largest eigenvalue.

Corollary 7.4.1. Let M be a matrix with non-positive off-diagonal entries, such that the graph of the non-zero off-diagonal entries is connected. Let λ_1 be the smallest eigenvalue of M and let v_1 be the corresponding eigenvector. Then v_1 may be taken to be strictly positive, and λ_1 has multiplicity 1.

Proof. Consider the matrix A = σI - M, for some large σ. For σ sufficiently large, this matrix will be non-negative, and the graph of its non-zero entries is connected. So, we may apply the Perron-Frobenius theory to A to conclude that its largest eigenvalue α_1 has multiplicity 1, and the corresponding eigenvector v_1 may be assumed to be strictly positive. We then have λ_1 = σ - α_1, and v_1 is an eigenvector of λ_1.

7.4.2 Eigenvalue Interlacing

We will often use the following elementary consequence of the Courant-Fischer Theorem. I will assign it as homework.

Theorem 7.4.2 (Eigenvalue Interlacing). Let A be an n-by-n symmetric matrix and let B be a principal submatrix of A of dimension n - 1 (that is, B is obtained by deleting the same row and column from A). Then,

    α_1 ≤ β_1 ≤ α_2 ≤ β_2 ≤ ⋯ ≤ α_{n-1} ≤ β_{n-1} ≤ α_n,

where α_1 ≤ α_2 ≤ ⋯ ≤ α_n and β_1 ≤ β_2 ≤ ⋯ ≤ β_{n-1} are the eigenvalues of A and B, respectively.
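Both of these statements are easy to check numerically. In the sketch below (mine, not from the notes) we delete the last row and column from the Laplacian of an unweighted path: the resulting principal submatrix has non-positive off-diagonal entries whose nonzero pattern is connected, so Corollary 7.4.1 applies to it, and Theorem 7.4.2 says its eigenvalues interlace those of the full Laplacian.

using LinearAlgebra

n = 6
L = Matrix(SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1)))
L[1, 1] = 1.0;  L[n, n] = 1.0          # Laplacian of the unweighted path on n vertices

B = L[1:n-1, 1:n-1]                    # delete the last row and column
alpha = eigvals(Symmetric(L))          # α_1 ≤ ... ≤ α_n
beta  = eigvals(Symmetric(B))          # β_1 ≤ ... ≤ β_{n-1}

# Theorem 7.4.2: α_i ≤ β_i ≤ α_{i+1} for every i
@show all(alpha[i] <= beta[i] <= alpha[i+1] for i in 1:n-1)

# Corollary 7.4.1: the bottom eigenvector of B is strictly positive (up to sign)
v1 = eigen(Symmetric(B)).vectors[:, 1]
v1 = sign(v1[1]) * v1
@show all(>(0), v1)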

7.5 Fiedler's Nodal Domain Theorem

Given a graph G = (V, E) and a subset of vertices W ⊆ V, recall that the graph induced by G on W is the graph with vertex set W and edge set

    {(i, j) ∈ E : i ∈ W and j ∈ W}.

This graph is sometimes denoted G(W).

Theorem 7.5.1 ([Fie75]). Let G = (V, E, w) be a weighted connected graph, and let L_G be its Laplacian matrix. Let 0 = λ_1 < λ_2 ≤ ⋯ ≤ λ_n be the eigenvalues of L_G and let ψ_1, ..., ψ_n be the corresponding eigenvectors. For any k ≥ 2, let

    W_k = {i ∈ V : ψ_k(i) ≥ 0}.

Then, the graph induced by G on W_k has at most k - 1 connected components.

Proof. To see that W_k is non-empty, recall that ψ_1 = 𝟙 and that ψ_k is orthogonal to ψ_1. So, ψ_k must have both positive and negative entries.

Assume that G(W_k) has t connected components. After re-ordering the vertices so that the vertices in one connected component of G(W_k) appear first, and so on, we may assume that L_G and ψ_k have the forms

    L_G = [ B_1     0      ⋯    0      C_1
            0       B_2    ⋯    0      C_2
            ⋮               ⋱           ⋮
            0       0      ⋯    B_t    C_t
            C_1^T   C_2^T  ⋯    C_t^T  D   ]

and

    ψ_k = [ x_1
            x_2
            ⋮
            x_t
            y   ],

so that the eigenvalue equation L_G ψ_k = λ_k ψ_k holds with this block structure.

The first t sets of rows and columns correspond to the t connected components. So, x_i ≥ 0 for 1 ≤ i ≤ t and y < 0 (when I write this for a vector, I mean it holds for each entry).

We also know that the graph of non-zero entries in each B_i is connected, and that each C_i is non-positive and has at least one non-zero entry (otherwise the graph G would be disconnected).

We will now prove that the smallest eigenvalue of B_i is smaller than λ_k. We know that

    B_i x_i + C_i y = λ_k x_i.

As each entry in C_i is non-positive and y is strictly negative, each entry of C_i y is non-negative and some entry of C_i y is positive. Thus, x_i cannot be all zeros, and

    B_i x_i = λ_k x_i - C_i y ≤ λ_k x_i,   so   x_i^T B_i x_i ≤ λ_k x_i^T x_i.

If x_i has any zero entries, then the Perron-Frobenius theorem tells us that x_i cannot be an eigenvector of the smallest eigenvalue, and so the smallest eigenvalue of B_i is less than λ_k. On the other hand, if x_i is strictly positive, then x_i^T C_i y > 0, and

    x_i^T B_i x_i = λ_k x_i^T x_i - x_i^T C_i y < λ_k x_i^T x_i.

Thus, the matrix

    [ B_1   0     ⋯   0
      0     B_2   ⋯   0
      ⋮           ⋱   ⋮
      0     0     ⋯   B_t ]

has at least t eigenvalues less than λ_k. By the eigenvalue interlacing theorem, this implies that L_G has at least t eigenvalues less than λ_k. We may conclude that t, the number of connected components of G(W_k), is at most k - 1.

We remark that Fiedler actually proved a somewhat stronger theorem. He showed that the same holds for

    W = {i : ψ_k(i) ≥ t},

for every t ≤ 0. This theorem breaks down if we instead consider the set

    W = {i : ψ_k(i) > 0}.

The star graphs provide counter-examples (see Figure 7.1).
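To see the counter-example concretely, here is a short sketch (mine, not from the notes). For the star on 5 vertices, λ_2 = 1, and one choice of eigenvector is ψ = (0, 1, 1, -3, 1) with the center listed first; the non-strict set {i : ψ(i) ≥ 0} induces a connected subgraph, while the strict set {i : ψ(i) > 0} induces three isolated vertices, which is more than the k - 1 = 1 components allowed by Theorem 7.5.1.

using LinearAlgebra

n = 5
A = zeros(n, n)
for leaf in 2:n                        # vertex 1 is the center of the star
    A[1, leaf] = 1.0;  A[leaf, 1] = 1.0
end
L = Diagonal(vec(sum(A, dims = 2))) - A

psi = [0.0, 1.0, 1.0, -3.0, 1.0]       # an eigenvector of λ_2 = 1 (center entry is 0)
@show norm(L * psi - psi)              # ≈ 0, so L ψ = 1 ⋅ ψ

Wpos    = findall(>(0), psi)           # {2, 3, 5}: strictly positive entries
Wnonneg = findall(>=(0), psi)          # {1, 2, 3, 5}: includes the center

# Edges of G surviving inside each set: none survive in Wpos (three isolated
# vertices), while Wnonneg induces a connected star on 4 vertices.
@show count(!iszero, A[Wpos, Wpos]) ÷ 2
@show count(!iszero, A[Wnonneg, Wnonneg]) ÷ 2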

Figure 7.1: The star graph on 5 vertices, with an eigenvector of λ_2 = 1.

References

[Fie75] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its applications to graph theory. Czechoslovak Mathematical Journal, 25(100):618–633, 1975.