Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics

1 Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics
Jonathan Gootenberg. March 11.

2 Outline of Topics
1 Motivation: Basics of Spectral Graph Theory; Understanding the characteristic polynomial
2 Matrix Tree Theorem: Preliminary concepts; Proof of the Matrix Tree Theorem
3 Centrality Metrics

3 Why use spectral graph theory?
The eigenvalues of a graph's adjacency matrix (and related matrices) can reveal important information about:
- The community structure
- The number of spanning trees
- The most central nodes

4 Easily calculate all spanning trees

5 Find most central nodes in a network

6 Some definitions: Baby's first matrix
Definition (Adjacency Matrix A). Given an undirected graph G = (V, E) on n vertices, the adjacency matrix of G is the n × n matrix A = A(G) with entries
a_ij = 1 if {v_i, v_j} ∈ E, and a_ij = 0 otherwise.

7 Some definitions: Baby's first matrix
[Example: a small graph G and its adjacency matrix A(G)]
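
As a concrete stand-in for the example on this slide (not taken from the slides; the 4-cycle and the vertex labels 0..3 are made up for illustration), here is a minimal Python sketch that builds an adjacency matrix from an edge list:

```python
# Minimal sketch: build the adjacency matrix of a small undirected graph.
import numpy as np

def adjacency_matrix(n, edges):
    """Return the n x n adjacency matrix of an undirected graph."""
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = 1
        A[j, i] = 1  # undirected graph: A is symmetric
    return A

# Example: a 4-cycle on vertices 0, 1, 2, 3
A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(A)
```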

8 Putting the spec in Spectral
Definition (Spectrum of a graph). The spectrum of graph G is the set of eigenvalues of A(G), along with their multiplicities. If the distinct eigenvalues of A(G) are λ_0 > λ_1 > ... > λ_(s-1) with multiplicities m(λ_0), ..., m(λ_(s-1)), then
Spec G = [ λ_0      λ_1      ...  λ_(s-1)
           m(λ_0)   m(λ_1)   ...  m(λ_(s-1)) ]

9 Putting the spec in Spectral
[Examples: adjacency matrices A(G) and their spectra Spec G for small graphs]
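
A minimal sketch (not from the slides; the 4-cycle is an assumed example) of how one might compute Spec G numerically, grouping numerically equal eigenvalues to obtain multiplicities:

```python
# Minimal sketch: compute the spectrum (eigenvalues with multiplicities) of A(G).
import numpy as np
from collections import Counter

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])          # adjacency matrix of a 4-cycle (example)

eigenvalues = np.linalg.eigvalsh(A)   # A is symmetric, so eigvalsh applies
rounded = np.round(eigenvalues, 8)    # group numerically equal eigenvalues
for lam, mult in sorted(Counter(rounded).items(), reverse=True):
    print(f"eigenvalue {lam:+.4f} with multiplicity {mult}")
```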

11 What does the spectrum of a graph tell us?
Definition (Characteristic Polynomial). The characteristic polynomial of G is
χ(G, λ) = det(λI - A) = λ^n + c_1 λ^(n-1) + c_2 λ^(n-2) + ... + c_n.
The first coefficient is c_1 = -∑_i λ_i = -Tr(A) = 0.
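
As a sanity check on the definition above (a minimal sketch, not from the slides; the triangle graph is an assumed example), numpy.poly applied to a square matrix returns the coefficients of its characteristic polynomial:

```python
# Minimal sketch: read off c_1, ..., c_n of det(lambda*I - A) with numpy.poly.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])            # adjacency matrix of a triangle (example)

coeffs = np.poly(A)                  # [1, c_1, c_2, ..., c_n]
print(coeffs)                        # approximately [1, 0, -3, -2]
print("c_1 =", round(coeffs[1], 8))  # 0, since Tr(A) = 0
```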

15 What does the spectrum of a graph tell us?
Characteristic polynomial: χ(G, λ) = λ^n + c_1 λ^(n-1) + c_2 λ^(n-2) + ... + c_n.
You can express c_i in terms of the principal minors of A.
Principal minor: the principal minor det(A)_(J,J), for J ⊆ {1, ..., n}, is the determinant of the submatrix of A obtained by keeping the rows and columns indexed by the same subset J.

16 What does the spectrum of a graph tell us?
Characteristic polynomial: χ(G, λ) = λ^n + c_1 λ^(n-1) + c_2 λ^(n-2) + ... + c_n.
(-1)^i c_i is the sum of the principal minors of size i:
(-1)^i c_i = ∑_(|J| = i) det(A)_(J,J).

17 What does the spectrum of a graph tell us?
Consider c_2. The non-zero principal minors of size 2 correspond to edges and all have the form
det [ 0 1 ]
    [ 1 0 ]  = -1.
Therefore (-1)^2 c_2 = -|E|, that is, c_2 = -|E|.

18 What does the spectrum of a graph tell us?
Consider c_3. Up to relabeling, the non-trivial principal submatrices of size 3 are
[ 0 1 0 ]    [ 0 1 1 ]    [ 0 1 1 ]
[ 1 0 0 ],   [ 1 0 0 ],   [ 1 0 1 ]
[ 0 0 0 ]    [ 1 0 0 ]    [ 1 1 0 ]
(a single edge, a path on three vertices, and a triangle).

19 What does the spectrum of a graph tell us?
Their determinants are 0, 0, and 2, respectively, so only triangles contribute:
(-1)^3 c_3 = 2 × (number of triangles in G), i.e. -c_3 is twice the number of triangles in G.
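
A minimal sketch (not from the slides; the two-triangle graph is an assumed example) verifying the triangle count two ways, via -c_3 / 2 and via Tr(A^3) / 6:

```python
# Minimal sketch: count triangles from c_3 and compare with trace(A^3)/6.
import numpy as np

# Example: two triangles sharing an edge (triangles {0,1,2} and {1,2,3})
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])

c = np.poly(A)                                     # characteristic polynomial
triangles_from_c3 = -c[3] / 2                      # c_3 = -2 * (# triangles)
triangles_from_trace = np.trace(np.linalg.matrix_power(A, 3)) / 6
print(triangles_from_c3, triangles_from_trace)     # both 2.0
```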

20 The incidence matrix
Definition (Incidence Matrix S). Given an undirected graph G = (V, E) with V = {1, ..., n} and E = {e_1, ..., e_m}, and an arbitrary orientation assigned to each edge, the incidence matrix of G is the n × m matrix S = S(G) with entries
S_ij = 1 if e_j ends at i, S_ij = -1 if e_j starts at i, and S_ij = 0 otherwise.

21 The incidence matrix
[Example: a small graph G and its incidence matrix S(G)]
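
A minimal sketch (not from the slides; the triangle-plus-pendant graph and its orientation are assumed for illustration) of a signed incidence matrix, together with a check of the identity S S^T = D - A used later:

```python
# Minimal sketch: signed incidence matrix of an arbitrarily oriented graph,
# and a check that S @ S.T equals the graph Laplacian D - A.
import numpy as np

def incidence_matrix(n, edges):
    """edges is a list of (start, end) pairs under an arbitrary orientation."""
    S = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        S[u, j] = -1   # edge e_j starts at u
        S[v, j] = +1   # edge e_j ends at v
    return S

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]       # a triangle with a pendant vertex
S = incidence_matrix(4, edges)

A = np.zeros((4, 4), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))

print(np.array_equal(S @ S.T, D - A))          # True
```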

22 The incidence matrix implies connectivity of the graph
Theorem. rank(S) = n - (number of connected components of G).

23 The incidence matrix implies connectivity of the graph
Proof. Reorder the vertices and edges so that S is block diagonal,
S = diag(S_1, S_2, ..., S_c),
where c is the number of connected components of G. We wish to show that each block S_i has rank n_i - 1, where n_i is the number of vertices in the i-th component.

24 The incidence matrix implies connectivity of the graph
For a given connected component with block S_i, suppose some linear combination of its rows s_j vanishes:
∑_(j=1)^(n_i) α_j s_j = 0.

25 The incidence matrix implies connectivity of the graph
Consider a row s_k and its coefficient α_k.

26 The incidence matrix implies connectivity of the graph
Each non-zero column of s_k has exactly one other non-zero entry, in some row s_l, with the opposite sign; for the combination to vanish we need α_k = α_l.

27 The incidence matrix implies connectivity of the graph
Since the component is connected, all the α_j are equal (we may scale them to 1), and
∑_(j=1)^(n_i) s_j = 0
since every column of S_i sums to zero. Therefore the rank of S_i is n_i - 1.
Remark (S̃). Since S_i has rank n_i - 1, we can remove an arbitrary row to create S̃_i without loss of information.
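
A minimal sketch (not from the slides; the five-vertex, two-component graph is an assumed example) checking the rank formula numerically:

```python
# Minimal sketch: verify rank(S) = n - (number of connected components).
import numpy as np

# Two components: a path 0-1-2 and a single edge 3-4, arbitrarily oriented.
edges = [(0, 1), (1, 2), (3, 4)]
n = 5
S = np.zeros((n, len(edges)), dtype=int)
for j, (u, v) in enumerate(edges):
    S[u, j], S[v, j] = -1, 1

print(np.linalg.matrix_rank(S))   # 3 = n - (number of components) = 5 - 2
```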

29 In the following proof of the Matrix Tree Theorem, we will try all selections of n - 1 edges and use the determinant to see if the resulting subgraph is connected. We create the matrix built from the columns of the reduced incidence matrix S̃:
Q̃ = S̃ S̃^T.
[Example: S̃, S̃^T, and the product Q̃ for a small graph]

32 Theorem (Matrix Tree Theorem). The number of spanning trees of a graph G is equal to det(Q̃), where Q̃ = S̃ S̃^T.
Remark (Graph Laplacian Q). Q = S S^T is referred to as the graph Laplacian, and can also be expressed as Q = D - A, where D is the degree matrix; Q̃ is Q with the row and column of the removed vertex deleted.
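
A minimal sketch (not from the slides; K4 is an assumed example) applying the theorem by taking the determinant of a reduced Laplacian:

```python
# Minimal sketch: count spanning trees as det(Q_tilde) for the complete graph K4.
import numpy as np

n = 4
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)   # adjacency matrix of K4
D = np.diag(A.sum(axis=1))
Q = D - A                                # graph Laplacian
Q_tilde = Q[1:, 1:]                      # delete an arbitrary row and column
print(round(np.linalg.det(Q_tilde)))     # 16, matching Cayley's formula 4^(4-2)
```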

33 Lemma. If T = (V, E) is a directed tree rooted at n, we can order E such that e_i ends at i.
Proof. Label the edges e_i := (p(i), i), where p(i) is the parent of i.

35 Lemma. If G = (V, E) with |E| = n - 1 is a directed graph that is not a tree, then det(S̃) = 0.
Proof. If |E| = n - 1 and G is not a tree, then it is not connected, so rank(S̃) ≤ rank(S) ≤ n - 2 and the (n - 1) × (n - 1) matrix S̃ is singular.

37 Lemma. If T = (V, E) with |E| = n - 1 is a tree, with e_i ∈ E ending at i ∈ V, then det(S̃) = 1.
Proof. By the previous lemmas, we can order the vertices such that p(i) > i. Then S̃ is triangular with 1s on the diagonal, so det(S̃) = 1.

39 Proof (of the Matrix Tree Theorem). We use the multilinearity of the determinant (the Cauchy-Binet expansion) to break det(Q̃) into a sum of determinants over subgraphs with n - 1 edges:
det(Q̃) = det(S̃ S̃^T) = ∑_(|J| = n-1) det(S̃_J)^2,
where S̃_J consists of the n - 1 columns of S̃ indexed by the edge set J. By the previous lemmas, det(S̃_J)^2 = 1 if the chosen edges form a spanning tree and 0 otherwise, so det(Q̃) is exactly the number of spanning trees of G.
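
A minimal sketch (not from the slides; the graph, K4 minus one edge, is an assumed example) that checks the expansion above by brute force over all (n - 1)-edge subsets:

```python
# Minimal sketch: verify det(Q_tilde) = sum over (n-1)-edge subsets of det(S_J)^2.
import numpy as np
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # K4 minus the edge (0, 3)
n = 4
S = np.zeros((n, len(edges)))
for j, (u, v) in enumerate(edges):
    S[u, j], S[v, j] = -1.0, 1.0
S_tilde = S[:-1, :]                                # remove one row

lhs = np.linalg.det(S_tilde @ S_tilde.T)           # det(Q_tilde)
rhs = sum(np.linalg.det(S_tilde[:, list(J)]) ** 2  # sum over column subsets
          for J in combinations(range(len(edges)), n - 1))
print(round(lhs), round(rhs))                      # both 8: 8 spanning trees
```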

40 Formulation of PageRank
PageRank (PR) is a centrality metric which models a web surfer. At each step the surfer:
- Jumps with probability q, contributing q / n. (Like typing a URL.)
- Surfs with probability 1 - q, contributing ∑_(j: j→i) f(j) / k_out(j). (Like clicking a link.)
PageRank recurrence:
f(i) = q/n + (1 - q) ∑_(j: j→i) f(j) / k_out(j).
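
A minimal sketch (not from the slides; the three-page link graph, the value q = 0.15, and the iteration count are assumptions) that iterates the recurrence until it stabilizes:

```python
# Minimal sketch: iterate the PageRank recurrence
# f(i) = q/n + (1 - q) * sum over links j -> i of f(j) / k_out(j).
import numpy as np

def pagerank(adj, q=0.15, iters=100):
    """adj[j, i] = 1 if page j links to page i (rows index the source page)."""
    n = adj.shape[0]
    k_out = adj.sum(axis=1)
    f = np.full(n, 1.0 / n)                 # start from the uniform vector
    for _ in range(iters):
        incoming = adj.T @ (f / k_out)      # sum_{j -> i} f(j) / k_out(j)
        f = q / n + (1 - q) * incoming
    return f

# Example: a tiny 3-page web with links 0->1, 1->2, 2->0, 2->1
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 1, 0]])
print(pagerank(adj))
```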

44 Solving the PageRank recurrence
PageRank recurrence: f(i) = q/n + (1 - q) ∑_(j: j→i) f(j) / k_out(j).
Definition (Transition Matrix P). Given an undirected graph G = (V, E), the transition matrix of G is the n × n matrix P = P(G) with entries
p_ij = 1 / ∑_k a_ik if {v_i, v_j} ∈ E, and p_ij = 0 otherwise.

45 Solving the PageRank recurrence
PageRank recurrence: f(i) = q/n + (1 - q) ∑_(j: j→i) f(j) / k_out(j).
Treating f as a row vector, ∑_(j: j→i) f(j) / k_out(j) = (f P)_i.
We can also use the property ∑_i f(i) = f 1^T = 1, where 1 is the all-ones row vector. Substituting,
f = (q/n) (f 1^T) 1 + (1 - q) f P = f ( (q/n) J + (1 - q) P ),
where J = 1^T 1 is the all-ones matrix.
We can compute PageRank with the eigenvector: f is the left eigenvector of (q/n) J + (1 - q) P with eigenvalue 1.
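
A minimal sketch (not from the slides; it reuses the assumed 3-page web and q = 0.15 from the earlier sketch) computing PageRank directly as that eigenvector:

```python
# Minimal sketch: PageRank as the left eigenvector of (q/n) J + (1 - q) P.
import numpy as np

q, n = 0.15, 3
adj = np.array([[0, 1, 0],                     # links 0->1, 1->2, 2->0, 2->1
                [0, 0, 1],
                [1, 1, 0]], dtype=float)
P = adj / adj.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
M = (q / n) * np.ones((n, n)) + (1 - q) * P

vals, vecs = np.linalg.eig(M.T)                # left eigenvectors of M
f = np.real(vecs[:, np.argmax(np.real(vals))]) # eigenvector for eigenvalue 1
f = f / f.sum()                                # normalize so sum_i f(i) = 1
print(f)
```

The result should agree (up to numerical error) with the iterative sketch above.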
