Laplacian eigenvalues and optimality: II. The Laplacian of a graph. R. A. Bailey and Peter Cameron


Laplacian eigenvalues and optimality: II. The Laplacian of a graph

R. A. Bailey and Peter Cameron

London Taught Course Centre, June 2012

The Laplacian of a graph

This lecture is about the Laplacian matrix of a graph and its eigenvalues, and their relation to some graph parameters. It is not a complete account of the theory, but concentrates mainly on the things that are most relevant for experimental design. For further reading, we recommend:

B. Bollobás, Modern Graph Theory, Springer (especially Chapters II and IX);

B. Mohar, Some applications of Laplace eigenvalues of graphs, in Graph Symmetry: Algebraic Methods and Applications (ed. G. Hahn and G. Sabidussi), Kluwer.

Which graph is best?

Of course the question is not well defined. But which would you choose for a network, if you were concerned about connectivity, reliability, etc.?

What makes a good network? No two vertices should be too far apart. There should be several alternative routes between any two vertices. (But should these routes be disjoint?) There should be no bottlenecks: loss of a small part of the network should not result in disconnection. Of course, we are resource-limited, else we would just put multiple edges between every pair of nodes.

Which graph is best connected?

Here are some ways of measuring the connectivity of a graph.

Spanning trees. How many spanning trees does the graph have? The more spanning trees, the better connected. The first graph has 2000 spanning trees; the second has 576.

Electrical resistance. Imagine that the graph is an electrical network with each edge being a 1-ohm resistor. Now calculate the resistance between each pair of terminals, and sum over all pairs; the lower the total, the better connected. In the first graph, the sum is 33; in the second, it is 206/3.

Isoperimetric number. This is defined to be
$\iota(G) = \min\left\{ |\partial S| / |S| : S \subseteq V(G),\ 0 < |S| \le n/2 \right\},$
where $n = |V(G)|$ and, for a set S of vertices, $\partial S$ is the set of edges from S to its complement. A large isoperimetric number means that there are many edges out of any set of vertices. The isoperimetric number of the first graph is 1 (there are just five edges between the inner and outer pentagons); that of the second graph is 1/5 (there is just one edge between the top and bottom pieces).

The Laplacian of a graph

Let G be a graph on n vertices. (Multiple edges are allowed but loops are not.) The Laplacian matrix of G is the $n \times n$ matrix $L = L(G)$ whose $(i, i)$ entry is the number of edges containing vertex i, while for $i \ne j$ the $(i, j)$ entry is the negative of the number of edges joining vertices i and j. This is a real symmetric matrix; its eigenvalues are the Laplacian eigenvalues of G. Note that its row sums are zero.
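To make the definition concrete, here is a minimal Python sketch (assuming only numpy; the small multigraph is an illustrative choice, not one of the lecture's two example graphs) that assembles L from the edge multiplicities and checks its basic properties:

```python
import numpy as np

# Edge list of a small multigraph on vertices 0..3: a 4-cycle with the
# edge {0, 1} doubled (multiple edges are allowed, loops are not).
edges = [(0, 1), (0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Adjacency matrix of edge multiplicities.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] += 1
    A[v, u] += 1

# Laplacian: (i, i) entry = number of edges containing i;
# (i, j) entry, i != j, = minus the number of edges joining i and j.
L = np.diag(A.sum(axis=1)) - A

print(L.sum(axis=1))          # row sums: all zero
print(np.linalg.eigvalsh(L))  # real symmetric, so eigvalsh; all eigenvalues >= 0
```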

The weighted Laplacian

Suppose that we have positive weights w(e) on the edges of G. Then the weighted Laplacian has $(i, i)$ entry the sum of the weights of the edges containing i, and $(i, j)$ entry, for $i \ne j$, minus the sum of the weights of the edges joining i to j. If the weights are rational, then we may multiply them by the least common multiple of the denominators to make them integers, and then replace an edge of weight w by w parallel edges, to obtain a multigraph with the same Laplacian. For general real weights, replace them first by rational approximations. We will not consider weighted Laplacians further.

Relation to the classical Laplacian

The classical Laplacian is a second-order differential operator defined on functions on a manifold, closely related to potential theory, the wave equation, etc. A manifold can be triangulated, that is, approximated by a graph drawn in it. If we take the weight of an edge to be inversely proportional to the square of its length, then the weighted Laplacian of the graph is an approximation to the Laplacian of the manifold. In the other direction, given a graph, we can build a manifold reflecting its structure: given a d-valent vertex, take a sphere with d holes; glue the spheres corresponding to the ends of an edge together along the corresponding holes. We won't pursue this any further.

Adjacency matrix and Laplacian

The usual adjacency matrix A(G) of a graph G on n vertices has rows and columns indexed by vertices; the $(i, j)$ entry is the number of edges connecting i to j. Note that we can allow loops here (though it is not clear whether a loop contributes 1 or 2 to the corresponding diagonal entry!); for the Laplacian, we forbid loops. If G is a regular graph with valency d, then $L(G) = dI - A(G)$; so the Laplacian eigenvalues are obtained by subtracting the adjacency eigenvalues from d. If G is not regular, there is no such simple relationship between the eigenvalues of the two matrices.

Positive semi-definiteness

The Laplacian of a graph is positive semidefinite. For L is the sum of submatrices, one for each edge: the submatrix for an edge has
$\begin{pmatrix} +1 & -1 \\ -1 & +1 \end{pmatrix}$
in the positions indexed by the two vertices of the edge, with zeros elsewhere. This 2 × 2 matrix is positive semidefinite (its eigenvalues are 2 and 0). We'll see another argument for this later. It follows that the eigenvalues of L are all non-negative.

The multiplicity of zero

The multiplicity of 0 as an eigenvalue of L is equal to the number of connected components of the graph (a numerical check appears at the end of this section). An eigenvector with zero eigenvalue is a function on the vertices whose value at i is the weighted average of its values on the neighbours of i, each neighbour weighted by the number of edges joining it to i. (If you know about harmonic functions, you will recognise this!) Considering a vertex where the maximum modulus is achieved, we see that the same value occurs on all its neighbours, so the function is constant on connected components. In particular, if the graph is connected (as we always assume), the zero eigenvalue (called trivial) has multiplicity 1; the other eigenvalues are nontrivial. The eigenvectors for the trivial eigenvalue are the constant vectors.

On average

Note that the sum of the eigenvalues is the trace of L, which is the sum of the vertex valencies, or twice the number of edges. So the average of the non-trivial eigenvalues is $2|E(G)|/(|V(G)|-1)$; it depends only on the numbers of vertices and edges, and the detailed structure of the graph has no effect.
We'll see that other means of the non-trivial eigenvalues, in particular the geometric and harmonic means, give us important information!
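The multiplicity claim above is easy to test. A small sketch (assuming numpy and networkx; the two-component graph is an arbitrary illustrative choice):

```python
import numpy as np
import networkx as nx

# A graph with two connected components: a triangle and a 4-cycle.
G = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(4))
L = nx.laplacian_matrix(G).toarray().astype(float)

eigs = np.linalg.eigvalsh(L)
zero_mult = int(np.sum(np.isclose(eigs, 0)))
print(zero_mult, nx.number_connected_components(G))   # 2 and 2
```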

Examples

The Petersen graph is strongly regular; its adjacency matrix A satisfies $A^2 + A - 2I = J$, where J is the all-1 matrix; its eigenvalues are 3, 1 and −2, and so the Laplacian eigenvalues are 0, 2 and 5, with multiplicities 1, 5 and 4 respectively. For the other graph in our introductory example, the Laplacian eigenvalues are 0, 2, 3 (multiplicity 2), 4 (multiplicity 2), 5, and the roots of $x^3 - 9x^2 + 20x - 4$ (which are approximately 0.22, 3.29 and 5.49).

The Rayleigh principle

Recall that eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal. Let $\lambda_1, \lambda_2$ be the smallest and second-smallest eigenvalues of the symmetric matrix A, and suppose that $\lambda_1$ is a simple eigenvalue with eigenvector u. Let v be any non-zero vector orthogonal to u. Then
$\frac{vAv^\top}{vv^\top} \ge \lambda_2,$
with equality if and only if v is an eigenvector with eigenvalue $\lambda_2$. This is obvious when v is expressed as a linear combination of eigenvectors of A. There is an extension to any eigenvalue.

The cutset lemma

Let G be a connected graph on n vertices, and E a set of m edges whose removal disconnects G into vertex sets of sizes $n_1$ and $n_2$, with $n_1 + n_2 = n$. Let µ be the smallest non-trivial eigenvalue of L. Then $m \ge \mu n_1 n_2/(n_1 + n_2)$.

For let $V_1$ and $V_2$ be the vertex sets in the theorem, and let v be the vector with value $n_2$ on vertices in $V_1$ and $-n_1$ on vertices in $V_2$. These values are chosen so that v is orthogonal to the all-1 vector (the trivial eigenvector). Clearly $vv^\top = n_1 n_2^2 + n_2 n_1^2 = (n_1 + n_2) n_1 n_2$. I claim that $vLv^\top = m(n_1 + n_2)^2$. Recall that L is the sum of submatrices corresponding to edges; we have to add up the contributions of these. An edge within one of the parts contributes 0; one between the parts contributes $(n_1 + n_2)^2$. The claim follows. The theorem now follows from the Rayleigh principle.

Isoperimetric number

Let G be a connected graph whose smallest nontrivial Laplacian eigenvalue is µ. Then the isoperimetric number $\iota(G)$ is at least µ/2. For let S be a set of at most half the vertices, and let $|S| = n_1$, $|V \setminus S| = n_2$, and $|\partial S| = m$. Then by the cutset lemma,
$\frac{|\partial S|}{|S|} = \frac{m}{n_1} \ge \frac{\mu n_2}{n_1 + n_2} \ge \frac{\mu}{2},$
since $n_2 \ge n/2$. So, on one of our criteria, a good network is one whose smallest nontrivial Laplacian eigenvalue is as large as possible.

Examples

In our two examples, the smallest nontrivial Laplacian eigenvalues are 2 (for the Petersen graph) and approximately 0.22 (for the other graph). Note that the Petersen graph has isoperimetric number 1, meeting the bound of half the least non-trivial eigenvalue. So the vector which is +1 on the outer pentagon and −1 on the inner pentagram is an eigenvector. In the other graph, the true value is a bit more than half the smallest eigenvalue.

Expanders

Loosely, an expander is a regular connected graph whose smallest non-trivial eigenvalue is large. The above result shows that expanders have large isoperimetric numbers. More precisely, a sequence $(G_n : n \in \mathbb{N})$ is a sequence of expanders if there is a constant c > 0 such that the smallest non-trivial Laplacian eigenvalue of every graph $G_n$ is at least c. It is known that a random regular graph is an expander with high probability; but explicit constructions are more difficult. The first constructions were given by Margulis and by Lubotzky, Phillips and Sarnak, and depend on substantial number-theoretic and group-theoretic background.
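These spectral claims are easy to check numerically. The sketch below (assuming numpy and networkx) builds the Petersen graph with an explicit labelling, confirms the Laplacian spectrum, and shows that the pentagon-to-pentagram cut meets the cutset lemma with equality: the cut has m = 5 edges, and $\mu n_1 n_2/(n_1 + n_2) = 2 \cdot 25/10 = 5$.

```python
import numpy as np
import networkx as nx

# Build the Petersen graph with an explicit labelling:
# vertices 0..4 form the outer pentagon, 5..9 the inner pentagram.
G = nx.Graph()
G.add_edges_from((i, (i + 1) % 5) for i in range(5))          # outer pentagon
G.add_edges_from((5 + i, 5 + (i + 2) % 5) for i in range(5))  # inner pentagram
G.add_edges_from((i, 5 + i) for i in range(5))                # spokes

n = G.number_of_nodes()
L = nx.laplacian_matrix(G, nodelist=range(n)).toarray().astype(float)

eigs = np.sort(np.linalg.eigvalsh(L))
print(np.round(eigs, 6))          # 0 once, 2 five times, 5 four times

mu = eigs[1]                      # smallest non-trivial eigenvalue (= 2)

# The 5-edge cut separating pentagon from pentagram meets the cutset
# lemma with equality: m >= mu * n1 * n2 / (n1 + n2) = 2 * 25 / 10 = 5.
S = set(range(5))
m = sum(1 for u, v in G.edges() if (u in S) != (v in S))
print(m, mu * len(S) * (n - len(S)) / n)   # 5 and 5.0
```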

Cheeger's inequality

Cheeger's inequality is a result about Laplacians of manifolds; it has a discrete analogue, which gives a bound in the other direction between the isoperimetric number and the smallest nontrivial Laplacian eigenvalue. Let G be a connected graph; let ∆ be the maximum valency of G, and µ the smallest nontrivial Laplacian eigenvalue. Then $\iota(G) \le \sqrt{2\Delta\mu}$. Mohar improved the upper bound to $\sqrt{(2\Delta - \mu)\mu}$ if the graph is connected but not complete.

Incidence matrix

Choose a fixed but arbitrary orientation of the edges of the graph G. Define the vertex-edge incidence matrix Q to have rows indexed by vertices, columns by edges, and $(v, e)$ entry +1 if v is the head of the edge e, −1 if v is the tail of e, and 0 otherwise.

Incidence matrix and Laplacian

Let G have incidence matrix Q and Laplacian L. Then $QQ^\top = L$. For the $(v, v)$ entry of $QQ^\top$ is the number of edges with either head or tail at v; and the $(v, w)$ entry, for $v \ne w$, is the sum of −1 over all edges with head at v and tail at w or vice versa. This shows, again, that L is positive semidefinite. And note that the orientation doesn't matter.

The Moore–Penrose inverse

Let A be a real symmetric matrix. Then we have a spectral decomposition of A:
$A = \sum_{\lambda \in \Lambda} \lambda P_\lambda,$
where Λ is the set of eigenvalues of A, and $P_\lambda$ is the orthogonal projector onto the space of eigenvectors with eigenvalue λ. We define the Moore–Penrose inverse of A by
$A^+ = \sum_{\lambda \ne 0} \lambda^{-1} P_\lambda.$
In other words, we invert A where we can. The Moore–Penrose inverse is a quasi-inverse of A in the sense of ring theory: that is, $A^+AA^+ = A^+$ and $AA^+A = A$.

The Moore–Penrose inverse of the Laplacian

We will see a lot of the matrix $L^+$, where L is the Laplacian of a graph G on n vertices. If G is connected, then the projector onto the trivial eigenspace is J/n, where J is the all-1 matrix. So adding this to L changes the trivial eigenvalue to 1, and subtracting it takes the 1 off again. In other words,
$L^+ = (L + J/n)^{-1} - J/n.$

Electrical networks

As mentioned earlier, we regard the graph G as an electrical network, with each edge a one-ohm resistor. Given any two vertices i and j, the effective resistance between i and j is the voltage of a battery which, when connected to the two vertices, causes a current of 1 ampere to flow. Let G be a connected graph with Laplacian L. Then the effective resistance between i and j is
$L^+_{ii} + L^+_{jj} - L^+_{ij} - L^+_{ji},$
where $L^+$ is the Moore–Penrose inverse of L.
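Two identities from this page, $QQ^\top = L$ and $L^+ = (L + J/n)^{-1} - J/n$, can be checked numerically. A sketch (assuming numpy and networkx, whose incidence_matrix function returns the signed incidence matrix when oriented=True):

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()
n = G.number_of_nodes()
L = nx.laplacian_matrix(G).toarray().astype(float)

# Signed (oriented) vertex-edge incidence matrix; the orientation is the
# arbitrary one induced by the edge order, and Q Q^T = L regardless of it.
Q = nx.incidence_matrix(G, oriented=True).toarray()
print(np.allclose(Q @ Q.T, L))                 # True

# Moore-Penrose inverse via L+ = (L + J/n)^{-1} - J/n.
J = np.ones((n, n))
Lplus = np.linalg.inv(L + J / n) - J / n
print(np.allclose(Lplus, np.linalg.pinv(L)))   # True: matches numpy's pinv
```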

Ohm's Law and Kirchhoff's Laws

If we apply a voltage between two vertices, the flow of current in the edges and the potentials at the vertices are governed by three laws:
Ohm's Law: the potential drop along each edge is the product of the current and the resistance (and so is equal to the current, since we have set all resistances to 1).
Kirchhoff's Voltage Law: the sum of the potential drops on any path between vertices i and j is independent of the choice of path.
Kirchhoff's Current Law: if vertex i is not connected to the battery, then the sum of the currents flowing into i is equal to the sum of the currents flowing out.

Proof of the theorem

Kirchhoff's Voltage Law and Ohm's Law are taken care of if we take a vector p of potentials, with components indexed by vertices, and require that the current on the edge e is equal to the potential difference between its ends. (As before, we take a fixed orientation of each edge, and take the current to be negative if it flows from head to tail of the edge.) Note that p is defined up to adding a constant vector. This is expressed by the requirement that pQ is the vector of currents in the edges, where Q is the vertex-edge incidence matrix.

Now $pQQ^\top = pL$ is a vector whose ith entry is the sum of the signed currents into the vertex i. So Kirchhoff's Current Law says that $pQQ^\top$ has all entries zero except at the two vertices connected to the battery. If the current is 1 ampere, the entries of $pL = pQQ^\top$ are +1 and −1 at these two vertices. Let us write $pL = f_i - f_j$, where $f_i$ is the unit basis vector corresponding to vertex i. Now $f_i - f_j$ is orthogonal to the all-1 vector, so $p = (f_i - f_j)L^+$ (up to a constant vector). This gives the vector of potentials. The potential difference between i and j is the dot product of this vector with $f_i - f_j$, that is, $xL^+x^\top$, where $x = f_i - f_j$. This is the potential difference required to make a current of 1 ampere flow; hence it is the effective resistance between i and j. This can be written
$R_{ij} = L^+_{ii} + L^+_{jj} - L^+_{ij} - L^+_{ji},$
as required.

The average pairwise resistance

One of our criteria for a good network is that the average resistance between pairs of vertices should be small. The next theorem shows that this is equivalent to maximizing the harmonic mean of the nontrivial Laplacian eigenvalues: the average pairwise resistance is equal to 2 divided by the harmonic mean of the nontrivial Laplacian eigenvalues.

Proof of the resistance theorem

The sum of the resistances between all ordered pairs of vertices is
$\sum_{i \ne j} R_{ij} = 2(n-1)\,\mathrm{Trace}(L^+) - 2\sum_{i \ne j} L^+_{ij} = 2n\,\mathrm{Trace}(L^+),$
since the sum of all the entries of $L^+$ is zero (as the all-1 vector is an eigenvector with eigenvalue 0). So the average pairwise resistance is $2\,\mathrm{Trace}(L^+)/(n-1)$. Now the trace of $L^+$ is the sum of the reciprocals of the non-zero eigenvalues of L, and so we are done.

Examples

For the Petersen graph, the harmonic mean of the non-trivial eigenvalues is $\left((5 \cdot \tfrac12 + 4 \cdot \tfrac15)/9\right)^{-1} = 30/11$, so the average resistance is 11/15. For the other graph, a similar calculation gives 135/103, so the average resistance is 206/135.
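Both the resistance formula and the harmonic-mean identity can be verified directly. A sketch (assuming numpy and networkx, and using numpy's pinv for the Moore–Penrose inverse):

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()
n = G.number_of_nodes()
L = nx.laplacian_matrix(G).toarray().astype(float)
Lplus = np.linalg.pinv(L)

# Effective resistances R_ij = L+_ii + L+_jj - L+_ij - L+_ji, all pairs at once.
d = np.diag(Lplus)
R = d[:, None] + d[None, :] - 2 * Lplus

total = R.sum() / 2                      # sum over unordered pairs: 33
avg = total / (n * (n - 1) / 2)          # 11/15

eigs = np.sort(np.linalg.eigvalsh(L))[1:]        # non-trivial eigenvalues
harmonic_mean = len(eigs) / np.sum(1 / eigs)     # 30/11
print(total, avg, 2 / harmonic_mean)     # 33.0, 0.7333..., 0.7333...
```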

Example: Petersen graph

For the Petersen graph, we can exploit symmetry to calculate the resistance between two terminals: two vertices equivalent under a symmetry fixing the terminals must be at the same potential, and so edges between them can be neglected. If the terminals i and j are joined, the graph reduces to a pentagon $i = i_0, i_1, i_2, i_3, i_4 = j$, with one edge from i to j, two parallel edges from $i_0$ to $i_1$ and from $i_3$ to $i_4$, and four from $i_1$ to $i_2$ and from $i_2$ to $i_3$. So the resistance of the path $(i_0, i_1, i_2, i_3, i_4)$ is $1/2 + 1/4 + 1/4 + 1/2 = 3/2$. This is in parallel with a single edge, so the overall resistance is $1/(1 + 2/3) = 3/5$. Similar but slightly more complicated arguments give the resistance between non-adjacent terminals as 4/5. So the total over unordered pairs is $15 \cdot 3/5 + 30 \cdot 4/5 = 33$, and the average is 33/45 = 11/15, agreeing with the eigenvalue calculation.

The Matrix-Tree Theorem

Let G be a connected graph on n vertices. Then the following three quantities are equal:
1. the number of spanning trees of G;
2. $(\lambda_2 \cdots \lambda_n)/n$, where $\lambda_2, \ldots, \lambda_n$ are the nontrivial Laplacian eigenvalues of G;
3. any cofactor of L(G) (that is, the determinant of the matrix obtained by deleting row i and column j, multiplied by $(-1)^{i+j}$).
Since one of our criteria for a good network is a large number of spanning trees, this criterion is equivalent to maximizing the geometric mean of the non-trivial Laplacian eigenvalues.

[The slide reproduces Robin Whitty's "Theorem of the Day" poster on the Matrix-Tree Theorem (Kirchhoff, 1847), which illustrates the cofactor computation on a 4-vertex graph with 5 spanning trees and notes that any row and any column of L(G) may be deleted without changing the absolute value of the result.]

The Cauchy–Binet formula

The proof depends on the Cauchy–Binet formula, which says the following. Let A be an $m \times n$ matrix, and B an $n \times m$ matrix, where $m \le n$. Then
$\det(AB) = \sum_X \det(A(X)) \det(B(X)),$
where X ranges over all m-element subsets of $\{1, \ldots, n\}$. Here A(X) is the $m \times m$ matrix whose columns are the columns of A with index in X, and B(X) is the $m \times m$ matrix whose rows are the rows of B with index in X.

Proof of the Matrix-Tree Theorem

Let Q be the incidence matrix of G, so that $QQ^\top = L$. Let i be any vertex of G, and let $N = Q_i$ be the matrix obtained by deleting the row of Q indexed by i.
It can be shown that, if X is a set of n − 1 edges, then det(N(X)) is ±1 if X is the edge set of a spanning tree, and is 0 otherwise. By the Cauchy–Binet formula, $\det(NN^\top)$ is equal to the number of spanning trees. But $NN^\top$ is the submatrix of L(G) obtained by deleting the row and column indexed by i, so its determinant is a principal cofactor of L(G).

Matrices with row and column sums zero

To finish the proof, let A be any matrix with row and column sums zero, and let B = A + J, where J is the all-1 matrix. We evaluate det(B). Replace the first row by the sum of all the rows; this makes every entry in the first row n, doesn't change the other entries, and leaves the determinant unchanged. Replace the first column by the sum of all the columns: this makes the first entry $n^2$ and the other entries in this column n, doesn't change the other entries of the matrix, and leaves the determinant unchanged. Now subtract 1/n times the first row from each other row. The entries of the first column, other than the first, become 0; we subtract 1 from all entries not in the first row or column of B, leaving the entries of A; and the determinant is unchanged. We conclude that det(B) is $n^2$ times the (1, 1) cofactor of A. It is easily checked that the argument works for any cofactor of A; so all cofactors of A are equal. Finally, the all-1 vector is an eigenvector of B with eigenvalue n, while its other eigenvalues are the same as those of A. Thus det(B) is n times the product of the nontrivial eigenvalues of A.
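The quantities in the Matrix-Tree Theorem can be compared numerically. The sketch below (assuming numpy and networkx) recovers the 2000 spanning trees of the Petersen graph quoted in the introduction:

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()
n = G.number_of_nodes()
L = nx.laplacian_matrix(G).toarray().astype(float)

# 1. Principal cofactor: delete row 0 and column 0 and take the determinant.
cofactor = np.linalg.det(L[1:, 1:])

# 2. Product of the non-trivial Laplacian eigenvalues, divided by n.
eigs = np.sort(np.linalg.eigvalsh(L))[1:]
from_spectrum = np.prod(eigs) / n

print(round(cofactor), round(from_spectrum))   # 2000 and 2000
```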

Cayley's formula

The Matrix-Tree Theorem gives us a simple proof of the famous formula of Cayley: the number of spanning trees of the complete graph on n vertices is equal to $n^{n-2}$. For the Laplacian of the complete graph is $nI - J$, where J is the all-1 matrix; its non-trivial Laplacian eigenvalues are all equal to n, and so the number of spanning trees is $n^{n-1}/n = n^{n-2}$. In our two examples, the numbers of spanning trees are 2000 and 576 respectively.

Markov chains

A Markov chain on a finite state space S is a sequence of random variables with values in S which has no memory: the state at time n + 1 depends only on the state at time n. A Markov chain is defined by a transition matrix P, with rows and columns indexed by S, where $p_{ij}$ is the probability of moving from state i to state j in one time step. As usual, the entries of P are non-negative and the row sums are equal to 1.

Random walks

An important example of a Markov chain is the random walk on a graph G. The state space is the vertex set V(G). At time n, if the process is at vertex i, it chooses at random (with equal probabilities) an edge containing i, and at time n + 1 moves to the other end of this edge. If the graph has no loops, then the probability of moving from i to j is $-L_{ij}/L_{ii}$, where L is the Laplacian. In particular, if the graph is regular with degree d, then $P = I - L/d$. More generally, $P = I - D^{-1}L$, where D is the diagonal matrix whose $(i, i)$ entry is the number of edges incident with i.

Theory of Markov chains

If a Markov chain has transition matrix P, then the $(i, j)$ entry of $P^m$ is the probability of moving from i to j in m steps. The Markov chain is connected if, for any i and j, there exists m such that $(P^m)_{ij} \ne 0$; it is aperiodic if the greatest common divisor of the values of m for which $(P^m)_{ii} \ne 0$ for some i is 1. A random walk on a graph G is connected if and only if G is connected, and is aperiodic if and only if G is not bipartite.

A connected aperiodic Markov chain has a unique limiting distribution, to which it converges from any starting distribution. Since the row sums of P are all 1, the all-1 vector is a right eigenvector of P with eigenvalue 1; our assumptions imply that the multiplicity of 1 as an eigenvalue is 1. Now left and right eigenvalues are equal, so there is a vector $q \ne 0$ such that $qP = q$. It can be shown that the entries of q are non-negative; we can normalise it so that their sum is 1. Then q is a probability distribution which is fixed by P, so it is the unique stationary distribution.
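As an illustration, the following sketch (assuming numpy and networkx; that the stationary distribution of the walk is proportional to the vertex degrees is a standard fact which the code simply verifies) builds $P = I - D^{-1}L$ and checks the claims above, together with the convergence discussed next:

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()
n = G.number_of_nodes()
L = nx.laplacian_matrix(G).toarray().astype(float)
D = np.diag(np.diag(L))                  # diagonal matrix of valencies

# Transition matrix of the random walk: P = I - D^{-1} L.
P = np.eye(n) - np.linalg.inv(D) @ L
print(np.allclose(P.sum(axis=1), 1))     # rows sum to 1

# Stationary distribution: proportional to the degrees.
q = np.diag(D) / np.diag(D).sum()
print(np.allclose(q @ P, q))             # q is fixed by P

# Convergence from a point mass at vertex 0: the error decays like mu^m,
# where mu is the second-largest eigenvalue modulus (2/3 for Petersen).
x = np.zeros(n); x[0] = 1.0
for m in (1, 5, 10, 20):
    print(m, np.abs(x @ np.linalg.matrix_power(P, m) - q).max())
```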

Convergence

Suppose that P is symmetric. Then we can write $P = \sum_\lambda \lambda P_\lambda$, where λ runs over the eigenvalues and $P_\lambda$ is the projection onto the λ-eigenspace. Then $P^m = \sum_\lambda \lambda^m P_\lambda$. It is also true, by the Perron–Frobenius theorem, that every eigenvalue λ satisfies $|\lambda| \le 1$. If the Markov chain is irreducible and aperiodic, then 1 is a simple eigenvalue, and all other eigenvalues have modulus strictly less than 1.

Now let x be any non-negative vector whose coordinates sum to 1; we can regard x as the initial probability distribution. Then we have
$xP^m = \sum_\lambda \lambda^m x P_\lambda \to x P_1$ as $m \to \infty$.
So $xP_1 = q$ is the limiting distribution, and the convergence to q is like $\mu^m$, where µ is the second-largest modulus of an eigenvalue. So the convergence is exponentially fast if µ is not close to 1.

Random walks revisited

For a random walk, we have $P = I - D^{-1}L$. Then
$D^{1/2} P D^{-1/2} = I - D^{-1/2} L D^{-1/2}.$
This matrix is symmetric, and is similar to P; so P is indeed diagonalizable. However, the analysis is a bit more complicated, and not given here. Its eigenvalues are 1 − λ, where λ is an eigenvalue of the positive semidefinite matrix $D^{-1/2} L D^{-1/2}$; so for rapid convergence we require that the smallest positive eigenvalue of this matrix should be as large as possible. Thus the problem is a twisted version of the usual problem about the smallest non-trivial Laplacian eigenvalue. If the graph is regular, so that D = dI, it reduces exactly to the former problem.

Other results

The smallest nontrivial Laplacian eigenvalue µ of a graph G is an important parameter which occurs in many other situations. For example, a recent result of Krivelevich and Sudakov asserts that, in a regular graph of valency d on n vertices, if µ is sufficiently large in terms of n and d, then the graph is Hamiltonian.

Summing up

We saw that three important parameters of a connected graph, all determined by its Laplacian spectrum, are:
the harmonic mean of the non-trivial Laplacian eigenvalues, which tells us about the average resistance between pairs of vertices;
the geometric mean of the non-trivial Laplacian eigenvalues, which tells us about the number of spanning trees;
the smallest non-trivial Laplacian eigenvalue, which is related to the isoperimetric number and to the rate of convergence of the random walk on the graph.
In the next lecture, we will see that these are also important parameters in experimental design!
